TEST: A MODEL-DRIVEN APPLICATION SHELL

Gary S. Kahn, Al Kepner, and Jeff Pepper
Carnegie Group Inc.
Pittsburgh, Pa. 15219

Abstract

TEST (Troubleshooting Expert System Tool) is an application shell that provides a domain-independent diagnostic problem solver together with a library of schematic prototypes. TEST fills a design niche halfway between rule-based and causal-model approaches. This approach has resulted in a design that meets several functional requirements for an effective troubleshooting shell. Most critically, TEST can represent both the impact of failure-modes on a machine or system of interest, and the heuristic problem-solving behavior which can lead to rapid conclusions. This paper provides an overview of TEST's approach to diagnosis. As a special purpose application shell, TEST provides considerably more leverage to developers than can be gained through the use of general purpose heuristic classification systems.

1. Introduction

TEST¹ (Troubleshooting Expert System Tool) is an application shell that provides a domain-independent diagnostic problem solver together with a library of schematic prototypes. These prototypes constitute the object types and the structure required by each domain-specific TEST knowledge base. TEST applications for factory floor machines, vehicles, and computers are currently in development.

TEST fills a design niche halfway between rule-based and causal-model approaches. On one hand, TEST uses a weak causal model to describe causal links between failure-modes; on the other, TEST uses rules to constrain and direct diagnostic reasoning. TEST is, in these respects, similar to several other attempts to develop problem-solving architectures suitable to the troubleshooting task [Bylander et al. 83, Hofmann et al. 86]. TEST differs, however, in offering a more differentiated knowledge base and a more powerful set of control and inference mechanisms.

¹TEST is an internal name used at Carnegie Group Inc. TEST is implemented in Knowledge Craft™.

TEST's approach has resulted in a design that meets several functional requirements for an effective troubleshooting shell. Most critically, TEST can represent both the impact of failure-modes on a machine or system of interest, and the heuristic problem-solving behavior which can lead to rapid conclusions. The underlying representation and the problem-solving method are easily understood by both design engineers and diagnostic technicians. This has had a positive impact on knowledge acquisition. Systems built in TEST have proven to be more maintainable than systems built using Emycin-like [VanMelle et al. 81] belief rules.

TEST's approach to diagnosis is explained in the following sections. The first provides a context of previous work in the field, identifying limitations which motivated the development of TEST. The following two sections present overviews of the TEST knowledge base and diagnostic problem solver. The subsequent section describes TEST's unique use of rules. Many features of the TEST system cannot be covered within the scope of this paper. A comprehensive account of TEST's functionality can be found in [Pepper and Mullins 86]; TEST's approach to repair is described in [Pepper and Kahn 87]; and finally, development of a specialized knowledge acquisition workbench is reported in [Kahn 87].

Many diagnostic expert systems have been built over the last several years. Typically, these systems use either evidential or causal reasoning.
Evidential reasoning systems, such as Mycin [Shortliffe 76] and Mud [Kahn and McDermott 86], are rule-based. Each rule represents a belief association between evidential considerations and a conclusion warranted by the evidence. There may be many rules bearing on the same conclusion. A numeric algorithm is used to compose the evidence provided by each applicable rule. Causal models, by contrast, may be used to support differential diagnosis or, in the case of qualitative reasoning, a simulative approach to diagnosis. Casnet [Weiss et al. 78] used a probabilistic model of causal relations to drive an essentially Bayesian analysis.

Problem solvers which rely on causal models to support differential diagnosis or simulation typically run into three problems when used to reason about machine faults. The first occurs during development, as the task of constructing large models becomes bogged down in complexity and issues of behavioral validation. The second occurs at run-time, as these techniques result in intensive search and, as a consequence, degraded performance. The third concerns knowledge base maintenance. In troubleshooting tasks, a diagnostic conclusion is reached by performing a test or series of tests that serve to isolate an underlying failure-mode. Diagnosis is less a matter of evaluating the evidence in hand than of effectively searching for a conclusive test, and this search can often be explained in terms of heuristic strategies.

Unlike rule-based diagnostic systems, TEST uses a semantic network of schematic objects, or frames, whose causal hierarchy is expressed by due-to links. At the bottom of the hierarchy, as shown in figure 2-1, are failure-modes of individual components, e.g., power supplies (W502, X501, R505), a particular tube, or the picture tube. Intermediate failure-modes typically represent functional failures which are causal consequences of component failures, e.g., "Hum in LV power supply", or classes of failures, e.g., "LV Power Supply Problem". Several levels of intermediate failure-modes are common. Typical networks have 4 to 10 levels, although more occur on occasion.

Tests are associated with failure-modes and provide the means of determining the occurrence of a failure-mode. Rules represent a variety of contingent actions rather than evidence/belief propositions as is typical in Emycin-like diagnostic systems. TEST's use of rules is described in section 4. Parts provide descriptors of parts that are associated with component failures or with repairs.

[Figure 2-1: A failure-mode hierarchy, with due-to links between failure-modes.]

²The example used here is based on television troubleshooting as described by Tinnell [Tinnell 71]. Actual TEST knowledge bases are proprietary to Carnegie Group clients.

Decision-nodes provide a mechanism for integrating conventional diagnostic decision logic into the otherwise failure-mode oriented knowledge base. Although TEST can generate its own decision logic from the failure-mode knowledge base, domain experts often prefer to provide the decision logic directly. This may be done by building a decision-node network. Each decision-node represents a test together with branches to other tests contingent on the result of the first. Decision-node networks typically terminate with the failure-modes that could cause the problem associated with the network's entry point.

Knowledge base maintenance is facilitated by clustering information around failure-modes (see figure 2-2). Since the failure-mode is the key concept in most troubleshooting tasks, such aggregates provide an easily understood and readily accessible structure.
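To make the failure-mode aggregate concrete, here is a minimal sketch of such a frame in Python. The slot names follow the paper's description; the class itself is illustrative, not TEST's actual representation:

```python
from dataclasses import dataclass, field

@dataclass
class FailureMode:
    """A failure-mode frame clustering everything needed to reason about it."""
    name: str
    due_to: list = field(default_factory=list)   # candidate causes (other FailureModes)
    tests: list = field(default_factory=list)    # procedures that confirm/disconfirm it
    repairs: list = field(default_factory=list)  # repair actions for terminal faults
    documentation: str = ""                      # reference material for technicians
    state: str = "unknown"                       # "confirmed", "disconfirmed", or "unknown"

# A fragment of the television example: "short raster" may be due to a
# defective power supply diode or a vertical sweep failure.
diode = FailureMode("defective power supply diode")
sweep = FailureMode("vertical sweep failure")
short_raster = FailureMode("short raster", due_to=[diode, sweep])
```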
Inspection of a failure-mode provides direct access to associated tests, repairs and documentation, as well as to forward and backward causal links to other failure-modes in the network.

Domain-specific knowledge bases represent pre-compiled search spaces and serve as input to the problem solver. Given the failure-mode hierarchy and other auxiliary information, the problem solver searches for a diagnostic conclusion, interactively prompting a technician, or sampling sensors and databases as necessary to obtain evidence to proceed with the diagnostic session. The search space can be dynamically altered by rules (see below) sensitive to information acquired during a diagnostic session. Knowledge engineers can also choose the appropriate level of granularity for the representation of causal chains, thus constraining the depth of search required prior to hypothesizing a particular failure. Moreover, the preferred order in which to consider candidate causes may be easily specified.

[Figure 2-2: A failure-mode aggregate.]

In general terms, the problem solver pursues a depth-first recursive strategy. Starting with an observed or determined failure, it seeks the cause of an occurring failure-mode by considering candidate causes (other failure-modes) referenced in the due-to slot of this failure-mode. Candidate causes can have three states: confirmed, disconfirmed, and unknown. Failure-modes are confirmed when the problem solver determines that they have occurred. If a candidate cause is disconfirmed, the problem solver moves on to consider another possibility. If a candidate cause is confirmed, the problem solver will consequently seek to determine its causes. This procedure continues until a terminal failure-mode is identified. Terminal failures, those without instantiated due-to slots, are typically repairable faults.

Following the example in figure 2-1, let's assume that a short raster was observed. In this case, the problem solver would first consider "defective power supply diode" as the cause of "short raster". If this were ruled out, it would proceed to consider a "vertical sweep failure". If a vertical sweep failure were to be confirmed, or was unknown, the problem solver would proceed to consider its causes -- "vertical sweep generator failure" and "vertical output failure."

As new failure-modes come up for consideration, the problem solver chooses a method of confirmation provided by the knowledge base developer. It may be a direct test, a rule-based inference procedure, or the disconfirmatory recognition (modus tollens) that a necessary consequent of the failure-mode had not occurred. If the failure-mode cannot be confirmed or disconfirmed, the problem solver will nevertheless proceed to examine potential causes. If a failure-mode can have multiple causes, the diagnostic analysis will not terminate until all potential candidate causes are evaluated.

Apart from the failure-mode hierarchy, the problem solver can also be driven by decision-nodes and data-gathering activities. The former are used to represent conventional diagnostic decision logic. Decision-nodes represent steps in a conditional sequence of tests which terminate in a decision to rule out, confirm, or focus on a failure-mode. TEST's ability to integrate test-driven and failure-mode-driven diagnosis has been critical to knowledge acquisition, as both approaches are typically prevalent in the procedures used by technicians and referenced by manuals.
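The depth-first search just described can be summarized in a short sketch, assuming the illustrative FailureMode class above; confirm() stands in for whatever confirmation method the knowledge base developer supplied:

```python
def diagnose(fm, confirm):
    """Recursively seek terminal (repairable) causes of an occurring failure-mode.

    `confirm` maps a FailureMode to "confirmed", "disconfirmed", or "unknown" --
    by direct test, rule-based inference, or modus tollens, as the paper describes.
    """
    if not fm.due_to:                      # terminal failure-mode: a repairable fault
        return [fm]
    conclusions = []
    for cause in fm.due_to:                # candidate causes, in the specified order
        cause.state = confirm(cause)
        if cause.state == "disconfirmed":  # move on to the next possibility
            continue
        # Confirmed -- and, per the paper, unknown -- causes are pursued further,
        # and all candidates are evaluated before the analysis terminates.
        conclusions.extend(diagnose(cause, confirm))
    return conclusions
```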
Data-gathering activities are used when tests should be run as a matter of convenience rather than for immediate diagnostic purposes. For instance, if dismantling is required for a particular test, it may be desirable to run other tests that require similar dismantling before reassembly, even though the latter tests are not of immediate relevance.

Additionally, TEST allows users to volunteer unsolicited information, as well as to dynamically change the course of the diagnosis. Since troubleshooting systems tend to be highly interactive, it is desirable to take advantage as much as possible of user input, particularly the human ability to notice diagnostically critical information, even though the system may not be asking for it. Moreover, the hunches of experienced technicians can often prove valuable in reducing diagnostic search. By supporting input for both hunches and observations, TEST makes use of its human partners, and is perceived as being more user friendly and less frustrating to use.

Finally, the problem solver supports a belief maintenance system that is used to provide explanation and an undo facility. The latter provides the ability to selectively modify any prior input. The impact of a modification is propagated through the belief system, possibly resulting in a change of diagnostic focus.

The troubleshooting task, like any other, can be characterized in terms of standard procedures and default knowledge which must be altered when special considerations apply. Rules provide the means to dynamically change a knowledge base under specified circumstances. Rules are conditional expressions of the form "IF (condition) THEN (action)." The condition is a boolean combination of (schema, slot, value) triples, each of which represents a piece of information in the knowledge base. The action specifies a value or change in value for a schema/slot location. Rules may be characterized as immediate or on-focus. Immediate rules act as demons, firing as soon as their conditions are satisfied. On-focus, or goal-driven, rules are evoked only when the rule is relevant to the current focus of the problem solver.

Causal-modeling rules are used to alter the failure-mode hierarchy. For instance, when the LV power supply is under consideration, and it is known that the television can emit sound, X501 and R505 can be removed from the due-to slot of possible causes, as these power sources disable the audio on failure (see figure 2-1). Conjunctive causes are similarly modelled with causal-modeling rules. That is, a failure-mode could be added to a due-to list only under the condition that another failure-mode has been determined to occur.

Procedural rules are used to modify the several kinds of procedural and methodological knowledge that may be represented in a TEST knowledge base. Most critical are the rules used to modify the failure-mode and decision-node networks. Background information acquired during execution may suggest altering the order in which failure-modes are considered. This is typically done to more closely reflect evidential impact on the likelihoods for each failure-mode. A rule would be used, for instance, to indicate that "a bad picture tube" should be considered prior to "LV power supply problem" when the raster is missing and the picture tube is of a series known to be defective. (A sketch of the rule form appears below.)
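The following sketch shows the triple-based rule form with the causal-modeling example from above; the representation is hypothetical, ours rather than TEST's:

```python
# A rule is IF <boolean combination of (schema, slot, value) triples>
# THEN <assignment to a schema/slot location>.

def matches(kb, schema, slot, value):
    """True if the knowledge base records `value` at (schema, slot)."""
    return kb.get((schema, slot)) == value

def causal_modeling_rule(kb):
    """If the TV emits sound, X501 and R505 cannot be the cause of an
    LV power supply problem, so prune them from the due-to slot."""
    if matches(kb, "television", "emits-sound", True):
        due_to = kb[("LV-power-supply-problem", "due-to")]
        kb[("LV-power-supply-problem", "due-to")] = [
            c for c in due_to if c not in ("X501", "R505")]

kb = {("television", "emits-sound"): True,
      ("LV-power-supply-problem", "due-to"): ["X501", "R505", "W502"]}
causal_modeling_rule(kb)   # leaves only ["W502"] as a candidate cause
```

A procedural rule would similarly rewrite the order of a due-to list rather than its membership.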
In the context of machine diagnosis, such rules provide a mechanism for ensuring that failures due to part wear are investigated first for older machines, but only after manufacturing parts problems in the case of newer machines.

Procedural rules also facilitate the process of modeling conventional decision logic. Rules overlaid on decision-nodes may alter the transition path to a subsequent decision, or respecify which failure-modes are confirmed or disconfirmed as new information is acquired at each decision-node. Thus, TEST permits developers to focus on the default decision logic without worrying about working atypical alternatives into the network. These are easily added as special case rules.

Finally, relevance rules may be used to filter the knowledge base by deactivating objects. Rules attached to failure-modes, for instance, can remove the failure-mode from consideration during a diagnosis. This feature is used, for example, in multiple-model knowledge bases when the component part associated with a failure-mode is not actually used in the model (or manufacturing run) represented by the unit presenting the fault.

TEST provides an effective approach to modeling troubleshooting knowledge. Domain-dependent knowledge bases can be built using concepts familiar to diagnostic technicians and design engineers. Default diagnostic strategies as well as special case heuristics can be easily represented in the knowledge base, along with the causal relations that underlie diagnostic reasoning. By structuring the knowledge base around the failure-mode concept, significant modularity and maintainability is achieved.

TEST offers a unique mixture of schematic and rule-based reasoning. Unlike most of its predecessors, TEST provides mechanisms for readily expressing search behavior, as well as for adapting search to newly acquired information. Because search behavior is determined by heuristic rules, TEST's performance is better than systems which must compute alternative hypotheses on the basis of a causal model.

Several features of TEST may carry over well to the design of other application shells. These include the distinction between model and problem solver, as opposed to knowledge base and inference engine. The problem solver knows much more about the task domain, and the model assumes much more about the problem solver, than the knowledge base/inference engine distinction implies. The problem solver is driven by the model, and as such, preferences for various control strategies can be expressed in the model. Secondly, TEST's success as a knowledge engineering tool has depended on the use of domain-familiar concepts. This has enabled knowledge engineers to easily map information from expert sources into the knowledge base, and to explain to their experts the import of the representations used. Finally, a model within which heuristic search constraints may be expressed appears critical to the performance of model-driven systems.

References

[Bylander et al. 83] Bylander, T., Mittal, S., and Chandrasekaran, B. CSRL: A Language for Expert Systems for Diagnosis. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence. 1983.

[DeKleer and Brown 84] DeKleer, J. and Brown, J.S. A Qualitative Physics Based on Confluences. Artificial Intelligence, 1984.

[Hofmann et al. 86] Hofmann, M., Caviedes, J., Bourne, J., Beale, G., and Broderson, A. Building Expert Systems for Repair Domains. Expert Systems 3(1), January, 1986.

[Kahn 87] Kahn, G.S.
From Application Shell to Knowledge Acquisition System. In Proceedings of the International Joint Conference on Artificial Intelligence. 1987.

[Kahn and McDermott 86] Kahn, G.S., and McDermott, J. The MUD System. IEEE Expert 1(1), Spring, 1986.

[Patil et al. 81] Patil, R.P., Szolovits, P., and Schwartz, W. Causal Understanding of Patient Illness in Medical Diagnosis. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence. 1981.

[Pepper and Kahn 86] Pepper, J. and Kahn, G.S. Knowledge Craft: An Environment for Rapid Prototyping of Expert Systems. In Proceedings of Artificial Intelligence for the Automotive Industry. 1986.

[Pepper and Kahn 87] Pepper, J. and Kahn, G.S. Repair Strategies in a Diagnostic Expert System. In Proceedings of the International Joint Conference on Artificial Intelligence. 1987.

[Pepper and Mullins 86] Pepper, J. and Mullins. Artificial Intelligence Applied to Audio Systems Diagnosis. In Proceedings of the International Conference on Transportation Electronics. 1986.

[Pople 82] Pople, H. Heuristic Methods for Imposing Structure on Ill-structured Problems. In Szolovits, P. (editor), Artificial Intelligence in Medicine. Westview Press, 1982.

[Shortliffe 76] Shortliffe, E. Computer-Based Medical Consultation: Mycin. Elsevier, 1976.

[Thompson and Clancey 86] Thompson, T. and Clancey, W.J. A Qualitative Modelling Shell for Process Diagnosis. IEEE Software 3, March, 1986.

[Tinnell 71] Tinnell, R.W. Television Symptom Diagnosis. Howard W. Sams & Co., 1971.

[VanMelle et al. 81] Van Melle, W., Scott, A.C., Bennett, J.C., and Peairs, M. The Emycin Manual. Technical Report, Stanford, 1981.

[Weiss et al. 78] Weiss, S., Kern, K.B., Kulikowski, C.A., and Amarel, S. A Model-Based Method for Computer-Aided Medical Decision-Making. Artificial Intelligence, 1978.
Script-Based Reasoning For Situation Monitoring

Sharon J. Laskowski and Emily J. Hofmann
The MITRE Corporation
C3I Artificial Intelligence Center
7525 Colshire Drive
McLean, Virginia 22102-3481

Abstract

An expert system that monitors complex activity requires knowledge that is difficult to capture with standard rule-based representations. The focus of this research has been to design and implement script-based reasoning techniques integrated into a rule-based expert system for situation monitoring to address this problem. The resulting expert system, Scripted ANalyst (SCAN), for battlefield monitoring has the capability of reasoning about tactical situations as they develop and providing plausible explanations of activities as inferred from intelligence reports. Sequences of events are monitored through the use of script templates which are matched against events and the time relations between events. SCAN detects causal relations between events, generates multiple hypotheses, fills in information gaps, and sets up expectations about time-dependent events--all features a simple rule-based expert system cannot easily provide.

1. Introduction

This paper describes the research and development of artificial intelligence paradigms and structures needed to build an expert system decision aid for army tactical intelligence staff as they hypothesize about a battlefield situation. In monitoring the situation to support force command and control (C2) decision-making, a military intelligence analyst must not only interpret the force disposition given reports from multiple sources but must also formulate a sense of how a situation is developing over a long period of time. An expert system designed to monitor sequences of events can help the analyst keep track of the many possible explanations of ongoing actions and intentions and recognize any unusual or unexpected activity.

However, such a system must have internal representations to deal with sophisticated time and order relationships so that it can generate multiple hypotheses, fill in information gaps, and set up expectations. It must notice trends and shifts in the action and activity and provide clear explanations of its inferencing. This inferencing includes the ability to expect that certain events have occurred based on information about related events without necessarily asserting these through the rule base. All this must be done in an environment where information is potentially sparse or misleading.

This research was supported by the Rome Air Development Center, under Contract No. F19628-86-C-0001.

Scripted ANalyst (SCAN) is an expert system with these capabilities, made possible by integrating script-based event matching and time reasoning into a previously developed rule-based system, ANALYST, which supplies a situation map of enemy force dispositions and a critical indicator monitoring capability. The main contributions of this research are the script and time representations as coordinated with the rule-based expert system, and the construction of an inference mechanism that allows an expert system to reason about causal relationships as recognized from sequences of events. This approach can also be viewed as a first step to plan recognition: detecting the adversary's goals.

The next section discusses the domain, past MITRE developments that are the foundation of our research, and other relevant work. Section 3 outlines the issues that must be addressed for situation monitoring expert systems.
Definitions and details of the script representation are presented in Section 4, while the control of the inferencing process is described in Section 5. Finally, we summarize research issues for more extensive applications of script representations and plan recognition to expert systems.

2. Background

Previously, MITRE researchers developed ANALYST [3,6,9], an expert system that is able to infer real-time situation displays from multiple sensor sources and also processes mission-oriented information requests. ANALYST answers these requests with a rule base of static critical indicators that refer to the force disposition. ANALYST is also part of a project called ALLIES [5], designed to construct a set of cooperating expert systems that perform a portion of the C2 reasoning process. ALLIES includes a military operations planning expert system (OPLANNER) and an object-oriented simulation of the war (Battlefield Environment Model). In the context of ALLIES, it became apparent that ANALYST was not capable of, but had the potential for, in-depth analysis that would give a clearer picture of the adversary's activities and intentions.

The SCAN design was inspired by scripts as applied to natural language processing [10,11]. However, there are few situation monitoring expert systems that have been developed for domains as volatile as the SCAN application. In [4] a plan recognizer with a simple goal detector for analyzing aircraft threat is described. Blackboard architectures have been used in domains such as speech processing, but these do not fully address the problems that an expert system must handle in a rapidly changing environment. Fall's work [8] uses a representation called a "model", similar to scripts, to propagate evidence through time for situation monitoring, but without a robust interface to a rule-based expert system. The Ventilator Manager (VM) program [7] is an example of a MYCIN-like system with an underlying state transition model for interpreting data in an intensive care unit, but was found to be inadequate for monitoring data continuously over time. The power of SCAN lies in its ability to monitor continuously changing situations with missing, inaccurate and/or deceptive sensor data and to analyze multiple adversaries, while at the same time preserving the desirable characteristics of a rule-based expert system.

3. Motivation

In any situation monitoring expert system applied to a rapidly changing environment, time becomes an essential element that must be integrated into the knowledge used to reason about the situation. The causal relationships between events and the expectations of events taking place in specific sequences all play a role in painting a picture of a situation given partial data about a set of activities. It is also crucial to have a well-organized history of events to support any conclusions such a system infers.

ANALYST is goal-driven by user information requests, which are static critical indicators represented by propositions. Answers to the requests have likelihood values (Dempster-Shafer likelihood intervals) that indicate the configuration on the battlefield based on the current situation map (SITMAP) of military units.
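As a rough illustration of such likelihood intervals (a generic Dempster-Shafer-style sketch, not ANALYST's actual code), an answer carries a lower bound (belief) and an upper bound (plausibility) on its likelihood:

```python
from dataclasses import dataclass

@dataclass
class LikelihoodInterval:
    """Dempster-Shafer style bounds: belief <= true likelihood <= plausibility."""
    belief: float        # support committed to the proposition
    plausibility: float  # 1 minus support committed against it

    def uncertainty(self) -> float:
        # Width of the interval: how much evidence is still uncommitted.
        return self.plausibility - self.belief

# A critical indicator with some supporting and little refuting evidence:
armor_massing = LikelihoodInterval(belief=0.4, plausibility=0.9)
print(armor_massing.uncertainty())   # 0.5 -- a wide interval, sparse evidence
```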
ANALYST can neither recognize activity as signifying an action developing over several snapshots of the SITMAP nor "bootstrap" itself into suggesting some explanation of the progress of the activity and how it relates to other past, present, or future activities. While it is possible to place the knowledge about sequences of events into ANALYST rules, the rules would be very complex because the sequences are long and the causal dependencies are not always precisely sequential. Events have duration and may overlap in many different ways. Rules would require long chains of antecedents or long chains of rules to hook up these antecedents. Knowledge engineering and debugging these rules would be non-intuitive and quite difficult, and explanations would be confusing to a user. In other words, the major benefits of using expert system technology over more traditional software techniques would be lost.

For the first design and implementation of SCAN, the research concentrated on the script knowledge representation and control to illustrate that, indeed, the type of knowledge described above could be represented simply. Two major assumptions were made. First, SCAN does no complicated spatial reasoning; interesting areas on the situation map are pre-defined based on the current scenario. This focuses where the major activity is located and simplifies script searching and matching. The second assumption is to use the uncertainty representation (likelihood intervals and likelihood probabilities) currently in ANALYST with little modification. These assumptions are re-examined in Section 6.

4. The Script-Based Approach

The notion of representing sequences of events as templates or scripts is analogous to representing stereotypical information for natural language processing as explored in Schank's research [10]. SCAN is a "goal detector" that could be used to guide the search of a plan recognizer similar to the plan understanding described in [11]. From the SCAN viewpoint, scripts are sequences of events, an event being an activity occurring for a specific duration that is detectable. Each event is described by another script or a proposition and viewed as an indicator that the parent script is occurring. We use this script paradigm to create a knowledge structure that acts as an event template to be matched against a series of time slices comprised of SITMAPs and associated inferences.

This matching process is complex for several reasons. Any tactical maneuver unfolds as a sequence of (possibly) overlapping steps or events, and they must be fit together like pieces of a puzzle as they are uncovered. Because events are recognized by SCAN as a discrete measurement of continuous occurrences, the start and end times of events are rough approximations. Matching must then take place based only on guesses as to the ordering and durations of the events. Some events might not have been recognized at all.

4.1 The Script Knowledge Structure

The representation for script knowledge was designed to be expressive enough not only for monitoring and plan recognition applications, but also for ALLIES planning and simulation purposes, where the impreciseness of sensors is not a problem. For the purpose of this paper, we illustrate the script knowledge structure and control with a football example.
Although a tactical maneuver in football might last for only a few seconds as opposed to several hours in the military domain, the matching algorithms used for monitoring and guessing the adversary's actions are similar.

A script knowledge base is stored as a set of lists in a file which are then accessed through a frame language. Each script entry has the following format:

(defscript script-name
  script-elements-list
  script-bindings-list
  necessary-preconditions-list
  sufficient-preconditions-list
  script-analysis)

The script-elements-list contains names of sub-scripts which are either pointers to other scripts (and may be used to build up a taxonomy of scripts) or names of propositions which will be monitored as information requests. Each element in script-elements-list has the format:

(name type bindings-list preconditions-list)

The type is used to specify whether the element is another script or a proposition. The bindings-list contains variable names and their values, if present. The bindings are used to describe the context of the instantiation of a script or proposition (for example, location, time, or direction of movement) and the default constants such as the duration and weight of an event. The preconditions-list has predicates which may refer to the script-name of other script-elements in the defscript. This preconditions-list is used to specify entry conditions and time relations between the event the element represents and the other events in the parent script. The time predicates are based on Allen's temporal language [1,2].

The script-bindings-list has the same format as the bindings-list of the script-elements-list, describing the context of the instantiated script. The two preconditions-lists are made up of predicates and allow the distinction between necessary preconditions based on predicates that remain static as opposed to dynamic preconditions. For example, a necessary precondition for a particular football play could be the team possessing a specific capability such as an extremely strong running back, whereas sufficient preconditions could be field position, yardage to go, and time remaining in the game in the current context. This allows greater efficiency in searching scripts--there is no need to monitor a script in a context that does not meet the necessary preconditions. The script-analysis describes the evaluation function used to determine how well a script matches a current situation, that is, to calculate its likelihood.

Typical examples of SCAN football script knowledge are a counter-tray-play and a running-back-fake, shown in Figure 1. All time units are in seconds. The counter-tray-play contains six events, two of which are sub-scripts--a running-back-fake and a quarter-back-fake. An instantiation of the counter-tray-play script has several contextual bindings: the field location of the play, the direction of movement, and the time. The likelihood, actual duration, and currently occurring event are all stored under this context in a script instance frame.

4.2 Time Representation

There are several issues in time representation that must be dealt with in developing a script knowledge representation and script matching heuristics. Time must be portrayed in a way that captures the "fuzziness" of the domain. An event must not only be recognized as happening with a certain degree of likelihood, but its start and end times must be approximated as well.
Each event has a specific duration--it is not enough to postulate a point in time. The time formalism developed by Allen [1,2] of time intervals and relations between intervals provides a language well-suited to SCAN's domain. There are two items not in this formalism but required by SCAN: the duration of an event--how long (doctrinally) an event is supposed to occur--and relative start times between events.

(defscript counter-tray-play
  (script-name counter-tray-play)
  (script-elements
    ((end-in-motion proposition
       (>location >direction >time (>duration 2)))
     (snap-ball proposition
       (>location >direction >time (>duration 2))
       ((meets end-in-motion)))
     (running-back-fake script
       (>location >direction >time)
       ((after snap-ball 1)))
     (quarter-back-fake script
       (>location >direction >time)
       ((equals running-back-fake)))
     (tackle-pull proposition
       (>location >direction >time (>duration ...))
       ((overlaps quarter-back-fake 0.5)))
     (guard-pull proposition
       (>location >direction >time (>duration ...))
       ((equals tackle-pull)))))
  (bindings (>location >direction >time))
  (necessary-preconditions t)
  (sufficient-preconditions (offensive-posture short-yardage))
  (script-analysis time-averaged-script-likelihood))

(defscript running-back-fake
  (script-name running-back-fake)
  (script-elements
    ((running-back-turns proposition
       (>location >direction >time (>duration 0.5)))
     (running-back-reverses proposition
       (>location >direction >time (>duration 0.5))
       ((meets running-back-turns)))))
  (bindings (>location >direction >time))
  (necessary-preconditions t)
  (sufficient-preconditions ...))

Figure 1: SCAN football scripts

Allen's theory of time is supported by an interval-based temporal logic and a set of properties that can hold over the intervals. In SCAN, the notion of the time of an event is described by a temporal interval (t1, t2), where t1 is the start time and t2 is the end time of the occurrence. There is a basic set of relations that can hold between temporal intervals. "Meets" is a primitive relation such that if interval i meets interval j, i's end time is equal to j's start time. Twelve other relations can be described in terms of meets, such as: after, overlaps, equals, and during. For example, in Figure 1, the tackle-pull overlaps the quarter-back-fake by .5 seconds.

We have assumed that the script-elements-list in the defscript is ordered by increasing values of the start times of the script-elements, and hence some time relations are implicit. As a result, it is not necessary to list all time relations in the preconditions of an element, only those that cannot be inferred from the preceding element's preconditions and its preceding elements, given the duration information. This simplifies the SCAN scripts and is a natural way to present the script knowledge, but the relations could be made entirely explicit if this was desirable for a different application.

5. Control Architecture

The control architecture shown in Figure 2 consists of the script inference procedure, which includes a script matching function and monitoring facilities. The heart of SCAN lies in the script control. At the end of an ANALYST cycle--that is, reading in a set of reports, creation of a time slice from data fusion, and calculation of the status of the information requests (propositions from script-elements)--the script control evaluates the status of the scripts, noting new scripts that have started up and monitoring the likelihood of the scripts already being monitored.
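Before turning to the control details, a minimal sketch of the interval relations of Section 4.2 may help; the encoding is an assumption of ours, not SCAN's own. Intervals are (start, end) pairs and a few of Allen's relations are tested directly:

```python
# An event's time is an interval (t1, t2): t1 = start, t2 = end.

def meets(i, j):
    """Primitive relation: i's end time equals j's start time."""
    return i[1] == j[0]

def overlaps(i, j, amount=None):
    """i starts before j, and they share a stretch of time
    (optionally of a specific length, e.g. 0.5 seconds)."""
    shared = min(i[1], j[1]) - max(i[0], j[0])
    if i[0] < j[0] < i[1]:
        return shared == amount if amount is not None else shared > 0
    return False

def during(i, j):
    """i lies strictly inside j."""
    return j[0] < i[0] and i[1] < j[1]

quarter_back_fake = (1.0, 2.0)
tackle_pull = (1.5, 3.0)
print(overlaps(quarter_back_fake, tackle_pull, 0.5))   # True, as in Figure 1
```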
All rule-based inferencing is done through the request processing under the guidance of the script control. As the SITMAP is monitored, each script must be matched against a particular context and time. These two values are used as a key for storing and retrieving the inferences about a script and its associated sub-scripts and propositions. When a script is matched for the first time, it is instantiated with a likelihood.

Script likelihood is used to compare how well a script matches the situation relative to other scripts. The likelihood value is between 0 and 1, and a script is typically considered to be occurring if the value is greater than .5. This cutoff value is called the script-occurring-cutoff. As likelihoods change from time slice to time slice, they are stored using the script instance's context and the current time so that a history is maintained. The start and end times of scripts and propositions, as well, are calculated based on when the likelihood is above the cutoff.

As SCAN operates, it maintains two lists of scripts. The monitored-scripts list holds script-context pairs whose necessary preconditions are met. This list is used to guide the search for scripts that match the current situation. Active-scripts are script-context pairs with all preconditions met and likelihoods greater than the script-occurring-cutoff, that is, the scripts that appear to best assess the situation.

Figure 3 shows the control flow of SCAN and how it interfaces with the sensor fusor and request processor. Initially, ANALYST applies its fusion rules to reports, creating units on the SITMAP. Then, the monitored-scripts list is constructed by searching all scripts on all pre-defined areas of interest for those scripts whose necessary preconditions are met. As a script is added to the list, all script-elements that are propositions are set up as information requests whose values will be monitored across time slices. The initial processing of the requests for the first time slice involves backward chaining to calculate all their likelihoods.

After the initialization, the active-scripts list is built by searching through the monitored-scripts for scripts whose sufficient preconditions are met, calculating a likelihood for each of those scripts and placing a script on the active-scripts list if its likelihood is high enough.

When the monitoring phase is entered, a new time slice is built and all information requests are re-evaluated for changes through a forward chaining inference procedure. (Only rules that will alter the likelihoods of the current information requests are fired.) With the updated information available, each active script is re-evaluated to update its likelihood and determine if it should remain active. Because new scripts might be starting up at any point, the monitored-scripts list is examined again. At any point in time, the active-scripts can be interpreted as a set of hypotheses that explain the enemy's intentions.

Script likelihood calculation is based on the likelihoods of the script-elements, the time relations between elements, the history of likelihood values, and a current-element-pointer denoting the script-element in progress, if any.

[Figure 2: SCAN system architecture--the script control coordinating data fusion, the SITMAP display, request processing, and OPLANNER replies/updates.]
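A compressed sketch of this control cycle follows; it is illustrative only, the names follow the paper but the method bodies are stand-ins:

```python
def scan_cycle(scripts, monitored, active, time_slice, cutoff=0.5):
    """One SCAN monitoring cycle over a new time slice."""
    # Re-evaluate information requests (forward chaining in ANALYST).
    time_slice.update_requests()
    # New scripts may be starting up: admit script-context pairs whose
    # necessary preconditions now hold.
    for s in scripts:
        for context in time_slice.areas_of_interest:
            if s.necessary_preconditions(context):
                monitored.add((s, context))
    # Promote or demote: a script is active when all preconditions hold
    # and its likelihood exceeds the script-occurring-cutoff.
    for s, context in monitored:
        if s.sufficient_preconditions(context):
            likelihood = s.analyze(time_slice, context)      # script-analysis function
            s.record(context, time_slice.time, likelihood)   # keep a history
            if likelihood > cutoff:
                active.add((s, context))
            else:
                active.discard((s, context))
    return active   # the current hypotheses about the adversary's intent
```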
[Figure 3: SCAN script control flow--initialization, then the monitoring loop over monitored and active scripts.]

These factors are combined to determine a script's overall likelihood using weights specified in the bindings-list of the script and the function (for example, averaging) specified in the script-analysis.

6. Conclusions and Future Directions

Scripts do, indeed, provide a more intuitive knowledge representation for situation monitoring expert systems. SCAN, as an implementation of the script mechanism, is able to recognize trends on the battlefield. SCAN has a time representation and an interface to the rule-based ANALYST system that allows it to focus on the development of activities over time. However, additional experimentation is needed to explore the knowledge engineering of scripts. SCAN, at present, is very limited in its number of scripts and its matching techniques.

The assumptions that have been made for this first version of SCAN need to be re-examined if SCAN is to be more than a toy system. Spatial and terrain reasoning is an important aspect of SCAN's domain yet has been virtually ignored. The techniques for reasoning about uncertainty are quite ad hoc and should be changed to a representation with a firmer mathematical footing to avoid inconsistency and potential anomalies.

A goal for SCAN (and one reason for wanting intuitive knowledge representations) is to allow the expert to knowledge engineer SCAN directly. A knowledge editor with a sophisticated human-machine interface would be a step closer to achieving this. Finally, SCAN represents only part of the software that is necessary to generate and recognize details of plausible plans. Future work includes building a plan recognizer based on the planning techniques in the OPLANNER plan generator and guided by SCAN's script hypotheses to constrain the plan search space.

References

[1] Allen, J. F., Towards a general theory of action and time, Artificial Intelligence 23 (1984) 123-154.

[2] Allen, J. F. and Hayes, P. J., A common sense theory of time, IJCAI (1985) 528-531.

[3] Antonisse, H. J., Bonasso, R. P., and Laskowski, S. J., ANALYST II: A knowledge-based intelligence support system, MITRE Technical Report MTR-84W00220, April 1985.

[4] Azarewicz, J., et al., Plan recognition for airborne tactical decision-making, AAAI (1986), 805-811.

[5] Benoit, J. W., et al., An experiment in cooperating expert systems for command and control, Expert Systems in Government Conference, October 1986.

[6] Bonasso, R. P., ANALYST: An expert system for processing sensor returns, The First Army Conference on Knowledge-Based Systems for C3I, Army Model Management Office, Ft. Leavenworth, November 1981, 219-245.

[7] Buchanan, B. G., and Shortliffe, E. H., eds., Rule-Based Expert Systems, Addison-Wesley Publishing Co., 1984.

[8] Fall, T. C., Evidential reasoning with temporal aspects, AAAI (1986) 891-895.

[9] Laskowski, S. J., Antonisse, H. J., and Bonasso, R. P., ANALYST II: A knowledge-based intelligence support system, Second IEEE Conference on Artificial Intelligence Applications, December 1985, 552-563.

[10] Schank, R. and Abelson, R., Scripts, Plans, Goals and Understanding, Lawrence Erlbaum Associates, Inc., 1977.

[11] Wilensky, R., Planning and Understanding: A Computational Approach to Human Reasoning, Addison-Wesley Publishing Co., 1983.
Assessing the Maintainability of XCON-in-RIME: Coping with the Problems of a VERY Large Rule-Base

Elliot Soloway
Department of Computer Science
Yale University
New Haven, Connecticut 06520

Judy Bachant and Keith Jensen
Digital Equipment Corporation
Intelligent Systems Technology Group
Hudson, Mass. 01749

Abstract

XCON is a rule-based expert system that configures computer systems. Over 7 years, XCON has grown to 6,200 rules, of which approximately 50% change every year. While the performance of XCON is satisfactory, it is becoming increasingly difficult to change. With the goal of facilitating maintenance, DEC has developed a new rule-based language, RIME, in which the successor to XCON, XCON-in-RIME, is being written. This paper evaluates the potential for enhanced maintainability of XCON-in-RIME over XCON.

I. Introduction: Motivation and Goals

The following properties of XCON, an expert system, make it a particularly interesting system to examine:

- XCON performs a complex design task: XCON configures computer systems for DEC; XCON is used in a production mode, day in, day out--it has been used since January 1980.
- XCON is a very large rule-based system: currently there are approximately 6,200 rules in XCON, which draw on a database of approximately 20,000 parts.
- XCON undergoes constant change: 50% of the rules in XCON are changed each year.

While there is no problem with XCON's performance, DEC nonetheless decided to redesign XCON: as we will describe below, it has become increasingly more difficult to change XCON. Since XCON must continually be updated to reflect new products and new computing concepts coming out of DEC, it was deemed desirable to develop a rule-based architecture that would be more supportive of this type of activity. In this paper, then, we will present an assessment of the redesigned XCON, called XCON-in-RIME, from the perspective of maintainability; we will mount two types of arguments (an in principle argument and an in practice argument) to support the view that XCON-in-RIME will be more maintainable. While the discussion here necessarily will be focused on XCON and XCON-in-RIME, we feel that the issues we raise will become increasingly more relevant--and familiar--as expert systems grow in size and complexity.

¹The following are trademarks of Digital Equipment Corporation: XCON, RIME, XCON-in-RIME, DEC.

II. Problems With XCON's Current Rule-Based Architecture

XCON started as a relatively small, rule-based system (about 700 rules) (McDermott, 1982). It has grown to over 6,200 rules to meet the needs of DEC. Frankly, there is no end in sight: XCON will continue to expand and change. Unfortunately, the problems of continually updating such a large system do not grow linearly; moving from 700 rules to 6,200 rules, with 50% of the rules changing every year, makes for an exceedingly difficult software enhancement problem. In particular, two basic properties of production rules give rise to these difficulties:

Dynamic properties of rules: As the number of rules grows--and as different programmers work on the same rule-base, with different levels of understanding of what is in the rule base and why--inadvertently, rules that are not appropriate become triggered, resulting in unwanted and undesirable interactions among the rules.
In OPS5, control of rule firings is either implicit, in the domain-independent conflict resolution strategies (e.g., recency), or it is explicit but buried in the rules themselves (e.g., special tricks are used to cause one rule to fire over another).

Static properties of rules: There are no language restrictions on the number of functions a particular rule can perform. For example, in Figure 1, we see an Englishified XCON rule that performs a number of functions (i.e., actions on the right-hand side of the rule). This open-endedness causes problems as the rule-base grows. In particular, a typical strategy for extending the rule base to handle a new device is to copy the rules that worked for a similar device and then edit them to handle the new device. Unfortunately, in the editing process, one isn't always sure what the rationale for all the functions is. The result is that one often inadvertently changes a function and causes run-time problems; alternatively, one doesn't change the functions, but keeps them in the new rules--not feeling all that confident about why they are there.

In software engineering terms (Brooks, 1975), what happens to a large rule base as it changes over time is a "degradation in integrity": what may once have been a coherent rule base turns into a rat's nest of special rules, tightly coupled rules, etc. While software engineers have been able to label this problem, e.g., see (Soloway, 1987), they have not presented a general solution to it. Note that by "degradation" we do not mean that the performance of the system is necessarily impaired, e.g., XCON continues to function quite productively. However, from a rule-developer's perspective, the rule-base no longer has its initial unity of structure, of coherence, thus making additional changes increasingly more problematic.

Name: rl-unmounted-ubx-options
LHS: Describes certain types of cabinet mountable disk drives and
     information necessary to place one in a cabinet, cable it,
     and create output
RHS: * Marks the drive "temporarily configured",
     * marks the placement in the cabinet "used",
     * identifies all of the information for connecting the drive
       to its controller,
     * identifies the containing information between the drive
       and cabinet,
     * and creates output labeling

Figure 1: An XCON Rule

III. A New Language for Rewriting XCON: RIME

XCON-in-RIME is the successor to XCON; it will perform the same function as XCON but it is intended to be more maintainable, i.e., its integrity should be easier to preserve over time. RIME is the language in which XCON-in-RIME is being written. In turn, RIME produces OPS5 code. The major advance of RIME over, say, OPS5 is that one can more easily make domain knowledge explicit, both in the structuring of the rules themselves and in controlling the firing of the rules. (See also (van de Brug, et al., 1985; Chandrasekaran, 1983; Neches, et al., 1984; Clancey, 1983; Clancey & Letsinger, 1981).) Below, we identify the more important language features of RIME:

Problem Space - provides a domain-specific "bucket" into which to throw rules that have a common purpose. For example, in XCON-in-RIME there are 40 problem spaces, each dealing with one functional aspect of the configuration problem, e.g., CONFIGURE-MODULE, SELECT-MODULE, SELECT-CONTAINER. Some problem spaces are hierarchically organized, e.g., SELECT-MODULE and SELECT-CONTAINER are functions that must be done in order to effectively CONFIGURE-MODULE.

Problem Solving Method - a domain-independent sequence of steps to solve a type of problem; each problem space uses one problem solving method. Of the 6 current methods, the most frequently used one is PROPOSE/APPLY, which is, for example, the method used for achieving CONFIGURE-MODULE, SELECT-MODULE, and SELECT-CONTAINER. In effect, methods explicitly acknowledge that there are problem solving algorithms. For example, in the PROPOSE/APPLY method there are the following steps (note the following is a simplified description):

- PROPOSE Step: first, an operator (or operators) is suggested that might be relevant to the achievement of the current goal; e.g., in Figure 2 we present two rules that suggest a slot that might be used in finding a place for a drive. (Operators typically either represent objects, as in the example above, or actions, as in the case of proposing to go off to another problem space.)
- ELIMINATE Step: then, there are domain-specific rules that evaluate the appropriateness of the candidate operators and prune the operators down to one; e.g., in Figure 2 we present a rule that decides among the slots being proposed.
- APPLY Step: the selected operator is activated and executed.
- EVALUATE Step: finally, the goal is reviewed, and if it has been achieved, the problem space can be exited; if the goal was not achieved, then the difficulty is handled by going through the problem space once again, or by going to another problem space.

Note that within each step the actual order of "rule firing" or activation is irrelevant. Control is realized either by the domain-independent steps in a problem solving method or by the domain-specific, task-level control of entering another problem space (see the sketch below).

Subgroup - In order to help ensure "one function, one rule," there is an additional Dewey-Decimal-like, domain-specific classification imposed on the rules: each class makes explicit the function that the rule is performing. For example, in Figure 3 we present three rules, each of which performs a single function, along with the subgroup classification scheme. Note that this classification scheme is not related to control and implies nothing about the order of rule activation.

Rule Type - To help ensure the creation of rules consistent with the categories of permissible rules, there are rule templates that serve as guides for rule creation.

With this necessarily brief description of XCON and XCON-in-RIME, we can now proceed to assess the impact of XCON-in-RIME's new architecture on its maintenance.

IV. The Problems of Software Maintenance: In General and In XCON

In a software maintenance task there is an existing body of code that must be augmented in some manner. Typically, the augmentation is readily understood--the programmer knows what needs to be done. However, the problem is in understanding the existing body of code, and then knowing where and how to add the augmentation so as not to disturb the rest of the code. Thus, on the one hand, the maintainer's job will be facilitated if the code is "readable," while on the other hand, the code will remain in a readable state if the programming language facilitates "good programming practice."
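Referring back to Section III, here is a minimal, illustrative sketch of the PROPOSE/APPLY cycle; the function names are ours, not RIME's actual semantics:

```python
def propose_apply(problem_space, goal, state):
    """One schematic pass of the PROPOSE/ELIMINATE/APPLY/EVALUATE method
    over a problem space (a sketch, not RIME's implementation)."""
    while not goal.achieved(state):
        # PROPOSE: suggest candidate operators relevant to the goal;
        # within this step, the order of rule firing is irrelevant.
        candidates = [op for rule in problem_space.propose_rules
                      for op in rule.suggest(state)]
        # ELIMINATE: domain-specific rules prune the candidates to one.
        for rule in problem_space.eliminate_rules:
            candidates = rule.prune(candidates, state)
        if not candidates:
            # Handle the difficulty by moving to another problem space.
            return problem_space.escalate(goal, state)
        # APPLY: activate and execute the selected operator.
        state = candidates[0].apply(state)
        # EVALUATE: the loop condition re-checks the goal; on failure we
        # go around the problem space again.
    return state
```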
In effect, reading and writing are duals of each other, with the goal being "maintaining readable code." The question, then, is: what will enhance the readability/intelligibility of code? Two properties can be identified that directly influence this issue:

- Homogeneity: a small number of readily discernible plans are used over and over again to accomplish the various desired goals. A plan is a sequence of language constructs used to accomplish some stereotypic (i.e., oft-occurring) goal (Rich, 1981; Soloway & Ehrlich, 1984). In contrast, non-homogeneous code contains idiosyncratic, different solutions to similar goals. For example, in the configuration task, the code for laying out a cabinet, for different cabinets and different computers, should still have some common appearance. After all, the goals that need to be achieved are similar. Moreover, a reader should not be able to tell who wrote a particular chunk of code to realize a particular cabinet layout; different programmers should be using the same set of programming plans to realize comparable goals.

- Predictability: (1) the reader knows where to look next for an answer to a question, (2) the reader is not surprised by what he comes upon in the code, and (3) the reader can trust that nothing untoward is being done behind the scenes. For example, if one rule (in the case of production rule programming) serves more than one function, then point (3) may be violated. Similarly, if rules that are intended to serve a related function are distributed over the rule base, the reader may not realize he needs to look in a non-local region for key rules, and hence (1) may be violated.

Clearly, homogeneity and predictability are related: by definition, predictable code will be homogeneous, and vice versa. Homogeneity focuses on a property of the code itself, while predictability focuses on a property of the use of the code.

Rule Name: select-drive-space:propose:110f:lowest-drive-slot
LHS: Identifies the lowest numbered drive slot in the current cabinet
RHS: Proposes that slot

Rule Name: select-drive-space:propose:110j:exclusive-rackmount-drive-space
LHS: Identifies a drive slot in which only certain types of drives can
     be mounted; the current drive is one of those types
RHS: Proposes that slot

Rule Name: select-drive-space:eliminate:340c:prefer-exclusive-space
LHS: Two proposed slots, one of which has restricted use
RHS: Eliminates the other slot

Figure 2: Sample XCON-in-RIME Rules

Rule Name: configure-device:apply:200a:mark-device-configured
LHS: Unconfigured device chosen for activity
RHS: Marks its status "configured"

Rule Name: configure-device:apply:420a:update-contained-number
LHS: The current device has a "position-on-bus" identified
RHS: That is the number used to identify this device on the output,
     by filling in "contained-number"

Rule Name: configure-device:apply:430a:update-containing-info
LHS: The fact that the device being configured belongs in a cabinet,
     and the previously chosen cabinet
RHS: Identifies that the device is contained in this cabinet

#    LEVEL 1                  LEVEL 2             LEVEL 3
200  update-status-or-phase   component
420  update-containership     contained-number
430  update-containership     containing

Figure 3: Sample Subgroup Schema -- Rule Type: Apply

Those who have had to maintain XCON have repeatedly observed: (1) that XCON grows continually more non-homogeneous, and (2) that predictability in the XCON rule-base is exceedingly difficult. Why?
The basic problem seems to be the fact that what a code reader needs to know about a subset of rules, say, in XCON is not explicit in the rules; a code reader needs to talk to the person who created the rules and/or tap into the "institutional memory" of how the rules evolved to where they are. For example, XCON rule developers use various tricks to force rules to fire in a particular sequence. And still further, rule developers use certain rules for more than one purpose. Thus, rule developers are often uncertain as to what XCON rules are really doing, and therefore they are afraid to modify the rules, lest some unwanted behavior might result. The problem, in a nutshell, then, is that a rule developer needs to understand at least a major portion of the rules before he can effectively make some change to the rule base.

We hasten to point out that XCON rule developers are not malicious individuals, purposely trying to undermine the project with their non-homogeneous, idiosyncratic code! Rather, the problem is that there have been few external, explicit mechanisms to capture the otherwise implicit knowledge. For example, the OPS5 language encourages the style of programming that has evolved, e.g., there are no effective language constructs to aid the rule developer in creating rules that do not have some order dependence. Also, coding practices have evolved without clear guidelines as to how rules should be written. Again, this is not really a fault of the rule developers; the issues of homogeneity, predictability, and the nature of the task (a very large rule base that is continually modified) were not apparent when XCON started to evolve. In fact, the lessons learned in working on XCON led directly to XCON-in-RIME --- where weaknesses of the sort identified here are meant to be addressed. The bottom line is this: it is not surprising that XCON is very hard to maintain (e.g., change, add, delete rules): the language in which it is written, the architecture of the system itself, and the coding guidelines do not facilitate rule change. In what follows, we present a rationale for why XCON-in-RIME does address the specific weaknesses of XCON and thus why XCON-in-RIME should be more maintainable than XCON.

V. In Principle: Why XCON-in-RIME Should Be More Maintainable Than XCON

Over the years, the configuration group at DEC has had the need to "push around" a rule-base architecture, e.g., (Bachant & McDermott, 1984). This extensive experience has led directly to the design of RIME and to XCON-in-RIME. In what follows, we identify two major factors in which RIME/XCON-in-RIME differs from OPS5/XCON.

A. RIME as a Higher-Order Language

In order to appreciate the evolution of RIME/XCON-in-RIME from OPS5/XCON, one needs to look to the history of the development of programming languages. That is, programming languages have continued to evolve towards more problem-specific applications: e.g., FORTRAN (FORmula TRANslation) was considered a major improvement over assembly language, because it allowed scientists to write in their own, natural language: mathematical equations. Similarly, APL, the new crop of spreadsheet languages (e.g., LOTUS, MULTIPLAN), etc. have all been specifically crafted to allow domain specialists to talk to the computer in a language natural to the domain.
It would not be a distortion to view OPS5 as at the "assembly language level": after all, OPS5 is an almost totally domain-independent programming language, which allows the programmer considerable control, and hence leeway. In contrast, RIME has been designed specifically to reflect what has been learned about configuration, and about writing and changing large rule bases. For example, as mentioned before, XCON rule developers forced rules to fire in specific orders and still attempted to reuse subsets of rules for multiple goals. In contrast, RIME attempts to acknowledge this need explicitly, and has created explicit language constructs to deal with this type of situation. For instance, notions such as problem space, problem solving method, method step, and rule type have been created to help the rule developer in making explicit the heretofore implicit procedural relationships between rules. Thus, RIME can be viewed as more towards the "spreadsheet end" of the problem independent/dependent language continuum. As such, then, RIME could be considered a "higher-order language" in comparison to OPS5, much as FORTRAN is considered to be a "higher-order language" relative to assembly language.

The next question is this: what predictions can be made about maintaining XCON-in-RIME, written in RIME, on the basis of experience gained in maintaining systems written in other higher-order languages? In particular, how do higher-order languages help with respect to homogeneity and predictability of code?

* Homogeneity: The constructs of a higher-order language can be viewed as techniques for realizing oft occurring goals in the problems towards which the language is directed. Thus, similar problems in a domain will have similar solutions, which in turn makes for more homogeneous and less idiosyncratic code.

* Predictability: Given that the language constructs are more directed towards problems in the domain, the decomposition in the code tends to reflect the decomposition in the problems more explicitly. Thus, it should be easier to identify where subgoals are achieved, and hence where code can be changed.

Given the positive effects promised by the use of higher-order languages, it would be remiss on our part not to point out that horrendous looking code has been written in higher-order languages. Nonetheless, while hard numbers are few and far between, the overwhelming sense of the software engineering community is that the use of higher-order languages has had a positive impact on maintenance, e.g., (McGarry, 1982). Thus, on these grounds alone, it is quite reasonable to predict that XCON-in-RIME, written in RIME, a higher-order language, should be significantly easier to maintain than XCON, written in an arguably lower-level language.

B. The Programming Environment: SEAR and Coding Guidelines

Language constructs are not enough to ensure that rule developers use the constructs in the desired fashion. SEAR is a tool being developed that will directly interpret RIME code. Currently, SEAR provides on-line enforcement of coding guidelines, e.g., there are templates for each rule type which guide the creation of rules. The coding guidelines, and their enforcement via SEAR, correspond to "structured programming" practices advocated by the software engineering community as leading to more readable code. However, unlike these vaguely worded practices, SEAR's guidelines can be tuned to the specifics of the problem.
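As a concrete illustration of the difference, consider the following schematic contrast between an implicit OPS5-style control trick and the explicit constructs RIME provides. Neither fragment is real XCON or RIME code; both are hypothetical renderings, in Python data-structure form, of the ideas described above and of the "marker" trick the rotaters report in the next section.

# OPS5 style: control is smuggled in through the data.  A deliberately
# general fall-through rule deposits a marker fact, and a second batch
# of rules is gated on that marker.  Nothing in the rule says this is
# a phase boundary -- the reader must infer it.
ops5_style_rule = {
    "name": "misc-rule-17",
    "lhs": ["goal is configure", "marker phase-2 present"],  # hidden gate
    "rhs": ["... do several loosely related things ..."],
}

# RIME style: the same intent is declared.  The problem space, method
# step, and subgroup are explicit parts of the rule, so the phase
# structure is visible to any reader (and checkable by a tool such as
# SEAR).
rime_style_rule = {
    "problem_space": "select-drive-space",
    "step": "propose",            # a method step, not a hidden marker
    "subgroup": "110f",           # Dewey-Decimal-like function class
    "name": "lowest-drive-slot",
    "lhs": ["the lowest numbered drive slot in the current cabinet"],
    "rhs": ["propose that slot"],
}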
VI. In Practice: Providing Empirical Support For The Enhanced Maintainability of XCON-in-RIME

While an in-principle argument needs to be made, one would like to see at least some glimmers of evidence for the veracity of those in-principle claims. In this section, then, we present empirical evidence that does bolster the in-principle claims.

A. Data Collection Methodology

Our goal was to get a sense of the strengths and weaknesses of XCON-in-RIME from a user's perspective. We interviewed, on a daily basis, 8 rule developers who rotated into the XCON-in-RIME project for a short period of time (1-2 weeks). These sessions were recorded on audio tape. Interview data of this sort does not provide "statistical evidence" pro or con. However, anecdotal evidence of this sort has been found to be quite insightful and reliable, e.g., (Lewis, 1982; Littman et al., 1986). Frankly, it does not seem appropriate at this stage to go to all the trouble of carrying out a methodologically rigorous, controlled study --- the costs would be too high, and the benefits are not clear. Note that the observations described below were made on the basis of interviewing only 4 rotaters. However, the observations made by the additional 4 rotaters were in almost unanimous accord with those by the initial group of rotaters.

B. Observations On and Interpretations Of Rotaters' Experiences with XCON-in-RIME

The following is a distillation of comments made at the various debriefing sessions with the rotaters. In carrying out such a distillation, there is always the danger of oversimplifying or misrepresenting someone's comments. We have, of course, attempted to be as "fair" as we could in our interpretations. In what follows, we break the rotaters' comments down with respect to the issues of homogeneity and predictability of XCON-in-RIME code.

Comments on Homogeneity:

Observation of the Rotaters: "I can't tell who wrote the rules."

Interpretation: The rotaters all agreed that the rules they read in XCON-in-RIME had a certain homogeneity. In contrast, the rotaters all agreed that, by and large, they could tell who wrote a rule in XCON, i.e., that rules could differ substantially as a function of who wrote them. We feel that the homogeneity in the rules in XCON-in-RIME, in contrast to XCON, is quite telling: a reasonable interpretation of this difference is that XCON-in-RIME provides constraints and guidelines on the rule developers so that they tend to write similar looking rules. The similarity of the rules across different rule developers leads directly to the enhanced readability: when rule developer X sits down to read the current rule base, he will feel more confident that he has accurately assessed the content of the rules if the rules have a homogeneous nature. One of the major readability problems with the current rules in XCON is that rule developers have significant difficulty in figuring out what is being implied by the rules --- since different rule developers have different styles of writing rules.

Observation of the Rotaters: Each rotater had developed special rule writing "tricks" for creating rules in XCON.

Interpretation: We asked the various rotaters if they had developed any special techniques for coding rules in XCON. Each said they had. For example, one rotater introduced a mini-context mechanism by including a very general rule at the end of a set of rules; this general rule then set a marker, which, in turn, would allow another set of rules to fire.
Another rotater included extra conditionals in his rules in order to insure that a rule would fire at a special time. Thus, this point is similar to the last point: the rules in XCON were often coded by rule developers using idiosyncratic styles --- thus making the XCON rule base less homogeneous and less readable by other rule developers.

Observation of the Rotaters: The tricks the rule developers were using typically permitted them to control the order in which rules were firing.

Interpretation: While in their "pure" state production rules are not meant to have this almost algorithmic character, the reality is that problems may require this type of procedurality. In XCON-in-RIME this procedurality is explicitly acknowledged, and the problem spaces, steps, etc. allow a rule developer to explicitly encode the sequentiality that they wanted --- that they were using implicitly in XCON, and doing so with various coding tricks. Again, readability can only be enhanced if rule developers are given tools --- the explicit vocabulary of problem spaces, steps, etc. --- to help them in writing rules. The use of this explicit vocabulary facilitates the development of a homogeneous rule set.

Comments on Predictability:

Observation of the Rotaters: "I know where to go in the rule base to add some new rule."

Interpretation: A comment made almost universally by the rotaters was that they felt they could pinpoint where they needed to make a change in XCON-in-RIME's rule base. In contrast, a major problem with XCON's rule base was the difficulty in locating the place where the change needed to be made.

Observation of the Rotaters: "The rules are more organized."

Interpretation: By and large the rotaters all said something like the above statement. In unpacking what it means to be "organized," it appeared that the rotaters felt that rules had specific places to be, i.e., those familiar with the configuration task found the problem spaces, subgroup classification scheme, etc. to be natural organizational units. Again, this observation reflects both a broader understanding of the task of configuration as well as the encoding strategy dictated by the design of XCON-in-RIME, i.e., the fact that there are multiple classification levels using explicit criteria.

VII. Concluding Remarks

Based on the two types of arguments just presented, there is clearly a prima facie case that XCON-in-RIME should be easier to maintain than XCON. While that difference should be readily observable, it would nonetheless be more than academic to gather data on two types of measures:

* Human performance: How long does it take to change/add a rule(s)? How many bugs are made? How long does it take to identify and fix bugs?

* Assessing the readability status of the rule base: Does the rule base degrade as new rules are added/changed? How homogeneous are the rules after 6 months, 12 months, etc.?

However, in order to capture such data, we would first need to define some metrics (e.g., how does one quantify homogeneity?). Moreover, we should not expect that all the maintenance problems will be alleviated by XCON-in-RIME; after all, there are many problems yet to be discovered (e.g., what happens when the rule base hits 18,000 rules? 27,000 rules?). Finally, economic reasons dictate that an evaluation of the sort described here be carried out before one undertakes a redesign/reimplementation of a system of the magnitude of XCON.
Moreover, as expert systems continue to become more of an engineering enterprise, we will need to develop a range of evaluation tools: evaluation is an integral part of an engineering effort. Thus, besides evaluating XCON-in-RIME's design, we have attempted to articulate one strategy for carrying out a design evaluation: in-principle and in-practice type arguments.

Acknowledgements

We would like to thank Diane Muise and Michael Grimes for their continuing contributions to the RIME Project, and Virginia Barker and Dennis O'Connor for their unflagging support, encouragement, and above all, patience.

References

Bachant, J., McDermott, J. R1 Revisited: Four Years in the Trenches. AI Magazine, 1984, 5(3).
Brooks, F. The Mythical Man-Month. Addison-Wesley Publishing Co., 1975.
Chandrasekaren, B. Towards a Taxonomy of Problem Solving Types. AI Magazine, 1983, 4(1).
Clancey, W. The Advantages of Abstract Control Knowledge in Expert System Design. Proceedings of the AAAI National Conference on AI, Washington, DC, 1983.
Clancey, W., Letsinger, R. NEOMYCIN: Reconfiguring a Rule-based Expert System for Application to Teaching. Proceedings of the Seventh IJCAI Conference, 1981.
Lewis, C. Using the "Thinking-aloud" Method in Cognitive Interface Design. Technical Report RC 9265, IBM Watson Research Center, Yorktown Heights, NY, 1982.
Littman, D., Pinto, J., Letovsky, S., Soloway, E. Software Maintenance and Mental Models. In Soloway, E., Iyengar, S. (Eds.), Empirical Studies of Programmers, Ablex, Inc., 1986.
McDermott, J. R1: A Rule-based Configurer of Computer Systems. Artificial Intelligence, 1982, 19.
McGarry, F. What We Have Learned In The Past 6 Years: Measuring Software Development Technology. Proceedings of the Seventh NASA/Goddard Workshop on Software Engineering, Md., 1982.
Neches, R., Swartout, W., Moore, J. Enhanced Maintenance and Explanation of Expert Systems Through Explicit Models of Their Development. Proceedings of the IEEE Workshop on Principles of Knowledge-based Systems, Denver, CO, 1984.
Rich, C. Inspection Methods in Programming. Technical Report AI-TR-604, MIT AI Lab, 1981.
Soloway, E. "I Can't Tell What In The Code Implements What In The Specs". Proceedings of the Second International Human-Computer Interaction Conference, Honolulu, Hawaii, 1987.
Soloway, E., Ehrlich, K. Empirical Studies of Programming Knowledge. IEEE Transactions on Software Engineering, 1984, SE-10(5), 595-609.
van de Brug, A., Bachant, J., McDermott, J. Doing R1 With Style. Proceedings of the Second IEEE Conference on AI Applications, Miami, FL, 1985.
Louis I. Steinberg
AI/VLSI Project, Computer Science Department
Rutgers University, New Brunswick, NJ 08903

Abstract

Underlying any system that does design is a model of the design process and a division of labor between the system and the user. We are just beginning to understand what the main alternative models are, what their strengths and weaknesses are, and for which domains and tasks each is appropriate. The research reported here is an attempt to further that understanding by studying a particular model, the model of design as top-down refinement plus constraint propagation, with the user making control decisions and the system carrying them out. We have studied this model by embodying it in VEXED, a design aid for NMOS digital circuits, and by experimenting with this system. Our primary conclusion is that this model needs further elaboration, but seems like a good basic model on which to build such systems.

The task of designing something, e.g. a circuit, a program, or a mechanical device, is both intellectually challenging and economically important. It also requires large amounts of knowledge of a number of different kinds. Thus it is an important domain for AI, both in terms of building useful systems and in terms of understanding basic principles. A number of researchers have focussed on developing useful systems to aid in some specific task in some specific domain. These include [Parker and Knapp, 1986; Bushnell and Director, 1986; Brewer and Gajski, 1986; Kowalski, 1985; Joobani and Siewioriek, 1985; Kim and McDermott, 1983]. However, underlying any such system there is either implicitly or explicitly a model of the design process, i.e. of the stages a design goes through between initial givens and final product, and of the operations that move it from stage to stage. We are just beginning to understand what the main alternative models are, what their strengths and weaknesses are, and for which domains and tasks each is appropriate.

The work reported here, like that of [Brown et al., 1983; Tong, 1987], is an attempt to extend this understanding by explicitly studying a particular model of the design process. (This work is being supported by NSF under Grant Number DMC-8610507, and by the Rutgers Center for Computer Aids to Industrial Productivity, as well as by DARPA under Contract Numbers N00014-81-K-0394 and N00014-85-K-0116. The opinions expressed in this paper are those of the author, and do not reflect any policies, either expressed or implied, of any granting agency.) This model can be summarized by the equation,

DESIGN = TOP-DOWN REFINEMENT + CONSTRAINT PROPAGATION

Ideally, in designing a complex structure, one would like to use top-down refinement: first decompose the structure into a few main pieces and completely define the interfaces between the pieces, so that the design of each piece becomes a totally independent sub-problem. Each can be designed separately, and the pieces simply plugged together to solve the original problem. Unfortunately, until we explore the space of possible designs for the pieces, it is often impossible to know exactly what the interfaces should be. One solution to this is common practice among human designers, and has also been used by Stefik in the Molgen system [Stefik, 1981]: leave the interfaces only partially specified. As you proceed with the design, decisions you make while working on one piece will further constrain what the interfaces of that piece must be, and thus constrain the alternatives for designing other pieces.
We refer to this process of inferring how decisions at one place put constraints on options elsewhere as "constraint propagation".

In addition to a model of the design process, any design aid involves a division of labor between the system and the user. In systems that are to be fully automatic the division is simple: the system does it all. We, however, have been focussing on interactive systems. In particular, our approach has been to leave control decisions in the hands of the user, but leave all other processing to the system. That is, the user chooses which piece to refine next, out of all those still needing further refinement, and also chooses which way to refine it, out of all the alternatives that the system knows about. The system keeps track of which pieces need refining and what the alternative refinements are for a given module, and also does constraint propagation. This division of labor seems to build on the strengths of each party, making the computer responsible for completeness and consistency and the human responsible for strategy.

This model and division of labor are quite appealing, but also quite simple. Indeed, it soon became clear that they are too simple, and would have to be augmented to handle realistic tasks. However, our research strategy has been to stay with this model as much as possible, to see how far we can push it, to see where it fails, and whether the failures can be fixed by further elaboration of the model or whether they require starting over with an entirely different model.

We first tested the model by using it as the basis for a specific design aid, VEXED (for Vlsi Expert EDitor), in a specific domain, digital circuit design. More recently, we have extended the test by using the same model (and indeed almost entirely the same code, but with different knowledge bases) to build a design aid in another domain, mechanical design [Langrana et al., 1986]. This paper discusses what we have learned from implementing and testing VEXED. The next section describes VEXED further, and the final section discusses our results and conclusions.

First we will describe the way VEXED embodies this model of design: how it represents the circuit being designed, how it does refinement and how it does constraint propagation. Then we will show an example of VEXED's use. Finally we will discuss the implementation status of VEXED and describe the experiments we have done.

A. Embodying the Model of Design

To embody our model of design, VEXED must represent both the structure and operation of the partially-designed circuit, and must be able to carry out refinement and constraint propagation. We will deal with these issues in that order.

VEXED represents the structure of a circuit in a fairly standard way. A module represents either a single component or a group of components being viewed as a functional block. A data-path similarly represents either a single wire or a group of wires. The operation of a circuit is represented in a somewhat less standard way. The signal on a given data-path is called a "data-stream", and is thought of as a sequence of "data elements", e.g., a sequence of bits or characters. An individual element is referred to by its "subscript", i.e. its position in the sequence. Elements have a number of "features", including Type (e.g. Boolean), Data-Value (e.g. FALSE), Encoding (how the abstract data-type is encoded as voltages), and various timing-related features. For a further discussion of these representations, see [Kelly, 1985].

VEXED's knowledge of refinement methods is embodied in a set of "refinement rules", e.g., INCLUDE-MEMORY: IF the output at time t2 depends on an input at time t1, THEN one way to refine the module is into a memory, which holds the value from t1 to t2, and another module, which uses this stored value at time t2 to compute the output. (This, of course, is an English paraphrase of the formal notation.) The IF part of the rule describes the class of modules that this refinement method applies to. The THEN part describes how to do the refinement: the submodules, their initial specifications (to be augmented later by constraint propagation), and how they are connected. It is important to note that these refinement rules describe legal, correct implementations, but not necessarily optimal or even preferred implementations. They define the "legal moves" in the search for possible circuit implementations, but not a strategy for choosing among alternatives. It is also worth noting that in VEXED refinement involves structural decomposition, breaking a module into its pieces, while in Molgen [Stefik, 1981] refinement involves going from a more abstract operation to a more specific one.
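The following is a minimal sketch of how a refinement rule such as INCLUDE-MEMORY might be rendered as data: an applicability test (the IF part) plus a decomposition into submodules and connections (the THEN part). This is not VEXED's actual rule language; all field and function names here are hypothetical.

# A hypothetical rendering of a refinement rule as data.  VEXED's real
# rules are written against its formal specification language; this
# sketch only shows the IF/THEN anatomy described above.

def output_depends_on_earlier_input(module_spec):
    # IF part: does the output at time t2 depend on an input at t1?
    return module_spec.get("output-depends-on-past-input", False)

INCLUDE_MEMORY = {
    "name": "include-memory",
    "applies_to": output_depends_on_earlier_input,
    "refine": lambda spec: {
        # THEN part: submodules with initial (still partial) specs ...
        "submodules": {
            "MEM":  {"stores": spec["input"], "from": "t1", "until": "t2"},
            "GMOD": {"computes": spec["output"], "at": "t2"},
        },
        # ... and how they are connected.  These specs will later be
        # tightened by constraint propagation.
        "connections": [("MEM.out", "GMOD.in")],
    },
}

def applicable_rules(rule_library, module_spec):
    """The rules whose IF parts match: the 'legal moves' from here."""
    return [r for r in rule_library if r["applies_to"](module_spec)]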
Constraint propagation in VEXED is done by the CRITTER system [Kelly, 1985]. CRITTER does two kinds of propagation.

* Firstly, CRITTER does a form of goal regression. Given a specification on the data-stream output by a module, and given the behavior of this module, CRITTER can determine what must be true of the inputs to the module to ensure that the output specification will be met.

* Secondly, CRITTER does a form of symbolic evaluation. Given a (possibly partial) description of the behavior of a module's inputs, and given the module's behavior, CRITTER can infer a description of the module's outputs.

Because of our representations, constraint propagation is simply a matter of symbol substitution (see [Kelly, 1983]). However, this process results in very large, complex expressions. Therefore, CRITTER also has an expression simplifier that uses a set of rewrite rules to simplify the resulting expressions as much as possible. Finally, CRITTER is capable of verifying that the specifications on a data-stream are satisfied by that data-stream's behavior. Again, this is done by a process of symbol substitution and simplification.
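The claim that propagation is "simply a matter of symbol substitution" plus rewrite-rule simplification can be illustrated with a toy sketch. This is far simpler than CRITTER; the expression format and the single rewrite rule are invented for illustration.

# A toy sketch of constraint propagation as symbol substitution plus
# rewrite-rule simplification.  Expressions are nested tuples.

def substitute(expr, bindings):
    """Replace symbols by the expressions known to describe them."""
    if isinstance(expr, str):
        return bindings.get(expr, expr)
    if isinstance(expr, tuple):
        return tuple(substitute(e, bindings) for e in expr)
    return expr

def simplify(expr):
    """Apply rewrite rules bottom-up (only one rule is shown)."""
    if isinstance(expr, tuple):
        expr = tuple(simplify(e) for e in expr)
        # rewrite rule: (NOT (NOT x)) -> x
        if len(expr) == 2 and expr[0] == "NOT" and \
           isinstance(expr[1], tuple) and expr[1][:1] == ("NOT",):
            return expr[1][1]
    return expr

# Symbolic evaluation: the module computes OUT = NOT(IN), and the
# incoming data-stream IN is itself described as (NOT A); substituting
# and simplifying yields a description of the output stream.
module_output = ("NOT", "IN")
print(simplify(substitute(module_output, {"IN": ("NOT", "A")})))  # -> 'A'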
Figure 1 shows the user interface for VEXED at the beginning of a typical design session. (VEXED is implemented for Xerox Interlisp-D machines using the Strobe object-oriented programming system from Schlumberger-Doll Research.) The circuit being designed is one bit of a content-addressable memory, and is referred to as the CAM-CELL. The screen is divided into several regions, or windows. The largest window is the region in which the circuit will be designed, and initially contains a large rectangle representing the CAM-CELL to be designed. The user has already entered the specifications for this circuit. These specifications include a description of the inputs and outputs of the CAM-CELL, as well as a description of the function to be implemented.

Figure 1: The VEXED Interface. (The figure also shows the command menu: Quit, Do Agenda Item, Do Selected Rule, Show Hierarchy, Check Specifications, Backtrack, Replay, Make Primitive, Combine, Jump, Vexed Editor, Create Rule, Create Circuit.)

Figure 2 gives part of these specifications: the value of the output OUT at each time must equal some expression based on the values of the inputs at that and previous times, and for this output the boolean values TRUE and FALSE are represented by low (0 volts) and tristate (high impedance), respectively.

Figure 2: Part of the Specifications for CAM-CELL:

((ALL I)
 (EQUAL (DATA-VALUE OUT I)
        (EQUAL (DATA-VALUE MATCH I)
               (DATA-VALUE DATA-IN
                 (PREVIOUS I J (EQUAL (DATA-VALUE LOAD J) (QUOTE HIGH)) I))))
 (EQUAL (ENCODING OUT I)
        (NMOS-BOOLEAN (FALSE TRISTATE) (TRUE LOW))))

Attached to the main window is a list of commands and a list of pending tasks. As shown in the figure, the only pending task at this point is to refine the CAM-CELL. This list of pending tasks will be updated as the design proceeds and new circuit submodules are introduced. In general, the user controls which portion of the design to focus on next by selecting one of the pending tasks from this list. In this case, the user selects the (REFINE CAM-CELL) task, and the system then considers its collection of rules to determine which ones apply to this module. In this case, the advice offered by the system is that there are eight rules which suggest alternative methods for refining the CAM-CELL. The user may select one of these rules to be executed or, alternatively, may elect to ignore the system's advice and manually edit the circuit.

Figure 3 shows the result of the user selecting INCLUDE-MEMORY for the system to carry out. (This is the rule paraphrased above.) Execution of this rule has led to a refinement of the CAM-CELL, which includes a memory module (called MEM:A0059), as well as a second module (GMOD:A0062). Both modules have specifications given in the same representation as for the original CAM-CELL specifications. The MEM:A0059 specifications require that it store the value of the DATA-IN signal, whereas the specifications of GMOD:A0062 require that it produce an output depending upon a comparison between the output of MEM:A0059 and some of the inputs to the CAM-CELL. The list of pending tasks has also been updated so that the new tasks include refining MEM:A0059 and GMOD:A0062.

Figure 3: Result of Executing the Memory-Rule

Refinement of the circuit continues in this fashion. The user directs the focus of attention by selecting which module is to be refined next. The system examines its rule base to determine applicable rules, and presents these to the user. The user may then select one of these, or may ignore this advice and elect instead to refine the module by editing it manually. Figure 4 shows the hierarchy of refinement steps which lead to a final circuit-level implementation.

Figure 4: The Design Hierarchy. (The node labels include refinement choices such as "Include a memory", "Use inverter loop", "Use pass networks to implement conjunction of boolean functions", and "Use pass transistor for IF statement".)

C. Status of VEXED

There are three points to make about the status of VEXED. First of all, VEXED has been fully implemented, and has about 50 refinement rules. These cover most of the standard NMOS design techniques for boolean functions, and also a few latches. Work has recently started on a set of rules for CMOS circuits.

Secondly, VEXED has been used by students in our VLSI design class to do a homework assignment. The assignment was done by about ten teams of students, mostly two students per team. Each team designed one of three small circuits; one circuit was a full adder, and the others were of about the same size.

Thirdly, VEXED has had a number of capabilities added to it beyond refinement and constraint propagation.

* One facility any real system needs is a backtrack or "undo" facility that allows the user to retract decisions that turn out not to have the desired effect. VEXED has a chronological backtracking facility that allows the user to return the circuit to the state it was in at any previous time.

* It turns out that when a module is refined into submodules, a sub-module may occasionally need a signal as input that was not originally among the inputs of the parent module. Typically this happens with signals like clocks, ground, etc. To handle this situation, VEXED has "Get Signal" tasks, which are automatically entered on the task agenda when needed, and are handled by the user manually specifying where the needed signal should come from.

* A facility has been added for "Module Combining Rules". These specify how two modules can be combined into one simpler one, and provide for a kind of peephole optimization (see the sketch after this list). For instance, two inverters in series can be combined into just a simple wire (as long as this change does not violate some timing constraint). Since it is always appropriate to try to combine modules, and since the circuit can be considered complete even if no combinations are done, these tasks do not go on the agenda. Rather, the user can point to a module and request that an attempt be made to combine it with each of its neighbors. There are currently only a few such rules, and this facility was not used by the VLSI students.

* Finally, there is now a "replay" facility for VEXED. This takes the sequence of refinements applied previously to some other circuit, or even to other parts of the current circuit, and applies them to the current module. To the extent that the refinement operations used previously are general, and apply in somewhat new circumstances, this is a way to reuse the ideas of a previous design even when the specific circuit is not applicable. See [Mostow and Barley, 1986] for a further discussion of this facility.
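The module-combining rules mentioned above lend themselves to a compact rendering. The following is a hypothetical sketch of the two-inverters-in-series example; the module format and the timing test are invented, not VEXED's actual representation.

# A hypothetical module-combining ("peephole") rule: two inverters in
# series collapse to a wire, unless a timing constraint needs the two
# gate delays.

def combine_double_inverter(first, second, timing_ok):
    """Return a replacement module, or None if the rule does not apply."""
    if first["type"] == "inverter" and second["type"] == "inverter" \
            and first["out"] == second["in"] and timing_ok(first, second):
        return {"type": "wire", "in": first["in"], "out": second["out"]}
    return None

inv1 = {"type": "inverter", "in": "A", "out": "N1"}
inv2 = {"type": "inverter", "in": "N1", "out": "B"}
print(combine_double_inverter(inv1, inv2, lambda a, b: True))
# -> {'type': 'wire', 'in': 'A', 'out': 'B'}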
As discussed above, we began with a model of the design process and of the division of labor between the user and the system, and we implemented VEXED to test these models. Our results can be seen as answering two broad questions:

* First, can a design aid embodying these models be implemented? Is it possible for a system to have a sufficient body of refinement methods, to find those applicable to a given module, to carry out the one selected by the user, and to do the constraint propagation?

* Secondly, if such a system were implemented, could designers, especially those with no AI or even computer science background, use it to produce designs? The concern here was both whether the users could understand and use this design process, and also whether they could learn our specification language, which is quite different from standard hardware specification languages in its LISP-like syntax, in its data-flow style semantics, and in its representation of a data-stream as a sequence of values.
As the next two sections will describe, the answer to both of these is, "Yes, but."

A. Can VEXED Be Implemented?

The fact that VEXED has been brought to the point where students in our regular VLSI class could successfully use it is evidence that it has indeed been implemented. Two issues remain: the size and coverage of the set of refinement rules, and the cost of constraint propagation.

As noted above, the current refinement rules cover most boolean combinational circuits for the NMOS circuit technology, and some latches. A truly useful system would require more complete coverage of combinational circuits and latches, as well as rules for a number of other kinds of circuits, e.g. multiplexers, and rules for higher level data-types such as integers and characters. However, in principle there seems no reason why these rules could not be added to VEXED. Based on the number of current rules and the coverage they give, we estimate that a version of VEXED that would be useful for real designers would need less than 1000 rules, and so would be within the scope of current technology for building and maintaining rule-based systems.

Remember also that the user can step in and do a refinement manually whenever the system does not have a rule for the desired refinement method. This helps in two ways. First of all, it means that there need not be as many rules before the system is useful; it probably takes far fewer rules to cover 90% of the refinement steps in each of a range of designs than it would take to cover 100% of the steps. Secondly, since the rules do not have to contain any control information, i.e. any information on which of the locally plausible refinements to actually do in a given design, it turns out that it is relatively easy to observe the user doing such manual refinements, and infer general rules. We are building a system called LEAP [Mitchell et al., 1985] which will do just this. The first version of LEAP is almost completed.

Finally, VEXED uses an indexing structure to find relevant rules for refining a given module without testing the left hand sides of every rule, so the time to find relevant rules should grow less than linearly with the number of rules, and the time to find relevant rules is currently fairly short. Thus we do not expect the time to find relevant rules to be a major problem even with many more rules.

While the size of the rule set does not seem to be a problem, the cost, both in terms of memory space and in terms of time, to do constraint propagation does seem to be a major issue. In a circuit such as a full adder described at the transistor level, with about 20 modules, it takes five to ten minutes on a Xerox 1109 (DandeTiger) to do the constraint propagation after each refinement. The cost of constraint propagation seems to grow slightly less than linearly with circuit size, based on some initial impressions, but the delay for a full adder is barely tolerable, and so to design anything much larger it will be necessary to reduce this cost.

One simple answer, of course, is to optimize our code, which is currently not very optimal, or to get a faster machine. In particular, the task of constraint propagation seems inherently parallel, since each constraint can be propagated along each path more or less independently; thus it would seem a natural application for a parallel machine. Another answer is to find a way to do less propagation. At the moment, VEXED propagates every constraint everywhere it can as soon as it can.
Perhaps limiting or delaying some of this propagation can reduce the cost. We are currently looking into this possibility.

B. Can VEXED Be Used?

Given that VEXED can be implemented, can it be used? Can non-AI types learn our specification language, and can they successfully do design with such a design aid as VEXED? Again, the answer is, "Yes, but."

About half of the class were students from the Electrical Engineering Department with no AI background and indeed relatively little Computer Science background, and even the Computer Science students included some who had not had any AI courses. The students were given no more documentation and other help (lecture, hands-on help, etc.) than they are typically given for any other design aid used in the course. Nevertheless, they did succeed in specifying and designing their circuits. The few who did not finish were those who were halted by one or another of the minor bugs left in VEXED (minor in the sense that we were able to quickly fix them).

On the other hand, the circuits some students designed were wildly sub-optimal. They took many more transistors than were necessary. That is, when they chose which refinement rule to use, they did not choose wisely. Partly this may be due to their inexperience as VLSI designers in general. Partly it may be due to their difficulty in understanding what each rule did. Each rule had a canned English description that said what its effect was, and another that tried to give advice on when to use it, but a major complaint from the students was that it was hard to understand this documentation and to figure out what the rules did. We are beginning to look into the whole area of how a system like VEXED could explain the rules and the state of the design to the user.

Finally, the difficulty in choosing rules may be inherent in the structure of a system like VEXED. I am a better designer than the students, and I understand the rules quite well, and thus I can get much better designs out of VEXED. However, I have to think very hard to do so. The problem is that VEXED's constraint propagation tells you the effects of previous refinement decisions in limiting the choices for the current decision, but it does not show you how each current alternative will limit the choices you will have on later decisions. To get a good circuit out of VEXED, the user has to have a clear global strategy in mind, and has to weigh each decision in the light of how it will contribute to that strategy.

Perhaps VEXED could try the constraint propagation that would result from each alternative, and inform the user what the effects of each would be on the remaining alternatives elsewhere. However, given the cost of constraint propagation, this may not be practical. The basic problem seems to be that since VEXED leaves the control issues entirely up to the user, it has no internal representation of the goals and plans that go into a strategy for designing the circuit, and thus cannot offer the user any support in deciding which module to work on next or which refinement to make. The DONTE system being developed in our research group by Chris Tong [Tong, 1987] is an attempt to study some of the issues of how a system based on top-down refinement and constraint propagation might also make these control decisions.

In addition to the problems with choosing the right rule that the students actually had, there are two problems that did not come up but might have had they been designing larger circuits.
One is that certain kinds of circuit are quite difficult to specify in our language. These are the circuits whose output at a given time depends on the entire past history of their inputs, or at least on an unbounded set of past inputs. These are not easy to express in a data-flow oriented form. The solution here is either to find a more algorithmic specification language that can be translated into the data-flow form, or to find a way to do constraint propagation directly with the more algorithmic language.

The second potential problem with larger circuits is that design really does involve more kinds of operations than just refinement and constraint propagation, and even more than get-signal, backtrack, replay, and combining modules. Examples include inserting "sub-goal" modules to fix conflicts, e.g. when one module produces output in serial and the next needs parallel input, you can put in a serial-to-parallel converter. Also, some operations are best viewed not as a refinement of a module into sub-modules, but rather as a recasting of the specification into a semantically equivalent but structurally different form, e.g. turning a complex boolean expression into sum-of-products form. And there are a few other such examples. However, all of them seem to be the kind of thing that can be added on top of the basic VEXED model, much as the module-combining rules have been added. Of course, further work is needed to be sure that they really can be added.

In summary, then, if ways can be found to help the user choose the right rules, and if the cost of constraint propagation can be controlled, and if the additional kinds of operations can indeed be added to the system, the VEXED model of design will indeed prove to be a good one on which to base interactive, knowledge-based design aids.

Acknowledgements

Both the programs and the ideas presented here are the work of many people in the Rutgers AI/Design group. I particularly want to thank Tom Mitchell, Jack Mostow, Chris Tong, Jeff Shulman, Tim Weinrich, Mike Barley, Atul Agarwal, and Sunil Mohan. Finally, I want to thank Chun Liew for help with text formatting.

References

[Brewer and Gajski, 1986] F. Brewer and D. Gajski. An expert-system paradigm for design. In Proceedings of the 23rd Annual Design Automation Conference, June 1986.
[Brown et al., 1983] H. Brown, C. Tong, and G. Foyster. Palladio: an exploratory environment for circuit design. In IEEE Computer Magazine, December 1983.
[Bushnell and Director, 1986] M. Bushnell and S. Director. VLSI CAD tool integration using the Ulysses environment. In Proceedings of the 23rd Annual Design Automation Conference, June 1986.
[Joobani and Siewioriek, 1985] R. Joobani and D. Siewioriek. Weaver: a knowledge based routing expert. In Proceedings of the 22nd Annual Design Automation Conference, June 1985.
[Kelly, 1983] V. Kelly. The CRITTER System: Automated Critiquing of Digital Hardware Designs. Technical Report WP-13, Rutgers AI/VLSI Project, November 1983. Also appearing in the Proceedings of the Design Automation Conference, 1984.
[Kelly, 1985] V. Kelly. The CRITTER System --- An Artificial Intelligence Approach To Digital Circuit Design Critiquing. PhD thesis, Rutgers University, New Brunswick, New Jersey, January 1985.
[Kim and McDermott, 1983] J. Kim and J. McDermott. Talib: an IC layout design assistant. In Proceedings of AAAI-83, pages 197-201, 1983.
[Kowalski, 1985] T. Kowalski. An artificial intelligence approach to VLSI design. Kluwer Academic Publishers, Boston, 1985.
[Langrana et al., 1986] N. Langrana, T. Mitchell, and N. Ramachandran. Progress Toward A Knowledge-Based Aid for Mechanical Design. Technical Memo CAIP-TM-002, Center for Computer Aids for Industrial Productivity, Rutgers University, January 1986.
[Mitchell et al., 1985] T. M. Mitchell, S. Mahadevan, and L. Steinberg. LEAP: a learning apprentice for VLSI design. In Proceedings of IJCAI-85, Los Angeles, CA, August 1985.
[Mostow and Barley, 1986] J. Mostow and M. Barley. Re-use of design plans. In International Conference on Engineering Design, Boston, MA, September 1986. Abstract accepted for ICED-87.
[Parker and Knapp, 1986] A. Parker and D. Knapp. A design utility manager: the ADAM planning engine. In Proceedings of the 23rd Annual Design Automation Conference, June 1986.
[Stefik, 1981] M. Stefik. Planning with constraints (MOLGEN: part 1). Artificial Intelligence, 16(2), pages 111-140, May 1981.
[Tong, 1987] C. Tong. Goal-directed planning of the design process. In The 3rd IEEE Conference on AI Applications, February 1987. Also appears as Rutgers AI/VLSI Project Working Paper No. 41.
David Servan-Schreiber
Robotics Institute, Carnegie Mellon University
and Western Psychiatric Institute and Clinic
Pittsburgh, PA 15213

Abstract

Building on the successes and shortcomings of previous experiences with computerized psychotherapy, we have attempted to extend the paradigm of intelligent tutoring systems to the domain of therapeutic interaction. Based on canonical examples, I present three dimensions of the task of tutoring systems: teaching problem-solving vs. domain knowledge; teaching isolated domains vs. domains where students have prior misconceptions; teaching with the use of functional models of the domain vs. no functional models. I then show how implications of these dimensions have helped us determine the specifications of a tutoring system for sexual therapy. Our approach has consisted of engaging patients in a tutoring dialogue driven by the identification of problem areas and their associated misconceptions. A diagnostic module, implemented as a traditional expert system, uses an extensive bug library to derive an internal model of patients. A dialogue driver relies on a hierarchy of dialogue plans and demons in order to preserve a logical grouping of related topics while remaining flexible enough to adapt itself, at each level of the dialogue hierarchy, to the unfolding case.

Introduction

Over the last decade, research on computer assisted instruction (CAI) has moved from frame-based systems (traditional CAI) to Intelligent Tutors. The purpose of intelligent tutors is to provide a learning environment that is more sensitive to a particular student's strengths, weaknesses and preferred style of learning, emulating the quality of a private human tutor. Computer-based tutors separate the subject matter they teach from the format of instruction. Their instructional actions are based on an internal model of the student and a set of teaching procedures --- strategies and tactics --- to select from on the basis of the model. Several programs have been developed, spanning areas such as computer programming, medical diagnosis and geography. While most systems are still experimental, some have been formally evaluated and compare very favorably to classroom instruction (e.g., the work of J. R. Anderson and his co-workers on the LISP tutor (1985a)).

In parallel, there have been several attempts to develop computer programs that could deliver psychotherapy. Most recent research has focused on automating the presentation of psychotherapeutic techniques rather than on the process of therapeutic dialogue. For instance, Lang et al. (1970) successfully used a computer to carry out systematic desensitization and to monitor progress in a group of female snake-phobic undergraduates. A computerized "dilemma counseling" system, PLATO DCS, has been developed (Wagman, 1980) and been shown to be effective with university students. A portable calculator-size computer system designed to provide immediate feedback concerning caloric intake was found to promote weight loss in obese female volunteer subjects. Ghosh et al. (1984) and Selmi (1983) have presented via computer relatively standard self-help interventions for phobias and depression (cf. Servan-Schreiber (1986) for a more elaborate review). Although such computerized interventions appear promising in terms of outcome, most also appear to require substantial additional therapist contact to promote compliance with treatment.
This requirement parallels the results of many psychotherapeutic "bibliotherapy" studies carried out with a variety of psychological problems. None of these programs has attempted to base the format and content of therapeutic interventions on an internal model of the psychological situation of patients. While the absence of an internal model does not necessarily preclude treatment effectiveness, we believe that an intelligent and individualized dialogue is necessary to increase acceptability, motivation and compliance. How can psychotherapy be cast in an intelligent tutoring system? In this paper, I propose to review some canonical examples of intelligent tutoring systems from which principles can be derived to guide application to a new domain area such as psychotherapy. I then present a prototype of a system designed to lead patients suffering from sexual dysfunctions through a therapeutic dialogue.

I. Lessons from Tutoring Systems

A. SOPHIE

SOPHIE (Brown et al., 1982) is one of the earliest intelligent tutors. Its task is to monitor a student attempting to debug a simulated electronic circuit. The student can ask questions about circuit components, perform measures at different locations (voltage, intensity, etc...) and make hypotheses about the malfunction. SOPHIE contains a module that can identify the fault in the circuit based on functional specifications of electronic circuits and its own problem solving strategy. When the student performs measurements or proposes a hypothesis, SOPHIE can evaluate the student's strategy and critique it according to its own solution. The important characteristics of SOPHIE for our purposes are that it is attempting to teach problem solving in a well defined, isolated domain, and that it can rely on a functional model of the domain area.

B. GUIDON

GUIDON (Clancey, 1982) teaches a medical student attempting to solve a case of bacteremia or meningitis. The student is presented with some symptoms of a patient and can gather further data or make a diagnosis. He can also ask for help or for the relevance of particular information. To assess the student's knowledge, GUIDON uses as an "ideal student" model a specially designed version of the MYCIN expert system which can solve the case. Rules of the expert system are marked as known by the student or not, according to the student's questions and hypotheses; this results in an "overlay model". Several "tutoring rules" use this model and the context of the lesson to decide on the appropriateness and content of instructional interventions. The expert system is not based on a functional model of the domain (the human body) but rather on judgments about empirical information, such as the fact that a particular symptom is associated with a particular type of infection. It is the clear specification of an expert's knowledge and problem solving strategy that serves to assess the student's behavior. GUIDON's emphasis is on teaching domain knowledge as well as some general (not domain specific) strategies, and the domain can be considered to be isolated in that students do not come to the task with a burden of misconceptions acquired from previous experience.
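The bookkeeping behind such an overlay model is simple to picture. The following sketch is hypothetical --- the rule identifiers and status values are invented --- but it shows the essential move: marking each expert rule as known or not known as evidence accumulates, so that the unmarked rules define what remains to be taught.

# A schematic sketch of an overlay student model: the student's
# knowledge is modeled as a (marked) subset of the expert's rules.

expert_rules = {
    "rule-1": "... an expert association between symptom and infection ...",
    "rule-2": "... another expert association ...",
}

overlay = {rule_id: "unknown" for rule_id in expert_rules}

def record_evidence(rule_id, used_correctly):
    """Update the overlay from one observed student question or hypothesis."""
    overlay[rule_id] = "known" if used_correctly else "not-known"

def gaps():
    """Rules the tutoring rules may decide to teach next."""
    return [r for r, status in overlay.items() if status != "known"]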
C. BUGGY, LMS and WHY

Rather than representing the student's knowledge as a subset of an expert's knowledge --- the premise of overlay models --- BUGGY (Brown & Burton, 1978), LMS (Sleeman, 1982) and WHY (Stevens et al., 1982) assume that students misrepresent knowledge. They use false principles and procedures as well as incorrect facts. Stevens, Collins and Goldin, using protocols of human tutoring sessions, recognized that tutoring activities often revolved around students' bugs. They claimed that: "Much of a teacher's skill depends on knowledge about the types of conceptual bugs students are likely to have, the manifestations of these bugs and the methods [for correcting] them." (Stevens et al., 1982)

WHY was designed to teach students about rainfall by systematically probing students' knowledge for misconceptions and missing reasoning steps. WHY engages the student in a Socratic dialogue guided by heuristics such as:

If the student gives as an explanation of causal dependence one or more factors that are not necessary, Then select a counter-example with the wrong value of the factor and ask the student why his causal dependence does not hold in that case.

These heuristics rely on a functional model of the domain organized in a script-like fashion that lets the system evaluate questions and answers provided by the student. Thus, WHY does not need to have a "bug library" of students' misconceptions, since bugs can be detected and (partially) corrected by comparing students' answers to the model of the domain; it does not even have an explicit representation of bugs. However, as the authors of WHY have stressed, this procedure identifies only extra and missing sub-steps in the scripts. Misconceptions which do not fall into these categories are not recognized by the system. This is the price that must be paid to avoid an extensive library of common bugs.

A key lesson that can be derived from WHY is that tutoring dialogues can be driven by bug identification and correction rather than by comparing the student to an ideal model. Experiments with WHY also made explicit the role of local and global strategies in controlling the tutoring dialogue: local management of the interaction based on recognition of a misconception is not sufficient, and "discourse knowledge" is necessary to provide the system with a global perspective on the dialogue. To summarize, WHY attempts to teach primarily domain knowledge, in a domain that students approach with a priori misconceptions, and where a functional model of the domain can be used to evaluate students' questions and hypotheses.

D. Andersonian Tutors

John R. Anderson and his co-workers have developed several tutors that attempt to teach a new cognitive skill such as LISP programming or proving geometry theorems (Anderson et al., 1985a; Anderson et al., 1985b). The student is presented with a problem to solve and proposes a solution step by step (e.g., types a LISP expression on the keyboard). The tutor monitors every step and compares it to a production system that solves the problem in parallel with the student. If the student is found to be using an adequate production the tutor says nothing; if the production that matches the student's behavior is a "buggy production", a pre-stored intervention is generated to get the student back on the right track. This "model tracing" requires very precise models of the particular problem solving activity --- in the form of production rules --- and a considerable library of buggy rules in order to follow the student step by step. This level of precision is attainable in domains that are quite isolated from previous knowledge and where the bugs that occur most commonly stem from the student's experience with previous concepts of the domain itself (e.g., the confusion between "append" and "list" in LISP).
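A single model-tracing step can be sketched as follows. This is not the LISP tutor's code; the matching scheme and the stored intervention are invented illustrations of the mechanism just described.

# A minimal, hypothetical model-tracing step: the student's latest
# action is matched first against the correct productions, then
# against a library of buggy productions with pre-stored messages.

def trace_step(student_action, correct_rules, buggy_rules):
    for rule in correct_rules:
        if rule["matches"](student_action):
            return None                      # on track: say nothing
    for rule in buggy_rules:
        if rule["matches"](student_action):
            return rule["intervention"]      # pre-stored correction
    return "I don't understand that step."   # off all known paths

buggy_rules = [{
    # the classic append/list confusion mentioned above
    "matches": lambda a: a.strip() == "(append 'a 'b)",
    "intervention": "APPEND joins lists; to build a list of elements, use LIST.",
}]
print(trace_step("(append 'a 'b)", correct_rules=[], buggy_rules=buggy_rules))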
As a result of this fine-grained analysis, concerns for dialogue are minimal: since the tutor always knows where the student is and what her knowledge state is, there is no need to carry on a dialogue that would yield a "cognitive diagnosis" of the student as WHY does. Also, it is interesting to note that these tutors do not rely on functional models of the domains even though such models are readily available (e.g., running LISP expressions). Rather, like GUIDON, they use a model of expert problem-solving. Thus, Andersonian tutors teach a problem-solving skill, in isolated domains, and do not use a functional model of the domain but rather a functional model of the expert problem-solving process.

The first implication that one can draw from existing tutoring systems is that a very large effort is being spent clarifying and formalizing the domain knowledge and problem solving strategies that ought to be taught. Whether these are represented as a production system, like in GUIDON, or as scripts and semantic nets, like in WHY, more knowledge of the domain has to be implemented than what is traditionally considered to be adequate for expert systems.

A second and more puzzling conclusion is that tutors cannot rely on highly general, domain independent principles to teach their subject matter. It is not enough to recognize overgeneralizations and overdifferentiations, and to teach how to form and test hypotheses and collect enough information. In particular, as soon as prior misconceptions play an important role in the student's approach to the domain, what is required is a direct recognition of misconceptions and application of specific corrections. Bugs are not domain independent.

Finally, three dimensions of the task domain seem to have particular implications in terms of the style of interaction, modeling of the student and representation of domain knowledge that a tutor can use:

1. Problem solving vs. domain knowledge: SOPHIE and the LISP tutor focus primarily on teaching the student how to use tools or knowledge to complete a task; GUIDON and WHY emphasize the acquisition of new knowledge. This determines where the tutor should stand on a continuum from monitoring --- or "coaching" --- of the student involved in a problem solving task, to engaging the student in a dialogue about a case --- be it a patient or the presence of rain in Oregon. By extension, this also determines how much "discourse knowledge" a tutor should possess. The further the student is from active problem solving, the more the tutor should know about the structure of teaching dialogues.

2. Isolated domains: these are task domains that are sufficiently different from students' previous experience for them to carry relatively few preconceptions to the new domain.
The more pre-conceptions might "infect" the domain area to be taught (e.g., causes of rainfall), the more active debugging of the student's approach is necessary. On the other hand, when misconceptions are less likely to influence the acquisition of new knowledge (e.g., facts about meningitis), the tutor can rely more heavily on an ideal student model.

3. Functional models

All the systems we have reviewed encompass a functional model of either the domain knowledge -- electronic circuits for SOPHIE, scripts of rainfall for WHY -- or a functional model of the expert problem solving process -- the production systems of GUIDON and of Andersonian tutors. Thus these tutors can evaluate the student's behavior -- and by extension her knowledge -- by comparison with a functional model. In particular, they can identify bugs in the student's conceptions by referring to a functioning or "debugged" model. The existence of such models avoids the difficult problem of having to compile extensive "bug libraries" associated with correction procedures. However, as we have seen, these models have their limits, and Andersonian tutors are augmented to include an extensive library of typical errors.

II. Applications to Psychotherapy

A. Psychotherapy as Intelligent Tutoring

Not all forms of psychotherapy are equally amenable to the intelligent tutoring approach. However, one in particular, cognitive psychotherapy, insists on the logical scrutiny of cognitions and uses "error libraries" of common cognitive distortions which characterize particular forms of psychopathology. Interestingly, cognitive therapists stress their role as teachers and actively instruct their patients to recognize and overcome their maladaptive misconceptions. In addition, cognitive therapy sessions have a well defined and consistent format.

The purpose of cognitive psychotherapy is to work with a patient suffering from a circumscribed problem such as depression or marital difficulty by going over the patient's view of the domain in which the problem is rooted, looking for lack of information, misconceptions and maladaptive thoughts. In this sense the task of the therapist leans more toward reviewing domain knowledge than coaching a specific problem solving skill. A therapeutic tutoring system would thus more naturally fit in a dialogue framework than in a monitoring paradigm. In addition, the knowledge addressed by such a system is overwhelmingly not "isolated". Patients have typically lived with their problems for a significant amount of time before they come for consultation, and they have developed their own model of the domain, most often out of partial information and misleading experiences. Their model is thus bug-ridden, and patients need to unlearn as much as they need to learn. As a result, a natural representation of the patient is to match his beliefs against an "error library" rather than attempting to develop an overlay model of an "ideal patient". Finally, and unfortunately, there are no extensive functional models of the type of social interactions in the context of which patients' problems arise, neither do functional models exist for human reasoning in the domains in which these problems occur.

This description of the task of psychotherapy, which reflects the theoretical commitment of cognitive psychotherapy, led to direct conclusions about the kind of tutoring system that we could plan to build:

1. the system had to be able to lead patients through a therapeutic dialogue and would thus require elaborate "discourse knowledge";
2. the dialogue had to be driven by identification of misconceptions;

3. the absence of functional models would force the development of extensive error libraries, associated with typical remedies, to constitute the knowledge base of the tutor.

Bearing this analysis in mind, we attempted the development of a tutoring system for the domain of sexual dysfunctions. We chose the domain of sexual dysfunctions for several reasons. First, the common cognitive distortions about sexual functioning have been well described. Second, sexual problems are a relatively well defined area of psychological difficulty for which well worked out therapeutic interventions exist. For some dysfunctions such as premature ejaculation or primary anorgasmia the appropriate interventions are also highly successful. Third, there is a strong tradition of self help among individuals suffering from sexual problems that is well accepted and even encouraged by sex therapists. We hoped that this would facilitate the acceptance of a new therapeutic modality. Finally, the relative anonymity and non-judgemental approach that could be offered by an intelligent computer-therapist has been shown to facilitate disclosure of personally sensitive information such as sexual problems.

Sexpert is organized around two components: a diagnostic module and a dialogue driver. We will discuss them in turn.

1. The Diagnostic Module

In Sexpert, the elementary step on which all instruction is based consists of the identification of a problem or misconception from a set of questions asked of the partners. We have compiled a large number of misconceptions from the literature and from our domain experts. Typical examples are:

Misinformation: for example, that anesthetic creams are useful to help delay ejaculation (in a sense they are, but they partially anesthetize the female partner too);

False expectations: for example, males thinking that all women are turned on if their breasts are fondled, or females thinking that a partial loss of erection during intercourse indicates loss of interest.

Once such an error library is available to the program, the process of identifying bugs becomes one of traditional "classification problem solving" as defined by Clancey (1984). The program goes from data (answers to questions) to data-abstractions (internal representations) and performs a heuristic match from data-abstractions to bug categories. Finally it refines the bug category to a particular misconception or faulty procedure. Once a bug is identified, it is added to the model of the couple and can be used to diagnose bugs of a higher level of complexity which encompass several simple bugs. For example, after having determined that the couple suffers from a particular kind of premature ejaculation, the program might find out that the couple has reduced the duration of their foreplay. Sexpert may interpret this latter fact as an attempt on the part of the couple to keep the male arousal low prior to penetration (which in most cases does not work and results in both short foreplay and short intercourse). This approach to diagnosis can readily be implemented with the methodology of expert systems, and we are using a traditional rule-based inference mechanism to diagnose simple bugs of the kind illustrated.
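The classification pipeline just described -- answers to abstractions, a heuristic match to bug categories, then refinement of compound bugs -- can be sketched in a few lines (mine, in Python; the thresholds, attribute names and category names are invented for illustration and are not Sexpert's actual rule base):

    # data -> abstraction -> heuristic match -> higher-level bug
    def abstractions(answers):
        abst = set()
        if answers.get("minutes_to_ejaculation", 99) < 2:
            abst.add("rapid ejaculation")
        if answers.get("foreplay_minutes_now", 0) < answers.get("foreplay_minutes_before", 0):
            abst.add("foreplay reduced")
        return abst

    def diagnose(abst):
        bugs = set()
        if "rapid ejaculation" in abst:
            bugs.add("premature ejaculation")
        if "premature ejaculation" in bugs and "foreplay reduced" in abst:
            # a compound bug built out of simpler ones, mirroring the
            # foreplay example above: keeping arousal low before penetration
            bugs.add("arousal-avoidance strategy")
        return bugs

    print(diagnose(abstractions({"minutes_to_ejaculation": 1,
                                 "foreplay_minutes_now": 5,
                                 "foreplay_minutes_before": 20})))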
However, certain misconceptions only emerge when a therapist integrates a large number of related simple problems. Our approach to this problem has been to break down the analysis of the interaction pattern into core components that leave most of the details out and result in a gross, first-pass representation. We then determined the most frequently occurring patterns at the detailed level and implemented them as possible add-ons to the coarse representation. Using this technique, if the couple falls into one of the common patterns, the analysis generated by Sexpert includes most of the details they have mentioned. If not, the program is still able to rely on a "general idea" of their situation.

2. The Dialogue Driver

We have seen how Sexpert relies on identification of problems, faulty procedures and bugs as its primary teaching step. However, teaching is a structured process and does not consist of sequences of unrelated actions. To quote Stellan Ohlsson's insightful analysis of tutoring systems:

"A tutoring effort is structured; it coordinates the individual teaching actions, subsumes them under a plan for how to teach the relevant knowledge. The moment-to-moment behavior of the tutor originates in the execution of that plan, rather than in successive decisions about what to do next. If the student model is to be useful, it has to contribute in some way to the construction and execution of instructional plans." (Ohlsson, 1986)

Once we had convinced ourselves that it was possible to drive the psychotherapeutic process around identification and correction of bugs, we had to organize the interaction with patients in a meaningful way. To capture the gist of this type of therapeutic dialogue, we created a hierarchy of dialogue plans in which each level successively refines the actions of the system. Only abstract specifications of the topic to be discussed are implemented at the top level, while an intermediate level specifies the issues to be raised and their order, and the lowest level determines the exact order and content of questions or explanations to be presented. This idea of hierarchical dialogue plans is inspired by the concept of hierarchical planning developed by Sacerdoti (1974) and the structure of the MENO-TUTOR of Woolf and McDonald (1984).

For example, the main dialogue plan of the first session consists of the following goals: gather background data, get presenting complaints, identify and formulate problems, investigate contributing factors, relate contributing factors to symptoms, formulate and propose a treatment program. Within each of these categories, local dialogue plans are generated to structure and focus the discussion on the relevant topics. For instance, in order to investigate factors contributing to primary anorgasmia, Sexpert dynamically creates a plan to discuss physical health, sexual history, sexual fears and anxieties, sexual attitudes, etc. Within each of these categories, a more specific dialogue plan is again generated (e.g., selecting and ordering issues of sexual history) and so on until particular information is elicited (see Figure 1).

Complete introduction (background data)
Obtain presenting complaint
Identify and formulate problem(s)
Investigate contributing factors
    Physical health
    Sexual history
        Familial influence
        Sex education
        Previous experiences
        Sexual trauma
    Sexual fears and anxieties
    Sexual attitudes
    Environmental factors
    Relationship factors
    Repertoire factors
Relate contributing factors to symptoms
Prepare and propose treatment plan

Figure 1. This figure illustrates the hierarchical structure of dialogue plans generated dynamically during the session.
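The dynamic refinement of Figure 1 can be mimicked with a small interpreter over a plan library (a sketch, mine, in Python; the goal names follow the figure, but the expansion rules are illustrative and cover only the goals expanded there):

    PLAN_LIBRARY = {
        "first session": lambda m: [
            "introduction", "presenting complaint", "formulate problems",
            "contributing factors", "relate factors to symptoms", "treatment plan"],
        "contributing factors": lambda m: [
            "physical health", "sexual history", "fears and anxieties",
            "attitudes", "environment", "relationship", "repertoire"]
            if "primary anorgasmia" in m["problems"] else [],
        "sexual history": lambda m: [
            "familial influence", "sex education",
            "previous experiences", "sexual trauma"],
    }

    def pursue(goal, model, depth=0):
        print("  " * depth + goal)
        refine = PLAN_LIBRARY.get(goal)
        if refine is None:
            return                     # terminal plan: ask concrete questions here
        for subgoal in refine(model):  # subplans are generated only when reached
            pursue(subgoal, model, depth + 1)

    pursue("first session", {"problems": {"primary anorgasmia"}})

Because plans are data rather than hard-wired question sequences, the same interpreter can regenerate or reorder subplans at run time, which is the property exploited in the walkthrough below.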
At first, the top level plan that organizes the entire session is generated. The session then proceeds by generating and refining plans under each heading in due time. For illustration, we have expanded only one such heading: the investigation of contributing factors. Note that in order to generate this plan, the program needs to have identified a dysfunction. This example assumes that primary anorgasmia has been identified. A plan is generated to discuss several categories of factors that may have contributed to cause or to maintain this particular dysfunction. Again, we have expanded only one of the categories asserted by this plan: sexual history. Another, more precise, plan is created that specifies issues of sexual history to be addressed. Finally, for each of these issues, a terminal level plan specifies questions to be asked directly of the users. When an issue or category has been fully investigated, the program backs up to the next higher level and proceeds with the next topic on the plan at that level. For example, after sexual trauma has been investigated, the program addresses sexual fears and anxieties.

While this hierarchical structure allows the program to group questions in relevant contexts, it also leaves a lot of freedom to reorganize plans to fit particular situations at run-time. More important, perhaps, is that each level of dialogue plan is independent of the specifics of lower levels (the lowest level specifics being particular questions and explanations). This provides the system with the ability to reason at a conceptual level on the dialogue itself. The system knows what concepts have been discussed or remain to be discussed independently of particular questions or answers. For example, it can tell whether a problem has been identified and formulated independently of what problem it is or of how many problems have been found. This ability becomes particularly useful when several dysfunctions are present simultaneously and the system has to alternate or mix the discussion of each of them throughout the session.

Example of dialogue on recent changes in a case of premature ejaculation: The following two questions take place in the context of a diagnostic plan for premature ejaculation. They help determine whether the dysfunction is primary (the problem was always there) or secondary (there has been a period of normal functioning). They also discriminate between two forms of secondary premature ejaculation (better with previous partners, or with the current partner in the past).

John, were you able to exert better control over your ejaculation earlier in your relationship with Mary?
--> no
Did you have more control over your ejaculation with previous partners?
--> yes
I see (...)

Later, when the active goal is to inquire about contributing factors of premature ejaculation, a plan is generated to inquire about changes in the relationship which may help explain that the dysfunction is secondary, and to give a summary of the relevant findings after all questions have been asked (rather than an explanation after each question). This triggers an even more specific plan which specifies the order and nature of questions to be asked, and the following dialogue takes place:

Is the frequency of intercourse with Mary markedly lower than what you were used to with previous partners?
--> no
Are you using intercourse positions with Mary you did not use with previous partners when you had more control?
--> no
Are you generally more tired when you try to make love now than when you had better control?
--> no
John, do you feel more anxious when you are having sex with Mary than with previous partners?
--> yes
Was your non-sexual relationship better when you had better control?
--> yes

While the implementation of this dialogue hierarchy gives a logical and adaptable structure to the interaction, the possibility remains that the system goes down a wrong path and that some backtracking is necessary. For example, at some point in the dialogue, information provided by one of the patients may be inconsistent with previous answers or with prior conclusions derived by the program. If the inconsistency is recognized, the patients are asked specific questions to clarify the situation, and all previous answers and conclusions are reconsidered in light of the modifications. As a result the dialogue may take a completely different orientation, with new plans and questions being generated while all other, still valid, previous answers remain available. Unfortunately, only the most predictable inconsistencies can be recognized by Sexpert. We have found that in most cases it is better to rely on the patients themselves to determine when the program is heading in the wrong direction. Thus, at all times, they have the option to change any of their previous answers that they feel were responsible for the current misled focus of the dialogue.

Finally, it is also important for the program to be able to follow through when a sensitive issue has been raised which requires immediate attention at the expense of the more general line of inquiry of the dialogue. For example, at different occasions during the dialogue, it might be discovered that the woman is pregnant (e.g., when discussing contraception). In that case, it is important to react immediately even though the rest of the information might not be relevant to what Sexpert wants to know at that point. To implement this "noticing" mechanism within the general hierarchical goal structure of the dialogue, we use demons that immediately trigger a dialogue plan which takes precedence over the current focus any time their activation conditions are met. When the execution of the plan is completed, control returns to the last active goal.

3. Current Status of Sexpert

A functioning prototype of Sexpert consisting of twelve hundred rules and approximately one hundred and seventy pages of text currently operates on a personal computer. This prototype includes: an introductory section which explains the uses and limits of Sexpert and gathers background information concerning the users; a diagnostic section which makes decisions and gives individualized feedback concerning primary and secondary premature ejaculation, primary anorgasmia, and a variety of other orgasmic concerns; a contributing factors section which evaluates and discusses over fifty possible factors which may contribute to the above problems, including an evaluation of sexual repertoire and the couple relationship; and a fifteen session treatment section for premature ejaculation.

Preliminary results, based on the reaction of fifteen unscreened volunteer couples, are encouraging. All the couples thought that the dialogue was logical, appropriate and intelligent. Several spontaneously remarked that Sexpert was "smart" and appeared to really understand. None complained about the length of the session (60-90 minutes) or the amount of text to be read.
Interestingly, almost all couples were highly sensitive to the wording of the texts, noticing and reacting strongly to differences such as: "your difficulty with duration of intercourse" rather than "your concern over duration of intercourse". We are currently systematically studying subjects' evaluation of the program and the degree of attitude change and belief revision related to the first session with Sexpert.

At this stage of the evaluation phase, it is difficult to draw definitive conclusions about the value of our approach. Some of the most salient limitations include the restriction of the interface to yes/no and multiple-choice modes, the pronounced domain-specific nature of the tutoring strategies, and the absence of models for "deeper" misconceptions (e.g., where do bugs come from?). Prior experiments with computer-based psychotherapy seem to suggest that a clinically significant therapeutic effect can take place in spite of these limitations. If this proved to be the case, the methodology of intelligent tutoring systems, understood and applied according to the framework we have explored, promises to make psychotherapy of some well-defined emotional problems more accessible and affordable.

Acknowledgments

This project would not have been possible without the dedication and insight of Professor Irving Binik, who provided virtually all the domain knowledge used by Sexpert, and of Simon Freiwald, who designed the programming environment and inference engine on top of which Sexpert was built. I also wish to thank Benoit Mulsant for many stimulating discussions and his helpful review of this paper, and Ted Shortliffe for his comments and constructive criticisms of an earlier draft.

References

[Anderson et al., 1985a] Anderson JR, Boyle CF, Reiser BJ. Intelligent Tutoring Systems. Science 228:456-462

[Anderson et al., 1985b] Anderson JR, Boyle CF, Yost G. The Geometry Tutor. In Proceedings of the International Joint Conference on Artificial Intelligence (Los Angeles, California)

[Brown & Burton, 1978] Brown JS, Burton RR. Diagnostic Models for Procedural Bugs in Basic Mathematical Skills. Cognitive Science 2:155-192

[Brown et al., 1982] Brown JS, Burton RR and DeKleer J. Pedagogical, Natural Language and Knowledge Engineering Techniques in SOPHIE I, II and III. In Sleeman D and Brown JS (Eds) Intelligent Tutoring Systems. Academic Press: New York

[Clancey, 1982] Clancey WJ. Tutoring Rules for Guiding a Case Method Dialogue. In Sleeman D and Brown JS (Eds) Intelligent Tutoring Systems. Academic Press: New York

[Clancey, 1984] Clancey WJ. Classification Problem Solving. In Proceedings of the National Conference on Artificial Intelligence (Austin, Texas) pp. 49-55

[Gosh et al., 1984] Gosh A, Marks IM, Carr AC. Controlled Study of Self-Exposure Treatment for Phobics: Preliminary Communication. J R Soc Med 77:483-487

[Lang et al., 1970] Lang PJ, Melamed BG, Hart J. A Psychophysiological Analysis of Fear Modification Using an Automated Desensitization Procedure. Journal of Abnormal Psychology 76:220-234

[Ohlsson, 1986] Ohlsson S. Some Principles of Intelligent Tutoring. Instructional Science 14:293-326

[Sacerdoti, 1974] Sacerdoti ED. Planning in a Hierarchy of Abstraction Spaces. Artificial Intelligence 5:115-135

[Selmi, 1983] Selmi P. Computer-Assisted Cognitive-Behavior Therapy in the Treatment of Depression. Ph.D Thesis, Univ. of Wisconsin, Madison

[Servan-Schreiber, 1986] Servan-Schreiber D. Artificial Intelligence in Psychiatry.
Journal of Nervous and Mental Disease 174:191-202

[Sleeman, 1982] Sleeman D. Assessing Aspects of Competence in Basic Algebra. In Sleeman D and Brown JS (Eds) Intelligent Tutoring Systems. Academic Press: New York

[Stevens et al., 1982] Stevens A, Collins A, Goldin SE. Misconceptions in Students' Understanding. In Sleeman D and Brown JS (Eds) Intelligent Tutoring Systems. Academic Press: New York

[Wagman, 1980] Wagman M. PLATO DCS: An Interactive Computer System for Personal Counseling. Journal of Counseling Psychology 27:16-30

[Woolf & McDonald, 1984] Woolf B, McDonald DD. Context-Dependent Transitions in Tutoring Discourse. In Proceedings of the National Conference on Artificial Intelligence (Austin, Texas) pp. 355-361
Foundations of Assumption-Based Truth Maintenance Systems: Preliminary Report

Raymond Reiter(1)
Department of Computer Science
University of Toronto
Toronto, Ontario, Canada M5S 1A4

Johan de Kleer
Intelligent Systems Laboratory
XEROX Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto, California 94304

ABSTRACT

In this paper we (1) define the concept of a Clause Management System (CMS) -- a generalization of de Kleer's ATMS, (2) motivate such systems in terms of efficiency of search and abductive reasoning, and (3) characterize the computation effected by a CMS in terms of the concept of prime implicants.

1. A Problem-Solving Architecture

Figure 1 illustrates an architecture for a problem solving system consisting of a domain dependent Reasoner coupled to a domain independent Clause Management System (CMS). For our present purposes, the Reasoner is a black box which, in the process of doing whatever it does, occasionally transmits a propositional clause(2) to the CMS. The Reasoner is also permitted to query the CMS any time it feels so inclined. A query takes the form of a propositional clause C. The CMS is expected to respond with every shortest clause S for which the clause S ∨ C is a logical consequence, but S is not a logical consequence, of the clauses thus far transmitted to the CMS by the Reasoner. In Section 2 we show why obtaining such S's is important for many AI systems. For example, for abductive reasoning ¬S will be an hypothesis which, if known, sanctions the conclusion C. For efficient search ¬S defines a most general context in which C holds.

A traditional ATMS/TMS is a restricted CMS in which (1) the clauses transmitted to the CMS are limited to be either Horn (i.e., justifications) or negative (i.e., nogoods), and (2) the queries (C) are always literals. The fundamental TMS problem is to identify the contexts in which a given singleton clause C holds -- this is equivalent to querying the CMS for the shortest clauses S of the preceding paragraph, as the negation of each such S implies C.

(1) Fellow of the Canadian Institute for Advanced Research. This research was funded by the Canadian National Science and Engineering Research Council under grant A0044.
(2) In actual fact, the reasoner may transmit an arbitrary predicate calculus clause (containing variables for example), but this clause would be treated propositionally by the CMS. In other words, different atomic formulas are treated as different propositional symbols by the CMS.

[Figure 1: a Reasoner coupled to a Clause Management System; the Reasoner transmits clauses to, and receives minimal supports for C from, the CMS.]

Figure 1: A problem-solving architecture

2. Motivation and Formal Preliminaries

We shall assume a propositional language with countably infinitely many propositional symbols and with the logical connectives ∨, ¬. The connectives ∧, ⊃ are defined in terms of ∨, ¬ in the usual way, as are the formulas of the language. The definition of the entailment relation, ⊨, is also standard: if S is a set of formulas and w a formula, then S ⊨ w just in case every assignment of truth values to the propositional symbols of the language which makes each formula of S true also makes w true.

2.1. Definitions

A literal is a propositional symbol or the negation of a propositional symbol. A clause is a finite disjunction L1 ∨ ... ∨ Ln of literals, with no literal repeated. We shall often represent a clause by the set of its literals. The empty clause, denoted by {}, is the clause with no literals.
A clause is a tautology iff it contains a propositional symbol and the negation of that propositional symbol.

Let Σ be a set of clauses, and C a clause. A clause S is a support for C with respect to Σ iff Σ ⊭ S and Σ ⊨ S ∪ C. S is a minimal support for C with respect to Σ iff no proper subset of S is a support for C with respect to Σ.

We can think of the CMS as a repository for Σ -- some (not necessarily all) of the conclusions derived thus far by the Reasoner.(3) A support clause S for C with respect to Σ has the properties:

1. Σ ⊨ S ∪ C, i.e. Σ ⊨ ¬S ⊃ C.
2. Σ ⊭ S, i.e. Σ ∪ {¬S} is satisfiable.

Property 1 tells us that the conjunction of literals ¬S is an hypothesis which, if known to Σ (and hence to the Reasoner), would sanction the conclusion C. Property 2 precludes hypotheses inconsistent with Σ, since these would sanction any conclusion whatsoever. Finally, a minimal support clause S defines a shortest hypothesis ¬S which sanctions C or, as it were, a simplest conjecture from which C follows.

We are now in a position to specify the task which a CMS is to achieve. Recall that a CMS receives clauses transmitted to it by the Reasoner. Let Σ be the set of such clauses. Recall also that the Reasoner may query the CMS with a clause C. The task of a CMS is to determine all minimal support clauses for C with respect to Σ.

Example:
Σ = {{p}, {¬p, q, s}, {q, r, ¬t}}
Minimal supports for {p}: {}.
Minimal supports for {}: none.
Minimal supports for {q}: {s}, {r, ¬t}, {¬q}.
Minimal supports for {p, q}: {}.
Minimal supports for {s, r}: {q}, {¬s}, {¬r}.

It is important to observe that S being a minimal support clause for C is relative to Σ. In other words, ¬S is a simplest conjecture from which C follows with respect to what the CMS has been told about the knowledge available to the Reasoner. ¬S need not be a simplest conjecture so far as the Reasoner is concerned, since the Reasoner may have information relevant to this question of simplicity which it has failed to transmit to the CMS, or perhaps the Reasoner has failed to derive such relevant information.

(3) Remember that the Reasoner decides, on its own pragmatic grounds, which conclusions it transmits to the CMS and which it withholds.
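For small propositional cases the definitions above can be checked mechanically. The following brute-force sketch (mine, in Python; literals are strings, with "-x" for the negation of atom x, and entailment is decided by truth tables, so it is exponential and only for toy examples) enumerates candidate clauses S in order of size and keeps those with properties 1 and 2:

    from itertools import combinations, product

    def satisfies(clause, v):
        # literal "-x" is the negation of atom "x"
        return any((not v[l[1:]]) if l.startswith("-") else v[l] for l in clause)

    def entails(sigma, clause, atoms):
        # Sigma |= clause, decided by truth tables
        for bits in product([False, True], repeat=len(atoms)):
            v = dict(zip(atoms, bits))
            if all(satisfies(c, v) for c in sigma) and not satisfies(clause, v):
                return False
        return True

    def minimal_supports(sigma, query):
        atoms = sorted({l.lstrip("-") for c in list(sigma) + [query] for l in c})
        literals = atoms + ["-" + a for a in atoms]
        found = []
        for k in range(len(literals) + 1):       # smallest candidates first
            for s in map(frozenset, combinations(literals, k)):
                if any(prev <= s for prev in found):
                    continue                     # a smaller support already covers s
                if not entails(sigma, s, atoms) and \
                   entails(sigma, s | frozenset(query), atoms):
                    found.append(s)
        return found

    # The example above: the supports of {q} are {s}, {-q} and {r, -t}
    sigma = [{"p"}, {"-p", "q", "s"}, {"q", "r", "-t"}]
    print(minimal_supports(sigma, {"q"}))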
Why should a Reasoner find this notion of a minimal support clause of any value to it at all? There are at least two reasons:

2.2. Abductive Reasoning

Imagine a reasoning system with some knowledge base KB which, for simplicity of exposition, we take to be a set of first order sentences. Imagine further that the Reasoner has some goal formula g which it hopes to establish by a back-chaining inference procedure using KB as its premises, but that these premises are insufficient to prove g. Suppose the Reasoner recognizes this by its inability to expand any of the leaf nodes in the search tree of Figure 2, which we shall use by way of an example.(4)

[Figure 2: a back-chaining search tree for the goal g; an "or branch" below g leads to three "and branches" with leaf sets {p, q, r}, {¬p, q} and {¬q, r}.]

Figure 2: A back-chaining search tree.

From this the Reasoner can conclude:

KB ⊨ p ∧ q ∧ r ⊃ g, i.e. KB ⊨ ¬p ∨ ¬q ∨ ¬r ∨ g
KB ⊨ ¬p ∧ q ⊃ g, i.e. KB ⊨ p ∨ ¬q ∨ g
KB ⊨ ¬q ∧ r ⊃ g, i.e. KB ⊨ q ∨ ¬r ∨ g

Now, suppose the reasoner is concerned with performing abduction, which is to say that it is seeking an explanation for g. Perhaps g is some observation of the world and KB, the Reasoner's current theory of the world, is inadequate to explain g (i.e., KB ⊭ g). The explanation which the Reasoner seeks is an hypothesis which, together with its background knowledge KB, entails g. Trivially, for the example of Figure 2, there are three such explanations immediately at hand: p ∧ q ∧ r, ¬p ∧ q and ¬q ∧ r. But these are not the simplest possible explanations. It is the job of a CMS to provide such simplest explanations. Accordingly, the Reasoner transmits to the CMS the three clauses it inferred from Figure 2. The CMS now contains the set of clauses

Σ = {{¬p, ¬q, ¬r, g}, {p, ¬q, g}, {q, ¬r, g}}.

If the Reasoner now queries the CMS with the clause {g}, the CMS returns three minimal support clauses of {g} with respect to Σ, namely: {¬g}, {p, ¬q}, {¬r}. This means

Σ ⊨ g ⊃ g and hence KB ⊨ g ⊃ g,
Σ ⊨ ¬p ∧ q ⊃ g and hence KB ⊨ ¬p ∧ q ⊃ g, and
Σ ⊨ r ⊃ g and hence KB ⊨ r ⊃ g.

Thus, aside from the trivial explanation g, there are two simplest explanations for g, namely ¬p ∧ q and r.

Notice that we have in mind here quite specific notions of "explanation" and "simplest." Explanations are conjunctions of ground literals. A simplest explanation is one for which no proper sub-conjunct is an explanation. Finally, we insist that explanations be consistent with Σ, for otherwise we could explain anything!

(4) We are assuming here that g is a ground literal, and that, although KB is a set of first order sentences, the leaf nodes of Figure 2 are all ground literals.

Notice also that a CMS, as defined, is capable of providing simplest explanations only for g's which are disjunctions of ground literals. This is clearly not as general as one might like. For example, the Reasoner could have two observations g1 and g2 of the world for which it wishes simplest explanations, i.e., it wishes minimal conjuncts e such that Σ ⊨ e ⊃ g1 ∧ g2 and Σ ⊭ ¬e. Our CMS is not defined to handle this setting. In the full paper we show how a CMS can.

Finally, with reference to Figure 2, notice that we have taken the Reasoner to generate abductive inferences by a back-chaining mechanism which terminates with leaves of the search tree which cannot be expanded further. While this is one possible mechanism, others are also possible. For example, the Reasoner may have defined some distinguished set of literals which, in a back-chaining search, are never expanded. For the Reasoner, such literals define a class of acceptable assumptions which the Reasoner is prepared to make. Back-chaining is not essential; one can define resolution theorem-provers with suitable termination conditions. The unresolved literals of the uncompleted refutations can support abductive inferences (e.g., [Cox and Pietrzykowski, 1986]). Again, such unresolved literals may be determined by a prespecified class of assumptions acceptable to the Reasoner. There are many systems and proposals for abductive reasoning along the lines sketched above. Representative examples are residue resolution [Finger, 1985], the THEORIST system of [Poole, 1986], the hypothesis generation formalism of [Cox and Pietrzykowski, 1986], and the NLAG system for learning by analogy by [Greiner, 1986].

2.3. Efficient Search

By exploiting the CMS to organize and control search, much of the computation of the Reasoner can be avoided. Consider the following sequence of statements (from [de Kleer, 1986]):

A: x ∈ {0,1}    B: a = e1(x)
C: y ∈ {0,1}    D: b = e2(y)
E: z ∈ {0,1}    F: c = e3(z)
G: b ≠ c        H: a ≠ b

The functions ei require expensive computations, for example, ei(z) = (z + 100000)!.
Suppose that the Reasoner is based on chronological backtracking: it processes the statements A through H one at a time until an inconsistency is detected, in which case it backtracks to the most recent variable assignment it can change. The sequence of steps it might follow to find the two solutions is as follows:

1: Let x = 0, compute a = e1(0).
2: Let y = 0, compute b = e2(0).
3: Let z = 0, compute c = e3(0). As b = c, backtrack to 3.
4: Let z = 1, compute c = e3(1); b ≠ c but a = b, so backtrack to 2.
5: Let y = 1, compute b = e2(1).
6: Let z = 0, compute c = e3(0); b ≠ c, a ≠ b, solution.
7: Let z = 1, compute c = e3(1). As b = c, backtrack to 1.
8: Let x = 1, compute a = e1(1).
9: Let y = 0, compute b = e2(0).
10: Let z = 0, compute c = e3(0). As b = c, backtrack to 10.
11: Let z = 1, compute c = e3(1); b ≠ c, a ≠ b, solution.
12: Let y = 1, compute b = e2(1).
13: Let z = 0, compute c = e3(0); b ≠ c, but as a = b, backtrack to 13.
14: Let z = 1, compute c = e3(1); as b = c, stop.

Notice that this approach requires 14 expensive computations and 6 backtracks.

Now consider how a CMS might be used to improve this search. The CMS propositional symbols all represent equalities (e.g., 'x = 1'). The new search is the same as the chronological one with the following changes. Every time the Reasoner does some computation, it constructs a clause representing it (e.g., the computation of a = e1(0) from x = 0 is represented by x ≠ 0 ∨ a = e1(0)) and conveys this to the CMS. Before performing any computation, the Reasoner checks to determine whether the computation has been done previously. Before choosing (indicated by a 'Let' in the trace) a value for a variable, the Reasoner first queries the CMS to see whether the variable is determined by the current choices. If the variable is determined, no choice is necessary and processing proceeds. If the variable is not determined, it chooses a value which can be consistently added to the current choice set. The resulting problem-solving trace is:

1: Let x = 0, transmit x = 0 ∨ x = 1, x ≠ 0 ∨ a = e1(0).
2: Let y = 0, transmit y = 0 ∨ y = 1, y ≠ 0 ∨ b = e2(0).
3: Let z = 0, transmit z = 0 ∨ z = 1, z ≠ 0 ∨ c = e3(0), b ≠ e2(0) ∨ c ≠ e3(0). The current choice set is now inconsistent, so backtrack to 3.
4: z = 1 follows; transmit a ≠ e1(0) ∨ b ≠ e2(0). The current choice set is inconsistent, so backtrack to 2.
5: y = 1 follows; transmit y ≠ 1 ∨ b = e2(1).
6: Let z = 0, solution.
7: Let z = 1, transmit z ≠ 1 ∨ c = e3(1), b ≠ e2(1) ∨ c ≠ e3(1). The current choice set is inconsistent, so backtrack to 1.
8: Let x = 1, transmit x ≠ 1 ∨ a = e1(1).
9: Let y = 0.
10: z = 1 follows, solution.
11: Let y = 1, transmit a ≠ e1(1) ∨ b ≠ e2(1). The current choice set is inconsistent, so stop.

From this example we can see some of the advantages of a CMS-guided search. Intuitively, the CMS is functioning as an intelligent cache. For this example, the CMS approach requires 6, not 14, expensive computations, 3, not 5, backtracks, and 8, not 14, choices. Note that this particular search example exploits only a few of the capabilities of a CMS -- we present it only as an illustration of how a CMS could be utilized. It is relatively simple to invent a strategy for this particular problem which achieves the same efficiency; however, the CMS provides a general facility that achieves these advantages for any Reasoner.
The CMS performs many of the functions of a conventional TMS [Doyle, 1979] [Doyle, 1983] [McAllester, 1980]. Their advantages (and disadvantages) have been extensively discussed elsewhere (e.g., [de Kleer, 1986]).

3. Prime Implicants

Definition. A prime implicant of a set Σ of clauses is a clause C such that

1. Σ ⊨ C, and
2. For no proper subset C' of C does Σ ⊨ C'.

The concept of a prime implicant arises in solving the problem of two-level Boolean minimization of switching circuits [Birkhoff and Bartee, 1970, Ch. 6]. In this setting, one is required to synthesize a given Boolean function in sum-of-products form using the fewest total number of and-gates and or-gates. Our definition of prime implicant is the dual of that used in Boolean minimization, basically because for us, the Boolean function is represented by Σ, a set of clauses, and hence is in product-of-sums form. Despite this difference, we shall use the same terminology "prime implicant", since formally both concepts share the same properties modulo the duality between ∨ and ∧.

Notice that if Σ ⊭ p and Σ ⊭ ¬p for some propositional symbol p, then the tautology p ∨ ¬p is a prime implicant of Σ.

The following result is straightforward:

Proposition 1. If Σ is a set of clauses and C a clause, then Σ ⊨ C iff there is a prime implicant of Σ which is a subset of C.

Theorem 2. Suppose Σ is a set of clauses and C a clause. If S is a minimal support clause for C with respect to Σ, then there is a prime implicant Π of Σ such that Π ∩ C ≠ {} and S = Π − C.

Proof. We know that Σ ⊨ S ∪ C. Moreover, by the minimality of S, we know that S ∩ C = {}. By Proposition 1, there is a prime implicant Π of Σ such that Π ⊆ S ∪ C, say Π = S' ∪ C' where S' ⊆ S and C' ⊆ C. We prove first that C' ≠ {}, from which it follows that Π ∩ C ≠ {}. For if C' = {} then Π ⊆ S, and since Σ ⊨ Π it must be that Σ ⊨ S, which contradicts S being a support clause for C with respect to Σ. Finally, we prove that S' = S, so that Π = S ∪ C', and since S ∩ C = {} and C' ⊆ C it will follow that S = Π − C. To prove S' = S we assume the contrary and obtain a contradiction. So, suppose S' is a proper subset of S. Since Σ ⊭ S, then Σ ⊭ S'. Moreover, since Σ ⊨ Π and C' ⊆ C, then Σ ⊨ S' ∪ C. But then S' is a smaller support clause for C with respect to Σ than is S, which contradicts the minimality of S. QED.

Unfortunately, the converse of Theorem 2 is false, as the following example shows:

Σ = {{p1, c1}, {p1, p2, c2}},  C = {c1, c2}.

The prime implicants of Σ are: {p1, c1}, {p1, p2, c2}, {p1, ¬p1}, {p2, ¬p2}, etc. The prime implicant Π = {p1, p2, c2} satisfies Π ∩ C ≠ {}, but Π − C = {p1, p2} is not a minimal support clause for C with respect to Σ.

There is, however, an important partial converse of Theorem 2:
Moreover, it will allow us to generalize his system considerably. Notation. When C is a set of clauses and C a clause, qc,c> = {II - q-I is a prime implicant of C and JJnc # 01 MIN-SUPPORTS(C, C) = {SlS E n(C, C) and no clause ofn(C, C) is a proper subset of S}. Theorem 5. (Characterization of minimal support clauses.) MIN-SUPPORTS (C, C) is the set of all minimal support clauses of C with respect to C. 186 Automated Reasoning Proof. By Theorem 2, if S is a minimal support clause of C with respect to C then S E n(C, C). We must prove that no proper subset of S is in n(C, C). Suppose to the con- trary, for some proper subset S’ of S, that S’ E n(C, C). We shall prove that S’ is a support clause for C with re- spect to C, contradicting the minimality of S. Clearly, since C tfr S and S’ C S, C i# S’. It remains to show that C+S’uC. NowS’=I-I - C for some prime implicant II. Thus, S’UC = (fl-C)UC > II. Since C /= II, C k S’UC. Hence, S’ is a support clause for C with respect to C. Now suppose S E MIN-SUPPORTS (C, C). We must prove that S is a minimal support clause for C with respect to C, i.e., that 1. c t&s, 2. Ct=SuC,and 3. No proper subset ,of S has properties 1 and 2. Proof of 1: Since S = II - C for some prime implicant H of C such that II r) C # {}, S is a proper subset of II. Because H is a prime implicant of C, C /# S. Proof of 2: Since S = II - C for some prime implicant II, S U C = (IT - C) u c z, II. s ince C b II, C + S U C. Proof of 3: Assume to the contrary that S has a proper subset S’ with property 2, i.e., that C k S’ U C. By Proposition 1, C has a prime implicant II’ C S’ UC. Since S = II - C for some prime implicant H of C, S fl C = {}. Since S’ G S, S’ 17 C = {}. Hence, since H C S’ U C, II - C C S’ which is a proper subset of 5’; since II’ - C E n(C, C), S $! MIN-SUPPORTS (C, C), contradiction. QED. 4. Interpreted vs. Compiled There are two natural ways the CMS can store information and process queries issued to it by the Reasoner. 4.1 The Interpreted Approach The simplest storage mechanism is to encode the Rea- soner’s clauses just as they are, possibly indexed by the literals they contain for more efficient content addressable access. Thus, updating the CMS’s database with a new clause is quick and simple. The price one pays for this sim- plicity of storage is a high retrieval cost. To find all min- imal support clauses for C with respect to C, the CMS’s database requires computing MTN-SUPPORTS (C, C) by Theorem 5, and this can be an expensive conlputation.5 If the Reasoner is expected to issue many CMS updates but few queries, then this interpreted approach will be war- ranted. In the full paper we shall describe and justify an algorithm for computing MIN-SUPPORTS (C, C). 4.2 The Compiled Approach Under this approach, the CMS does not store the clauses transmitted to it by the Reasoner. Tnstead, it stores all 5 Jn fact, it is easy to show that the general problem is NP-hard. the prime impliccants of these clauses. This is potentially an explosive approach. It can be shown that there are Boolean functions in n variables with exponentially (in n) many prime implicants. Moreover, CMS updates can be very expensive since, if C is the CMS’s current database (consisting of the prime implicants of all clauses issued by the Reasoner thus far), and K is a new clause issued by the Reasoner, we must compute all the prime implicants of CU {K}. The reward for the high space and time complexity of this approach, by Theorem 5, is that retrieval of minimal support clauses is cheap. 
The first thing we must show is that there is no loss of information in representing a set C of clauses by PI(C) the set of its prime implicants, i.e., that C and PI(C) are logically equivalent. Theorem 6. Suppose C is a set of clauses. Then C and PI(C) are logically equivalent in the sense that if C E C, then PI(C) b C, and if C E PI(C) then C /= C. Proof. Trivial. Theorem 6 justifies the compiled approach of storing only the prime implicants of the Reasoner’s clauses in the CMS database. In the full paper we shall describe and justify an al- gorithm for updating a compiled CMS database i.e., for computing the prim e implicants of C U {K} assuming we already have all prime implicants of C. 5. De Kleer’s ATMS: A Reconstruction De Kleer’s 119861 A ssumption-Based Truth Maintenance System (ATMS) is a CMS constrained to process so-called Horn clauses. Moreover, the ATMS requires that the propo- sitional symbols have a distinguished subset called assump- tions. From the standpoint of the Reasoner, an assumption might be one of the distinguished propositional symbols which it is prepared to propose as part of an hypothesis to explain an observation in abductive reasoning (Section 2), or one of the propositional symbols forming part of a proposed solution to a constraint satisfaction problem (Section 2). Definition. A Horn clause is a clause in which at most one propositional symbol occurs unnegated. The general form of a Horn Clause is 1~1 V -a .V ~p,Vp for propositional symbols p, pl, . - .p,, n 2 0, or lpi V a . . V 744 2 0. Recall that, for the purposes of de Kleer’s ATMS, there is a distinguished subset of the propositional symbols called assumptions. We denote assumptions by upper-case A’s, usually subscripted, non-assumption propositional sym- bols by lower case p’s, and when the distinction + unim- portant by lower-case CX’S. In de Kleer’s approach, the Reasoner is constrained to transmit to the ATMS only Horn clauses. De Kleer Reiter 187 calls such transmitted Horn clauses justifications. When a clause has the form 1~1 V - - e V T(Y~ V (Y, cx is called the consequent of the clause, and (~1, - * -(Y, the antecedents of the clause. If n = 0, the consequence cx is called a premise. When formulated in our terms, the task of the ATMS is the following: Given J, the set of justifications transmitted thus far to the ATMS by the Reasoner, and cr, a propositional sym- bol (which may or may not be an assumption), compute (Al A. . . A A,] (~AI, . . ., lAn} is a minimal support clause for {a} with respect to J}. This set is what de Kleer calls a consistent, sound, com- plete and minimal label for Q. Corollary 4 immediately provides the following: Theorem 7. (Characterization of de Kleer’s ATMS) Suppose that J is the set of justifications transmitted to the A TMS by the Reasoner, and that {(;Y} is a query, where cy is a propositional symbol (which may or may not be an assumption). Then the answers to this query are given by {Al A.. - r\A,[k>O andTA1V...VTAkVcr is a prime imp&cant of J). In the full paper we characterize the algorithm used by de Kleer’s ATMS, and prove its correctness with respect to Theorem 7. 6. Generalizing the ATMS We can immediately see various ways to generalize de Kleer’s ATMS. To begin, justifications need not be Horn clauses. Thus we can define a justification to be any clause of the form fa1 v * * ’ V fa, V cy, where n > 0 and each cy is a propositional symbol which may or may not be an assump- tion. Moreover, the consequence (Y need not be atomic. We can allow la! 
6. Generalizing the ATMS

We can immediately see various ways to generalize de Kleer's ATMS. To begin, justifications need not be Horn clauses. Thus we can define a justification to be any clause of the form ±α1 ∨ ... ∨ ±αn ∨ α, where n ≥ 0 and each α is a propositional symbol which may or may not be an assumption. Moreover, the consequent α need not be atomic. We can allow ¬α as a consequent, or more generally, ±α1 ∨ ... ∨ ±αk can be taken to be a consequent. Finally, queries can be arbitrary clauses, not necessarily, as in de Kleer's ATMS, unit clauses. In the full paper, we elaborate on such possible generalizations. Notice that Theorem 5 characterizes query evaluation for any such generalization.

7. A Word on Computing Prime Implicants

The results of this paper rely on computing all, or some, prime implicants of a set Σ of propositional clauses. In the theory of switching circuit Boolean minimization, prime implicants are computed using the consensus method [Birkhoff and Bartee, 1970, Ch. 6]. Since our notion of prime implicant is the dual of that for switching theory, we are concerned with the dual of the consensus method, which turns out to be resolution [Robinson, 1965]. A brute force way of computing all prime implicants of Σ is to resolve pairs of clauses of Σ, add the resolvents to Σ, delete subsumed clauses, and repeat until no fresh clauses are obtained. The resulting clauses are all of the prime implicants of Σ. Obviously, we prefer a more disciplined approach to computing prime implicants. There are a few such approaches in the literature, e.g., [Minicozzi and Reiter, 1972] [Slagle et al., 1969]. The full paper considers the appropriateness of these and other algorithms for determining prime implicants.

References

[Birkhoff and Bartee, 1970] G. Birkhoff and T.C. Bartee. Modern Applied Algebra. McGraw-Hill, New York, 1970.

[Cox and Pietrzykowski, 1986] P.T. Cox and T. Pietrzykowski. Causes for events: their computation and applications. In Proc. 8th Int. Conf. on Autom. Deduction, Lecture Notes in Computer Science 230, pages 608-621, Springer-Verlag, 1986.

[de Kleer, 1986] J. de Kleer. An assumption-based TMS. Artificial Intelligence 28:127-162, 1986.

[Doyle, 1979] J. Doyle. A truth maintenance system. Artificial Intelligence 12:231-272, 1979.

[Doyle, 1983] J. Doyle. Some theories of reasoned assumptions: An essay in rational psychology. CS-83-125, Department of Computer Science, C.M.U., 1983.

[Finger, 1985] J.J. Finger. Residue: a deductive approach to design synthesis. Technical Report Stan-CS-85-1035, Knowledge Systems Laboratory, Stanford University, 1985.

[Greiner, 1986] R. Greiner. Learning by understanding analogies. Technical Report CSRI-188, Department of Computer Science, University of Toronto, 1986.

[McAllester, 1980] D. McAllester. An outlook on truth maintenance. AIM-551, Artificial Intelligence Laboratory, M.I.T., 1980.

[Minicozzi and Reiter, 1972] E. Minicozzi and R. Reiter. A note on linear resolution strategies in consequence-finding. Artificial Intelligence 3:175-180, 1972.

[Poole, 1986] D. Poole. Default reasoning and diagnosis as theory formation. Department of Computer Science Technical Report CS-86-08, University of Waterloo, 1986.

[Robinson, 1965] J.A. Robinson. A machine-oriented logic based on the resolution principle. J. ACM 12:23-41, 1965.

[Slagle et al., 1969] J.R. Slagle, C.L. Chang, and R.C.T. Lee. Completeness theorems for semantic resolution in consequence-finding. In Proceedings IJCAI-69, pages 281-285, Washington, D.C., 1969.
REASONING IN THE PRESENCE OF INCONSISTENCY

Fangzhen Lin
Department of Computer Science
Hua Chiao University, Fujian, P.R. China

ABSTRACT

In this paper, we propose a logic which is nontrivial in the presence of inconsistency. The logic is based on the resolution principle and coincides with the classical logic when premises are consistent. Among the results of interest to Automated Theorem Proving are a sound and sometimes complete three-valued semantics for the resolution rule, and a refutation process which is much in the spirit of the problem reduction format.

1. Introduction

There are at least two considerations in Computer Science and Artificial Intelligence that force us to study nontrivial reasoning in the presence of inconsistency. In database systems, we certainly do not wish our system to be wrecked by a single contradiction offered by the user, and we often need to draw some conclusions about objects which are irrelevant to the contradiction, because a contradiction is often difficult to detect and correct. In AI, there are many efforts to formalize common sense reasoning, for example, [McCarthy, 1980, 1986], [Reiter, 1980]. A general rule for explaining common sense reasoning may fail sometimes. For example, according to the closed world assumption [Reiter, 1978], a positive literal is not true if it is not a consequence of the facts in a database, so if we have a database expressed by the formula P(a)\/P(b), then we will run into contradiction by using the closed world assumption, for we can infer -P(a)/\-P(b). Therefore reasoning in the presence of inconsistency seems necessary in common sense reasoning (the above contradiction caused by the closed world assumption can be avoided by using circumscription [McCarthy, 1986], but as Davis showed in [Davis, 1980], circumscription can also cause inconsistency).

Systems that are not wrecked by contradictions have been studied by philosophers, logicians and computer scientists for years [da Costa, 1974, Belnap, 1976, Priest, 1979 and Martins and Shapiro, 1986]. In this paper, we propose a logic satisfying the following three conditions. In what follows, L is the new logic, G ||- A means A can be inferred from G in L, and G |- A means A can be inferred from G according to classical logic, where A is a formula and G is a set of formulas.

(A) If G is a consistent set of formulas in the sense of first order logic, then for any formula A, G ||- A iff G |- A.

(B) The problem of deciding whether G ||- A is true for finite G is partially solvable, that is, the set {(G,A) | G ||- A} is recursively enumerable.

(C) For any finite G, there is a formula A such that G ||- A is not true.

The reasons for the three conditions are as follows. First, condition (C) means that the deductive relation "||-" is nontrivial in every possible case. We require condition (A) because we wish the new logic L to be a faithful extension of the first order logic. Finally, the requirement (B) is necessary for the logic L to be implemented by a computer program.

The systems in [Priest, 1979] and [Martins and Shapiro, 1986] satisfy conditions (B) and (C) but not (A). As an example of a logic that satisfies (A) and (C) but not (B), define G ||- A iff G' |- A for every maximal subset G' of G such that G' is consistent in the sense of the first order logic. It is easy to see that (A) and (C) hold, but (B) is false, for the problem of deciding whether a finite set of formulas is consistent is not partially solvable. Note that this relation is a direct extension of a relation defined in [Rescher, 1964] in the propositional language to the first order one.
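The maximal-consistent-subset relation used in this counterexample is easy to experiment with in the propositional case. A brute-force sketch (mine, in Python; formulas are represented as predicates over a truth assignment, and everything is decided by enumeration):

    from itertools import combinations, product

    def models(formulas, atoms):
        for bits in product([False, True], repeat=len(atoms)):
            v = dict(zip(atoms, bits))
            if all(f(v) for f in formulas):
                yield v

    def consistent(formulas, atoms):
        return next(models(formulas, atoms), None) is not None

    def entails(formulas, a, atoms):
        return all(a(v) for v in models(formulas, atoms))

    def maximal_consistent_subsets(G, atoms):
        out = []
        for k in range(len(G), -1, -1):          # larger subsets first
            for sub in combinations(G, k):
                if consistent(list(sub), atoms) and \
                   not any(set(sub) < set(m) for m in out):
                    out.append(sub)
        return out

    def weak_entails(G, a, atoms):               # G ||- A in the sense above
        return all(entails(list(m), a, atoms)
                   for m in maximal_consistent_subsets(G, atoms))

    # G = {p, -p, q}: q follows from every maximal consistent subset; p does not.
    G = [lambda v: v["p"], lambda v: not v["p"], lambda v: v["q"]]
    print(weak_entails(G, lambda v: v["q"], ["p", "q"]))  # True
    print(weak_entails(G, lambda v: v["p"], ["p", "q"]))  # False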
Our main point is that various contradiction tolerant systems can be constructed by restricting the classical rule of "reduction to absurdity". According to the rule, if we can infer a contradiction from A and -B, then we assert that B can be inferred from A. From this it is easy to see that everything can be inferred from a contradiction. So we must restrict the rule as: A infers B whenever we can deduce a contradiction from A and -B by using some information from -B. The idea is certainly not a new one; see, for example, [Dunn, 1976]. What makes our work new is just a new way of formalizing the statement: using some information from -B.

In Section 2, we consider the resolution principle as a general rule of "reduction to absurdity" and define a deductive relation satisfying the conditions (A) to (C) above based on the resolution principle. In Section 3, we show some connections of the results obtained in Section 2 with Automated Theorem Proving. Section 4 contains some concluding remarks.

2. Propositional Resolution

In this section, we shall focus our attention on the propositional logic; we consider how to use the propositional resolution to obtain a logic suitable for the reasoning in the presence of inconsistency. Our terminology is that of [Chang and Lee, 1973]. In particular, a deduction of a clause C from a set of clauses S is a sequence C1,...,Cn, where Cn is C and each Ci is either in S or a resolvent of clauses preceding Ci. A deduction of the empty clause [] from S is called a refutation.

Proof by the resolution principle is a complete rule of "reduction to absurdity" in the sense that for any formulas A and B, A |- B iff the union of S1 and S2 can be refuted by the resolution rule, where /\S1 and /\S2 are conjunctive normal forms of A and -B, respectively. Therefore, in order to prevent everything from being inferred from a contradiction, we have to restrict refutations of the union of S1 and S2. This motivates the following definitions.

Definition 1. Suppose S is a set of clauses and C1, C2, ..., Ck is a deduction from S. For any Ci, cu(Ci), the set of clauses used in inferring Ci, is defined inductively as follows:

(1) if Ci is in S, then cu(Ci) is {Ci};
(2) if Ci is a resolvent of Cm and Cn, m,n < i, then cu(Ci) is the union of cu(Cm) and cu(Cn).

Note that in Definition 1, if Ci is both in S and a resolvent of Cm and Cn for some natural numbers m,n < i, or Ci is a resolvent of more than one pair of clauses preceding it, then by Definition 1 there is more than one way to compute cu(Ci). In order to avoid ambiguities, in the following, when we write down a deduction C1,C2,...,Ck from S, we shall attach a fixed cu(Ck) to it, so a deduction from S is in fact a deduction from S with a fixed way of computing cu(.). Therefore L1\/-L2, L2, L1 with cu(L1) = {L1\/-L2, L2} and L1\/-L2, L2, L1 with cu(L1) = {L1} are considered as two different deductions of L1 from S = {L1\/-L2, L2, L1}.

Definition 2. Suppose (S1,S2) is a pair of sets of clauses. A sequence C1, C2, ..., Ck is a refutation of (S1,S2) iff it is a refutation of the union of S1 and S2 and there is a clause C which is both in S2 and cu(Ck).

It is conventional to transform a formula into a set of clauses. We can furthermore suppose the process of transformation is unique, so that for any formula we can speak of the set of clauses corresponding to the formula. The function of Definition 2 is illustrated by the following definition.

Definition 3. Suppose G is a set of formulas and F a formula, and S1 and S2 are the sets of clauses corresponding to G and -F, respectively. F can be inferred from G (by contradiction tolerant reasoning), written G ||- F, iff (S1,S2) can be refuted according to Definition 2.

For convenience of exposition, in this paper all propositions about "||-" are stated in terms of the refutability of a pair of sets of clauses. The transformation is obvious. Two propositions come directly from the definitions.

Proposition 1. Suppose (S1,S2) is a pair of sets of clauses. If S1 is consistent, then a sequence of clauses is a refutation of (S1,S2) iff it is a refutation of the union of S1 and S2.

Proposition 2. Suppose (S1,S2) is a pair of sets of clauses. If S1 and S2 have no common predicate and function symbols, then (S1,S2) can be refuted iff S2 can be refuted.

Proposition 1 and Proposition 2 correspond to the properties (A) and (C) in Section 1, respectively.
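For ground clauses, Definitions 1-3 can be prototyped directly. In the sketch below (mine, in Python), whole cu(.) sets are not needed: for Definition 2 it suffices to track one bit per derived clause, recording whether some deduction of it uses a clause of S2:

    from itertools import product

    def negate(l):
        return l[1:] if l.startswith("-") else "-" + l

    def refutable(s1, s2):
        # (S1,S2) can be refuted iff the empty clause is derivable by
        # resolution from S1 U S2 by a deduction using some clause of S2;
        # a clause in both sets gets both flags, matching Definition 1's
        # freedom in choosing cu(.)
        derived = {(frozenset(c), False) for c in s1} | \
                  {(frozenset(c), True) for c in s2}
        while True:
            new = set()
            for (c1, f1), (c2, f2) in product(derived, repeat=2):
                for l in c1:
                    if negate(l) in c2:
                        new.add(((c1 - {l}) | (c2 - {negate(l)}), f1 or f2))
            if new <= derived:
                return (frozenset(), True) in derived
            derived |= new

    # With G = {p, -p, q}: q is inferable but r is not, despite the contradiction.
    print(refutable([{"p"}, {"-p"}, {"q"}], [{"-q"}]))  # True:  G ||- q
    print(refutable([{"p"}, {"-p"}, {"q"}], [{"-r"}]))  # False: not G ||- r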
Suppose G i5 a s +ormuPas and F a formula, 81 and S2 are the sets of clauses corresponding to B and -FB respectively. F can be finbe!-ed 9rom El (by contradiction tolerant reasoning 9 , writtarn 6 II- F. if9 (S19S29 can be refuted according to k-2. FQF the convenience of express, in this paper o all propositions about 8’ I I-” are stated in term ob the refutationness of a pair of sets of clauses. The transformation is obvious. Two propositions come directly from the definitions. Proposition 1. Suppose (Sips29 is a pair of sets of clauses. If Sl is consistent then a sequence of claures is a refutation of (Sl,S29 iff it is a refutation of the union of Sl and S2. Proposition 2. Suppose tSl,S29 is a pair of ret9 of clau5iesi, if 91 and S2 have no common predicate and -function symbolsi, then (Sl ,S29 can be refuted if f S2 can be Fef mted. 140 Automated Reasoning Proposition I and Proposition 2 correspond to the properties (A9 and (C9 in Sec.1, respectively. IR ordelr t0 see when (Sl,S29 can be refuted in the ease that Sl is inconsistent, we need some more definitions. In the following, for any literal Lo we write -L as the literal such that if L is the atom PI, then -L is -ho and if L is the negation of the atom PIN then -L is A. The following lemma about the rescllution principle will play an important role in this paper. Lemma 1. Suppose S is a set of clauses, ClrC2ZJ.mm, Cm-0 ir a refutation of S. For any C=Ll\/... \/Lk in cu<Cn9 q there is a deduction of -Li from S for any i=1,2g...gk. %y the 1 emma, it is easy following theorem i s true. ta see the Theorem 1. Suppose Sl and S2 are two siets of clausess. (Sl ,S29 can be refuted iff there is a clause C=Ll\/...\/Lk in S2 such that for any i=l,...VkV there is a deduction of 4. from the union of Si and $52. In order to further our study, we introduce a semantics such that the resolution rule is always soundo and sometimes complete in the semantics. The semantics is a three-valued valuation, we define it for “-*I and “\/” other connectives are defined by iefiniti*ns: A/W = -t-&\/-B), A -> 18 = -AN/B* The three values are t (trueb, f (false) and p (79. There is no fixed meaning for “p”r sometime it can be understood as true and sometime false. The above truth tables are self-explanatory. CI (three-valued) valuation v is a mapping from atoms to Ct,f ,p>. It Is conventional to extend the domain of a valuation to the set of formulas. For any set S of f ormuals and formula F, F is a (three-valued) semantic consequence of Sg written S I= F, iff for any valuation v, if for any member Cs of S, v(cI9 ir not 8, then v(F) is not f either. Example 1. CLl, -Ll\/L23 13 L2 i.5 true9 but Ll := Ll\/L2 is not true, where Ll and L2 are different literals. Theorem 2. Suppose S is a set of cl au5es o C is a clause. If there ie a deduction of C from 5, then 8 I= C, Theorem 2 shows that the resolution rule is sound within our (three-valued9 semant i cs . The converse (completeness9 of the theorem is also true if the clause C in the theorem is a literal. Theorem 3. Suppose § is a set of clausesi, L a literal. If 8 I= L, then there is a deduction of L from S. In terms of ” I I-” 4 Theorem 2 and Theorem 3 correspond to the following theorem. Theorem 4. Suppose G is a set of formulas and C a clause,. 8 is the set of clauses corresponding to G. e havcq G II- -G if8 s I= -c. 
Theorem 2. Suppose S is a set of clauses and C is a clause. If there is a deduction of C from S, then S |= C.

Theorem 2 shows that the resolution rule is sound within our (three-valued) semantics. The converse (completeness) of the theorem is also true if the clause C in the theorem is a literal.

Theorem 3. Suppose S is a set of clauses and L a literal. If S |= L, then there is a deduction of L from S.

In terms of "||-", Theorem 2 and Theorem 3 correspond to the following theorem.

Theorem 4. Suppose G is a set of formulas, C a clause, and S the set of clauses corresponding to G. We have: G ||- -C iff S |= -C.

Note that the result of the theorem is not true if we replace S |= -C by G |= -C; that is, the process of transforming a formula to its conjunctive normal form is not truth-preserving according to our three-valued semantics. In fact, the problem is the distribution laws; it is easy to see that |= A\/(B/\C) -> (A\/B)/\(A\/C) is not true. In a sense, our three-valued semantics is a weakening of the conventional two-valued semantics: for any set of formulas G and formula F, it is easy to see that if G |= F then G |- F, but the converse is not true.

It is of interest to note that the three-valued semantics can be weakened further. If we just change the truth table for "\/" above so that the truth value of A\/B is p, not f, when A is f and B is p or A is p and B is f, then we get a semantics which is exactly the one in [Priest, 1979]. It can be proved that for any set G of formulas and formula F, if F is a semantic consequence of G according to the new semantics (with t and p designated), then G ||- F, but the converse is not true.

Now let us see how to extend the above results to the first-order level. Suppose S1 and S2 are sets of clauses. Closed(S1,S2) is the pair (ClosedS1, ClosedS2), where ClosedSi is the set of ground instances C(t1,...,tn) of clauses C in Si, with t1,...,tn terms in the domain of the union of S1 and S2, i = 1,2. For any sets S1 and S2 of clauses, (S1,S2) can be refuted iff Closed(S1,S2) can be refuted according to Definition 2. So for any formulas A and B, A ||- B iff (S1,S2) can be refuted, where S1 and S2 are the sets of clauses corresponding to A and -B, respectively.

The three conditions in Sec. 1 are still true when the relation there is replaced by "||-" here. Condition (A) is easy. Note that for condition (C) to be true we must assume that our language is infinite, for if the language is finite (for example, there is only one predicate P(x)), then it is easy to see that for any formula B, (x)(P(x)/\-P(x)) ||- B is true. For condition (B), note that Closed(S1,S2) can be refuted iff there are two finite sets S1' and S2' such that Si' is included in Closed(Si), i = 1,2, and (S1',S2') can be refuted.

Finally, before concluding this section, we would like to point out that relevant logics of similar spirit to the one developed in this section can be obtained by formalisms other than resolution. For example, as one of the reviewers has pointed out, the set-of-support theorem-proving strategy (included in the MESON format; see [Loveland and Stickel, 1973]) is a convenient formalism. The other formalism we have used is the 'coupled tableaux' system [Lin, 1987]. It is certainly of interest to establish connections among the various relevant logics which satisfy the conditions (A) to (C) above and are based on different formalisms. But this is still an open problem.

3. Some Applications in Automated Theorem Proving

It is of interest to notice that the results obtained in Sec. 2 motivate a refutation process which is in the spirit of the problem reduction format and its extension, the MESON format [Loveland and Stickel, 1973].

Theorem 5. Suppose S is a set of clauses, L is a literal, and S1 is the subset of S such that -L does not occur in any member of S1. Then there is a deduction of L from S iff there is a deduction of L from S1.

Note that Theorem 5 corresponds to the repeated goals deletion rule [Loveland and Reddy, 1981]. In fact, we consider it the most general form of the repeated goals deletion rule in clausal form.

A refutation process motivated by Theorem 5 is as follows:

(1) S can be refuted iff there is a clause C = L1\/...\/Lk in S such that for any i = 1,...,k, there is a deduction of -Li from S.

(2) For any literal L, there is a deduction of L from S iff there is a clause C = L1\/...\/Lk in S1 such that for any i = 1,...,k, there is a deduction of -Li from the union of S1 and S2, where S1 = {C | L\/C in S and -L not in C} and S2 = {C | C in S and neither L nor -L in C}.

(3) For any literal L, if L is in S, then there exists a deduction of L from S.
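This process admits a direct recursive rendering; the sketch below is our own (propositional case only; clauses are frozensets of nonzero integers and -L is the sign flip of L). Termination is guaranteed because each recursive call removes the atom of L from the clause set entirely.

def deducible(lit, clauses):
    # rule (3): a unit clause is its own deduction
    if frozenset([lit]) in clauses:
        return True
    # rule (2): s1 = remainders of clauses containing lit (but not -lit);
    # s2 = clauses mentioning neither lit nor -lit
    s1 = [c - {lit} for c in clauses if lit in c and -lit not in c]
    s2 = [c for c in clauses if lit not in c and -lit not in c]
    u = s1 + s2
    return any(c and all(deducible(-m, u) for m in c) for c in s1)

def refutable(clauses):
    # rule (1): some clause has the complement of each of its literals deducible
    return any(all(deducible(-lit, clauses) for lit in c) for c in clauses)

# Example 6.1 of [Chang and Lee, 1973], with P, Q, R encoded as 1, 2, 3:
S = {frozenset([-1, -2, 3]), frozenset([1, 3]),
     frozenset([2, 3]), frozenset([-3])}
assert refutable(S)   # matches the hand refutation worked out below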
Example. S = {-P\/-Q\/R, P\/R, Q\/R, -R}. This is Example 6.1 in [Chang and Lee, 1973]. Chang and Lee used this example to show the necessity of introducing mechanisms for reducing the useless clauses generated by the general resolution rule. Let us refute S using the process described above: S can be refuted if there is a deduction of R from S; if there is a deduction of -Q from {-P\/-Q, P, Q}; if there is a deduction of -P from {-P}; but by (3) above, there is indeed a deduction of -P from {-P}, so S can be refuted.

Again note that the results we have obtained in this section can easily be extended to the first-order level. Let us see an example.

Example. S = {(1),(2),(3),(4),(5),(6),(7)}, where (1) = -E(x)\/V(x)\/S(x,f(x)), (2) = -E(x)\/V(x)\/C(f(x)), (3) = P(a), (4) = E(a), (5) = -S(a,y)\/P(y), (6) = -P(x)\/-V(x), (7) = -P(x)\/-C(x). This is Example 5.22 in [Chang and Lee, 1973]. A refutation process for S, when the rules (1) to (3) above are suitably extended to the first-order level, looks as follows (in what follows, for any formula F(x), F(x)|{x=t1,...,tk} means that F(t1),...,F(tk) have been used in the resolution process and need not be used again):

S can be refuted if there is a deduction of -P(a) from S; if there is a deduction of V(a) from {(1),(2),(4),(5),(6)|{x=a},(7)|{x=a},-C(a)}; if there are deductions of E(a) and -C(f(a)) from S1 = {(1)|{x=a},(2)|{x=a},(4),(5),(6)|{x=a},(7)|{x=a},-C(a),-E(a)\/S(a,f(a))}; if there is a deduction of -C(f(a)) from S1; if there is a deduction of P(f(a)) from {(1)|{x=a},(2)|{x=a},(4),(5),(6)|{x=a},(7)|{x=a,f(a)},-C(a),-E(a)\/S(a,f(a))}; if there is a deduction of S(a,f(a)) from {(1)|{x=a},(2)|{x=a},(4),(5)|{x=f(a)},(6)|{x=a},(7)|{x=a,f(a)},-C(a),-E(a)\/S(a,f(a))}; if there is a deduction of E(a) from {(1)|{x=a},(2)|{x=a},(4),(5)|{x=f(a)},(6)|{x=a},(7)|{x=a,f(a)},-C(a)}; but (4) = E(a), so S can be refuted.

4. Concluding Remarks

Intuitively, as Halpern said in [Halpern, 1986], reasoning in the presence of inconsistency is an issue which needs to be considered eventually in the design of knowledge bases, for it is always possible to receive contradictory information from users. In practice, we think, few reasoning systems can infer everything from a contradiction. For example, in most Prolog implementations, a program P (which is a set of Horn clauses) answers a question ?- L (L a literal) with "yes" iff the union of P and {-L} can be refuted by using linear input resolution with -L as the top clause, iff (P,{-L}) can be refuted, iff P ||- L, according to our definitions. Therefore, the logic proposed in this paper may be considered a formalization of the logic used by some practical reasoning systems. Conversely, we hope the results obtained in this paper will be useful in the design of practical reasoning systems.
Acknowledgement

I am grateful to Joe Halpern, Donald W. Loveland, Graham Priest and two reviewers for helpful comments on an earlier draft of this paper.

References

Belnap, N.D. (1977), "A useful four-valued logic", in Modern Uses of Multiple-Valued Logic (eds. G. Epstein and J.M. Dunn), Reidel, 1977.

Chang, C.L. and Lee, R.C.T. (1973), Symbolic Logic and Mechanical Theorem Proving, Academic Press, 1973.

da Costa, N. (1974), "On the theory of inconsistent formal systems", Notre Dame Journal of Formal Logic XV (1974), 497-509.

Dunn, J.M. (1976), "Intuitive semantics for first-degree entailments and 'coupled trees'", Philosophical Studies 29 (1976), 149-168.

Davis, M. (1980), "Notes on the mathematics of non-monotonic reasoning", Artificial Intelligence 13 (1980).

Halpern, J.Y. (1986), "Reasoning about knowledge: an overview", in Proceedings of the Conference on Theoretical Aspects of Reasoning about Knowledge (ed. J.Y. Halpern), Morgan Kaufmann, 1986.

Lin, Fangzhen (1987), "Tableau systems for some paraconsistent logics", submitted to the Journal of Philosophical Logic.

Loveland, D.W. and Stickel, M.E. (1973), "A hole in goal trees: some guidance from resolution theory", in IJCAI-1973.

Loveland, D.W. and Reddy, C.R. (1981), "Deleting repeated goals in the problem reduction format", J. ACM 28 (1981), 646-661.

Martins, J.P. and Shapiro, S.C. (1986), "Theoretical foundations for belief revision", in Proceedings of the Conference on Theoretical Aspects of Reasoning about Knowledge (ed. J.Y. Halpern), Morgan Kaufmann, 1986.

McCarthy, J. (1980), "Circumscription - a form of non-monotonic reasoning", Artificial Intelligence 13 (1980).

McCarthy, J. (1986), "Applications of circumscription to formalizing common-sense knowledge", Artificial Intelligence 28 (1986), 89-116.

Mitchell, J.C. and O'Donnell, M.J. (1986), "Realizability semantics for error-tolerant logics", in Proceedings of the Conference on Theoretical Aspects of Reasoning about Knowledge (ed. J.Y. Halpern), Morgan Kaufmann, 1986.

Priest, G. (1979), "The logic of paradox", Journal of Philosophical Logic 8 (1979), 219-241.

Reiter, R. (1978), "On closed world data bases", in Logic and Data Bases (eds. H. Gallaire and J. Minker), Plenum Press, 1978.

Reiter, R. (1980), "A logic for default reasoning", Artificial Intelligence 13 (1980).

Rescher, N. (1964), Hypothetical Reasoning, North-Holland, 1964.
1987
17
607
V. Nageshwara Rao, Vipin Kumar and K. Ramesh
Artificial Intelligence Laboratory
Computer Science Department
University of Texas at Austin
Austin, Texas 78712

ABSTRACT

This paper presents a parallel version of the Iterative-Deepening-A* (IDA*) algorithm. Iterative-Deepening-A* is an important admissible algorithm for state-space search which has been shown to be optimal both in time and space for a wide variety of state-space search problems. Our parallel version retains all the nice properties of the sequential IDA* and yet does not appear to be limited in the amount of parallelism. To test its effectiveness, we have implemented this algorithm on the Sequent Balance 21000 parallel processor to solve the 15-puzzle problem, and have been able to obtain almost linear speedups on the 30 processors that are available on the machine. On machines where a larger number of processors are available, we expect that the speedup will still grow linearly. The parallel version seems suitable even for loosely coupled architectures such as the Hypercube.

1. INTRODUCTION

Search permeates all aspects of AI, including problem solving, planning, learning, decision making, and natural language understanding. Even though domain-specific heuristic knowledge is often used to reduce search, the complexity of many AI programs can be attributed to large potential solution spaces that have to be searched. With the advances in hardware technology, hardware is getting cheaper, and it seems that parallel processing could be used cost-effectively to speed up search. Due to their very nature, search programs seem naturally amenable to parallel processing. Hence many researchers have attempted to develop parallel versions of various AI search programs (e.g., game tree search [KANAL 81], [LEIFKER 85], [FINKEL 82], [FINKEL 83], [MARSLAND 82]; AND/OR graph search [KUMAR 84], [KIBLER 83]; state-space search [RAO 87], [IMAI 79], [KORNFELD 81]). Even though it may seem that one could easily speed up search N times using N processors, in practice N processors working simultaneously may end up doing a lot more work than a single processor. Hence the speedup can be much less than N. In fact, early experience in exploiting parallelism in search was rather negative. For example, Fennel and Lesser's implementation of Hearsay II gave a speedup of 4.2 with 16 processors [FENNEL 77] (Kibler and Conery mention many other negative examples in [CONERY 85]). This early experience led to a pessimism that perhaps AI programs in general have very limited effective parallelism.

This work was supported by Army Research Office grant #DAAG29-84-K-0060 to the Artificial Intelligence Laboratory and by a Parallel Processing Equipment grant from ONR to the Department of Computer Science at the University of Texas at Austin.

We have developed a parallel version of Iterative-Deepening-A* (IDA*) [KORF 85] that does not appear to be limited in the amount of parallelism. To test its effectiveness, we have implemented this algorithm to solve the 15-puzzle problem on the Sequent Balance 21000 parallel processor, and have been able to obtain almost linear speedup using up to the 30 processors that are available on the machine. On machines where a larger number of processors are available, we expect that the speedup will still grow linearly.

Iterative-Deepening-A* is an important admissible state-space search algorithm, as it runs in asymptotically optimal time for a wide variety of search problems. Furthermore, it requires only linear storage.
In contrast, A* [NILSSON 80], the best-known admissible state-space search algorithm, requires exponential storage for most practical problems [PEARL 84]. From our experience in parallelizing IDA* and A* [RAO 87], we have found that IDA* is more amenable to parallel processing than A* in terms of simplicity and overheads. The parallel version of IDA* is also efficient in storage.

In Section 2, we present an overview of IDA*. In Section 3, we discuss one way of parallelizing IDA* and present implementation details. In Section 4, we present speedup results of our parallel IDA* for solving the 15-puzzle problem on the Sequent Balance 21000. Section 5 contains concluding remarks. Throughout the paper, we assume familiarity with the standard terminology (such as "admissibility", "cost function", etc.) used in the literature on search [NILSSON 80], [PEARL 84].

2. ITERATIVE-DEEPENING-A* (IDA*)

Iterative deepening consists of repeated bounded depth-first search (DFS) over the search space. In each iteration, IDA* performs a cost-bounded depth-first search, i.e., it cuts off a branch when its total cost (f = g + h) exceeds a given threshold. For the first iteration, this threshold is the cost (f-value) of the initial state. For each new iteration, the threshold used is the minimum of all node costs that exceeded the (previous) threshold in the preceding iteration. The algorithm continues until a goal is expanded. If the cost function is admissible, then IDA* (like A*) is guaranteed to find an optimal solution.

For exponential tree searches, IDA* expands asymptotically the same number of nodes as A*. It is quite clear that the storage requirement of IDA* is linear with respect to the depth of the solution. For a detailed description of IDA* and its properties, the reader is referred to [KORF 85]. In the following figure we give an informal description of IDA*.

Fig. 1

IDA*(startstate, h, movegen)
/* h is an admissible heuristic function for the problem */
/* movegen(state, fun) generates all sons of state and returns them
   ordered according to heuristic function fun. Such an ordering is not
   essential for admissibility, but may improve performance in the last
   iteration */
/* cb is the cost bound for the current iteration */
/* nextcb is the cost bound for the next iteration */
nextcb = h(startstate) ;
while (not solutionfound)
    cb = nextcb ;
    nextcb = +infinity ;
    PUSH(startstate, movegen(startstate, h)) ;
    depth = 1 ;
    while (depth > 0)
        if there are no children in the TOP element of the stack
            POP ; depth = depth - 1 ;               /* BACKTRACK */
        else
            remove nextchild from TOP ;
            if (nextchild.cost <= cb)
                if nextchild is a solution
                    solutionfound = TRUE ; QUIT ;
                PUSH(nextchild, movegen(nextchild, h)) ;
                depth = depth + 1 ;                 /* ADVANCE */
            else
                nextcb = MIN(nextcb, nextchild.cost) ;
/* POP, PUSH and TOP are operations on the DFS stack */
/* The elements of the stack are state-children pairs */
/* The children are ordered according to h. This ensures that the
   children of a node are explored in increasing h order */
/* The cost function used is f(n) = g(n) + h(n) */
{ End of Fig. 1 }
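For concreteness, the following is a compact executable rendering of Fig. 1 (our own Python transcription, not the authors' code). We assume movegen(state) yields (successor, edge-cost) pairs, that h is zero exactly at goal states, and that cycle avoidance is left to movegen, as is usual for the 15-puzzle.

import math

def ida_star(start, h, movegen):
    next_cb = h(start)
    while True:
        cb, next_cb = next_cb, math.inf
        # the DFS stack of Fig. 1: (state, g, iterator over children)
        stack = [(start, 0, iter(movegen(start)))]
        path = [start]
        while stack:
            state, g, children = stack[-1]
            step = next(children, None)
            if step is None:                      # BACKTRACK
                stack.pop()
                path.pop()
                continue
            child, cost = step
            f = g + cost + h(child)
            if f <= cb:
                if h(child) == 0:                 # goal reached
                    return path + [child]
                stack.append((child, g + cost, iter(movegen(child))))
                path.append(child)                # ADVANCE
            else:
                next_cb = min(next_cb, f)
        if next_cb == math.inf:
            return None                           # search space exhausted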
3. A PARALLEL VERSION OF IDA* (PIDA*)

3.1 Basic Concepts

We parallelize IDA* by sharing the work done in each iteration (i.e., each cost-bounded depth-first search) among a number of processors. Each processor searches a disjoint part of the cost-bounded search space in a depth-first fashion. When a process has finished searching its part of the (cost-bounded) search space, it tries to get an unsearched part of the search space from the other processors. When the cost-bounded search space has been completely searched, the processors detect termination of the iteration and determine the cost bound for the next iteration. When a solution is found, all of them quit.

Since each processor searches the space in a depth-first manner, the part of the state space available to it is easily represented by a stack (of node-children pairs) such as the one used in IDA* (see Fig. 1). Hence each processor maintains its own local stack on which it performs bounded DFS. When the local stack is empty, it demands work from other processors. In our implementation, at the start of each iteration all the search space is given to one processor, and the other processors are given null space (i.e., null stacks). From then on, the state space is divided and distributed among the various processors. The basic driver routine in each of the processors is given in Fig. 2.

Fig. 2

PROCESSOR(i)
while (not solutionfound)
    if work is available in stack[i]
        perform Bounded DFS on stack[i] ;
    else if (GETWORK = SUCCESS)
        continue ;
    else if (TERMINATION = TRUE)
        /* determine cost bound for the next iteration */
        cb = MIN { nextcb[k] | 1 <= k <= N } ;   /* k varies over the set of processors */
        initialize stack, depth, cb and nextcb for the next iteration
{ End of Fig. 2 }

Since the cost bounds for each iteration of PIDA* are identical to those of IDA*, the first solution found by any processor in PIDA* is an optimal solution. Hence all the processors abort when the first solution is detected by any processor. Due to this, it is possible for PIDA* to expand fewer or more nodes than IDA* in the last iteration (PIDA* expands exactly the same nodes as IDA* up to the last-but-one iteration, as all these nodes have to be searched by both PIDA* and IDA*), depending upon when a solution is detected by a processor. Even on different runs for solving the same problem, PIDA* can expand a different number of nodes in the last iteration, as the processors run asynchronously. If PIDA* expands fewer nodes than IDA* in the last iteration, then we can observe a speedup greater than N using N processors. This phenomenon (of greater than N speedup on N processors) is referred to as acceleration anomaly [LAI 83].

In PIDA*, at least one processor at any time is working on a node n such that everything to the left of n in the (cost-bounded) tree has been searched. Suppose IDA* and PIDA* start an iteration at the same time with the same cost bound, and assume that IDA* is exploring a node n at a certain time t. Clearly all the nodes to the left of n (and none of the nodes to the right of n) in the tree must have been searched by IDA* by time t. It can be proven that if overheads due to parallel processing (such as locking, work transfer, termination detection) are ignored, then PIDA* would also have searched all the nodes to the left of n (plus more to the right of n) by time t. This guarantees the absence of deceleration anomaly (i.e., speedup of less than 1 using N > 1 processors) for PIDA*, as PIDA* running on N processors would never be slower than IDA* for any problem instance.

3.2 Implementation Details

As illustrated in Fig. 2, PIDA* involves three basic procedures to be executed in each processor: (i) when work is available in the stack, perform bounded DFS; (ii) when no work is available, try to get work from other processors; (iii) when no work can be obtained, check whether termination has occurred. Notice that communication occurs in procedures (ii) and (iii).
The objective of our implementation is to see that (i) when work is being exchanged, communication overheads are minimized; (ii) work is exchanged between processors infrequently; and (iii) when no work is available, termination is detected quickly. Fig. 3 illustrates the bounded DFS performed by each processor. This differs slightly from the bounded DFS performed by IDA* (Fig. 1).

Fig. 3

Bounded DFS(startstack, movegen, h)
/* Work is available in the stack and depth, cb, nextcb have been
   properly initialized. */
excdepth[i] = -1 ;
while ((not solutionfound) and (depth[i] > 0))
    if there are no children in the top element of the stack
        POP ; depth[i] = depth[i] - 1 ;             /* BACKTRACK */
        if (depth[i] < excdepth[i])
            lock stack[i] ;
            excdepth[i] = depth[i]/2 ;
            unlock stack[i] ;
    else
        remove nextchild from TOP[i] ;
        if (nextchild.cost <= cb)
            if nextchild is a solution
                solutionfound = TRUE ;
                send quit message to all other processors ;
                QUIT ;
            PUSH[i](nextchild, movegen(nextchild, h)) ;
            depth[i] = depth[i] + 1 ;               /* ADVANCE */
            excdepth[i] = MAX(depth[i]/2, excdepth[i]) ;
        else
            nextcb[i] = MIN(nextcb[i], nextchild.cost) ;
{ End of Fig. 3 }

To minimize the overhead involved in transferring work from one processor to another, we associate a variable excdepth[i] with the stack of processor i. The processor i which works on stack[i] permits other processors to take work only from below excdepth[i]. (We follow the convention that the stack grows upwards.) The stack above excdepth[i] is completely its own, and the processor works uninterrupted as long as it is in this region. It can increment excdepth[i] at will, but can decrement it only under mutual exclusion. Access to regions under excdepth[i] needs mutual exclusion among all processors. A deeper analysis of the program in Fig. 3 shows that this scheme gives almost unrestrained access to each processor for its own stack. The rationale behind keeping excdepth = depth/2 is to ensure that only a fraction of the work is locked up at any time by processor i. In a random tree with branching factor b, if the stack of processor i has depth d, then the fraction of work exclusively available to processor i is 1/(b**(d/2)), which is quite small. But this work is big enough that the processor can keep working for a reasonable amount of time before locking the whole stack again. This ensures that work is exchanged between processors infrequently.

Fig. 4

GETWORK
for (j = 0 ; j < NUMRETRY ; j++)
    increment target ;
    if work is available at target below excdepth[target]
        lock stack[target] ;
        pick work from target ;
        unlock stack[target] ;
        return (SUCCESS) ;
return (FAIL) ;
/* When GETWORK picks work from target, the work available in the target
   stack is effectively split into 2 stacks. We need to copy path
   information from startstate in order to allow later computations on
   the two stacks to proceed independently */
{ End of Fig. 4 }

The procedure GETWORK describes the exact pattern of exchange of work between processors. The processors of the system are conceptualized to form a ring. Each processor maintains a number named target, the processor from which it is going to demand work next. (Initially, target is the processor's neighbour in the ring.) Starting at target, GETWORK tries to get work from the next few processors in a round-robin fashion. If no work is found after a fixed number of retries, FAIL is returned.
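Fig. 4 leaves the actual split implicit. One concrete possibility (entirely our own sketch, not the authors' code) is for the thief to take half of the unexpanded children at each stack level below the cutoff:

def split_stack(stack, excdepth):
    # stack[k] = [state, children]; children is the list of unexpanded
    # sons at that level, with level 0 at the bottom of the stack
    donated = []
    for k in range(min(excdepth, len(stack))):
        state, children = stack[k]
        half = len(children) // 2
        if half > 0:
            donated.append((state, children[half:]))   # thief's share
            stack[k][1] = children[:half]              # donor keeps the rest
    return donated  # the thief must also copy path information down to these states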
The termination algorithm is the ring termination algorithm of Dijkstra [DIJKSTRA 83]. This algorithm suits our implementation very well and is very efficient. Due to lack of space, we omit the exact details of the algorithm here.

4. PERFORMANCE RESULTS

We implemented PIDA* to solve the 15-puzzle problem on the Sequent Balance 21000, a shared-memory parallel processor. We ran our algorithm on all thirteen problem instances given in Korf's paper [KORF 85] for which the number of nodes expanded is less than two million. (Two million nodes was chosen as the cutoff, as the larger problems take quite a lot of CPU time; besides, we were still able to get 13 problems, which is a reasonably large sample size.) Each problem was solved using IDA* on one processor, and using PIDA* on 9, 6 and 3 processors. As explained in the previous section, for the same problem instance PIDA* can expand a different number of nodes in the last iteration on different runs. Hence PIDA* was run 20 times in each case and the speedup was averaged over the 20 runs.

The speedup results vary from one problem instance to another. For the 9-processor case, the average speedup for different problem instances ranged from 3.46 to 16.27. The average speedup over all the instances was 9.24 for 9 processors, 6.56 for 6 processors and 3.16 for 3 processors (Fig. 5). Even though for the 13 problems we tried the average speedup is superlinear (i.e., larger than N for N processors), in general we expect the average speedup to be sublinear. This follows from our belief that PIDA* would not in general expand fewer nodes than IDA* (otherwise a time-sliced version of PIDA* running on a single processor would in general perform better than IDA*). Our results so far show that PIDA* does not appear to expand any more nodes than IDA* either. Note that the sample of problems we used in our experiment is unbiased (Korf generated these instances randomly). Hence in general we can expect the speedup to be close to linear.

To study the speedup of the parallel approach in the absence of anomaly, we modified IDA* and PIDA* (into AIDA* and APIDA*) to find all optimal solutions. This ensures that the search continues over all of the search space within the cost bound of the final iteration in both AIDA* and APIDA*; hence both explore exactly the same number of nodes. In this case the speedup of APIDA* is quite consistently close to N for N processors for every problem instance (Fig. 6). For 9 processors the speedup is 8.4, for 6 processors it is 5.6, and for 3 processors it is 2.8. We also solved more difficult instances of the 15-puzzle (requiring 8 to 12 million nodes) on a Sequent machine with 30 processors. As shown in Fig. 6, the speedup grows almost linearly even up to 30 processors. This shows that our scheme of splitting work among different processors is quite effective. The speedup is slightly less than N because of the overheads introduced by distribution of work, termination detection, etc.

5. CONCLUDING REMARKS

We have presented a parallel implementation of the Iterative-Deepening-A* algorithm. The scheme is quite attractive for the following reasons. It retains all the advantages of sequential IDA*, i.e., it is admissible and still has a storage requirement linear in the depth of the solution. Since the parallel processors of PIDA* expand only those nodes that are also to be expanded by IDA*, conceptually (i.e., discounting overheads due to parallel processing) the speedup should be linear.
Furthermore, the scheme has very little overhead. This is clear from the results obtained for the all-solutions case. In the all-solutions case, both sequential and parallel algorithms expand exactly the same number of nodes; hence any reduction in speedup for N (> 1) processors is due to the overheads of parallel processing (locking, work transfer, termination detection, etc.). Since this reduction is small (the speedup is ~0.93N for N up to 30), we can be confident that the overhead of our parallel processing scheme is very low. The effect of this overhead should come down further if PIDA* is used to solve a problem (e.g. the Traveling Salesman Problem) in which node expansions are more expensive. Even on the 15-puzzle, for which node expansion is a rather trivial operation, the speedup shows no sign of degradation up to 30 processors. For large-grain problems (such as TSP) the speedup could be much more.

Even though we implemented PIDA* on the Sequent Balance 21000 (which, being a bus-based architecture, does not scale up beyond 30 or 40 processors), we should be able to run the same algorithm on different parallel processors such as BBN's Butterfly, the Hypercube [SEITZ 85] and FAIM-1 [DAVIS 85]. Parallel processors such as the Butterfly and Hypercube can easily be built with hundreds of processors. Currently we are working on the implementation of PIDA* on these two machines.

The concept of consecutively bounded depth-first search has also been used in game-playing programs [SLATE 77] and automated deduction [STICKEL 85]. We expect that the techniques presented in this paper will also be applicable in these domains.

We would like to thank Joe Di Martino of Sequent Computer Corp. for allowing us the use of their 30-processor system for conducting experiments.

REFERENCES

[CONERY 85] Conery, J.S. and Kibler, D.F., "Parallelism in AI Programs", IJCAI-85, pp. 53-56.
[DAVIS 85] Davis, A.L. and Robison, S.V., "The Architecture of the FAIM-1 Symbolic Multiprocessing System", IJCAI-85, pp. 32-38.
[DIJKSTRA 83] Dijkstra, E.W., Feijen, W.H. and Van Gasteren, A.J.M., "Derivation of a Termination Detection Algorithm for a Distributed Computation", Information Processing Letters, Vol. 16, No. 5, 83, pp. 217-219.
[FENNEL 77] Fennel, R.D. and Lesser, V.R., "Parallelism in AI Problem Solving: A Case Study of Hearsay II", IEEE Trans. on Computers, Vol. C-26, No. 2, 77, pp. 98-111.
[FINKEL 82] Finkel, R.A. and Fishburn, J.P., "Parallelism in Alpha-Beta Search", Artificial Intelligence, Vol. 19, 82, pp. 89-106.
[FINKEL 83] Finkel, R.A. and Fishburn, J.P., "Improved Speedup Bounds for Parallel Alpha-Beta Search", IEEE Trans. Pattern Anal. and Machine Intell., Vol. PAMI-5, 83, pp. 89-91.
[IMAI 79] Imai, M., Yoshida, Y. and Fukumura, T., "A Parallel Searching Scheme for Multiprocessor Systems and Its Application to Combinatorial Problems", IJCAI-79, pp. 416-418.
[KANAL 81] Kanal, L. and Kumar, V., "Branch and Bound Formulations for Sequential and Parallel Game Tree Searching: Preliminary Results", IJCAI-81, pp. 569-574.
[KIBLER 83] Kibler, D.F. and Conery, J.S., "AND Parallelism in Logic Programs", IJCAI-83, pp. 539-543.
[KUMAR 84] Kumar, V. and Kanal, L.N., "Parallel Branch-and-Bound Formulations for And/Or Tree Search", IEEE Trans. Pattern Anal. and Machine Intell., Vol. PAMI-6, 84, pp. 768-778.
[KORF 85] Korf, R.E., "Depth-First Iterative-Deepening: An Optimal Admissible Tree Search", Artificial Intelligence, Vol. 27, 85, pp. 97-109.
[KORNFELD 81] Kornfeld, W.,
"The Use of Parallelism to Implement a Heuristic Search", IJCAI-81, pp. 575-580.
[LAI 83] Lai, T.H. and Sahni, S., "Anomalies in Parallel Branch and Bound Algorithms", 1983 International Conference on Parallel Processing, pp. 183-190.
[LEIFKER 85] Leifker, D.B. and Kanal, L.N., "A Hybrid SSS*/Alpha-Beta Algorithm for Parallel Search of Game Trees", IJCAI-85, pp. 1044-1046.
[MARSLAND 82] Marsland, T.A. and Campbell, M., "Parallel Search of Strongly Ordered Game Trees", Computing Surveys, Vol. 14, No. 4, pp. 533-551, 1982.
[NILSSON 80] Nilsson, N.J., Principles of Artificial Intelligence, Tioga Press, 80.
[PEARL 84] Pearl, J., Heuristics, Addison-Wesley, Reading, MA, 1984.
[RAO 87] Nageshwara Rao, V., Kumar, V. and Ramesh, K., "Parallel Heuristic Search on a Shared Memory Multiprocessor", Tech. Report TR87-45, AI Lab, Univ. of Texas at Austin, January 87.
[SEITZ 85] Seitz, C., "The Cosmic Cube", Commun. ACM, Vol. 28, No. 1, 85, pp. 22-33.
[SLATE 77] Slate, D.J. and Atkin, L.R., "CHESS 4.5 - The Northwestern University Chess Program", in Frey, P.W. (ed.), Chess Skill in Man and Machine, Springer-Verlag, New York, pp. 82-118, 1977.
[STICKEL 85] Stickel, M.E. and Tyson, W.M., "An Analysis of Consecutively Bounded Depth-First Search with Applications in Automated Deduction", IJCAI-85, pp. 1073-1075.

[Fig. 5: Average speedup vs. number of processors for PIDA*.]
[Fig. 6: Average speedup vs. number of processors for APIDA* (all-solutions case).]
1987
18
608
Mathematical Institute, Oxford University
Oxford, England OX1 3LB

Abstract

Multiple possible solutions can arise in many domains, such as scene interpretation and speech recognition. This paper examines the efficiency of multiple-context TMSs, such as the ATMS, in solving a scene representation problem which we call the Vision Constraint Recognition problem. The ATMS has been claimed to be quite efficient for solving problems with multiple possible solutions, even for problems with large databases. However, we present evidence that for large databases with multiple possible solutions (which we argue occur frequently in practice), such multiple-context TMSs can be very inefficient. We present a class of problems for which using a multiple-context TMS is both intrinsically interesting and ideal, but which will be computationally infeasible because of the exponential size of the database which the TMS must explore. To circumvent such infeasibility, appropriate control must be exerted by the problem solver.

1. Introduction

The TMS is one of the most important general AI algorithms developed, and has been applied to a wide range of areas, including qualitative process theory [4]; circuit analysis [6]; analog circuit design (SYN) [5]; and vision [7], [8].

In this paper we examine more closely the performance of multiple-context TMSs ([2], [10], [12]) on certain problems which generate a large number of contexts. Problems with a large number of contexts and multiple possible solutions are not artificial, and can arise in many domains, such as scene interpretation ([1], [7], [8]) and speech recognition/understanding [9]. In vision, one is typically dealing with noisy, ambiguous data with complex local/global constraint interactions. In text understanding, each sentence may, on its own, have many different interpretations, and one is attempting to piece together many such localized interpretations to develop a holistic meaning. Many equally plausible solutions arise in the presence of ambiguous constraints, giving rise to multiple possible local interpretations for each such constraint. And typically, these local interpretations interact in complex manners to produce many feasible global interpretations.

(The author gratefully acknowledges the support of a Scholarship from the Rhodes Trust, Oxford.)

We investigate the use of the TMS in solving high-level vision problems as a means of better understanding multiple-context TMSs. High-level vision is an ideal domain for studying multiple-context TMSs, and specifically the ATMS ([2], [3]), because of the ubiquity of multiple simultaneous interpretations. It is precisely this ability to generate multiple simultaneous solutions that has prompted the use of the ATMS in a variety of areas, e.g. [4], [6]. Single-context TMSs, also known as justification-based TMSs (JTMSs), e.g. [11], are less well suited to solving such problems because their strict adherence to a single consistent context (interpretation) represents an inadequate method of attacking the problem.
Regarding the ATMS, de Kleer states in [2]: "the observed efficiency of the ATMS is a result of the fact that it is not that easy to create a problem which forces the TMS to consider all 2^n environments without either doing work of order 2^n to set up the problem or creating a problem with 2^n solutions." We present the Vision Constraint Recognition System (VCRS) [13] (1) as a novel means of solving certain high-level vision problems, but (2) also as evidence that there naturally exist domains in which multiple-context TMSs are forced to consider an exponential number of solutions. Also, we state the results of a complexity analysis of multiple-context TMSs corroborating the VCRS's evidence that, for complex visual recognition problems, such TMSs are often forced to explore a number of contexts exponential in the size of the database, this number of contexts being generated by problems with an exponential number of either final or partial solutions. As a consequence, such TMSs will be inefficiently slow in solving such problems.

The rest of this paper is organized as follows. In Section 2, we briefly describe the VCRS, discussing the reasons for and advantages gained by using an ATMS for a visual recognition system which instantiates a figure in an image consisting of overlapped rectangles. Then, we conduct a simple combinatorial analysis of the effect of nogoods in reducing the search space explored by multiple-context TMSs, and hence comment on the efficiency of such TMSs.

2. Vision Constraint Recognition System (VCRS)

Perception can be considered an interpretive process, and a key problem is interpreting descriptions computed for a scene against a (typically large) database of models. Many examples, such as Rubin's vase and Necker's cube, teach us that a single image (e.g. a perfect line drawing) can have several equally plausible perceptual interpretations. The problem we are solving exemplifies these characteristics. We use a multiple-context TMS precisely because of its ability to generate multiple possible interpretations.

The specific high-level vision problem we have studied is called the Constraint Recognition problem, and is a generalization of the PUPPET problem, first studied by Hinton [8]. The problem solved by the VCRS is as follows: given a set of (2-D) randomly overlapping rectangles and a relational and geometric description of a figure (as described by a set of constraints over the overlap patterns of k of these rectangles), find the best figure if one exists. Our use of a TMS generalizes Hinton's integer relaxation-based techniques by recognizing that the set of justifications which the TMS maintains for any database assertion is isomorphic to an explicit perceptual interpretation for that assertion.

Our plan for applying a multiple-context TMS to this problem is as follows:

- A TMS generates a justification structure for each node, the structure indicating how that node was assigned a label. This justification structure corresponds to a perceptual structure (e.g. rectangle A is seen as a trunk because rectangle B is seen as a neck), via appropriate spatial relationships, etc. (see the sketch after this list).

- Different perceptual interpretations correspond to different contexts. Locally plausible visual fragments can be interpreted in many ways, and each interpretation is accorded a context.

- Taken together, the above points imply a large number of contexts even for moderately complex visual input.
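To make the first point concrete, here is a toy sketch of an ATMS-style label update (our own encoding, not code from [2] or the VCRS): an environment is a set of assumptions such as ('B', 'head'), and a node's label is the set of minimal environments, consistent with the known nogoods, under which its datum holds.

from itertools import product

def update_label(label, antecedent_labels, nogoods):
    # label: set of frozensets (environments) for the justified node;
    # antecedent_labels: one label per antecedent of the justification
    for choice in product(*antecedent_labels):
        env = frozenset().union(*choice)
        if any(ng <= env for ng in nogoods):
            continue                              # contradictory: drop it
        if not any(e <= env for e in label):      # keep the label minimal
            label = {e for e in label if not env < e} | {env}
    return label

A node asserting "E is a trunk" would then carry one environment per distinct supporting head/neck choice, which is exactly the multiplicity counted in Section 2.2.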
2.1 Advantages of Using a TMS for Visual Interpretation

Let us now outline the advantages over relaxation-based methods (e.g. [8]) afforded by a multiple-context TMS.

Explicit semantics for images via justifications. TMSs explicitly store justifications for all labeling assignments. Thus if rectangle G is assigned the label "hand," the reason for that assignment, e.g. that rectangle F, labeled "forearm," overlapped G according to some constraint, is stored.

Studying many different alternative solutions. An algorithm which can explore multiple interpretations simultaneously is more useful than one which explores one context at a time, as locally contradictory interpretations (which are outlawed for single-context TMSs) may not necessarily indicate global inconsistency but rather multiple global interpretations.

Utilizing updated input. The truth maintenance aspect of TMSs enables updating of databases with the input of new information. Both [1] and [7] use the TMS for creating a consistent interpretation of stereo data, for example.

Constraint exposing. Such a notion of semantics forms the basis for a powerful constraint-exposing process, one example of which is contradiction flagging. By tracing justification paths for the nodes in a nogood back to the assertions causing the contradiction (identical to dependency-directed backtracking), we can identify incorrect or impossible assertions, rule these out, and consequently eliminate all possible solutions based on these global inconsistencies. In this manner, we can rule out large portions of the search space.

Robustness given noise. Scenes with noisy data occur frequently, and a TMS can extract interpretations from noisy or ambiguous situations. This is achieved by the TMS's justification structure "cutting through" noise: rectangles extraneous to the figure (e.g. a puppet) will not be included in the justification structure and will be ignored by the system.

Robustness given occluded/incomplete scenes. This occurs via two mechanisms:

1. Automatic default mechanisms: these are incorporated in the TMS and can be used to fill out incomplete (but plausible) figures.

2. Justified default mechanisms: the justification structure has an explicit notion of "completeness" of a figure, and can flag an almost-perfect figure using a "closeness relationship" with respect to this notion of completeness. This gives a semantics for the notion of defaults; for example, we might have "this default is an arm, because this figure would be a perfect puppet if such an arm were present".

Explicit (domain-dependent) constraints. An explicit notion of domain-dependent constraints has been found necessary to provide a powerful means of reducing the search space. For example, in the detection of puppet figures, such constraints include the representation of geometric structure in terms of posture and global scaling. A puppet having a right and a left side, and an upright or reclining posture, introduces much more powerful constraints (which can significantly reduce the search space) than if those concepts were not present. Hence, an arm being a right arm rather than a left arm determines the allowable angle of the elbow joint quite specifically. A sense of global scaling is also crucial, as a thigh can be a thigh only in proportional relation to the trunk and calf to which it is attached.

2.2 Performance of the TMS within the VCRS

Let us now briefly outline some simple examples of problems the VCRS can solve. (For full details consult [13].)
In Figure 1 we see a sample input for the VCRS: randomly overlapping rectangles in which the target figure, a puppet, is distinguishable. Figure 1 shows a much-simplified example of the program's operation. Here we have a situation in which four orientations can produce a puppet, taking either A, B, C, or D as a head. For example, if B is chosen as a head, the partial puppet (head, neck, trunk) consists of rectangles B, B', E. Moreover, it is ambiguity such as shown in this figure that gives rise to multiple contexts during the search for a solution, as well as to multiple possible solutions. When processing complicated scenes in searching for puppets, a multiple-context TMS builds a context for each possible puppet-figure interpretation. Even for relatively simple cases we have discovered that the number of contexts formed can be unreasonably large. In the above example, four environments are necessary for just a small number of rectangles (A, A', B, B', C, C', D, D', E).

[Figure 1: VCRS example for detecting a 15-element puppet.]

Let us now look at two inter-related reasons why a very large number of environments will need to be constructed for this problem, which results in the ATMS creating an exponentially large number of contexts.

Size of nogoods expected. For complex figures, once we have found a seed, it is reasonably easy to form the first few elements of the figure, and it then becomes increasingly difficult, with inconsistencies more liable to occur. This means that, of the seeds found, the majority of the nogoods found will be of size >= k, with k dependent on the complexity of the problem. Thus, if there are 100 seeds found and k is roughly 10, the actual space which must be searched is extremely large, as nogoods of large size, as shown in Section 3, will not reduce the search space very much even if there are many such nogoods.

Expected number of partial solutions. The number of environments constructed increases rapidly as problem solving progresses. Consider a partial puppet consisting of a head, neck and trunk (A, A', E respectively). As in Figure 1, if this trunk has 4 overlaps which could be upper arms and 4 which could be thighs, we can have (4 choose 2)^2 = 36 possible interpretations. Now, if an upper arm and a thigh have 2 possible fore-arms and calves respectively, this gives (5 choose 2)^2 = 100 interpretations. Even for this very simple example we can already see the combinatorial explosion of the number of necessary environments. This combinatorial explosion grows even faster (i.e. is more serious) the more complex the scene and the figure for which we are looking. This points out that, even if we end up finding just a few full figures, there may be an exponential number of environments for the partial figures at an intermediate stage of the solution process.
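Under the squared-binomial reading of those counts (our reconstruction of the arithmetic), the numbers are reproduced by:

from math import comb   # Python 3.8+

trunk_level = comb(4, 2) ** 2   # 2 of 4 upper arms and 2 of 4 thighs: 36
limb_level = comb(5, 2) ** 2    # one joint further out: 100
print(trunk_level, limb_level)  # 36 100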
We call such inconsis- tent subsets nogoods, and the consistent subsets contexts. We denote the set of contexts C & A. There are 2” envi- ronments and (;) environments with k facts. A minimal nogood is a subset B from which removing a single fact will leave either the null set or a context. It is important to note that all supersets of a nogood set are also nogood sets. Let us call the size of the minimal (or “seed”) nogood set Q, size meaning the number of facts contained in the nogood. In the following discussion, we shall be referring to a general algorithm which attempts to determine all maxi- mal contexts, where a maximal context is a set C* C C such that either: (1) ] C* ] = n, or (2) C*U{U} is inconsistent for all facts a E A \ C*. Such an algorithm proceeds by form- ing all subsets (representing partial solutions), first of size 1, then of size 2, etc. until we produce maximal contexts. Nogoods are used to prune the search space by eliminating all supersets of minimal nogoods from the search space. It must be noted that, in its full generality, this algorithm, referred to as interpretation construction in [2], is isomor- phic to the minimum set covering problem (which is NP- complete). The ATMS utilizes the most efficient method of interpretation construction given the specific problem, but for certain problems the exponential complexity is un- avoidable, and is unavoidable for any algorithm searching multiple contexts. The example of algorithm which we shall be using is the ATMS, although this analysis is equally valid for algo- rithms which use a similar multiple-context approach. We will now isolate the factors necessary to avoid exponential growth of the search space. In this analysis, we show the power of nogoods of small size in cutting down the number of contexts, and hence the size of search space. We also see that even for problems in which the number of solutions is non-exponential in the problem size n, the number of par- tial solutions could still be very large, and hence produce an unreasonably large number of contexts. 3.2 Analysis of Search-Space Reduction Using Nogoods We begin this combinatorial analysis by looking at how nogoods reduce the search space. We introduce the prob- lem with the simplest case, that in which the seed nogoods are non-overlapping. An overlap occurs between two seed (or minimal) nogoods ngl and ng2 if ngl n ng2 # 0. A non-overlapping problem is one in which none of the seed nogoods have overlaps: for the set U of seed nogoods, wi n ngj = 0, Vngi,ngj E U, i # j. We then proceed to more general cases, analyzing the complex nogood in- teractions when we have overlapping of seed nogoods. Due to space limitations, we provide just a sample of our results without proofs, and refer the reader to [13] for these proofs and a more intelligible analysis. 3.2.1 Non-overlapping Nogood Analysis Lemma 1 For a problem with n distinct facts, x “seed” (minimal) non-overlapping nogoods each of size Q produces Q(x, oz) total nogoods, where @(x, a) = 2,-,( 2 - 2F”b4)+“1(1- Lemma 1 describes the size of space overlapping nogoods all of equal size. generated by non- Lemma 2 For a non-overlapping problem with a nogoods of size cy, b nogoods of size p, c nogoods of size 7, etc., an upper bound for the number of nogoods formed is given by @((~,a), (b,p), (c,7), ..) 5 2”(~2-~ + b2-a + ~2-~+ . . . . ). Lemma 2 extends Lemma 1 to cases of non-overlapping nogoods of different sizes. 
Given that we know the search-space reduction achieved by non-overlapping nogoods, we next investigate the reduction achieved by nogoods of specific sizes.

Corollary 1. For all n, alpha > 0 and x >= 2, to a close approximation: 1. the fractional reduction of the search space is constant, independent of n; 2. the effect of a nogood in reducing the search space is inversely proportional to its size; 3. Phi(x, alpha+1) / Phi(x, alpha) = 1/2.

Corollary 1 shows that the size of the nogood has a significant effect on this reduction. More importantly, Corollary 1 implies that the reduction in the size of the search space is inversely proportional to the size of the seed nogood, and in fact diminishes by 1/2 as the size of the nogood is increased by 1. This means that, for the largest reduction of the search space, it is best to have nogoods as small as possible.

We have completed a simulation of this combinatorial analysis which provides empirical confirmation of our analytic results: namely, the % reduction is independent of n, the size of the problem, and it is most advantageous to have minimal nogoods of as small a size as possible.

3.2.2 Overlapping Nogood Analysis

We now turn to an analysis of multiple overlapping nogoods. The difficult aspect is modeling the complex interactions of the nogoods, namely taking account of the complicated manner in which overlapping occurs when several nogoods are present; it is important not to double-count supersets of nogoods.

Lemma 3. A problem in which overlaps of nogoods occur is convertible to one in which they do not occur.

Lemma 3 implies that many of the results which we have obtained so far for non-overlapping problems can be used for this more complicated case. Let us now state one of the major results of [13], an upper bound on the size of the search-space reduction by a set of nogoods.

Theorem 1. An upper bound for a problem defined by the parameters ((a,alpha),(b,beta),(c,gamma),...), with the nogoods overlapping randomly, is given by Phi((a,alpha),(b,beta),(c,gamma),...) <= 2^n (a 2^-alpha + b 2^-beta + c 2^-gamma + ...).

Our (worst-case) problem is thus still O(2^n) over a wide range of nogood parameters ((a,alpha),(b,beta),...). It must be emphasized that the value of 2^n for n = 100 is 1.26 x 10^30, so even for relatively large search-space reductions a huge amount of the search space still remains. From the previous section, we see that nogoods cut down this number. However, any problem which forces the ATMS to construct a substantial portion of the environment lattice will cause inefficient ATMS performance.

The real problem is that ATMS interpretation construction is intrinsically NP-complete. We have just described a problem which brings out this exponential behaviour. The solution to such a combinatorial explosion of the solution space is either ensuring that the constraints will generate small nogoods or carefully controlling the problem solving. The principal aim of this latter course of action is to constrain the ATMS to look at one solution at a time, using a dependency-directed backtracking mechanism, or employing consequent reasoning and stopping when a single solution is found (i.e. to revert to JTMS-style behaviour). This, however, appears to be an extreme reaction, since for problems such as this, exploring multiple solutions would be ideal.
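Theorem 1's bound can likewise be spot-checked with randomly overlapping seeds (our own check, reusing the brute-force counter above):

from itertools import combinations
from random import sample, seed

seed(0)
n = 10
seed_sets = [set(sample(range(n), 3)) for _ in range(4)]  # overlapping, size 3
total = sum(1 for k in range(n + 1) for e in combinations(range(n), k)
            if any(s <= set(e) for s in seed_sets))
assert total <= 2**n * 4 * 2**-3   # Theorem 1's bound for 4 nogoods of size 3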
Our two main complexity results are the following. First, problems such as the visual constraint recognition problem described here can have a very large number of solutions, and such problems are not pathological (as claimed by de Kleer in [2]) but occur naturally. To the contrary, we argue that the most challenging problems facing AI are exactly those with multiple possible solutions. Second, despite the claim of de Kleer [2], you do not need a problem with 2^n solutions to make the ATMS infeasibly slow. Even with a fraction of these solutions the ATMS can "blow up." This is because cases exist in which problems with a moderate number of complete solutions may have an exponential number of partial solutions, forcing the ATMS to construct an exponential number of intermediate contexts.

One important contribution of this research is the beginning of a classification of problems for which different TMSs are suited. The performance of JTMSs and ATMSs is highly problem-specific, and as yet little or no empirical or theoretical work has been done to define a better problem classification based on TMS efficiency. There is no doubt that for moderately sized problems there are many cases for which the ATMS is the most efficient TMS algorithm. However, for large and complex problems (e.g. vision and speech-understanding problems), this efficiency can be lost in constructing an environment lattice whose size is often exponential with respect to the database size.

Acknowledgements

I have received a great deal of comments and encouragement from Mike Brady. Many thanks to Johan de Kleer and Ken Forbus for providing me with their TMSs.

References

[1] J. Bowen and J. Mayhew. Consistency Maintenance in the REVgraph Environment. Technical Report AIVRU 020, University of Sheffield, 1986.
[2] J. de Kleer. An assumption-based TMS. AI Journal, 28:127-162, 1986.
[3] J. de Kleer. Problem solving with the ATMS. AI Journal, 28:197-224, 1986.
[4] J. de Kleer and J. Brown. A Qualitative Physics Based on Confluences. AI Journal, 24:7-83, 1984.
[5] J. de Kleer and G. Sussman. Propagation of Constraints Applied to Circuit Analysis. Circuit Theory and Applications, 8, 1980.
[6] J. de Kleer and B. Williams. Diagnosing Multiple Faults. AI Journal, 1987, to appear.
[7] M. Herman and T. Kanade. Incremental Reconstruction of 3D Scenes from Multiple, Complex Images. AI Journal, 30:289-341, 1986.
[8] G.E. Hinton. Relaxation and its Role in Vision. PhD thesis, University of Edinburgh, 1977.
[9] V.R. Lesser and L.D. Erman. A Retrospective View of the Hearsay-II Architecture. In Proc. IJCAI, 1977.
[10] J. Martins and S. Shapiro. Reasoning in Multiple Belief Spaces. In Proc. IJCAI, pp. 370-373, 1983.
[11] D. McAllester. A Widely Used Truth Maintenance System. Unpublished, 1985.
[12] D. McDermott. Contexts and Data Dependencies: a Synthesis. IEEE Trans. PAMI, 5(3):237-246, 1983.
[13] G. Provan. Using Truth Maintenance Systems for Scene Interpretation: the Vision Constraint Recognition System (VCRS). Robotics Research Group Report RRG-7, Oxford University, 1987.
1987
19
609
CCLISP on the iPSC Concurrent Computer

David Billstrom and Joseph Brandenburg
Intel Scientific Computers, Beaverton, Oregon 97006 USA

John Teeter
Gold Hill Computers, Cambridge, Massachusetts 02139 USA

Abstract

Concurrent Common LISP (CCLISP) is the LISP environment for the iPSC system, the Intel Personal SuperComputer. CCLISP adds message-passing communication and other constructs to the Common LISP environment on each processor node. The iPSC system is configured with Intel 80286 processor nodes, in systems ranging from 8 to 128 nodes. Performance on a per-node basis roughly equivalent to AI-workstation LISP performance is discussed, as are the implementation details of the CCLISP language constructs.

Concurrent Common LISP (CCLISP) is a LISP environment extended for concurrency for the Intel Personal SuperComputer (iPSC) System. This software environment enables the researcher to implement concurrent symbolic programs on the iPSC System in a familiar language: Common LISP. The iPSC System is based on the hypercube interconnect topology pioneered by architects at the California Institute of Technology [Seitz, 85]. The CCLISP environment consists of LISP listeners at each processing node, communicating with each other via message streams.

The iPSC System, first available in 1985, utilizes VLSI technology in each of the processing nodes. Processing nodes consist of an Intel 80286 processor, 512 Kbytes of random-access main memory, and ethernet-based communication processors. Processing nodes are packaged in a standalone cabinet, along with optional memory-expansion cards and vector processing cards. Additional memory enables 4.5 Mbytes of memory per node, and vector processing nodes offer better than 6 MFLOPS numeric performance per node. An Intermediate Host, currently a 286-based multiuser UNIX-based system, serves as network gateway, system administration console, and disk file system.

The availability of a concurrent LISP for a concurrent computer is attractive because existing artificial intelligence tools and applications may be ported from conventional computers and workstations. Several applications have already been moved to CCLISP, demonstrating substantial speedup by the use of many processors executing concurrently. The speedup demonstrated by these first applications provides the motivation for using concurrent computers for symbolic problems. Many symbolic problems are too large for currently available uniprocessor computers to solve in a reasonable time, or at all [Stolfo et al. 83], [Hillyer and Shaw, 86]. Computers such as the iPSC system are used to develop algorithms, which in turn will be used to implement applications on future very large scale concurrent computers.

1.1 Architecture

The Intel concurrent computer is based upon a message-passing architecture. Asynchronous processor nodes execute their own programs, sharing data by passing messages. Although several alternative architectures provide ways of sharing data between processes, most are disadvantaged as the collection of processing nodes increases to hundreds or thousands of nodes. Since future delivery systems are planned to utilize such large numbers of nodes, the architecture of the iPSC system easily accommodates thousands of nodes. Message passing avoids shared-memory architectural solutions, which require expensive and complex data buses or switches for large numbers of nodes [Lee, 85], [Pfister, 85]. Data is exchanged between processors not by accessing common memory, with semaphores or monitors for synchronization, but by requesting and sending data objects among the processors via messages. The architecture was attractive because of the low component cost, and also because the system scales to large sizes.

The message-passing architecture is also attractive because it translates relatively easily into language semantics and software protocols. Parallel architectures were built upon the pretext of message passing, such as ACTORS from MIT, even before such architectures existed in hardware. ACTOR languages have been implemented on top of workstations, maintaining message passing [Agha and Hewitt, 85]. (The MIT Artificial Intelligence Lab is in the process of bringing ACTORS to the iPSC system.)

2. Constructs

Each processing node has a complete and separate Common LISP environment, with interpreter, editor, compiler, and debugging facilities. The programmer can open a window to any Concurrent Common LISP environment, on any node, from a workstation. File access is provided from each node's CCLISP environment to connected workstations and to the system manager. Currently, the programmer edits and prototypes LISP code on the workstation, and then moves the source code down to a CCLISP node on the cube for testing, debugging, and compilation. Both compiled and interpreted code may be moved to other nodes as desired.

2.1 Message Passing
Data is exchanged between processors not by accessing common memory, with semaphores or monitors for synchronization, but by requesting and sending data objects among the processors via messages. The architecture was attractive because of the low component cost, and also because the system scales to large sizes.

The message-passing architecture is also attractive because it translates relatively easily into language semantics and software protocols. Parallel programming models were built upon the premise of message passing, such as ACTORS from MIT, even before such architectures existed in hardware. ACTOR languages have been implemented on top of workstations, maintaining message passing [Agha and Hewitt, 85]. (The MIT Artificial Intelligence Lab is in the process of bringing ACTORS to the iPSC system.)

2. Constructs

Each processing node has a complete and separate Common LISP environment, with interpreter, editor, compiler, and debugging facilities. The programmer can open a window to any Concurrent Common LISP environment, on any node, from a workstation. File access is provided from each node CCLISP environment to connected workstations and to the system manager. Currently, the programmer edits and prototypes LISP code on the workstation, and then moves the source code down to a CCLISP node on the cube for testing, debugging, and compilation. Both compiled and interpreted code may be moved to other nodes as desired.

2.1

As with other languages for the iPSC system (C and FORTRAN), the hypercube network is not directly visible to the programmer. Instead, the system provides a completely connected graph of processors. Resident on each processor node is a lightweight operating system, called NX (Node eXecutive), with two kinds of services: multitasking and message communications. Multiple UNIX-like processes may execute, passing messages between themselves and processes on other nodes. Communication services appear to the programmer as system calls such as recv, send, recvw, and sendw. The programmer is responsible for assembling messages in local memory, and for specifying the message buffer, buffer size, type of message, and target processor id when calling the system service. In CCLISP, rather than the original C, the message is somewhat simpler; an example of a message would be:

    (defstruct node-message
      connection        ; fixnum, NX channel id
      host-addr         ; fixnum, src/dest node
      correspondent-id  ; fixnum, process on host-addr
      type              ; fixnum (always 0 for now)
      buffer)           ; simple-vector with fill pointer

Host-addr, correspondent-id, and buffer are filled with information during the receipt of a message, with host-addr indicating the source node of the message, correspondent-id indicating the name of the sender of the message, and buffer filled with the incoming information. An example of a receive follows:

    (sys:recv node-message)

The buffer is a simple LISP array, containing either fixnums or characters. Messages passed at this level are compatible with C and FORTRAN processes and their message-passing routines, so this is a method of implementing so-called hybrid applications. These are applications with mixed processor nodes: LISP on extended-memory nodes, C on standard nodes, and FORTRAN on vector processing nodes.
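To make the transport level concrete, the following minimal sketch assembles and sends one such message. It is an illustration only, assuming a sys:send entry point symmetric to the sys:recv shown above; the slot values and target node are hypothetical, not taken from the CCLISP documentation.

    ;; Minimal sketch of a transport-level send. SYS:SEND is assumed by
    ;; symmetry with SYS:RECV above; the target node (3) and process id
    ;; are made up for illustration.
    (let ((msg (make-node-message
                 :connection 0                          ; NX channel id
                 :host-addr 3                           ; destination node
                 :correspondent-id 1                    ; receiving process on node 3
                 :type 0                                ; always 0 for now
                 :buffer (make-array 16 :fill-pointer 0))))
      ;; Assemble the outgoing data in the message buffer.
      (vector-push 42 (node-message-buffer msg))
      (sys:send msg))

It is this explicit buffer management, shared with the C and FORTRAN message routines, that makes the mixed-language hybrid configurations possible.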
For instance, LISP applications with graphical output have been implemented by passing messages to C processes outside the iPSC system on connected workstations, utilizing the existing C graphics libraries on those workstations [Brandenburg, 86].

2.2 Node Streams

Above the transport-level message passing services, each node has the ability to communicate with any of the other nodes through a facility called node streams. Node streams are similar to Common LISP I/O streams. A stream can be established between any two LISP processes, whether on the same node, on different nodes, or on the remotely connected AI workstations. A wide array of LISP functions enables the programmer to send small or large packets of data to other processes over a node stream (see Table 1). Since node streams are similar to I/O streams, many Common LISP functions operate on both I/O streams and node streams.

    (let ((stream (make-node-stream 0 :a-test :direction :io)))
      (print 'hello-world stream)
      (finish-output stream)
      (close stream))

This code establishes a stream from the node where the code is executed to the node specified as a parameter, node 0. Nodes on the iPSC are numbered from 0. As many as 64 nodes can be loaded with LISP. The parameter :a-test is the name of the established stream, and :direction specifies the nature of the movement: input, output, or input/output. The name of the stream can be specified by the companion make-node-stream call on the opposing node, and the names are checked for equality. In this example a string is printed on node 0.

    :read-char ............... Inputs next character from stream, waits if none
    :read-line ............... Inputs next line, as delimited
    :unread-char ............. Places character back onto the front of the stream
    :read-char-no-hang ....... Inputs next character from the stream without waiting
    :peek .................... Inputs next character without removing it
    :listen .................. Returns t if a character is available
    :fill-array .............. Places next n characters available into array
    :write-char .............. Outputs character into stream
    :write-line .............. Outputs string to the stream
    :dump-array .............. Outputs the contents of array to the stream
    :finish-output ........... Attempts to insure that all output is complete
    :flush-output ............ Initiates the send of all internally buffered data
    :clear-output ............ Aborts outstanding output operations in progress
    :name .................... Returns the name of the stream
    :host .................... Returns the name of the node to which connected
    :element-type ............ Returns the type of the stream
    :close ................... Closes the stream
    :direction ............... Returns the direction of the stream
    :messages-sent ........... Returns the number of messages sent
    :messages-received ....... Returns the number of messages received
    :which-operations ........ Returns list of operations supported by the stream
    :close-all-node-streams .. Closes all of the streams

Table 1: The functions available for use with node streams.

The node message stream is a powerful construct because of its similarity to the I/O streams of Common LISP. Although it is premature for standards in concurrent languages, it is encouraging that such a concurrent construct could be added to the language while approximating existing constructs such as I/O streams. The CCLISP node stream is also notable because it is the first higher-level abstraction for communication on the iPSC system. Previous implementations for the assembler, C, and FORTRAN languages exclusively utilized libraries of communication services providing transport-level functions. Each of these calls required parameters containing not only node number and message, but message buffer length, processor id, and message type. Sufficient for the style of scientific programming in C and FORTRAN, the message stream construct offers a level of abstraction in communication not previously seen on the iPSC system. Further, the message streams are used to communicate with processes not within the hypercube concurrent computer, such as LISP environments on network-connected AI workstations.

2.3 FASL Streams

FASL streams are a special version of node streams allowing the transfer of CCLISP objects between nodes. This communication is speedier than regular node streams, at some sacrifice of functionality -- the functions supplied are enumerated in Table 2. The intent is to supply the programmer with a construct to move larger structures and compiled objects more rapidly. Compiled objects cannot be moved on regular node streams; all of the formats and protocols compatible with the CCLISP compiler may be moved by FASL stream. In this example, the FASL stream is used to move a compiled object:

    (let ((fstream (open-fasl-node-stream 0 :fasl-test :direction :output)))
      (dump-object myfun fstream)
      (close fstream))

The ability to move compiled objects will be key to developing load balancing schemes dependent upon code movement.

    :peek .................... Inputs next byte without removing it from stream
    :listen .................. Returns t if byte available
    :read-object ............. Returns next object available
    :dump-object ............. Outputs the specified object to the stream
    :dump-fasl-operator ...... Outputs the specified fasl-operator to the stream
    :close ................... Closes the stream
    :host .................... Returns the node number to which it is connected
    :name .................... Returns the name used when the stream was made
    :which-operations ........ Returns a list of the operations supported

Table 2: The functions operating on FASL streams.

2.4 Remote Evaluation

In addition to message streams, there is a powerful construct called remote evaluation. Remote evaluation offers the programmer a way to pass a Common LISP expression from one LISP environment to another LISP environment for evaluation. The programmer specifies a Common LISP form and a target node number, and chooses either a synchronous or an asynchronous remote evaluation of the form. The effect of the remote evaluation is to interrupt the target node (or connected AI workstation) and cause a read-eval-print loop to execute on the passed form. In the case of synchronous execution, called eval-remotely, the sending process blocks and waits for the results of the evaluation before continuing to execute its own program code.
Consider the example of a simulation application: the basic "cause and effect" paradigm may be divided across processor nodes easily.

    (if (cause)
        (eval-remotely 3 '(effect myparameter)))

Here, a cause results in an effect on a different node, node 3 in this case. There is no user code on the target node needed to "listen" for an activation request. However, the programmer must remember that the LISP form will be executed in the target environment, and there is no automatic provision for maintaining synchronized environments.

In the case of asynchronous execution, the sending process does not block and wait, but continues execution of its own code, while the specified destination process evaluates the passed expression. If the sending process needs the return value, the target node can use a message node-stream, or remotely evaluate the value back to the original sending process. This simple-eval-remotely is the more commonly used construct, since it follows the paradigm of asynchronous processing, necessary for maximum utilization of a concurrent computer: processors do not stand idle waiting on other processors.

    (dotimes (node max-node)
      (simple-eval-remotely (1+ node) '(myfunction)))

In this example, the previously defined function myfunction is executed on each of the processor nodes in the system, in a serial manner. Since no value is returned, simple-eval-remotely is utilized for the side effects it causes in the target node. Interrupts from multiple simple-eval-remotely calls at the same target node can undesirably disrupt the execution of code, so an evaluation construct without-interrupts is available to turn interrupts off. Used carefully, for short periods of time, this allows multiple sources of interrupts to occur within one environment. For instance, the P and V primitives of semaphores could be implemented [Dijkstra, 68].

Also, a broadcast of remote evaluations is offered by the construct do-on-all-nodes. This macro replaces the code in the previous example:

    (do-on-all-nodes '(myfunction))

Do-on-all-nodes follows a ring architecture on the iPSC system, an artifact of the system scheme for numbering nodes. A spanning-tree algorithm could also be utilized by the user to broadcast to every node in the system in log N iterations of remote evaluation [Brandenburg and Scott, 86]. A common use of do-on-all-nodes is the need to load the same CCLISP executable program on each node of the system, in order to solve a large concurrent problem:

    (do-on-all-nodes '(load "my-file"))

Both forms of remote evaluation can be used to pass Common LISP forms outside the iPSC system to connected (and supported) AI workstations. This construct enables the iPSC system to be used as a remote-evaluation server in a network of AI workstations -- an attractive scenario because a current LISP application would remain on the workstation, particularly the user interface portions, and the compute-intensive portions of the application would be moved to the iPSC system. Tasks would be assigned from the workstation or multiple workstations to the iPSC system via remote evaluation. While in development, the entire application could be prototyped on the workstation, separately from iPSC code development, and then later linked with the insertion of the remote evaluation function call.
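As a concrete illustration of the result-return idiom mentioned above -- computing asynchronously on a worker node and shipping the value back by a second remote evaluation -- the following sketch combines the constructs already shown. It is a hypothetical composition, not an example from the CCLISP documentation; my-task, *my-result*, and the node numbers are invented names.

    (defvar *my-result* nil)  ; filled in on the originating node (node 0 here)

    (defun my-task (x)
      ;; Runs on the worker node; remotely evaluates the reply on node 0.
      ;; Assumes MY-TASK has been loaded on the worker, e.g. via
      ;; (do-on-all-nodes '(load "tasks")).
      (let ((answer (* x x)))                         ; stand-in computation
        (simple-eval-remotely 0 `(setq *my-result* ',answer))))

    ;; On node 0: hand the task to node 3 and keep working; *MY-RESULT*
    ;; becomes non-nil once the worker's reply arrives.
    (simple-eval-remotely 3 '(my-task 7))

Neither side blocks: the sender continues executing, and the reply is just another asynchronous remote evaluation in the opposite direction.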
3 Development Environment

The intent of the development environment is to offer the user flexibility: choosing dynamically during the development cycle between a familiar, connected workstation and the concurrent LISP environment. The model of user interaction with CCLISP consists of three parts: virtual terminals, keyboards, and file I/O. Every CCLISP node is connected to a virtual terminal, and the user can switch the physical terminal dynamically from window to window. The user attaches the physical keyboard to one of the virtual terminals, and may also switch that connection dynamically. The intent is to give the programmer instant access to any node of the iPSC system; while independent processes execute on various nodes, the virtual terminals remain connected in order to capture any output sent to the screen. The virtual terminal and keyboard user interface may be connected to a variety of physical devices, including connected AI workstations.

Each CCLISP node environment is also connected, via Common LISP I/O streams, to disk file services on the Intermediate Host, as well as to connected AI workstations. All of the Common LISP I/O stream functions are available. Some support for the specific file system on the Intermediate Host (a UNIX file system), such as cd (change directory), is available from CCLISP. Other support for the Intermediate Host also exists, such as a single-key escape to the operating system.

4 Implementation

The original CCLISP software was based upon Gold Hill Computer's 286 personal computer LISP product, GCLISP 286 Developer™. Offering a subset of full Common LISP, this interpreter with compiler provided strong performance from the original iPSC 286 node processor. The GCLISP environment is a subset of full Common LISP, with mark-and-sweep garbage collection, reasonable performance, and a large installed user base. The message-passing constructs already available in the iPSC node operating system were interfaced to CCLISP, and node streams and remote evaluation were built on the message passing. The CCLISP environment on each node, including the compiler, uses about 1.7 Mbytes of the 4.5 Mbytes available on the node. The user may elect not to load the compiler on every node of the iPSC, thus preserving an extra 0.6 Mbyte of memory for the user's own code and data.

The CCLISP node streams were added to LISP by exploiting the transport-level services already provided by the node processor operating system. The FASL stream implementation was based on work at Carnegie Mellon on fast file formats in SPICE LISP. Briefly, at each end of the stream a simple stack machine is established. Byte operators are transmitted from the emitting end, along with data, and then interpreted at the receiving end and assembled into objects. The implementation code is available to users of the CCLISP system.

Remote evaluation was then implemented with FASL streams and an eval server, resident on each node LISP environment (as well as on connected AI workstations). The servers allow for the asynchronous handling of remote evaluation requests in the target environment. The evaluation of the requested form occurs in the current stack group, and environment, of the target node. Care must be taken to insure that node-specific CCLISP environments are maintained in a consistent manner. This includes package considerations as well as stack-group management.
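The eval server just described can be pictured as a small loop over a FASL stream. The sketch below is a guess at its shape, for illustration only: the calling convention for the :read-object operation (shown here as a function read-object) is assumed, and error handling, package setup, and interrupt discipline are omitted.

    (defun eval-server-loop (stream)
      ;; Illustrative only: repeatedly pull forms off an open FASL node
      ;; stream and evaluate them in the current stack group, as the
      ;; CCLISP eval server is described as doing.
      (loop
        (let ((form (read-object stream)))  ; :read-object from Table 2
          (eval form))))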
Support for AI workstations for the user interface, file I/O services, and remote evaluation required compatible lower-level services, such as TCP/IP ethernet connections. Special eval servers for each of these workstation environments, along with file servers and user interface connections, were developed. Each follows, or will follow, a public protocol for such support jointly developed by Intel, Gold Hill, and early users of the CCLISP/iPSC system [Intel, 87].

5 Performance

Concurrent computers demand at least two steps in measuring performance: first, processor node performance, and then applications demonstrating aggregate performance, the effect of all of the processing nodes. And, because benchmarks do not always consume the dynamic memory required of real-life applications, the total memory available per node is an important secondary component.

Performance of sequential LISP is popularly compared by use of Gabriel's LISP benchmarks [Gabriel, 85]. Using a simple average of (most of) the Gabriel benchmarks, the performance of the original CCLISP on a single node is almost equivalent to that of a low-end AI workstation, such as the Xerox 1108 Dandelion. (Chart: simple average of the compiled Gabriel benchmarks -- CCLISP on a single iPSC node, Xerox 1108 Dandelion: 74.60, Symbolics 3600: 11.89.)

Measuring the aggregate performance of a concurrent computer is more difficult. Primarily the problem is one of size: large computers require large problems, especially in light of constant overhead. Only three of the Gabriel benchmarks warrant the effort of parallelization. The longest-running Gabriel benchmark is the Triangle game, which consumes 14.44 seconds of CPU time on a CRAY-XMP, and 151.7 seconds on a Symbolics 3600. On a 16-node iPSC system, the benchmark is completed in 69.8 seconds, demonstrating a speedup of 14.8 for 16 processors. The benchmark completes in 37.5 seconds, for a speedup of 27.6, on a 32-node system.

Perhaps more interesting than numerical results from recent timings of simple, small benchmarks are the approaches used to "parallelize" these benchmarks. The following sections each describe the changes made to run the simple benchmarks concurrently.

The benchmark finds all solutions to the "triangle game." The game consists of a triangular board with fifteen holes; a peg is placed in every hole except the middle. The player makes a move by jumping over a peg into a vacant hole and removing the jumped peg, as in checkers. The object of the game is to remove all of the pegs but one. There are many possible sequences of moves, but only 1,550 sequences result in a single remaining peg. The Gabriel version of the algorithm finds 775 solutions; the other 775 solutions are symmetrically identical (only one of the two initial moves is taken by the original benchmark algorithm).

The general problem is represented as a tree of possible moves; each node of the tree represents a decision about the next possible move. The problem-heap technique [Moller-Nielsen and Straunstrup, 85], [Brandenburg, 86] was used as the basic distribution algorithm. This method uses a single node as a manager; the manager assigns each of the other nodes subgraphs from the search tree. The manager node solves the tree to the fourth level, of 120 leaves. It then distributes each of the 120 subgraphs to the remaining fifteen worker nodes, assigning a subgraph to each worker node as the worker node becomes available from solving a subgraph; a sketch of this manager/worker loop follows.
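The sketch below shows the shape of the problem-heap distribution in CCLISP-style Common LISP. It is a schematic reconstruction for illustration, not the benchmark source: solve-subgraph and fourth-level-subgraphs are invented names, the reply protocol (a remote evaluation back into the manager) is assumed, and termination detection is omitted.

    ;; Schematic problem-heap manager (node 0) and worker (nodes 1-15),
    ;; using the remote-evaluation constructs of Section 2.
    (defvar *pending-subgraphs* (fourth-level-subgraphs))  ; the 120 tasks
    (defvar *results* '())

    (defun worker-solve (me subgraph)
      ;; Runs on a worker node: solve sequentially, then report back.
      (let ((solutions (solve-subgraph subgraph)))
        (simple-eval-remotely 0 `(record-result ,me ',solutions))))

    (defun record-result (worker solutions)
      ;; Runs on the manager: collect results, hand out the next task.
      (setq *results* (append solutions *results*))
      (when *pending-subgraphs*
        (simple-eval-remotely
          worker `(worker-solve ,worker ',(pop *pending-subgraphs*)))))

    (defun run-triangle ()
      ;; Prime each of the fifteen workers with one subgraph.
      (dotimes (w 15)
        (simple-eval-remotely
          (1+ w) `(worker-solve ,(1+ w) ',(pop *pending-subgraphs*)))))

Because each worker only reports back when an entire subgraph is exhausted, communication is rare relative to computation, which is the property the text credits for the near-linear speedup.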
Each node solves the subgraph using the traditional sequential algorithm, and then reports the results back to the manager and requests another subgraph. The manager continues to distribute subgraphs and receive results until results for all subgraphs have been received from the worker nodes. This exhaustive search incurs little overhead from communication between nodes, since nodes only communicate when they have completed computing each of the 120 tasks. The algorithm is also attractive because it retains the basic search algorithm of the original benchmark, and thus minimizes the programming effort to implement the concurrent version. (Table: Triangle Gabriel benchmark times in seconds and the corresponding speedups.)

The Browse benchmark is a simple database search, similar to many AI matching problems. The database itself is an artificial representation of a real LISP database, consisting of very small objects. To convert the problem to a concurrent computer, the simple approach of dividing the database into 16 portions was used. Each CCLISP node carries a duplicate copy of the benchmark code and its own 1/16 portion of the database. There is no communication between processes, because the iterative nature of the benchmark requires none. And the results, compared with single-node performance on the same problem, indicate superlinear speedup. This extra-efficient use of 16 processors is not due to the architecture of the system, or even to the problem, but to the decreased load on the processor for garbage collection and other system resources in the LISP. As the data objects were reduced in volume to 1/16 the original size, system resource overhead was reduced accordingly. (Table: Browse Gabriel benchmark times in seconds, speedups, and speedups with pre-loading.)

The Browse results clearly illustrate the difficulty of measuring symbolic computational performance on a large concurrent computer with small problems: the speedup for 32 nodes is 26.5, and for 16 nodes is 26.6. This lack of improvement with additional computational resources is due to the dominant factor of file access. This can be illustrated even further by a simple tuning of the code. By pre-loading the basic functions required in the benchmark -- lowering the non-computational overhead -- speedup can be improved to 76 for 32 nodes and 42.6 for 16 nodes.

The Puzzle benchmark was not successful in demonstrating substantial speedup; the parallel version (with 16 processing nodes) found the solution only twice as fast as the sequential version. This is because Puzzle seeks only one solution in the search tree, rather than all of the solutions, as Triangle requires. When only one solution is sought, the decomposition of the problem into segments for each node of a concurrent computer becomes much more difficult. Depending upon where in the search tree the single solution may be found, the relative efficiency of a depth-first or breadth-first search varies greatly. In the case of the Puzzle benchmark, the solution was found among the first branches of the tree, down nine levels. Single-solution search-tree algorithms for concurrent computers remain an important research issue.

The constructs of CCLISP were designed to allow relatively easy conversion of sequential LISP applications to a concurrent computer. As well as being conceptually simple but powerful, the constructs have been sufficient to allow, in most cases, the first versions of applications to limp along in parallel within a week of starting the conversion.
Indeed, since CCLISP has been in customers' hands, two software environments have been developed [Gasser et al., 86], [Gasser and Braganza, 87], [Blanks, 86], others are on the way, and demonstration applications have been shown [Brandenburg, 86], [Intel, 86], [Gasser et al., 87a], [Gasser et al., 87b], [Yeung, 86]. All of the work thus far was converted from AI workstations such as the TI Explorer™ and the Symbolics 3600™. A handful of concurrent applications, including a community of expert systems, were running within a few months.

The performance of the iPSC system with CCLISP is wholly dependent upon algorithm choice and programmer effort, but the initial indications are very encouraging, with speedups of 14.8 for 16-processor systems on common symbolic search problems. Node performance of CCLISP is respectable, almost equivalent to AI workstations such as the Xerox 1108 Dandelion. The iPSC System with CCLISP has been available since September, 1986.

References

[Agha and Hewitt, 85] G. Agha and C. Hewitt. Concurrent Programming Using ACTORS: Exploiting Large-Scale Parallelism. MIT Press, AI Memo No. 865, October 1985.

[Blanks, 86] M. Blanks. Concurrent Cooperating Knowledge Bases. Presented at Aerospace Applications of Artificial Intelligence, Dayton, OH, October 1986.

[Brandenburg, 86] J. Brandenburg. A Concurrent Symbolic Program with Dynamic Load Balancing. To appear in Proceedings of the Second Conference on Hypercube Multiprocessors, Oak Ridge, TN, September 1986.

[Brandenburg and Scott, 86] J. Brandenburg and D. Scott. Embeddings of Communication Trees and Grids into Hypercubes. Intel Scientific Computers Technical Report No. 1, 1986.

[Dijkstra, 68] E.W. Dijkstra. Cooperating Sequential Processes. In Programming Languages (F. Genuys, editor), Academic Press, 1968.

[Gabriel, 85] R. Gabriel. Performance and Evaluation of LISP Systems. MIT Press, 1985.

[Gasser and Braganza, 87] L. Gasser and C. Braganza. MACE Multi-Agent Computing Environment, Version 6.0. Technical Report CRI 87-16, Distributed Artificial Intelligence Group, CS Dept., University of Southern California, March 1987.

[Gasser et al., 86] L. Gasser, C. Braganza, N. Herman, and L. Liu. MACE Multi-Agent Computing Environment, Reference Manual, Version 5.0. Distributed Artificial Intelligence Group, CS Dept., University of Southern California, July 1986.

[Gasser et al., 87a] L. Gasser, C. Braganza, and N. Herman. MACE, A Flexible Testbed for Distributed AI Research. To appear in Distributed Artificial Intelligence, M. Huhns, Ed., Pitman, 1987.

[Gasser et al., 87b] L. Gasser, C. Braganza, and N. Herman. Implementing Distributed AI Systems Using MACE. To appear in Proceedings of the Third IEEE Conference on AI Applications, Orlando, FL, February 1987.

[Hillyer and Shaw, 86] Hillyer and Shaw. Execution of OPS5 Production Systems on a Massively Parallel Machine. Journal of Parallel and Distributed Computing, Vol. 3, No. 2, June 1986, pp. 236-268.

[Intel, 86] A Preliminary Naval Battle Management Simulation. Intel Scientific Computers Document: Artificial Intelligence Note 116, July 1986.

[Intel, 87] iPSC CCLISP Host Interface Protocols. Intel Scientific Computers Document, February 1987.

[Lee, 85] R. Lee. On "Hot Spot" Contention. Computer Architecture News, Vol. 13, No. 5, December 1985.

[Moller-Nielsen and Straunstrup, 85] P. Moller-Nielsen and J. Straunstrup. Problem Heap: A Paradigm for Multiprocessor Algorithms. Aarhus University Technical Report DK-8000, Denmark, 1985.
[Pfister and Norton, 85] G.F. Pfister and V.A. Norton. "Hot Spot" Contention and Combining in Multistage Interconnection Networks. IEEE Transactions on Computers, Vol. C-34, No. 10, October 1985, pp. 943-948.

[Seitz, 85] C. Seitz. The Cosmic Cube. Communications of the ACM, 28(1), 1985, pp. 22-33.

[Stolfo et al., 83] Stolfo, Miranker, and Shaw. Architecture and Applications of DADO: A Large-Scale Parallel Computer for AI. Proceedings of the Eighth International Joint Conference on Artificial Intelligence, August 1983, pp. 850-854.

[Yeung, 86] D. Yeung. Using Contract Net on the iPSC. Distributed Artificial Intelligence Group Research Note 20, CS Dept., University of Southern California, July 1986.

CCLISP, Concurrent Common LISP, and GCLISP 286 Developer are trademarks of Gold Hill Computers; iPSC, Intel Personal SuperComputer, 80286, and 80287 are trademarks of Intel Corporation; Explorer is a trademark of Texas Instruments; Symbolics 3600 is a trademark of Symbolics, Inc.; XEROX 1108 Dandelion is a trademark of XEROX.
Revised Dependency-Directed Backtracking

Charles J. Petrie, Jr.
Microelectronics and Computer Technology Corporation
3500 West Balcones Center Drive
Austin, TX 78759

Abstract

Default reasoning is a useful inference technique which involves choosing a single context in which further inferences are to be made. If this choice is incorrect, the context may need to be switched. Dependency-directed backtracking provides a method for such context switching. Doyle's algorithm for dependency-directed backtracking is revised to allow context switching to be guided by the calling inference system using domain knowledge. This new backtracking mechanism has been implemented as part of software for developing expert systems.

I. Introduction

Doyle presented an algorithm for performing contradiction resolution by dependency-directed backtracking (DDB) in a Truth Maintenance System (TMS) [5, 6].¹ Doyle's algorithm performs an abductive inference [16]. It takes a special state, denoted by a set of conflicting beliefs, and finds some currently disbelieved assertion, belief in which would resolve the conflict. Doyle's algorithm provides a search method for finding such an assertion and for constructing a reason for its belief. This paper derives another algorithm more suitable as a general method for revising the results of default reasoning.

We perform default reasoning when we have a set of disjoint alternatives and heuristics allow us to make a choice without doing all of the computation necessary to ensure that that choice is correct. Commonsense reasoning, as well as tasks which involve incremental construction, design [9], or decision making, often requires default reasoning. In contrast to tasks involving parallel computation in hypothetical worlds and comparison of the results, in default reasoning a single context is preferred.

A TMS maintains a single context and switches it when a conflict is signaled by the assertion of a contradiction. When such a switch is made, contradiction resolution constructs a reason for at least one new belief which then provides an explanation for the change. For example, a circuit designer may prefer to use flip-flops with totem-pole output and to base his design on that choice unless it later causes a conflict. If this default choice later must be rejected in favor of an alternative, the designer can always discover why he is using tristate-output flip-flops instead of his original preference by inspecting the reasons for the current belief.

This paper proposes semantics for the justifications generated by contradiction resolution and revises Doyle's algorithm to conform to them. This technical revision has important consequences for default reasoning. Doyle's algorithm restricts the set of beliefs subject to revision through a domain-independent strategy of "minimal revision", but does not provide a general method of further specifying the revision. The dependency-directed backtracking method presented here eliminates domain-independent search constraints because they are insufficient to determine correct belief revisions for a given domain, and they may even eliminate the correct revision from consideration.

¹This was later renamed a Reason Maintenance System [7], but a TMS was defined to mean a class of algorithms by [13] and this historic usage is continued here.
Instead, a syntax is presented for representing domain knowledge that can be used by the calling inference system to reason about belief revision and to generate new alternatives as needed to resolve the contradiction.

II.

A. Justification Criteria

In [6], a network of assertions is maintained along with reasons for their belief or disbelief. Each node in the network has associated with it a set of justifications.² A justification is composed of two sets of nodes: an IN-list and an OUT-list. Each node also has associated with it a support status. A justification is valid if each node in its IN-list has a status of IN and each node in its OUT-list is similarly OUT. (A justification with empty IN-list and OUT-list is valid and called a premise.) An assignment of statuses to a TMS network is consistent when each node is assigned a status of IN iff it has at least one valid justification, and OUT otherwise. Status assignment algorithms for a TMS network attempt to find assignments in which the network is consistent and well-founded: no node is in its own believed repercussions [22]. Alternatives to Doyle's original algorithm, which did not always terminate, are given in [22, 11, 19]. Doyle also gave an algorithm for resolution of contradictions using dependency-directed backtracking [23].

In Doyle's TMS, contradictions have no logical import. They denote a user-designated conflicting set of beliefs represented by a valid justification for the contradiction. Doyle's algorithm resolves contradictions by invalidating the reason for belief in an underlying assumption: a belief based upon the disbelief of some other node in the database of propositions. A belief is an assumption if its supporters include nodes which are OUT (disbelieved). In the TMS, the assumption selected for disbelief is known as the culprit. It is retracted by constructing a valid justification for the elective: the OUT supporter of the culprit which is chosen for belief in preference to belief in the culprit. We propose the following desiderata for the justification constructed for the elective:

1. The justification should be sufficient: it should allow a consistent and well-founded assignment of support statuses such that the contradiction is OUT.

2. The justification should be safe: it should not introduce an unsatisfiable circularity into the TMS data. (This has also been independently noted by Reinfrank [21].)

3. The justification should be complete: whenever the contradiction is OUT, either the elective is in any possible transitive closure of the supporters of the contradiction or the justification is invalid. (I.e., if it is possible for the contradiction to be resolved without the elective being IN, then the justification should become invalid.)

The justifications constructed by Doyle's contradiction resolution algorithm are sufficient but can be greatly simplified, and are neither safe nor complete.

B. Revision of the Elective Justification

Doyle's algorithm for contradiction resolution determines a maximal assumption set (MAS): the largest subset of the set of all assumptions in the foundations of the contradiction such that no member of the subset is in the foundations of another subset member.

²The first use of TMS technical terms will be printed in boldface. Some defined in [6] are not redefined here.
If the MAS is A, an element Ai is chosen as the culprit, and Ai has IN supporters I and OUT supporters D, then the chosen elective Dj will receive the justification with an IN-list composed of I ∪ {NG} ∪ A − {Ai} and an OUT-list composed of D − {Dj}, where NG is a specially constructed nogood node with a conditional-proof (CP) justification, both of which we briefly describe below. We first note that previously published algorithms [6, 1] have failed to include the IN-list of the culprit in the justification of the elective. This minor error causes the justification to be incomplete.

The nogood node asserts that simultaneous belief in the members of the MAS is inconsistent. The justifications described so far are support-list justifications, in contrast to the CP justification required for the nogood node. Rather than describe the CP justification, it is only necessary to note that it is a significant disadvantage of the TMS. The CP justifications are actually represented by equivalent support-list justifications requiring continued validity checks by the TMS. Not only is this emulation expensive, but also the nodes with CP justifications are IN only in a subset of the cases in which they could be [6]. Other types of TMSs have been proposed that require no such justification [12, 13, 1]. In the case of [1], this means that the elective justification is incomplete. For the others, it means that the calling logic must be integrated into the TMS [17], which at the least alters the intent of the TMS. In the case of [13], nonmonotonic reasoning is not permitted. A different implementation of CP justifications has also been proposed in [11]. However, a slight alteration of the status assignment algorithm completely removes the necessity for CP justifications.

It is possible to create dependency networks in which there are at least two distinct consistent and well-founded assignments of support statuses. For instance, consider the network of nodes with single justifications: node A has a node B in its justification's OUT-list, node B has node C in its justification's OUT-list, and node C has node A in its justification's IN-list. Either nodes A and C are IN and B is OUT, or just the inverse. Either assignment of statuses is acceptable. We now require the status assignment mechanism to prefer the consistent and well-founded state in which all contradictions are OUT, if one exists. In this example, the status assignment mechanism would make A and C IN and B OUT, if node B were the contradiction and these were the only nodes and dependencies involved.

Now let each contradiction in the TMS be unique: each can have only one justification. Then the elective for a contradiction will have the same justification as described, with two exceptions. The most important change in the justification is that we substitute the contradiction in the OUT-list for the nogood node in the IN-list, and the latter is eliminated. This will always create an ambiguity in status assignment if the contradiction can be consistently labeled OUT. Then the status assignment mechanism will so label the contradiction. Now belief in the elective rests directly on lack of belief in the contradiction. The semantics of this are that we are assuming belief in the elective in preference to belief in the contradiction. The justification directly represents this explanation for belief in the elective.
The second change is that, to be complete, we must also include additional elements in the justification that were previously used in the CP justification. However, the work of finding these nodes can be eliminated, as well as the work of generating the MAS. We are led to this conclusion by noting that our elective justification, like Doyle's justification, is not safe.

C. Odd Loop Checking

If an element of the IN-list of the justification is in the believed repercussions of the elective, then obviously it will cause a problem. If some element of this set is a believed repercussion, then we must choose a different elective. If there are no more, we must choose a different assumption. Similarly, it is clear that no element of the OUT-list may be in the repercussions of the elective. Actually, the situation is worse than this. An unsatisfiable circularity may be created if any element of the justification of the elective is only in the transitive closure of its consequences (TCC). This is evident for the case when the elective occurs in the OUT-list of an already invalid justification of an element in the justification of the elective.

The proposed justification will not introduce an unsatisfiable circularity if it does not introduce any odd loops [1]. If an element of the justification of the elective is contained in the TCC of the elective and there are only "even loops" containing that element and the elective, the elective is still "safe". Determining this adds negligible computation to that involved in generating the TCC. Also, we only need to do a partial closure, in that we need not consider the consequences of contradiction nodes. But the requirement for this determination leads to elimination of the creation of the MAS.

For a contradiction C, the MAS contains those assumptions which are nearest to C in its foundations, none of which are in the foundations of another element of the set. This set implements the strategy of "minimal revision": retract the assumption that causes the least change in the database. Retraction of a maximal assumption is guaranteed to cause fewer changes than retraction of a nonmaximal assumption. We must check the TCC of the elective even when we have a maximal assumption set, but this check also allows us to avoid choosing an assumption which is not maximal. Suppose that we pick some set T of assumptions, not necessarily maximal, in the foundations of the contradiction. If we pick a culprit A and an elective E, we can easily determine whether A is in the foundations of any other element of T by examining the TCC of E.

Generating the MAS requires that we completely examine the foundations of the contradiction. We must also examine the transitive closure of the consequences of the elective to be safe. Thus, the foundation examination represents unnecessary work. We now present an algorithm which eliminates such work, avoids deriving the nodes corresponding to the CP justification, still permits minimal revision, and generates a safe and complete justification.

III. Revised DDB-based Contradiction Resolution

Let set S be the supporters of contradiction C. (Assume for now that contradiction nodes have empty OUT-lists.) The contradiction is resolved iff the function MAKEOUT(S, nil, {C}) returns a justification for an elective. If so, the justification will satisfy the three properties of section II.A.

Define MAKEOUT(A, IJ, OJ):

1. If A is null, return "false".

2. Pick Ai ∈ A. Construct the TCC of Ai and save it.
If some element of A − {Ai} is in the repercussions of Ai, then go to step 4. Otherwise, let Ji be the supporting justification of Ai. If MAKEIN(OUT-list of Ji, IJ ∪ IN-list of Ji ∪ A − {Ai}, OJ), then return it.

3. If MAKEOUT(A − {Ai}, IJ ∪ {Ai}, OJ), then return it.

4. Return MAKEOUT(IN-list of Ji, IJ ∪ A − {Ai}, OJ ∪ OUT-list of Ji).

Define MAKEIN(A, IJ, OJ):

1. If A is null, return "false".

2. Pick Ai ∈ A. If Ai is a contradiction, return MAKEIN(A − {Ai}, IJ, OJ ∪ {Ai}).

3. Construct a justification J for Ai with an IN-list consisting of IJ, and an OUT-list consisting of OJ together with A − {Ai}. Construct the TCC of Ai, making use of the TCCs of elements of IJ already constructed so far. If the new justification will not create any odd loops, return it and the elective.

4. Return MAKEIN(A − {Ai}, IJ, OJ ∪ {Ai}).

In this algorithm, we start by trying to make OUT some IN supporter of the contradiction using MAKEOUT. To make a node OUT with this function, we first try to make some element of the OUT-list of the supporting justification IN using MAKEIN. If that fails, we next try to make another element of the first argument of MAKEOUT OUT. As a last resort, we try to make some element of the IN-list of the supporting justification OUT using MAKEOUT recursively. A contradiction node cannot be made IN. Any other node can be made IN by giving it the constructed justification.

Sufficiency is achieved by an OUT-list-first search followed by a depth-first search of the contradiction's foundations. Safety is guaranteed by step 3 of MAKEIN. The actual check for odd loops by examination of the TCC can be implemented cheaply. Completeness is ensured by the additions to IJ and OJ in the recursive calls to MAKEOUT and MAKEIN. The algorithm also collects other relevant contradictions to be placed in the OUT-list of the justification of the elective.

IV.

Because of the elimination of the MAS in the above algorithm, the search space can be extended to include the entire foundations of the contradiction rather than being restricted to the maximal assumptions. In an experimental expert application development system called "Proteus" [20],³ the above algorithm has been extended in the following ways.

³There are two commercial design applications [25, 24] using Proteus. Other design applications are experimental and internal to MCC.
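To make the control flow of MAKEOUT and MAKEIN easier to follow, here is a schematic Common LISP rendering of the two functions above. It is a sketch only: the TMS accessors (supporting-justification, in-list, out-list, repercussions, contradiction-p) and the odd-loop test (odd-loop-p) are assumed as primitives, and the TCC bookkeeping is elided.

    (defstruct justification in out)

    (defun makeout (a ij oj)
      ;; Try to invalidate some element of A, per steps 1-4 above.
      (when a                                            ; step 1: nil means "false"
        (let* ((ai (first a))
               (rest-a (rest a))
               (ji (supporting-justification ai)))
          (if (some (lambda (x) (member x (repercussions ai))) rest-a)
              (makeout (in-list ji)                      ; step 4 (direct)
                       (append ij rest-a)
                       (append oj (out-list ji)))
              (or (makein (out-list ji)                  ; step 2
                          (append ij (in-list ji) rest-a)
                          oj)
                  (makeout rest-a (cons ai ij) oj)       ; step 3
                  (makeout (in-list ji)                  ; step 4
                           (append ij rest-a)
                           (append oj (out-list ji))))))))

    (defun makein (a ij oj)
      ;; Try to justify some element of A, per steps 1-4 above.
      (when a                                            ; step 1
        (let ((ai (first a))
              (rest-a (rest a)))
          (if (contradiction-p ai)
              (makein rest-a ij (cons ai oj))            ; step 2
              (let ((j (make-justification :in ij
                                           :out (append oj rest-a))))
                (if (odd-loop-p ai j)                    ; step 3 safety check
                    (makein rest-a ij (cons ai oj))      ; step 4
                    (list ai j)))))))                    ; elective + justification

Resolving contradiction C with supporters S is then (makeout S nil (list C)), mirroring the MAKEOUT(S, nil, {C}) call above.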
If the current contradiction cannot be resolved, or cannot be resolved without bringing IN some previously resolved contradiction, it is left unresolved and marked as such. An extended algorithm for this is given in [18]. B. Depth-First Guidance The extended algorithm is essentially a depth-first search biased toward the OUT-lists. It is easily modified to use domain knowledge to guide the search for electives. In step two of the algorithm for MAKEOUT, some element of the set of candidate culprits, which is the first argument to MAKEOUT, is chosen. An attempt will be made to find a reason to disbelieve the chosen culprit. It would be a less than optimum choice to select a candidate which is more strongly believed than some other candidate. It is desir- able, then, to provide a general mechanism for representing domain knowledge about the relative rankings of beliefs in the various assertions under different circumstances and be able to use the same rule system to reason about this knowledge as is used in the rest of the application. A two argument predicate PREFER has been imple- mented which states that belief in the assertion of its first argument is preferred to belief in that of the second for the purposes of belief revision. When choosing a candi- date culprit from the MAKEOUT set, an attempt is made to prove that this culprit is preferred to some other ele- ment of the set. If a less preferred element is found, then it becomes the candidate culprit and the process recurses. PREFER thus enforces a partial ordering on the candi- date culprits such that none of them will be chosen before others which are more suspect. Similarly, in MAKEIN, PREFER is used to ensure that no candidate elective is chosen before some other for which belief is preferred. Assertions using the PREFER predicate, called pref- erences, can be based on object types and can be concluded by Proteus rules with variables instantiated at run time. Since these rules may have antecedents which are satisfied only in certain contexts, the selection of culprits and elec- tives can be controlled dynamically. The preferences may be based on numbers, lists, or arbitrary domain reasoning. We also provide for interactive control of the selection of culprits and electives if desired. The ability to add this domain knowledge overcomes the disadvantage of “blind” dependency-directed backtracking.[l4] PREFER is also used to define a terminal node in the search. As described above, a proof attempt is made on each candidate elective. If such a proof is not possible using rules, but the application allows the truth of that particular elective to be asked of the user, one of three possible answers is allowed: “yes”, “no”, and “maybe”. The first causes the elective to be provided with a premise justification. The second disqualifies the elective from fur- ther consideration. The third gives permission for a new justification to be constructed, thus defining a leaf of the network search. Instead of such a user query, PREFER can be used for the same purpose. If CONTRADICTION is the second argument of a preference, then the first argu- ment is eligible to receive a constructed justification: the semantics are that belief in the elective may be assumed in preference to that in the contradiction. If CONTRA- DICTION is the first argument, the inverse is true and the candidate also defines a terminal but unsuccessful node in the network. In no case will Proteus arbitrarily justify an elective in order to resolve a contradiction. C. 
The capability for dynamic generation of elective candidates is provided with a predicate, DEFEAT, which takes three arguments: the candidate culprit, some element of the IN-list of a justification of the culprit, and some new elective. If a DEFEAT assertion can be proven, the elective may be added to the OUT-list of any justification of the culprit identified by the IN-list element. The semantics of DEFEAT are that some reason for believing the culprit can be defeated by belief in some new exception or alternative. In step two of the algorithm for MAKEOUT, if no member of the existing OUT-list can be made IN, then an attempt is made to prove a DEFEAT for the culprit, to add to the OUT-list. DEFEATs are general in that they can be concluded by rules, like preferences. Thus, they can be used to generate arbitrary new alternatives to belief in the culprit. Proteus also allows for the interactive acquisition of new alternatives and even DEFEATs.

D. Focused Search

For a given domain and situation, the culprit and elective that should be considered first may lie several levels deep in the dependency structure which supports the contradiction. A predicate FIX has been implemented which takes three arguments: the first is a candidate culprit, the second a possible ancestor, and the third a possible elective for that ancestor. The semantics of a FIX are that if the second argument is an ancestor of the first, for the current dependency network, and if the third can be an elective of the second (is in the OUT-list of a supporting justification or can be placed there by a DEFEAT), then an attempt should be made to believe the elective (and a call to MAKEIN is made on it). Such FIXes are used to focus first on nodes deep within the foundations of a contradiction prior to performing the depth-first search described above. FIXes are most useful when the first argument unifies with a supporter of the contradiction, but may also be used during search. The choice between which of several FIXes to pursue first is determined by the preference order on their various electives. FIXes may also be concluded by rules.

E. Example

A doctor might conclude that patient Jane is dehydrated from observation of the appropriate symptoms. That leads to a conclusion that Jane has a low amount of water, which in turn leads to a conclusion that Jane should have a high sodium concentration. However, the lab results indicate that Jane actually has a low sodium concentration. There were at least three default assumptions made in this example, retraction of which would resolve the contradiction. The doctor assumed that the symptoms were those of dehydration and not some other disease. He assumed that Jane's sodium level was normal. And there is an assumption that the lab test is correct.

In a typical TMS dependency network produced by this logic, a depth-first search would explore first a possible lab test error, then an abnormal sodium level, and finally a recheck of the symptoms. This does not correspond to actual practice. Typically, the doctor first rechecks the symptoms because it is easy to do. Then (or perhaps at the same time), he may ask for the lab work to be redone. He will be particularly suspicious of the lab work if a high glucose level is indicated, because this interferes with normal calculations of sodium levels. If these simple "fixes" to the problem aren't sufficient to resolve it, he carefully reconsiders his thinking.
The obvious alternative to a normal sodium level is either directly proven or assumed, and the doctor is led to consideration of a cause.

In Proteus, this problem resolution could be represented by two FIXes, one conditional preference between them, and a DEFEAT. One FIX would say that if the contradiction is supported by a conclusion which is somehow supported by an observation of symptoms, then suspect a mistaken observation. The other FIX would simply say that any lab test supporting a contradiction should be redone and checked for error. The preference would be to check out the mistaken observation prior to redoing the lab test, unless the patient were unusually difficult to observe or it was known that the patient's glucose level was high and the lab test under consideration was for sodium concentration. The DEFEAT would defeat the conclusion of a high sodium concentration by generating the possibility of an abnormal sodium level. This would occur in the depth-first search if neither FIX were successful.

V. Comparisons

McAllester's RUP [13] provides a general-purpose method of premise control using "likelihood classes". However, this approach imposes a global ordering on the database which is more complete than is actually justified. Proteus uses the PREFER predicate to avoid this problem. Unlike Cohen [2], no predefined categories of preferences are defined. In DEBACLE, Forbus [10] also objects to likelihood classes and instead defines "closed-world assumptions" which are checked for invalidity before trying special-purpose routines ordered in a stack. In an extension to the ATMS [4], a simple list is used to order the selection of culprit candidates. In WATSON [15], the calling inference system is allowed to attempt to prove that candidate culprits may be ordered by relevance to the story being parsed. In PLANET [3], special objects called "decision choices" are created which contain information about alternatives and their effects on constrained resources. Contradictions are caused by resource constraint violations and resolved by selecting alternatives which are not disadvantageous to the resource in question.

Proteus provides a more domain-independent and dynamic mechanism for controlling backtracking than these systems. Preferences may be concluded on an arbitrary basis, and alternatives need not be enumerated prior to the occurrence of the contradiction. Rather than select a set of predefined objects from the foundations of the contradiction, FIXes allow any assertion in the database to belong to the initial focus of candidate culprits and electives. If these are not successful, exhaustive search is used. Unlike previous algorithms, the extended search described in section III may result in more than one elective being justified.

Doyle's original algorithm for contradiction resolution is revised to conform to proposed semantics. A new algorithm for dependency-directed backtracking is derived which allows any node in the dependency network to be considered for inclusion in a set of assumptions to be retracted during contradiction resolution. This allows domain-independent search constraints to be rejected in favor of a general backtracking control method dependent on domain knowledge. Special predicates are defined which allow the application builder to represent domain knowledge about how to switch contexts when beliefs conflict.
The selection of the new context can be reasoned about on the basis of the current state of the database, and new alternatives can be generated dynamically. This has been implemented as a general method for revising the results of default reasoning in expert systems.

References

[1] Charniak, E., Riesbeck, C., and McDermott, D., "Data Dependencies," Artificial Intelligence Programming, Chap. 16, L. E. Erlbaum, Baltimore, 1979.

[2] Cohen, P. R., Heuristic Reasoning about Uncertainty: An Artificial Intelligence Approach, Pitman Publishing, Marshfield, MA, 1985.

[3] Dhar, V. and Quayle, C., "An Approach to Dependency Directed Backtracking Using Domain Specific Knowledge," Proc. IJCAI-85, pp. 188-190, 1985.

[4] de Kleer, J., "Back to Backtracking: Controlling the ATMS," Proc. of the Fifth National Conference on Artificial Intelligence, AAAI, pp. 910-917, 1986.

[5] Doyle, J., "Truth Maintenance Systems for Problem Solving," Technical Report AI-TR-419, Massachusetts Institute of Technology, AI Lab., 1978.

[6] Doyle, J., "A Truth Maintenance System," Artificial Intelligence, Vol. 12, No. 3, pp. 231-272, 1979.

[7] Doyle, J., "A Model for Deliberation, Action, and Introspection," AI-TR-581, Massachusetts Institute of Technology, AI Lab., 1980.

[8] Doyle, J., "Some Theories of Reasoned Assumptions," CMU CS-83-125, Carnegie-Mellon University, Dept. of Comp. Sci., 1983.

[9] Feldman, Y. A., and Rich, C., "Reasoning With Simplifying Assumptions: A Methodology and Example," Proc. of the Fifth National Conference on Artificial Intelligence, AAAI, pp. 2-7, 1986.

[10] Forbus, K. D., "Qualitative Process Theory," Appendix, AI-TR-789, Massachusetts Institute of Technology, AI Lab., 1984.

[11] Goodwin, James W., "An Improved Algorithm for Non-monotonic Dependency Net Update," LiTH-MAT-R-82-23, Linkoping Institute of Technology, Sweden.

[12] Martins, J., "Reasoning in Multiple Belief Spaces," TR-203, State University of New York at Buffalo, 1983.

[13] McAllester, D., "An Outlook on Truth Maintenance," A.I. Memo 551, Massachusetts Institute of Technology, AI Lab., 1980.

[14] Morris, P. H., Nado, R. A., "Representing Actions with an Assumption-Based Truth Maintenance System," Proc. of the Fifth National Conference on Artificial Intelligence, AAAI, pp. 13-17, 1986.

[15] Orejel-Opisso, J. L., "Story Understanding with WATSON: A Computer Program Modeling Natural Language Inferences Using Nonmonotonic Dependencies," Masters Thesis, University of Illinois at Urbana-Champaign, 1984.

[16] Peirce, C. S., Scientific Metaphysics, Vol. VI, p. 358.

[17] Petrie, C., "Using Explicit Contradictions to Provide Explanations in a TMS," Microelectronics and Computer Technology Corporation Technical Report MCC/AI/TR-0100-05, 1985.

[18] Petrie, C., "Extended Contradiction Resolution," Microelectronics and Computer Technology Corporation Technical Report MCC TR AI-102-86, 1986.

[19] Petrie, C., "A Diffusing Computation for Truth Maintenance," Proc. of the IEEE International Conf. on Parallel Processing, August 1986, pp. 691-695.

[20] Petrie, C., Russinoff, D., and Steiner, D., "Proteus: A Default Reasoning Perspective," Proc. 5th Generation Conf., Nat. Inst. for Software, October 1986.

[21] Reinfrank, M., et al., "KAPRI - A Rule-Based Non-Monotonic Inference Engine with an Integrated Reason Maintenance System," Research Report Draft, University of Kaiserslautern, January 1986.
[22] Russinoff, D., "An Algorithm for Truth Maintenance," Microelectronics and Computer Technology Corporation Technical Report AI/TR-062-85, 1985.
[23] Stallman, R. and Sussman, G., "Forward Reasoning and Dependency-Directed Backtracking," Memo 380, Massachusetts Institute of Technology, AI Lab., Sept. 1976.
[24] Steele, R., "An Expert System Application in Semicustom VLSI Design," Proc. 24th IEEE/ACM Design Automation Conference, Miami, 1987.
[25] Virdhagriswaran, S., et al., "PLEX: A Knowledge Based Placement Program for Printed Wire Boards," Proc. 3rd IEEE AI Applications Conf., February 1987.
Path Dissolution: A Strongly Complete Rule of Inference

Neil V. Murray
Dept. of Computer Science
State Univ. of N.Y. at Albany
Albany, NY 12222

Erik Rosenthal
Dept. of Computer Science
Wellesley College
Wellesley, MA 02181

1. Introduction

We introduce path dissolution, a rule of inference that operates on quantifier-free predicate calculus formulas in negation normal form (NNF). We use techniques first developed in [Murray & Rosenthal 1985a] and in [Murray & Rosenthal 1987] employing a representation of formulas that we call semantic graphs. Dissolution is a generalization to NNF of the Prawitz matrix reduction rule, which operates on formulas in conjunctive normal form (CNF). One important distinction between dissolution and most other rules of inference is that one cannot restrict attention to CNF: a single application of dissolution generally produces a formula that is not in CNF even if the original formula is.

For almost a decade, the connection-graph resolution procedure had been conjectured to be strongly complete, i.e., to converge under any sequence of inferences for all contradictory ground formulas. Norbert Eisinger [Eisinger 1986] recently discovered counterexamples. Path dissolution is strongly complete: each dissolution step strictly reduces the number of c-paths in a formula. The procedure always terminates, producing (in effect) a list of the formula's models. (If the formula is unsatisfiable, the empty graph results, representing the empty set of models.)

Bibel has presented several algorithms for determining whether a propositional formula is unsatisfiable [Bibel 1982]. He built on the work of Prawitz [Prawitz 1970], on later work of his own [Bibel 1979], [Bibel 1981], and on that of Andrews [Andrews 1981]. His approach was to search for paths containing links (complementary literals). The technique developed in this paper also employs links, but they are used to remove the paths through them.

Dissolution, unlike most resolution-based inference rules, does not directly lift into first-order logic; techniques for employing dissolution at the first-order level are discussed. Dissolution is quite different from other rules of inference, which is not surprising in view of its strong completeness and of the fact that it forces formulas away from CNF. As a result, we omit proofs and present extensive examples.

(This research was supported in part by the National Science Foundation under grant DCR-8600848.)

2. Preliminaries

We briefly summarize semantic graphs, including only those results that are necessary for the analysis of path dissolution. We assume the reader to be familiar with the notions of atom, literal, formula, resolution, and unification. We will consider only quantifier-free formulas in which all negations are at the atomic level.

A semantic graph is empty, a single node, or a triple (N, C, D) of nodes, c-arcs, and d-arcs, respectively, where a node is a literal occurrence, a c-arc is a conjunction of two non-empty semantic graphs, and a d-arc is a disjunction of two non-empty semantic graphs. Each semantic graph used in the construction of a semantic graph will be called an explicit subgraph. We use the notation (G, H)_c for the c-arc between G and H, and similarly (G, H)_d for a d-arc. We will consider an empty graph to be an empty disjunction, which is a contradiction. If G = (X, Y), observe that every other arc is an arc in X or in Y; we call (X, Y) the final arc of G.
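As a concrete aid, semantic graphs and their c-paths are small enough to implement directly. The rendering below is our own (the authors work with a graph triple, not Python objects); the formula built at the bottom is the same one drawn in the next example.

```python
# A concrete rendering (ours) of semantic graphs and c-path enumeration.
from dataclasses import dataclass
from typing import Union

@dataclass
class Lit:
    name: str
    neg: bool = False

@dataclass
class Arc:
    kind: str            # 'c' for a c-arc (conjunction), 'd' for a d-arc
    left: "Graph"
    right: "Graph"

Graph = Union[Lit, Arc]

def c_paths(g):
    """Enumerate c-paths as sets of (name, neg) pairs (the objects of Lemma 2)."""
    if isinstance(g, Lit):
        return [frozenset([(g.name, g.neg)])]
    if g.kind == "c":    # conjunction: one c-path from each side, combined
        return [p | q for p in c_paths(g.left) for q in c_paths(g.right)]
    return c_paths(g.left) + c_paths(g.right)   # disjunction: either side

def spanned(g):
    """True iff every c-path contains a complementary pair (a link)."""
    return all(any((n, not s) in p for (n, s) in p) for p in c_paths(g))

# ((A and B) or C) and (not A or (D and C))
G = Arc("c",
        Arc("d", Arc("c", Lit("A"), Lit("B")), Lit("C")),
        Arc("d", Lit("A", neg=True), Arc("c", Lit("D"), Lit("C"))))
print(len(c_paths(G)), spanned(G))   # 4 False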
As an example, the formula ((A ∧ B) ∨ C) ∧ (¬A ∨ (D ∧ C)) is the graph

    A → B        ¬A
      ↓     →      ↓
      C          D → C

Note that horizontal arrows are c-arcs, and vertical arrows are d-arcs. The formulas we are considering are in negation normal form (NNF) in that all negations are at the atomic level; the only connectives used are AND and OR.

Lemma 1. Let G be a semantic graph, and let A and B be nodes in G. Then there is a unique arc connecting A and B.

One of the keys to our analysis is the notion of path. Let G be a semantic graph. A partial c-path through G is a set c of nodes such that any two are connected by a c-arc. A c-path is a partial c-path that is not properly contained in any partial c-path. We similarly define d-path using d-arcs instead of c-arcs. Several other authors have employed paths; for example see [Andrews 1981], [Bibel 1979], [Bibel 1981], [Bibel 1982], [Eisinger 1986], [Murray 1982], [Prawitz 1970]. They generally concentrated on c-paths.

Lemma 2. Let G be a semantic graph. Then an interpretation I satisfies (falsifies) G iff I satisfies (falsifies) every literal on some c-path (d-path) through G.

We will frequently find it useful to consider subgraphs that are not explicit; that is, given any set of nodes, we would like to define that part of the graph that consists of exactly the given set of nodes. The previous example is shown below on the left. The subgraph relative to the set {A, ¬A, D} is the graph on the right.

    A → B        ¬A                    ¬A
      ↓     →      ↓           A  →      ↓
      C          D → C                   D

If N is the node set of a graph G, and if N' ⊆ N, we define G_N', the subgraph of G relative to N', as follows: If N' = N, then G_N' = G. If the final arc of G is (X, Y), and if no node in N' appears in Y (or in X), then G_N' = X_N' (or G_N' = Y_N'). Otherwise, G_N' = (X_N', Y_N'), where this arc is of the same type as (X, Y). In practice, we typically will not distinguish between N' and G_N'.

A c-block C is a subgraph of a semantic graph with the property that any c-path p that includes at least one node from C passes through C; that is, the subset of p consisting of the nodes that are in C is a c-path through C. A d-block is similarly defined with d-paths, and a full block is a subgraph that is both a c-block and a d-block. We define a strong c-block in a semantic graph G to be a subgraph C of G with the property that every c-path through G contains a c-path through C. A strong d-block is similarly defined.

The fundamental subgraphs of a semantic graph G are defined recursively as follows. If G = (X, Y)_c, and if the final arc of X is a d-arc, then X is a fundamental subgraph of G. Otherwise, the fundamental subgraphs of X are fundamental subgraphs of G. (The dual case when G = (X, Y)_d is obvious.)

An isomorphism from (N, C, D) to (N', C', D') is a bijection f: N → N' that preserves c- and d-paths such that for each A in N, A = f(A). We call Theorems 1 and 1' and their corollaries the Isomorphism Theorem.

Theorem 1. Let G be a semantic graph, and let B be a full block in G. Then B is a union of fundamental subgraphs of some explicit subgraph of G.

Theorem 1'. If G and H are isomorphic semantic graphs, then H can be formed by reassociating and commuting some of the arcs in G.

Corollary 1. Let G be a semantic graph, and let B be a full block in G. Then there is a semantic graph G' and an isomorphism f: G → G' such that f(B) is an explicit subgraph of G'.
Corollary 2. The intersection of two full blocks is a full block.

Corollary 3. Given a semantic graph G and a collection of mutually disjoint full blocks, there is a graph isomorphic to G in which each full block is an explicit subgraph. Moreover, given any two of the blocks, each node in one is c-connected to each node in the other, or each node in one is d-connected to each node in the other.

Several additional definitions are necessary to define the dissolution operation. From the Isomorphism Theorem we know that any full block U is a conjunction or a disjunction of fundamental subgraphs of some explicit subgraph H. If the final arc of H is a conjunction, then we define the c-extension of U to be H and the d-extension of U to be U itself. (The situation is reversed if the final arc of H is a d-arc.)

We define the c-path extension of an arbitrary subgraph H in a semantic graph G as follows (note that this is different from the c-extension of a full block): Let F_1, ..., F_k be the fundamental subgraphs of G that meet H, and let F_{k+1}, ..., F_n be those that do not. Then CPE(∅, G) = ∅ and CPE(G, G) = G, and

    CPE(H, G) = CPE(H_{F_1}, F_1) ∨ ... ∨ CPE(H_{F_n}, F_n)                      if the final arc of G is a d-arc
    CPE(H, G) = CPE(H_{F_1}, F_1) ∧ ... ∧ CPE(H_{F_k}, F_k) ∧ F_{k+1} ∧ ... ∧ F_n  if the final arc of G is a c-arc

Lemma 3. The c-paths of CPE(H, G) are precisely the c-paths of G that pass through H.

Using the same notation, we define the strong split graph of H in G, denoted SS(H, G), as follows: SS(∅, G) = G and SS(G, G) = ∅, and

    SS(H, G) = SS(H, F_1) ∨ ... ∨ SS(H, F_n)                                      if the final arc of G is a d-arc
    SS(H, G) = (SS(H, F_1) ∨ ... ∨ SS(H, F_k)) ∧ F_{k+1} ∧ ... ∧ F_n              if the final arc of G is a c-arc

Lemma 4. If H is a c-block in G, then SS(H, G) is isomorphic to the subgraph of G relative to the nodes that lie on c-paths that miss H.

Define the auxiliary subgraph Aux(H, G) of a subgraph H in a semantic graph G to be the subgraph of G relative to the set of all nodes in G that lie on extensions of d-paths through H to d-paths through G.

Lemma 5. If H is a non-empty subgraph of G, then Aux(H, G) is empty iff H is a strong c-block. Moreover, Aux(H, G) cannot contain a d-path through G, and if H is a c-block, then so is Aux(H, G).

Lemma 6. If H is a c-block then CPE(H, G) = SS(Aux(H, G), G).

3. Path Dissolution

We define a chain in a graph to be a set of pairs of c-connected nodes such that each pair can simultaneously be made complementary by an appropriate substitution. A link is an element of a chain, and a chain is full if it is not properly contained in any other chain. A graph G is spanned by the chain K if every c-path through G contains a link from K; in that case, we call K a resolution chain for G.

Intuitively, path dissolution operates on a resolution chain by constructing a semantic graph whose c-paths are exactly those that do not pass through the chain. Not all resolution chains are candidates for dissolution: a special type of chain that we call a dissolution chain (what else?) is required. Since single links always form dissolution chains, the class is not too specialized. The construction of the dissolvent from such a chain is straightforward.

A resolution chain H is a dissolution chain if it is a single c-block or if it has the following form: If M is the smallest full block containing H, then M = (X, Y), where H ∩ X and H ∩ Y are each c-blocks.

Given a dissolution chain H, define DV(H, M), the dissolvent of H in M, as follows (using the above notation): If H is a single c-block, then DV(H, M) = SS(H, M).
Otherwise (i.e., if H consists of two c-blocks), then

                  CPE(H, X) → SS(H, Y)
    DV(H, M)  =   SS(H, X) → CPE(H, Y)
                  SS(H, X) → SS(H, Y)

where the three rows are d-connected. Intuitively, DV(H, M) is a semantic graph whose c-paths miss at least one of the c-blocks of the dissolution chain. The only paths left out are those that go through the dissolution chain and hence are unsatisfiable. Notice that we may express DV(H, M) in either of the two more compact forms shown below (since CPE(H, X) ∪ SS(H, X) = X and CPE(H, Y) ∪ SS(H, Y) = Y):

    X → SS(H, Y)                      SS(H, X) → Y
    SS(H, X) → CPE(H, Y)              CPE(H, X) → SS(H, Y)

Note that the three representations are semantically equivalent but are not in general isomorphic; in particular their d-paths need not be the same. The c-paths of all three representations, however, are identical; they consist of exactly those c-paths in M that do not pass through H.

Theorem 2. Let H be a ground dissolution chain in a graph G, and let M be the smallest full block containing H. Then M and DV(H, M) are equivalent.

We may therefore select an arbitrary dissolution chain H in G and replace the smallest full block containing H by its dissolvent, producing (in the ground case) an equivalent graph. We call the resulting graph the dissolution of G with respect to H and denote it Diss(G, H); links are inherited in the obvious way.

The graph formed by dissolution has strictly fewer c-paths than the old one: all remaining c-paths were present in the old graph, and the two graphs are semantically equivalent. The original graph has only finitely many c-paths, and each dissolution operation preserves its meaning. As a result, finitely many dissolutions (bounded above by the number of c-paths in the original graph) will yield a graph without links. If this graph is empty, then the original graph was spanned; if not, then every (necessarily linkless) c-path characterizes a model of the original graph.

If we dissolve on a link {A, ¬A} in a graph in CNF, then H = {A, ¬A}, X and Y are the two clauses containing A and ¬A, respectively, M = X ∪ Y, H_X = {A} = CPE(H_X, X), and H_Y = {¬A} = CPE(H_Y, Y). The Prawitz matrix reduction rule [Prawitz 1970] may then be used. The resulting graph is

    CPE(H, X) → SS(H, Y)
    SS(H, X) → CPE(H, Y)

Note that Theorem 2 does not apply in this case (i.e., the Prawitz rule preserves unsatisfiability but not equivalence).
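The CNF special case is small enough to execute. The sketch below (ours; the literal and clause encodings are our own) performs one ground dissolution step on a single link between two clauses and confirms the strict decrease in c-paths.

```python
# One ground dissolution step on a CNF link (a sketch, ours).  A clause is a
# list of (name, neg) literals; a c-path of the conjunction X /\ Y picks one
# literal from each clause.

def dissolve_link(X, Y, lit):
    """Dissolvent of X /\ Y on the link {lit in X, ~lit in Y}: a disjunction
    of clause pairs whose c-paths are those of X /\ Y that miss the link."""
    comp = (lit[0], not lit[1])
    cpe_x, ss_x = [lit], [l for l in X if l != lit]      # CPE(H,X), SS(H,X)
    cpe_y, ss_y = [comp], [l for l in Y if l != comp]    # CPE(H,Y), SS(H,Y)
    return [(cpe_x, ss_y),     # CPE(H,X) /\ SS(H,Y)
            (ss_x, cpe_y),     # SS(H,X)  /\ CPE(H,Y)
            (ss_x, ss_y)]      # SS(H,X)  /\ SS(H,Y)

def count_c_paths(rows):
    """c-paths of a disjunction of clause pairs: one literal from each side."""
    return sum(len(a) * len(b) for a, b in rows)

X = [("A", False), ("B", False)]           # A \/ B
Y = [("A", True), ("C", False)]            # ~A \/ C
rows = dissolve_link(X, Y, ("A", False))
print(count_c_paths(rows))                 # 3: the 2*2 paths minus the linked one
```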
4. A Dissolution Refutation

The graph for this example is unsatisfiable and has 12 c-paths. We box the smallest full block containing a dissolution chain about to be activated. [The figures for this section did not survive extraction; the accompanying text is retained.]

Links 1 and 2 form a dissolution chain; M = (X, Y), where X and Y are the two leftmost fundamental subgraphs of the entire graph. Notice that SS({C, D}, X) = X_{A,B} and SS({C, D}, Y) = Y_{...}; also CPE({C, D}, X) = X_{C,D}. Dissolution removes 4 c-paths, resulting in a new graph (we use the second of the two compact versions of the dissolvent throughout this section).

The subgraph c → b and the single occurrence of C are both linkless full blocks. (We have not deleted links in the ordinary sense of [Bibel 1981] or [Murray & Rosenthal 1985b]. With path dissolution, links simply disappear because their associated nodes, although c-connected in the original graph, become d-connected in the dissolvent.) We may therefore apply the Pure Lemma [Bibel 1981], [Murray & Rosenthal 1985b], [Murray & Rosenthal 1987] and delete the d-extensions of these full blocks, which in turn renders the upper occurrence of A pure. Now we activate link 3 and apply the Pure Lemma to A. We next dissolve on link 4. The remaining two links constitute a single strong c-block, and they span the entire graph. Dissolving on them results in the empty graph.

5. Applying Dissolution to a Satisfiable Graph

We may always apply dissolution to a ground semantic graph until the graph is without links. The remaining c-paths, if any, characterize exactly the interpretations satisfying the graph. We must not, however, apply the Pure Lemma if that is our objective, since it, unlike dissolution, preserves only satisfiability, not equivalence. The graph for this example is similar to that of the previous one but is satisfiable. [Figure not reproduced.] The details, which are similar to the previous example, are left to the reader because of space considerations. After six dissolutions (activating a total of 10 links), the graph is reduced to A → C → D → E → D → B, which specifies those interpretations that satisfy the original graph.

6. First-Order Dissolution

The usual arguments (involving the application of Robinson's Unification Theorem) allow us to lift ground chains to the general level. More stringent conditions, however, must be satisfied if we wish to replace the smallest full block containing a dissolution chain by its dissolvent. (Were this not the case, we would have a decision procedure for first-order satisfiability.) The difficulty arises from ground instances (possibly crucial to a proof) in which the chain does not exist, i.e., instances that are not consistent with the mgsu of the chain. Of course, the dissolvent can always be soundly conjoined to the existing graph. Dissolution (with replacement) may then be applied freely to the newly inferred portion of the graph. A partial replacement technique may be applied to chains that link the old and new sections of the graph. These ideas are discussed in this section.

During the construction of the mgsu of a chain, some care must be taken regarding the familiar process of standardizing variables apart. If x is any variable, two occurrences of x cannot be standardized apart if they appear in d-connected nodes. In CNF this is a sufficient condition for determining whether variables may be standardized apart; in semantic graphs (NNF), this is not the case. What is required is the transitive closure of the relation 'are d-connected', which provides all the occurrences of x that are in fact the same variable.

6.1. Partial replacement

Let G be a (first-order) semantic graph, and let H be a dissolution chain with mgsu σ in the full block M = (X, Y)_c. Let U = DV(H, M) and consider the graph G → Uσ.
If we have a dissolution chain in Uσ, we may dissolve with replacement since anything lost due to successive instantiations is present in G. If we have a dissolution chain from G to U, the smallest full block containing it will consist of a fundamental subgraph of G and a fundamental subgraph of U. (This full block could be larger if the chain contains a strong c-block.) We of course cannot replace the fundamental in G, but we may replace the fundamental in U by the entire dissolvent. The following example illustrates these ideas. It consists of five fundamental c-connected subgraphs, labeled F_1 through F_5, over literals such as A(x), B(x), C(x), D(x), E(x), E(h(v)), A(g(v)), and G(f(g(v))). [The graphs for this example did not survive extraction; the accompanying text is retained.]

Suppose that we first dissolve on link 1; the smallest full block containing it (F_1 conjoined with F_5) is replaced by its dissolvent. In the original graph, several occurrences of x can be standardized apart (although we have not done so), but in the dissolvent, all occurrences of v are d-related. The dissolution operation has created two d-connected occurrences of the literal E(h(v)), both of which are linked to E(x) in the original graph. Therefore these two links are descendants of the original link, and they form a dissolution chain that is somewhat easier to find (given the appropriate bookkeeping) than an arbitrary two-link chain. Dissolving on this chain yields F_7.

There are now two d-connected occurrences of the c-block B(h(v)) → C(h(v)); the two taken together also form a c-block. Each is linked to (B(x), D(x))_d, a strong c-block in F_4. Dissolving results in replacing F_7 with fundamentals F_8 through F_11. Next, a dissolvent is computed from the link on G(f(g(v))); we replace only F_12, the fundamental that meets G(f(g(v))). The c-block consisting of B(f(g(v))) → D(f(g(v))) in F_12 is linked to the strong c-block (B(x), D(x))_d within F_2 of the original graph, and we replace F_12 by the dissolvent (omitting F_8). The proof may be completed using F_2, F_4, and F_14: dissolving on link 2 produces B(g(v)) → C(g(v)), which is linked to F_4 by a dissolution chain that spans the entire graph.

6.2. Dissolution on copies of graphs

In the previous technique, dissolution was used once within the original graph to create an inferred graph on which replacement could safely be performed. Another strategy would be to create a copy of the original graph, and then dissolve with replacement on the copy as much as possible. The idea is to drive the copy toward some instantiated linkless consequence, which is then conjoined to the original graph. (If we are lucky, the consequence will be empty!) The process can then be repeated, with preference given to those links (if any) not used in previous iterations. Note that in general, as dissolution is applied, some links not yet used in the copy will simply vanish, their literals having become instantiated in ways inconsistent with their original unifiers.

Let us try this approach on the previous example. Links 1 through 6 are compatible. Regardless of the order in which they are activated, the resulting graph contains all c-paths except those through any of the six links. We omit the individual dissolution steps and present only the result: the resulting semantic graph has 6 c-paths, whereas the original one has 48. Dissolving on links 7 and 8 yields the empty graph.

These techniques look promising, but both are primitive; much remains to be investigated at the first-order level. Our intuition is strong that dissolution at the ground level is likely to be an effective technique, and we cannot help but believe that, if properly lifted, it would also be effective for first-order logic.
Acknowledgements

We would like to thank Stacia Quimby and Scott Shurr, without whose help we could not have dealt with multiple systems in multiple geographic locations.

References

[Andrews 1981] P. B. Andrews, "Theorem proving via general matings," J. ACM, vol. 28, 2, pp. 193-214, April 1981.
[Bibel 1979] W. Bibel, "Tautology testing with a generalized matrix reduction method," Theoretical Computer Science, vol. 8, pp. 31-44, 1979.
[Bibel 1981] W. Bibel, "On matrices with connections," J. ACM, vol. 28, 4, pp. 633-645, Oct. 1981.
[Bibel 1982] W. Bibel, "A comparative study of several proof procedures," Artificial Intelligence, vol. 18, 3, pp. 269-293, 1982.
[Brown 1976] F. Brown, "Notes on chains and connection graphs," personal notes, Department of Computation and Logic, Edinburgh University, 1976.
[Chang & Slagle 1979] C. L. Chang and J. R. Slagle, "Using rewriting rules for connection graphs to prove theorems," Artificial Intelligence, vol. 12, pp. 154-178, Aug. 1979.
[Eisinger 1986] N. Eisinger, "What you always wanted to know about clause graph resolution," Proceedings of the Eighth International Conference on Automated Deduction, Oxford, England, July 1986. In Lecture Notes in Computer Science, Springer-Verlag, vol. 230, pp. 316-336.
[Kowalski 1975] R. Kowalski, "A proof procedure using connection graphs," J. ACM, vol. 22, 4, pp. 572-595, Oct. 1975.
[Murray 1982] N. V. Murray, "An experimental theorem prover using fast unification and vertical path graphs," Proc. of the Fourth National Conf. of the Canadian Society for Computational Studies of Intelligence, pp. 125-131, U. of Saskatchewan, May 1982.
[Murray & Rosenthal 1985a] N. V. Murray and E. Rosenthal, "Path Resolution and Semantic Graphs," Proceedings of EUROCAL '85, Linz, Austria, April 1-3, 1985. In Lecture Notes in Computer Science, Springer-Verlag, vol. 204, pp. 50-63.
[Murray & Rosenthal 1985b] N. V. Murray and E. Rosenthal, "Path Resolution With Link Deletion," Proceedings of IJCAI-85, pp. 1187-1193, UCLA, Aug. 18-24, 1985.
[Murray & Rosenthal 1987] N. V. Murray and E. Rosenthal, "Inference With Path Resolution and Semantic Graphs," to appear in J. ACM, vol. 34, 2, April 1987.
[Prawitz 1970] D. Prawitz, "A proof procedure with matrix reduction," Lecture Notes in Mathematics, vol. 125, pp. 207-213, Springer-Verlag, 1970.
[Robinson 1965] J. A. Robinson, "A machine oriented logic based on the resolution principle," J. ACM, vol. 12, 1, pp. 23-41, 1965.
[Stephan & Siekmann 1978] W. Stephan and J. Siekmann, "Completeness and soundness of the connection graph proof procedure," AISB/GI Conference on Artificial Intelligence (D. Sleeman, ed.), pp. 340-344, Leeds University Press, Hamburg, July 18-20, 1978.
[Stickel 1982] M. L. Stickel, "A nonclausal connection-graph resolution theorem-proving program," Proc. AAAI-82 Nat. Conf. on Artificial Intelligence, pp. 229-233, Pittsburgh, Pennsylvania, Aug. 1982.
The Deductive Synthesis of Imperative LISP Programs

Zohar Manna
Stanford University
Stanford, California

Richard Waldinger
SRI International
Menlo Park, California

Abstract

A framework is described for the automatic synthesis of imperative programs, which may alter data structures and produce destructive side effects as part of their intended behavior. A program meeting a given specification is extracted from the proof of a theorem in a variant of situational logic, in which the states of a computation are explicit objects. As an example, an in-place reverse program has been derived in an imperative LISP, which includes assignment and destructive list operations (rplaca and rplacd).

Introduction

For many years we have been working on the design of a system for program synthesis, i.e., the automatic derivation of a program from a given specification. For the most part, we have been concentrating on the synthesis of applicative programs, i.e., programs that return an output but produce no side effects (Manna and Waldinger [80], [87a]). Here we consider the synthesis of imperative programs, i.e., programs that alter data structures as part of their intended behavior. We adapt the same techniques that we have used for applicative programs.

We have developed a deductive approach, in which the construction of a program is regarded as a task in theorem proving. For applicative programs, we prove a theorem that establishes the existence of an output object meeting the specified conditions. The proof is restricted to be sufficiently constructive to indicate a computational method for finding the desired output. This method provides the basis for a program that is extracted from the proof.

The difficulty in adapting this deductive approach to imperative programs is that, if data structures are altered, a sentence that is true at a certain state of the computation of a program may become false at other states. In the logical theories in which we usually prove theorems, a sentence does not change its truth-value. A time-honored approach to this problem is to employ a situational logic, i.e., one in which states of the computation are explicit objects. Predicate and function symbols each have a state as one of their arguments, and the truth of a sentence may vary from one state to another.

In this paper, we adapt situational logic to the synthesis of imperative programs. We find that conventional situational logic is inadequate for this task, but formulate a new situational logic, called imperative-program theory, that overcomes this inadequacy. To be specific, we shall set down a theory of imperative LISP, which includes the destructive operators rplaca and rplacd and the assignment operator setq. We intend, however, that other versions of imperative-program theory shall be equally applicable to other languages.

(This research was supported by the National Science Foundation under Grants DCR-82-14523 and DCR-85-12356, by the Defense Advanced Research Projects Agency under Contract N00039-84-C-0211, by the United States Air Force Office of Scientific Research under Contract AFOSR-85-0383, by the Office of Naval Research under Contract N00014-84-C-0706, by United States Army Research under Contract DAJA-45-84-C-0040, and by a contract from the International Business Machines Corporation.)

Historical Notes

Situational logic was introduced into the computer science literature by McCarthy [63] and was also applied to describe imperative programs by Burstall [69].
It was used for the synthesis of imperative programs in the systems QA3 (Green [69]) and PROW (Waldinger and Lee [69]). We have used situational logic earlier to describe ALGOL-like programming languages (Manna and Waldinger [81]). Recently, we have adapted situational logic to be a framework for automatic planning (Manna and Waldinger [87b]).

Imperative LISP has recently been described (in terms of "memory structures") in the thesis of Mason [86]. We have translated many of Mason's notions into the situational-logic framework. Mason applies his framework to proving properties of programs and to program transformation, but does not deal with synthesis from specifications. We also treat the assignment operation (setq), which Mason omits.

The Trouble with Conventional Situational Logic

To construct a program in conventional situational logic (e.g., the QA3 logic), one proves the existence of a final state in which the specified conditions will be true. One regards the initial state as the input and the final state as the output of the imperative program. In other words, one uses the same approach one would use for applicative programs, treating states as objects that can be passed around like numbers or lists.

The trouble with this approach is that one can construct programs that can perform more than one operation on the same state, contrary to the physical fact that, once an operation has been performed on a state, that state no longer exists. For example, it is possible to construct programs such as

    bad(x) <= setq(x, x·x, s0);
              if p then setq(z, x, s0)
                   else setq(y, x, s0)

According to this program, in our initial state s0 we are to set x to x² and then test if condition p is true. If so, in our initial state s0, we are to set z to x; otherwise, we are to set y to x. Unfortunately, once we have changed the value of x, we have destroyed our initial state and no longer have access to the initial value of x. Imperative-program theory has been designed to overcome this sort of difficulty. In this theory, programs are denied access to explicit states, and always apply to the implicit current state.

Elements of Imperative-LISP Theory

In an imperative-program theory, we retain the states and objects of situational logic and introduce a new sort of entity, called the fluent, which is best described in terms of what it does. We shall say that evaluating a fluent in a given state produces a new state and returns an object. For example, evaluating the fluent setq(x, 2) in a given state s produces a new state, similar to the given state except that x has been set to 2. The evaluation also returns an object, the number 2.

We shall think of an imperative program as computing a function that, applied to a given input object (or objects), yields a fluent. For example, if we apply the imperative reverse program nrev(a) to a list structure e, we obtain a fluent nrev(e); evaluating this fluent in a given state will produce a new state (in which e is reversed) and return an object (the reversed list structure itself). Because we do not regard the state as an explicit input to the program, we cannot construct programs, such as bad(x), that perform multiple operations on the same state.

To construct a program, we prove the existence of a fluent that, for a given initial state and input object, produces a final state and returns an output object satisfying the specified conditions.
The actual program is then extracted from the proof.

To be more precise, let us restrict ourselves to imperative LISP. In imperative-LISP theory we distinguish among several sorts of objects:

• States. These are states of the computation.

• Locations. These may be thought of as machine locations or cells. We discriminate between pairs and atoms. Pairs are conventional LISP cells; they "point" to two locations. Atoms are identified with storage locations; they sometimes point to a single location, which must be a pair. The special atom nil cannot point to anything. Atoms and pairs are assumed to be disjoint. Locations are the input and output objects of imperative-LISP programs.

• Abstract trees. These are finite or infinite binary trees, which may be represented by pair locations. We identify atoms with atomic abstract trees.

• Fluents. These may be thought of as functions mapping states into state-location pairs. We identify each atom with a fluent, which we describe later.

Note that the above sorts are not disjoint. In particular, atoms are included among the locations, abstract trees, and fluents. We can only identify atoms with storage locations because of the absence of simple aliasing in LISP; two distinct atoms are never bound to the same storage location.

Now let us describe the functions that apply to and relate these sorts.

Functions on Locations

If l is a pair location and s a state, :left(l, s) and :right(l, s) are locations, called the left and right components of l. If a is an atom and s a state, and if a has some location stored in it (i.e., a is bound), then :store(a, s) is the location stored in a.

Functions on Abstract Trees

We assume that we have the usual functions on abstract trees: the tree constructor t1 • t2, the functions left(t) and right(t), etc. Abstract lists are identified with abstract trees in the usual way: the list (t1, t2, ..., tn) is identified with the tree t1 • (t2 • ... • (tn • nil) ...). The append function t1 ⋄ t2 and the reverse function rev(t) are defined in the case in which t1 and t are finite lists and t2 is a list.

Functions on Fluents

To determine the state produced and the location returned by evaluating a fluent in a given state, we employ the production function ";" and the return function ":". If s is a state and e a fluent, s;e is the state produced by evaluating e and s:e is the location returned by evaluating e (in state s).

We assume that the evaluation of atoms does not change the state and returns the location stored in the atom. This is expressed by the atom axioms,

    w;u = w                  (production)
    w:u = :store(u, w)       (return)

for all states w and atoms u. (We use letters towards the end of the alphabet as variables. Variables in axioms have tacit universal quantification.) Evaluating the atom nil is assumed to return nil itself, i.e.,

    :store(nil, w) = nil     (nil)

for all states w.

If e1 and e2 are fluents, e1;;e2 is the fluent obtained by composing e1 and e2. Evaluating e1;;e2 is the same as evaluating first e1 and then e2. This is expressed by the composition axioms,

    w;(u1;;u2) = (w;u1);u2   (production)
    w:(u1;;u2) = (w;u1):u2   (return)

for all states w and fluents u1 and u2. We shall write w;u1;u2;...;un and w;u1;...;un-1:un as abbreviations for ((w;u1);u2);...;un and (...(w;u1);...;un-1):un, respectively. For any positive integer i, we shall write ū_i as an abbreviation for u1;;u2;;...;;ui, where ū_1 is taken to be u1 itself. Thus (by the production composition axiom) w;ū_j = w;u1;...;uj, and (by the return composition axiom) w:ū_j = w;u1;...;uj-1:uj.
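Since the atom and composition axioms are operational, they can be animated directly. The following minimal executable model is our own (Python standing in for logic): a state is a dictionary binding atoms to locations, and a fluent is a function from a state to a pair of the produced state and the returned location. The choice that setq returns the stored location is our assumption.

```python
# A tiny executable model (ours) of the production (';') and return (':')
# functions over states and fluents.

def atom(name):
    """Atom axioms: w;u = w and w:u = :store(u, w)."""
    return lambda state: (state, state[name])

def compose(e1, e2):
    """Composition axioms: w;(u1;;u2) = (w;u1);u2 and w:(u1;;u2) = (w;u1):u2."""
    def e(state):
        s1, _ = e1(state)      # evaluate e1 first, keep the produced state
        return e2(s1)          # then evaluate e2 in that state
    return e

def setq(name, value_fluent):
    """setstoreq: the atom argument is not evaluated; the value fluent is.
    Returning the stored location is our assumption."""
    def e(state):
        s1, loc = value_fluent(state)
        s2 = dict(s1)
        s2[name] = loc
        return s2, loc
    return e

s0 = {"x": 2}
prog = compose(setq("y", atom("x")), atom("y"))   # y := x; y
print(prog(s0))   # ({'x': 2, 'y': 2}, 2)
```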
The Linkage Axioms

Each function on fluents induces corresponding functions on states and on locations. We begin with some definitions. A fluent function f(e1, ..., en) applies to fluents e1, ..., en and yields a fluent. A state function h(l1, ..., ln, s) applies to locations l1, ..., ln and a state s and yields a state. A location function g(l1, ..., ln, s) applies to locations l1, ..., ln and a state s and yields a location.

For each fluent function f(e1, ..., en), we introduce a corresponding state function ;f(l1, ..., ln, s) and location function :f(l1, ..., ln, s). If f uses ordinary LISP evaluation mode, the three functions are linked by the following linkage axioms:

    w;f(u1, ..., un) = ;f(w:ū1, ..., w:ūn, w;ūn)   (production)
    w:f(u1, ..., un) = :f(w:ū1, ..., w:ūn, w;ūn)   (return)

for all states w and fluents u1, ..., un. In other words, the state and location functions ;f and :f describe the effects of the fluent function f after its arguments have been evaluated. The function ;f yields the state produced, and the function :f yields the location returned, by the evaluation of f.

For example, for the fluent function setleft [conventionally written rplaca], we have the production linkage axiom

    w;setleft(u1, u2) = ;setleft(w:u1, w;u1:u2, w;u1;u2)

and the return linkage axiom

    w:setleft(u1, u2) = :setleft(w:u1, w;u1:u2, w;u1;u2)

for all states w and fluents u1 and u2. That is, to find the state produced and location returned by evaluating setleft(u1, u2) in state w, first evaluate u1 in state w, then evaluate u2 in the resulting state, and finally apply the corresponding state and location functions ;setleft and :setleft in the new resulting state. The axioms that describe ;setleft and :setleft are given in the next section.

While the fluent function setstore [conventionally, set] does adhere to ordinary LISP evaluation mode, the fluent function setstoreq [conventionally, setq] does not; it requires that its first argument be an atom and does not evaluate it. For this function, the production linkage axiom is

    w;setstoreq(u, v) = ;setstore(u, w:v, w;v)

and the return linkage axiom is

    w:setstoreq(u, v) = :setstore(u, w:v, w;v)

for all states w, atoms u, and fluents v. These axioms take into account the fact that evaluating an atom has no side effects.

As an informal abbreviation, we shall sometimes use "s:;e" as an abbreviation for the string "s:e, s;e", and ":;f(e1, ..., en)" for the string ":f(e1, ..., en), ;f(e1, ..., en)".

Describing LISP Operators

Although we regard LISP programs as computing functions on fluents, they are best described by providing axioms for the corresponding state and location functions.

A fluent function f(e1, ..., en) is said to be applicative if its evaluation produces no side effects other than those produced by evaluating its arguments e1, ..., en. This is expressed by the axiom

    ;f(x1, ..., xn, w) = w     (applicative)

for all locations x1, ..., xn and states w. It follows that

    w;f(u1, ..., un) = ;f(w:ū1, ..., w:ūn, w;ūn)   (by the production linkage axiom for f)
                     = w;ūn                        (by the applicative axiom for f)

for all fluents u1, ..., un and states w. For example, the fluent functions left and right [conventionally written car and cdr, respectively] are applicative; that is, ;left(x, w) = w and ;right(x, w) = w for all locations x and states w.
It follows by the above reasoning that w;left(u) = w;u and w;right(u) = w;u for all fluents u and states w.

The fluent function setleft alters the left component of its first argument to contain its second argument. This is expressed precisely by the primary production axiom for setleft,

    :left(x1, ;setleft(x1, x2, w)) = x2     (primary production)

for all pair locations x1, locations x2, and states w. The function setleft returns its first argument; this is expressed by the return axiom for setleft,

    :setleft(x1, x2, w) = x1     (return)

for all pair locations x1, locations x2, and states w.

We must also provide frame axioms indicating that the function setleft does not alter anything but the left component of its first argument; namely,

    ;setleft(x1, x2, w):u = w:u                                          (atom)
    if not (x1 = y) then :left(y, ;setleft(x1, x2, w)) = :left(y, w)     (left frame)
    :right(y, ;setleft(x1, x2, w)) = :right(y, w)                        (right frame)

for all pair locations x1 and y, locations x2, atoms u, and states w.

The above axioms give properties of the state and location functions ;setleft and :setleft. Properties of the corresponding fluent function setleft can now be deduced from the linkage axioms. The function setright [conventionally, rplacd] is treated analogously.

Locations and Abstract Trees

We think of each location as representing an abstract (finite or infinite) tree. While we describe LISP functions at the state and location level, it is often natural to express the specifications for LISP programs at the abstract tree level. In this section we explore the relationship between locations and abstract trees. A formalization of abstract trees is discussed in Mason [86].

We introduce a function decode(l, s) mapping each location into the abstract tree it represents. If u is an atom,

    decode(u, w) = u     (atom)

for all states w. Thus each atom represents itself.
Otherwise, the location f2 is said to be ingrown. Abstract Properties of LISP Operators While LISP functions are most concisely defined by giving their effects on locations, their most useful properties often describe their effects on the abstract lists and trees represented by these locations. For example, we can establish the abstract property of the function cons, decode (:;cons(x, y, w)) = (abstract) decode(x, w) e decode(y, w) which relates cons to the abstract tree constructor e. 158 Automated Reasoning Often, the properties we expect do not hold unless certain stringent requirements are met. For example, the abstract property of the function setright is if pair(x) and not access(: Zeft ( x, w), x, w) and not access(y, 2, w) then decode (:;setright (x, y, w)) = Zeft (decode(x, w)) e decode(y, w) (abstract) where the relation pair characterizes the pair locations. That is, the function setright “normally” returns the result of a left oper- ation followed by a tree construction. However, we require that z must be inaccessible from :Zeft(x, w); otherwise, in altering the right component of x, we inadvertently alter the abstract tree rep- resented by the left component of x. Also, x must be inaccessible from y; otherwise, in altering the right component of z, we inad- vertently alter the abstract tree represented by y. Many errors in imperative programming occur because people assume that the abstract properties hold but forget the conditions they require. Specification of Programs Each LISP program is applied to an initial input location es. The resulting fluent is evaluated in an initial state SO, produces a new state sf, and returns an output location e,. The specification for a LISP program may thus be expressed as a sentence epo, SO,lf, Sfl. The program to be constructed computes a fluent function, which does not apply to locations directly; it applies to an input parameter a, an atom that (in normal evaluation mode) contains the input location, that is, so:a = es. When the program is evalu- ated, its actual argument, which is a fluent, is evaluated first. The location it returns is stored in the parameter a. (This is easily extended for programs which take more than one argument.) Furthermore, the computed function does not yield a state or location itself, but a fluent z. Evaluating z in the initial state so produces the final state and returns the output location, that is, ss:z = 4’1 and SO;% = sf. To construct the program, therefore, we prove the theorem (vQ>(vso)(~%)a[so:Q, so, so:z, so;z]. In other words, we prove the existence of a fluent z that, when evaluated in the initial state, will produce a final state and yield an output location meeting the specified conditions. For example, suppose we want to specify a destructive list- reversing program. In terms of its principle abstract property, we may specify the desired program by the sentence P[so,&,sf,ef]: decode(lf, sf) = rev(decode(&, so)). In other words, the list represented by the location !Jf after evalu- ation of the program is to be the reverse of the list represented by 4!0 before. For a moment, we forget about the conditions required to make this possible. The theorem we must prove is accordingly (Va)(Vss)(3z)[decode(ss:t, ss;z) = rev (decode (,~:a, so))]. The Deductive System The system we employ to prove our theorems is an adaptation of the deductive-tableau system we use to derive applicative programs (Manna and Waldinger [80], [87a]). 
The adaptation to imperative programs mimics our development of a deductive system for au- tomatic planning (Manna and Waldinger [87b]). Because we shall only informally present a segment of the program derivation in this paper, we do not describe the system in detail. A complete description appears in the report version of this paper. The In-place Reverse Program At the risk of spoiling the suspense, let us present the final program we shall obtain from the derivation: nrev(a) * nreQ(a, nil) { if null(a) nrev2(a, b) S= then b else nrev2(right (a), setright (a, b)). This is an in-place reverse, used as an example by Mason [86]. The program nrev is defined in terms of a more general program nrev2, which has the effect of reversing the list a and appending the result to the list b. The consequence of applying the program nrev2 is illustrated in the following figure: after: Note that the pointers in the spine of so:a have been reversed. The principal condition in the specification for nrev2 is decode(l?f, sf) = rev(decode(.f& SO)) 0 decode(mo, SO), where f?s and mo are the two input locations, In other words, the abstract list represented by the location ef after the evaluation is to be the abstract list obtained by reversing the list represented initially by & and appending the result to the list represented initially by mo. The program nrev2 we derive does not satisfy the above spec- ification in all cases. We must require several input conditions that ensure that our given lists are reasonably well behaved. We impose o the list conditions Zist(&, se) and Zist(m0, SO), that es and m,-, initially represent abstract lists. e the finiteness condition finite(e0, SO), i.e., that the list .fs initially represents is finite; otherwise, nrev2 would not terminate. the purity condition pure (lo, SO), i.e., that the spine of !s is not initially acces- sible from any of the left components of spine elements; otherwise, in altering the pointers in the spine, we would inadvertently be altering the elements of the list represented by &. the isolation condition not spine-access(m0, !o, SO), i.e., that the spine of &, is not initially accessible from (the spine of) mo; otherwise, in altering the spine of 10, we would inadvertently be altering the list represented by me. These are reasonable enough conditions, but to complete the deriva- tion we must make them explicit. (Similarly, we must impose the list condition for l?~, and the finiteness and purity conditions, on the specification for nrev itself.) The full specification for nrev2 is thus if Zist(&, so) and Zist(mo, SO) and finite(&, SO) and pure(&,, SO) and not spine-access(m0, !o, so) then decode (lf , sf ) = rev (decode(!o, SO)) 0 decode(mo, SO), and the theorem we must prove is if Zist(so:a, SO) and Zist(so:b, so) and finite(so:a, SO) and pure(so:a, SO) and not spine-access(so:b, so:a, SO) then decode(so:;z) = rev (decode(so:;a)) n decode(so:;b). (Here we have dropped quantifiers by skolemization.) We do not have time to present the full derivation of the pro- gram nrev here, so let us focus our attention on the most interest- ing point, in which the pointer reversal is introduced into nrev2. Using the pair axiom for the decode function, properties of abstract lists, and the input conditions, we may transform our goal into pair (so:a) and decode(so:;z) = rev(decode(so:;right(a)))o 1 Zeh (decode(so:;a)) Q d&Je(so:;b) 1 1 - ., 1 We omit the details of how this was done. 
At this point, we invoke the abstract property of setright given above. The bracketed subsentence of that property is equationally unifiable with the bracketed subsentence of our goal; a unifying substitution is

    θ: {x ← s0:a, y ← s0:b, w ← s0}.

To see that θ is indeed an equational unifier, observe that

    (left(decode(s0:;a)) • decode(s0:;b))θ
      = left(decode(s0:a, s0;a)) • decode(s0:b, s0;b)     (by our abbreviation)
      = left(decode(s0:a, s0)) • decode(s0:b, s0)         (by the production atom axiom)
      = (left(decode(x, w)) • decode(y, w))θ.

This reasoning is carried out by the equational unification algorithm (Fay [79]; see also Martelli and Rossi [86]). We can thus use the property to deduce that it suffices to establish the goal

    pair(s0:a) and not access(s0:left(a), s0:a, s0) and not access(s0:b, s0:a, s0)
    and decode(s0:;z) = rev(decode(s0:;right(a))) ⋄ decode(s0:;setright(a, b)).

Formally this reasoning is carried out by the equality replacement rule. The second conjunct of the goal, that s0:a is inaccessible from s0:left(a), is a consequence of the purity input condition on s0:a. The third conjunct, that s0:a is inaccessible from s0:b, is a consequence of the isolation input condition. These deductions can be made easily within the system. Hence we are left with the goal

    pair(s0:a) and decode(s0:;z) = rev(decode(s0:;right(a))) ⋄ decode(s0:;setright(a, b)).

Now we may use induction to introduce the recursive call into the program nrev2. We omit how this is done. The complete program derivation is described in the report version of this paper.

The program we have obtained is an in-place reverse, which does not use any additional space. Of course, nothing in the derivation process ensures that the program we obtain is so economical. Other, more wasteful programs meet the same specification. If we want to guarantee that no additional storage is required, we must include that property in the specification. More precisely, we can define a function space(e, s) that yields the number of additional locations (cons cells and gensyms) required to evaluate fluent e in state s. We may then include the new condition space(z, s0) = 0 in the theorem to be proved. We could then derive the same in-place reverse program nrev, but we could not derive the more wasteful ones. Our derivation for nrev would be longer, but would include a proof that the derived program uses no additional space.
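For readers who want to execute the result, here is a direct transcription (ours, not the authors') of nrev and nrev2 onto Python objects, with a mutable cons cell standing in for a LISP pair and None for nil. Note that right(a) is read before setright(a, b) mutates a, reflecting the left-to-right argument evaluation that the linkage axioms make explicit.

```python
# In-place list reversal, transcribed from the derived program (ours).
class Pair:
    def __init__(self, left, right):
        self.left = left
        self.right = right

def nrev2(a, b):
    """Reverse the spine of a in place, appending onto b."""
    if a is None:                 # null(a)
        return b
    rest = a.right                # right(a), read before the destructive step
    a.right = b                   # setright(a, b), which returns a
    return nrev2(rest, a)

def nrev(a):
    return nrev2(a, None)

# Build (1 2 3), reverse it, and walk the result: prints 3, 2, 1.
lst = Pair(1, Pair(2, Pair(3, None)))
r = nrev(lst)
while r is not None:
    print(r.left)
    r = r.right
```

In this rendering, the purity and isolation conditions amount to requiring that the input spine be acyclic and disjoint from b; a shared or cyclic spine would make the walk above incorrect, just as the derivation's input conditions warn.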
Discussion

The ultimate purpose of this work is the design of automatic systems capable of the synthesis of imperative programs. By performing detailed hand derivations of sample imperative programs, we achieve a step in this direction. First of all, we ensure that the system is expressive enough to specify and derive the program we have in mind. But it is not enough that the derivation be merely possible. If the derivation requires many gratuitous, unmotivated steps, it may be impossible for a person or system to discover it unless the final program is known in advance. Such a system may be useful to verify a given program but hardly to synthesize a new one.

Of course, the fact that we can construct a well-motivated proof by hand does not guarantee that an automatic theorem prover will discover it. We expect, however, that the proofs we require are not far beyond the capabilities of existing systems. Looking at many hand derivations assists us in the design of a theorem prover capable of finding such derivations, work that is still underway. Close examination of many proofs suggests what rules of inference and strategies are appropriate to discover them.

Acknowledgments

The authors would like to thank Tom Henzinger and Ian Mason for valuable discussions and careful reading of the manuscript, and Evelyn Eldridge-Diaz for TeXing many versions.

References

Burstall [69] R. M. Burstall, Formal description of program structure and semantics in first-order logic, Machine Intelligence 5 (B. Meltzer and D. Michie, editors), Edinburgh University Press, Edinburgh, Scotland, 1969, pp. 79-98.
Fay [79] M. Fay, First-order unification in an equational theory, Proceedings of the Fourth Workshop on Automated Deduction, Austin, Texas, Feb. 1979, pp. 161-167.
Green [69] C. C. Green, Application of theorem proving to problem solving, Proceedings of the International Joint Conference on Artificial Intelligence, Washington, D.C., May 1969, pp. 219-239.
Martelli and Rossi [86] A. Martelli and G. Rossi, An algorithm for unification in equational theories, Proceedings of the Third Symposium on Logic Programming, Salt Lake City, Utah, Sept. 1986.
McCarthy [63] J. McCarthy, Situations, actions, and causal laws, technical report, Stanford University, Stanford, Calif., 1963. Reprinted in Semantic Information Processing (Marvin Minsky, editor), MIT Press, Cambridge, Mass., 1968, pp. 410-417.
Manna and Waldinger [80] Z. Manna and R. Waldinger, A deductive approach to program synthesis, ACM Transactions on Programming Languages and Systems, Vol. 2, No. 1, Jan. 1980, pp. 90-121.
Manna and Waldinger [81] Z. Manna and R. Waldinger, Problematic features of programming languages: a situational-logic approach, Acta Informatica, Vol. 16, 1981, pp. 371-426.
Manna and Waldinger [87a] Z. Manna and R. Waldinger, The origin of the binary-search paradigm, Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, Calif., Aug. 1985, pp. 222-224. Also in Science of Computer Programming (to appear).
Manna and Waldinger [87b] Z. Manna and R. Waldinger, How to clear a block: a theory of plans, Journal of Automated Reasoning (to appear).
Mason [86] I. A. Mason, Programs via transformation, Symposium on Logic in Computer Science, Cambridge, Mass., June 1986, pp. 105-117.
Waldinger and Lee [69] R. J. Waldinger and R. C. T. Lee, PROW: A step toward automatic program writing, Proceedings of the International Joint Conference on Artificial Intelligence, Washington, D.C., May 1969, pp. 241-252.
Robert D. McCartney
Department of Computer Science
Brown University
Providence, Rhode Island 02912

Abstract

This paper describes MEDUSA, an experimental algorithm synthesizer. MEDUSA is characterized by its top-down approach, its use of cost constraints, and its restricted number of synthesis methods. Given this model, we discuss heuristics used to keep this process from being an unbounded search through the solution space. The results indicate that the performance criteria can be used effectively to help avoid combinatorial explosion. The system has synthesized a number of algorithms in its test domain (geometric intersection problems) without operator intervention.

1. Introduction

The design of MEDUSA reflects three requirements:

• Synthesis should be done without user intervention.
• Algorithms will be produced to meet some given performance constraints.
• The synthesizer should be reasonably efficient, i.e., it should be considerably better than exhaustive search.

Algorithm synthesis in general is very difficult; it requires large amounts of domain and design knowledge, and much of design appears to be complex manipulations and intuitive leaps. We have attempted to circumvent these problems by working with a restricted set of synthesis methods in a restricted domain. The underlying hypothesis is that a fairly restricted set of methods can be used to produce algorithms with clean design and adequate (if not optimal) performance.

The domain used to develop and test MEDUSA is planar intersection problems from computational geometry. This domain has a number of characteristics that make it a good test area:

• Nearly all objects of interest are sets, so most algorithmic tasks can be defined in terms of set primitives.
• There exist a number of tasks that are not very hard (i.e., linear to quadratic complexity); algorithms in this range are practical for reasonably large problems.
• Although all of the objects are ultimately point sets, most can be described by other composite structures (e.g., lines, planar regions), so object representation is naturally hierarchical.
• Problems in this domain are solvable by a variety of techniques, some general and some domain-specific. Choosing the proper technique from a number of possibilities is often necessary to obtain the desired performance.

(This work owes a lot to the continuing support, encouragement, and advice of Eugene Charniak, and has been supported in part by the Office of Naval Research under grant N00014-79-C-0529.)

The test problems (with associated performance constraints) used in developing this system are given in Table 1. These problems have many similarities (minimizing the amount of domain knowledge needed), but differ enough to demand reasonable extensibility of techniques.

MEDUSA is implemented in LISP. It uses and modifies a first-order predicate calculus database using the deductive database system DUCK [7]. The database contains knowledge about specific algorithms, general design techniques, and domain knowledge, and is used as a scratchpad during synthesis. Useful DUCK features include data dependencies, datapools, and a convenient bi-directional LISP interface.

2. The Synthesis Process

The synthesis process is characterized by three features: it proceeds top-down, it is cost-constrained, and subtasks can only be generated in a small number of ways.
To own: Synthesis proceeds top-down, starting from a functional description of a task and either finding a known algo- rithm that performs its function within the cost constraint, or generating a sequence of subtasks that is functionally equivalent; this continues until all subtasks are associated with a sequence of known algorithms (primitives). This leads quite naturally to a hierarchical structure in which the algorithm can be viewed at a number of levels of ab- straction. Furthermore, it allows the synthesis process to be viewed as generation with a grammar (with the known algorithms as terminals). McCartney 149 From: AAAI-87 Proceedings. Copyright ©1987, AAAI (www.aaai.org). All rights reserved. Table 1: Test problems for algorithm synthesizer. Task Cost Constraint Detect intersection between 2 convex polygons N Report intersection between 2 convex polygons N Detect intersection between 2 simple polygons NlogN Report intersection between 2 simple polygons (N+S)logN Report connected components in a set of line segments (N+S)logN Report connected components in a set of isothetic line segments NlogN+S Report intersection among N k-sided convex polygons Nk log N Report intersection of N half-planes NlogN Report intersection of N half-planes N2 Report intersection of N isothetic rectangles N Report intersection of N arbitrary rectangles NlogN Report the area of union of set of isothetic rectangles NlogN Report the perimeter of union of set of isothetic rectangles NlogN Report the connected components of set of isothetic rectangles NlogN+S Detect any three collinear points in a point set N2 Detect any three collinear points in a point set N3 Detect any three lines in a line set that share an intersection N2 Report all lines intersecting a set of vertical line segments NlogN Report all lines intersecting a set of x-sorted vertical line segments N El. Cost-constrained: Synthesis is cost-constrained; included in the task speci- fication is a performance constraint (maximum cost) that the synthesized algorithm must satisfy. We take the view that an algorithm is not known until its complexity is known with some (situation dependent) precision. We chose asymptotic time complexity on a RAM (big-Oh) as the cost function for ease of calculation, but some other cost measure would not change the synthesis process in a major way. Two reasonable alternatives to using a cost-constraint that are precluded by practical considerations are having the synthesizer produce 1) optimal (or near-optimal) al- gorithms, or 2) the cheapest algorithm possible given its knowledge base. To produce an optimal algorithm, the system be able to deal with lower bounds, which is very difficult [l], so not amenable to automation. Producing the cheapest possible algorithm is probably equivalent to pro- ducing every possible algorithm for a task. This is at best likely to be exponential in the total number of subtasks, making it impractical for all but the shortest derivations. C. Subtask generation: A key function in MEDUSA is subtask generation; given a description of a task, return a sequence of subtasks that is functionally equivalent. One of the ways we simplify synthesis in this system is by using only four methods to generate subtasks. The first method is to use an equivalent skeletal al- gorithm. A skeletal algorithm is an algorithm with known function, but with some parts incompletely specified (its subtasks); e.g. 
an algorithm to report all intersecting pairs in a set of objects may have as its subtask a two object in- tersection test. The cost of a skeletal algorithm is specified as a function of its subtask costs. These algorithms range from quite specific algorithms (e.g. a sort algorithm with a generic comparison function) to quite general algorithmic paradigms (e.g. binary divide-and-conquer). These are a convenient way to express general paradigms, and allow generalizations of known algorithms whose subtasks can be designed to exploit specific task characteristics. The second subtask generation method is to trans- form the task into an equivalent task one using explicit domain information. This allows the use of logical equiv- alence in decomposing tasks e.g. the fact A contains B if and only if B is a subset of A and their boundaries do not intersect allows the decomposition of a containment test of two polygons into a conjunction of the tests for subset and boundary intersection. The third subtask generation method uses case de- composition. Suppose that there is some set of disjoint cases, at least one of which is true (a disjunction). An equivalent algorithm is determine which case holds, then solve the task given that case is true. The necessary subtasks for each case is an algorithm to test whether the case holds, and an algorithm to do the original task 150 Automated Reasoning A subset of B ? one of A’s vertices member of B 7 Figure 1: Synthesis of polygon intersection algorithm. Tasks are represented by rectangles, known and skeletal algorithms by ovals. given the case holds. We restrict this by considering only disjunctions where exactly one disjunct is true (termed oneofdisjunction by de Kleer [6]). Care must be taken to ensure that the case decomposition chosen is relevant to the task at hand. The fourth way to generate a subtask is to use some dual transform; specifically, we transform a task and its parameters into some dual space and solve the equivalent task there. This can be a “powerful” technique [3], al- lowing the use of algorithms and techniques from related problems and domains. For example, suppose we want to detect whether any three points in a finite point set are collinear. Given that we have a transform that maps lines to points and vice-versa, and if two objects intersect in the primal if and only if they intersect in the dual, then we can recast this problem as 1) map the points in the input set to a set of lines, then 2) detect whether any three lines in this line set share an intersection. : etect intersection 8 convex Ywns The synthesis process can be illustrated with an example (shown graphically in Figure 1.): determine whether two convex polygons (A and B) intersect, time linear in the total number of vertices. First, the task is decomposed into four cases: the boundaries intersect, A contains B, B contains A, or the polygons do not intersect. Since the cost of the task is the sum of its subtasks, each subtask has the linear time con- straint. This simple propagation of the parent’s constraint will hold for the rest of the subtasks in this example as well. Working first on the boundaries intersect case, we syn- thesize an algorithm to see if the boundaries intersect. We use a skeletal algorithm, a sweep-line algorithm to detect line-segment intersection [8], which applies since a polygon boundary is a set of line-segments. 
It has two components (subtasks): one to sort the vertices of the segments in X- order, one to perform a dynamic neighbor-maintain on the segments in Y-order. To sort the vertices, we use a skeletal mergesort algorithm: its subtasks are 1) sort A’s vertices, 2) sort B’s vertices, and 3) merge the two sorted vertex sets. The two sorts are each equivalent to a known algo- rithm that sorts the vertices of a convex chain in linear time, the third is equivalent to the standard linear merge algorithm (with constant-time comparisons). The dynamic neighbor-maintain is a dictionary algo- rithm. Set items are line-segments; they are to be put into some structure on the basis of their relative Y po- sitions. The input to this algorithm is a linear number of queries; the queries used are insert, delete, and report- neighbors (i. e., for a given segment, return the segments directly above and below it). This is equivalent to a known algorithm, a dictionary implemented with a 2-3 tree. The cost of this algorithm is the sum of its query costs, each of which is equal to the log of the working set (the excess of inserts over deletes). The working set here is bounded McCartney 151 by the number of line segments in the two sets that in- @ Use 2-3-tree dictionary during scan. tersect some vertical line; since the polygons are convex, If detect-segment-intersections returns true, report true the number of segments intersecting any line is bounded by a constant. Since the number of queries in the neighbor else Do a polygon-point-inclusion for polygon A, any ver- maintain is linear, and the working set is always constant tex of B. bounded, this algorithm satisfies the linear constraint. if that shows intersection, report true Detecting whether the polygons intersect given that else Do a polygon-point-inclusion for polygon B, any the the boundaries intersect is equivalent to the known al- vertex of A. gorithm “report true”, since boundary intersection implies intersection. if that shows intersection, report true Next, we work on the A contains B case, first trying else report false. to get an algorithm to see if the case holds, with the ad- algorithm, which is linear in the number of vertices, fin- ditional precondition that the boundaries do not intersect ishing the algorithm to detect whether A contains B. The (since we already tested that case, and would only reach here if it were false). By definition, A contains B if and only if B is a subset of A and their boundaries do not intersect. Since the boundary intersection is null, an equivalent task second part of this case, determining whether the polygons is to determine whether B is a subset of A, which is equiv- alent to determining whether B’s boundary is a subset of intersect given that A contains B, is equivalent to “report A, since A and B are bounded regions. Since the bound- aries are closed chains, and they do not intersect, either B’s boundary is a subset of A, or B’s boundary and A are dis- joint. Therefore, it suffices to check whether any non-null subset of B is a subset of A, so for simplicity we use a sin- gleton subset of B (any member) and test for inclusion in B. This is equivalent to a known polygon-point-inclusion IV. 
Synthesis mechanics Synthesis can be represented by a single routine that takes a task and 1) generates an equivalent decomposition (sub- task sequence), 2) calls itself for each subtask in its decom- position that is not completely specified, and 3) computes the cost of the task as a function of the costs in the de- composition. The cost constraints propagate forward from task to subtask, the costs percolate back from subtask to task. The important control mechanisms are those that pick from a group of possible decompositions, choose which active task to work on. It is also necessary to be able to find equivalent decompositions, propagate constraints, and combine costs. true”. Next, we work on the B contains A case, with the natorial explosion due to multiple decomposition choices, since it is impossible in general to know a priori that a de- A l Choosing among alternative decom- positions A problem inherent in synthesis is the possibility of combi- added preconditions that the boundaries do not intersect composition will lead to a solution within the constraint. and A does not contain B. It differs from the previous If a “dead-end” is reached (no decomposition possible, or slightly, since the task A subset of B? is equivalent to one time constraint violated), some form of backtracking must point of A being in B because of the added precondition Finally, we work on the A and B disjoint case, with that A does not contain B, but otherwise it is just the previous case with the parameters reversed. the added preconditions that the boundaries do not in- tersect and that neither contains the other. These added be done. Unless severely limited, backtracking will lead to - likely to succeed. exponential time for synthesis. To reduce backtracking, we algorithms (by unknown algorithms we mean any that are use a sequence of heuristics to choose the candidate most neither known nor skeletal)-we partition the possible de- The first heuristic is to favor known equivalent algo- rithms to everything and skeletal algorithms to unknown preconditions imply that A and B are disjoint, so deter- (always true)“, and determining whether the polygons in- mining whether the case holds is equivalent to “do nothing compositions into those three classes and pick from the tersect given that they are disjoint is equivalent to-“report most favored non-empty class. If a solution is known, there false”. is no reason to look any further; similarly if a skeletal algo- Therefore this question can be resolved using the fol- rithm exists, it is probably worth examining since it gives lowing sequence of operations (with cost proportional to a decomposition of the problem that often leads to a solu- the sum of the number of sides of the two polygons). tion. Although any known algorithm within the constraint Run detect-segment-intersections algorithm using the fol- is adequate, we favor one of zero cost (a no-op ) over one of lowing components: constant cost over any others, since the test is cheap and it is aesthetically pleasing to avoid unnecessary work. If e Sort the polygon vertices using mergesort with there is more than one skeletal or unknown algorithm left components after this filtering, the choice is dictated by the second or - sort each polygon’s vertices using convex- chain-vertex sort - merge the two polygon’s vertices. third heuristic. 
The second heuristic, which chooses among alterna- tive skeletal algorithms, uses the time constraint as a guide 152 Automated Reasoning to choose the algorithm most likely to succeed. For exam- ods for oneofdisjunction and disjunction): ple, suppose the constraint is N log N; likely candidates are divide-and-conquer and sweep-line (or some other al- gorithm involving a sort preprocess). To implement this we have associated a “typical cost” with each of the skeletal al- gorithms, that is, the cost that the skeletal algorithm usu- ally (or often) takes. The mechanism used is to choose the alternative whose typical cost most closely approximates the constraint. We do not choose the alternative that is typically cheapest; in fact we want the most expensive pos- sibility within the constraint, based on the hypothesis that less efficient algorithms are typically simpler. More con- cretely, suppose we have as a task reporting the intersec- tion of N half planes with alternative constraints N log N and N2. In the first case, the decision to use divide-and- conquer based on the time constraint leads to an N log N solution; in the second, the looser constraint would lead to a different choice, to process the set sequentially, build- ing the intersection one half-plane at a time (linear cost for each half-plane addition). The step in the sequential reduction, computing the intersection of a half-plane with the intersection of a set of half lanes in linear time is sim- pler than the merge step in the divide-and-conquer, which is to intersect two intersections of half-plane sets in linear time. If more than one skeletal algorithm has the same typical cost, and none has a cost closer to the constraint, the third heuristic is used to choose the best one. Given a set of conjunctive tasks cl, c2,. . . , ck : 1. Find and report an algorithm to teat one of these (say ci). 2. Use this method to solve conjunctive tasks Cl I Ci, C2 1 Ci,*--yCk 1 Ci- These algorithms will be combined in the order they were synthesized into a nesting of if-then-else’s in the ob- vious way. (The combination of the cases in the exam- ple shows this combining for the disjunctive case.) This method is guaranteed to find an order of the subtasks if one exists without interleaving (that is, if there is a se- quence such that the synthesizer could find an algorithm for each conjunct given the previous conjuncts in the se- quence were true); adding a precondition to a task can only increase the number of equivalent decompositions. The most efficient use of this method is to work depth first on the the tasks in the proper order. If the order is incorrect, a fair amount of effort may be expended on tasks that fail; in the worst case, the number of failed con- juncts is quadratic in the number of conjuncts. Working breadth first can lower the number of failures, those with longer paths than successful siblings, but since precondi- tions are added to active tasks whenever a sibling finishes, the partial syntheses done on these active tasks may also be wasted work. In MEDUSA, work is done primarily depth- first, but if a path too far ahead of its siblings, another path is worked on basically depth-first with some catch up to avoid long failing paths. The third heuristic, which is used if the others lead to no choice, is to compare all of the alternatives, choosing the one with the most specific precondition. The intuition is that a less-generally applicable algorithm is likely to be more efficient than a more generally applicable one. 
The specificity of a precondition is the size of the set of facts that are implied by the precondition; this definition gives equal weight to each fact, but is reasonably simple concep- tually. We approximate this measure by only considering certain geometric predicates, which is more tractable com- put at ionally. . rdering subtasks in synthesis For this method to be reasonably efficient, the tasks must be tried in something close to the proper order. We currently use a three-level rating scheme for subtasks, top preference to simple set predicates (like null tests), low- est preference to predicates involving a desired result, and middle preference for the rest. This is the rating that led to the order of the cases in the example: the boundaries- intersect case was done first since it is a simple set pred- icate (is the intersection of A and B null?), the A and B disjoint case was done last since it is the desired result, If all subtasks in a decomposition were independent, the and the two containment tests were done second and third order in which the subtasks were performed in the algo- (with equal preference), since they are not in either of the rithm would be unimportant. This is not always the case; other categories. consider the case determination tasks in the example. The fact that the boundaries did not intersect was important c. Finding equivalent osit ions to the solution of the containment determinations, -and the A basic function in the synthesizer is to find an algorithm fact that the boundaries did not intersect and neither poly- equivalent to a task using one of the methods given in 1I.C. gon contained the other made the test for A and B disjoint This is done by queries to the database, unifying variables trivial. In general, the testing of any conjunction, dis- and checking for equivalence. There is a certain amount of junction, or oneof disjunction of predicates is highly order- deductive effort involved in getting all of the equivalent de- dependent, since each predicate test is dependent on which compositions, much of it on decompositions that will be fil- predicates were already tested. It may be that not all or- tered out by the heuristics. Our system tries to reduce this derings lead to a solution within the time constraint, so wasted effort via a “lazy fetching” approach; rather than part of the synthesis task is to determine this order of ex- fetching all of the equivalent decompositions, it fetches all ecution. equivalent known algorithms and sets up closures to fetch The method we use to get the subtask algorithms in the others (skeletals and unknowns). This fits well with the conjunctive case is the following (with analogous meth- our known/skeletal/ un k nown filter heuristic explained in McCartney 153 the previous section: if a known algorithm exists, the ac- the previous section: if a known algorithm exists, the ac- tual fetching of the others is never done, similarly with tual fetching of the others is never done, similarly with skeletals vs. unknowns. skeletals vs. unknowns. Since we get closures for all of Since we get closures for all of the equivalent decompositions, we can always fetch them the equivalent decompositions, we can always fetch them if they are needed during backtracking. if they are needed during backtracking. D. Propagating constraints and cornbin- D. 
Propagating constraints and cornbin- ing costs ing costs Since much of the control of the system is based on costs, it Since much of the control of the system is based on costs, it is necessary to manipulate and compare cost expressions. is necessary to manipulate and compare cost expressions. Costs are symbolic expressions that evaluate to integers; Costs are symbolic expressions that evaluate to integers; they are arithmetic functions of algorithm costs, set car- they are arithmetic functions of algorithm costs, set car- dinalities, and constants. We have an expression manipu- dinalities, and constants. We have an expression manipu- lator that can simplify expressions, propagate constraints, lator that can simplify expressions, propagate constraints, and compare expressions. The use of asymptotic costs sim- and compare expressions. The use of asymptotic costs sim- plifies the process considerably. plifies the process considerably. V. V. Results and planned Results and planned extensions extensions Currently, MEDUSA will synthesize all of the problems Currently, MEDUSA will synthesize all of the problems in table one. in table one. In doing so, In doing so, it uses a variety of “stan- it uses a variety of “stan- dard” algorithmic paradigms (generate-and-test, divide- dard” algorithmic paradigms (generate-and-test, divide- and-conquer, sweep-line), and uses such non-trivial algo- and-conquer, sweep-line), and uses such non-trivial algo- rithm/data structure combinations as priority queues and rithm/data structure combinations as priority queues and segment trees. segment trees. In general, the choice heuristics work ef- In general, the choice heuristics work ef- fectively to pick among possible decompositions; the most fectively to pick among possible decompositions; the most common reason for failure is that the “typical cost” given common reason for failure is that the “typical cost” given for skeletal algorithms is different from the attainable cost for skeletal algorithms is different from the attainable cost due to specific conditions. due to specific conditions. The rating scheme for order- The rating scheme for order- ing dependent subtasks works adequately since usually the ing dependent subtasks works adequately since usually the number of subtasks is small, but as it fails to preferentially number of subtasks is small, but as it fails to preferentially differentiate most predicates the order is often partially in- differentiate most predicates the order is often partially in- correct. More specific comparisons are being examined. correct. More specific comparisons are being examined. As expected, controlling the use of duality has been As expected, controlling the use of duality has been difficult. The problem is that transforming the task is difficult. The problem is that transforming the task is rather expensive (in terms of synthesis), and the possibil- rather expensive (in terms of synthesis), and the possibil- ity of one or more dual transforms exists for nearly any ity of one or more dual transforms exists for nearly any task. Our current solution is to only allow duality to be task. Our current solution is to only allow duality to be used as a “last resort”; used as a “last resort”; subtask generation using duality is subtask generation using duality is only enabled after a synthesis without duality has failed at only enabled after a synthesis without duality has failed at the top level. Although this works, it has the undesirable the top level. 
Although this works, it has the undesirable features that features that 1. synthesis of an algorithm involving duality 1. synthesis of an algorithm involving duality can take a can take a long time, as it first has to fail completely, long time, as it first has to fail completely, and and 2. an algorithm not involving duality will always be pre- 2. an algorithm not involving duality will always be pre- ferred to one using duality, even if the latter is much ferred to one using duality, even if the latter is much simpler and more intuitive. simpler and more intuitive. We are examining less severe control strategies to better We are examining less severe control strategies to better integrate duality as a generation method. integrate duality as a generation method. VI. Related work VI. Related work A number of researchers are examining the algorithm syn- A number of researchers are examining the algorithm syn- thesis problem; some of the recent work (notably CYPRESS thesis problem; some of the recent work (notably CYPRESS [g], DESIGNER [4], [g], DESIGNER [4], and DESIGNER-SOAR [lo]) has similar and DESIGNER-SOAR [lo]) has similar goals and uses computational geometry as a test domain. MEDUSA differs most in the central role that efficiency has in its operation, and the relatively higher-level tasks that it is being tested on. It uses a more limited set of methods than DESIGNER and DESIGNER-SOAR, which both consider things like weak methods, domain examples, and efficiency analysis through symbolic execution. The use of design strategies in CYPRESS is similar to our use of skeletal al- gorithms, but are more general (and formal), leading to a greater deductive overhead; we chose to have a larger number of more specific strategies. In some respects our goals have been more modest, but MEDUSA was designed to automatically design algorithms in terms of its known primitives, while the others are semi-automatic and/or do partial syntheses. This work is influenced by LIBRA[5] and PECOS[2], which interacted in the synthesis phase of the PSI auto- matic programming system. The primary influences were the attempt to substitute knowledge for deductions and the use of efficiency to guide the synthesis process. 1. 2. 3. 4. 5. 6. 7. 8. 9. efesences Aho, A., J. Wopcroft, and J. Ullman, The Design and Anabysis of Computer Algorithms. Addison-Wesley, 1974. Barstow, David R. “An experiment in knowledge-based automatic programming,” _ Arti$cial Intelligence 12, pp.73-119 (1979). Chazelle, Bernard, L.J. Guibas, and D.T. Lee. “The power of geometric duality,” PTOC. 24th IEEE Annual Symp. on FOG’S, 217-225, (November 1983). Kant, Elaine. “Understanding and automating algo- rithm design,” IEEE Transactions on Software Engi- neering, Vol. SE-11, No. 11, 1361-1374. (November 1985). Kant, Elaine. “A knowledge-based approach to using efficiency estimation in program synthesis,” Proceed- ings IJCAI-79, Tokyo, Japan, 457-462 (August 1979). de Kleer, Johan. “An assumption-based TMS,” Artifi- cial Intelligence 28, pp.127-162 (1986). McDermott, Drew. The DUCK manual, Tech. Rept. 399, Department of Computer Science, Yale Univer- sity, June 1985. Preparata, Prance P., and Michael Ian Shames. Com- putational Geometry: An Introduction , Springer- Verlag, 1985. Smith, Douglas R. “Top-down synthesis of divide-and- conquer algorithms,” Artificial Intelligence 27, pp. 215-218, (1985). 10. Steier, David. 
“Integrating multiple sources of knowl- edge into an automatic algorithm designer,” Unpub- lished thesis proposal, Carnegie-Mellon University, September 1986. 154 Automated Reasoning
1987
23
614
A New Structural Induction Scheme for Proving Properties of Mutually Recursive Concepts Peiya Liu Siemens Research and Technology Laboratories 105 College Road East Princeton, NJ 08640 Te1:609-7343349 ABSTRACT Structural induction schemes have been used for mechanically proving properties of self-recursive concepts in previous research. However, based on those schemeq, it becomes very difficult to automatically generate the right induction hypotheses whenever the conjectures are involved with mutually recursive concepts. This paper will show that the difficulties come mainly from the weak induction schemes provided in the past, and a strong induction scheme is needed for the mutually defined concepts. Furthermore, a generalized induction principle is provided to smoothly integrate both schemes. Thus, in this mechanical induction, hypotheses are generated by mixing strong induction schemes with weak inductions schemes. While the weak induction schemes are suggested by self- recursive concepts, the strong induction schemes are suggested by mutually recursive concepts. I. Introduction Before formally stating the recursive concepts, some definitions are necessary. S is a term if it is a variable, a sequence of a function symbol of n arguments followed by n terms, or a sequence of a universal quantifier ALL or existential quantifier EX of two arguments followed by a variable and a term. The scope of a quantifier occurring in the term is the subterm to which the quantifier applies. For example, the scope of the quantifier ALL in the term (ALL X(FO0 X Y)) is (FOO X Y). A variable is free in the term if at least one occurrence of it is not within the scope of a quantifier employing the variable. A term t governs an occurrence of term s if either there is a subterm (IF t p q) and the occurrence of s is in p, or there is a subterm (IF t’ p q) and the occurrence of s in q, where t is (NOT t’). A term is f-free if the symbol f does not occur in the term as a function symbol. (ALL-LIST (x1 . . . xn) p) is an abbreviation for (ALL xl(ALL x2( . . . (ALL xn p)))), (EX-LIST(xr . . . xn) p) for (EX xr(EX x2( . . . (EX xn p)))), and (ALL-EX (x1 . . . xh) p) for a sequence of n mixed quantifiers over p, its negated form (EX-ALL (x1 . . . x,)(NOT p)). NIL is considered to be false and T denotes true. The symbols EQUAL and IF are two primitive operators. Informally speaking, if X is NlL, then (lF X Y Z) is equal to Z, end if X is not NIL, then (IF X Y Z) is equal to Y. The logic operators AND, IMPLIES, OR, and NOT can also be represented by IF formulae. The recursive concepts are formally defined as follows. (EQUAL(f,, x1 . . . xk x~,~+~ . . . x~,~ n ) bodyJ ), where (A> f, . . . f, are new function symbols of ni . . . nn arguments, respectively, and l<k<_ni for l<i<n; (B) x1 * * * Xk’ xI,L+I# . *. , Xi+ for l<i<n are distinct variables; (c-3 body i for l<i<n is a term and only mentions free variables in x1, . . . , xk, Xi,k+la ’ - *’ Xi,ni; (D) there is a well-founded relation r and a measure function m of k arguments; and (E) for each occurrence of a subterm of form (fj 71 ’ * ’ yk’ Yj,k+l’ Yj,k+2# -* - l Y,,n ) l l<lln in the bodyi, l<i$r, 1 it Is a theorem that: Ruey-Juin Chang Artificial Intelligence Laboratory The University of Texas at Austin Austin, TX 78712 CS.CHANGOUTEXAS-20 (DEFQ (EQUfi& X1 0.. Xk Xl,k+l -0. X1 n ) body11 (EQUAL(fi x1 . . . xk x~,~+~ . . . I~‘~‘) body,) '2 . . . (ALL--LIST (x 1 . . . xk xi k+i . . . xi n > , ' i (ALL--LIST (z, . . . zs> (IMPLIESUND t1 . . . tp) (r b yi . . . 
yk) Cm xi . . . ~~11111, where tl . . . tp are f,-free governing terms in the body, for l<t$r, and zi . . . zS are the governing variables which are free variables, excluding variables x1 . . . xk Xi,k+l . -* Xl,ni' in the governing terms or subterm cfj Yi . . . yk’ yj,k+l’ yj,k+2a . . . # The definition principle is to describe that n axioms constitute a recursive definition of some concept. N axioms of the form: (fl x1 . . . ‘k 'l,k+l "- 'l,nl )= body,, (f2 x1 . . . xk ~~,~+r . . . x2 n )= body,, . . . . (f, x1 . . . xk ~~,~+r . . . xnn )= bodyn can be shown to ie recursive if, ‘n according to the same measure m, the complexity of the arguments of every occurrence of f,, . . . . f, in any bodyi, assuming the hypotheses governing the occurrences in the body,, is less than the complexity of x1 . . . xn. The purpose of requirement (E) in the definition principle is to make recursive concepts terminate, and further, to avoid an inconsistency problem. Note that if n=l, then the principle of definition defines a self-recursive concept. 144 Automated Reasoning From: AAAI-87 Proceedings. Copyright ©1987, AAAI (www.aaai.org). All rights reserved. Example QO: (DEFQ (EQUAL(EVAL L ENVRN) (IF(LISTP L) (APPLY.suTm (CAR L) (E~AL.LIST(CDR L) L)) (EQUAL(EVAL.LIST L ENVRN) (IF(LISTP L) (CONSGVAL(CAR L) ENVRN) (EVAL.LIST(CDR L) ENVRN)) NIL) 11 EN’WW > In example QO above, mutually recursive concepts will be admitted by the following instantiation of our definition principle. (4 f, is the function symbol EVAL; f9 is the function symbol EVAL.LIST. (B) (C) x1 is L, x2 is ENVRN, k is 2, n is 2, nl is 2, and n2 is 2. body1 is the term (IF(LISTP L) (APPLY.SUBR (CAR L) (EVAL.LIST(CDR L) ENVRN)) L), and body, is the term (IF(LISTP L) (CONS(EVAL(CAR L) ENVRN) (EVAL.LIST(CDR L) ENVRN)) NIL). W’) r is PLESSP and m is (LENGTH1 L ENVRN), where (LENGTH1 L ENVRN) is defined to be (LENGTH L). LENGTH is a primitive function for counting the elements in L. (El The following theorems are required in the definition principle: e For an occurrence (EVAL.LIST(CDR L) ENVRN) in the body of function EVAL, the governing term is (LISTP L). It is a theorem that (ALL L(lMPLIES(LISTP L)(PLESSP(LENGTH(CDR L)) (LENGTH L)))). o For an occurrence (EVAL.LIST(CDR L)ENVRN) in the body of function EVAL.LIST, the governing term is (LISTP L). It is a theorem that (ALL L(IMPLlES(LISTP L)(PLESSP(LENGTH(CDR L))(LENGTH L)))). o For an occurrence (EVAL(CAR L)ENVRN) in the body of function (EVAL.LIST L ENVRN), the governing term is (LISTP L). It is a theorem that (ALL L(IMPLlES(LISTP L)(PLESSP(LENGTH(CAR L))(LENGTH L)))). Suppose that (PART L C Ll L2) is true if Ll is a list of elements of L less than C, and L2 is a list of the rest of L. For example, suppose C is 6 and L is a list (2 6 3 9 lo), then Ll is (2 3) and L2 is (6 9 10). The quick sort concept could be defined as follows. Example Ql: (DEFQ (EQ~~(Qs~RT.R z wi w2> (IF(LISTP Z> ax xm Y(IF(PART(CDR z) mm z> x Y) (M V(IF(QS0RT.R X WI (CONS(CAR Z> VI> (QSORT.R Y v w2) NIL)) NIL))) (EQUAL wi w2)))) In the predicate (QS0RT.R Z Wl W2), Z is an input list and the output is the difference list of Wl and W2, which is an ordered list Z. The QS0RT.R could be added to the system because (ALL Z(ALL Wl(ALL W2(ALL X (ALL Y&ALL V(IMPLIES(AND(LISTP Z)(PART(CDR Z)(CAR Z) X Y))(PLESSP(LENGTHl X WI (CONS(CAR Z)) V)(LENGTHl Z Wl W2))))))))) and (ALL Z(ALL Wl(ALL WZ(ALL X (ALL Y(ALL V(IMPLlES(AND(LISTP Z)(PART(CDR Z)(CAR Z) X Y))(PLESSP(LENGTHl Y V W2) V)(LENGTHl Z Wl W2))))))))) hold. III. 
A Generalized Structural Induction Principle A. Why Strong Induction Schemes are Needed Essentially mechanical induction reasoning works because the similarity could be contrived between the structures of the recursive definition functions and of the induction schemes. The structures of recursive functions serve as templates for automatically generating the suitable induction hypotheses to prove a conjecture involved with those recursive functions. However, there is often no structure similarity between mutually recursive functions and weak induction schemes provided in previous research. The finite number of hypotheses are needed to be specified explicitly in the weak induction schemes. Using these weak induction schemes often results in the generation of useless induction hypotheses for the conjecture involved with mutually recursive functions. A strong induction form will be shown to be needed and can be generated from the structures of mutually recursive concepts. In the strong induction schemes, the finite number of hypotheses are implicitly described by particular recursive concepts. An example will illustrate the problem in using weak induction schemes for hypothesis generation. Suppose we try to prove the conjecture (ALL L(EQUAL L(FO0 L))), where the mutually recursive functions are defined as follows. (DEFQ (EQUAL. (FOO L) (IF(LISTP L) (CONS(CAR L) (FOOLIST(CDR L))) L)) (EQUAL(FOOLIST L) (IF(LISTP L) (CONS(FOO(CAR L)) (F~OLIST(CDR ~1)) L))) Let (p L) be the term (EQUAL L FOO L)). In the weak induction schemes, the instantiated terms of I required to be explicitly described. p L) as induction hypotheses are Thus, the induction hypothesis, based on the weak induction scheme and the structure of (FOO L). its term (p L) in the proof by. induction, it will not look iike counterpart in the hypotheses, and the hypotheses will be useless. Even if we change these functions into a self-recursive an extra argument S as follows, our problem still exists. function with (DEFQ (EQU~~+(FOOS L s) (IF(EQUAL s 0) (IF(LISTP L) (CONS (CAR L) (FOOS(CDR L) I>> L) (IF(EQUAL s 11 (IF(LISTP L) (CONS(FOOS(CAR L) 0) (FOOS(CDR L) 1)) L) NIL)))) Liu and Chang 145 An induction scheme, following the weak induction principle, for the conjecture (ALL L(EQUAL L(FOOS L 0))) will be generated as (ALL LW'LWAWLISTP L)WWP (CDR WP (Cm W (P W, where (p L) is the term (EQUAL L(FOOS L 0)). In the base case, we need to prove (ALL L(lMPLIES(NOT(LISTP L))(p L))). However, if we open the term (FOOS L 0) in the conclusion (EQUAL LrFOOS L O)!, it will also not.look like its counterpart in thk hypothe&. Often this type of redefined self-recursive functions is hard to suggest right induction hypotheses in weak induction schemes, due to its unnatural recursion characteristics and certain sensitive switch arguments irrelevant to the measured arguments on which functions recurse. In this example, what is really needed in the hypothesis is the term (AND(LISTP L) (FOOLIST-IND (CDR L))), where (FOOLIST-IND L) and (FOO-IND L) are mutually defined as (EQUAL (FOO-IND L) (IF(LISTP L) (F~~LIsT-IND (CDR L)) T)), and (EQuAL(FooLIST-IND L) (IF(LISTP L) (AND (p(C;4RL)) (FOOLIST-IND (CDR ~1)) T)). Intuitively, the term (AND (LISTP L)(FOOLIST-IND (CDR L))) is actually ANDing the terms (LISTP L), (p (CAR (CDR L))), . . . . (p CADDDD...R L)) together by recursively opening up the term FOOLIST-IND (CDR L)). Th us, this hypothesis implicitly represents a series of instantiated conjectures and this induction form is actually a strong induction scheme. 
More importantly, there is an obvio& structural similarity between (FOO L) & (FOOLIST L) and (FOO- IND L) & (FOOLIST-IND L). Later on, we will give detailed descriptions of automatically cdnstructing the terms (FaOLIST-IND L) and (FOO-IND L) from mutually recursive concepts. (FOOLIST- IND (CDR L)) is obtained from the body of (FOO-IND L) since (FOO L L appears in the conjecture, and the corresponding term (FOO-IND suggests the possible induction hypotheses from the recursive structure of its body. B. A Comparison Schemes Between Weak and Strong Induction In the induction step of the weak induction scheme, we show that if X has the desired property at an arbitrarily liven Doint. then it also has the property at-the next higher point. WSippos;! X b a pair, then it can be constructed by applying CONS to two previously constructed objects, namely, (CAR X) and (CDR X). Thus, in the weak induction scheme, we prove that a certain property (P X) holds for all X by considering two cases. In the first case, called the base case, we prove that (P X) holds for all nonpair objects X. In the second case, called the induction step, we assume that X is a pair and that (P (CAR X)) and (P (CDR X)) hold, and prove that (P X) holds. On the other hand, in the strong induction scheme, we prove that a certain property (P X) holds for all X by considering two cases. In the first case, called the base case, we show that (P X) holds for all nonpair objects X. However, in the induction step, we assume that X is a pair and that (P (CAR X)), (P (CADR X)), . . . . and (P(CADDD . . . R X)) hold, and prove that (P X) holds. In other words, the induction step shows that if X has the desired property up to an arbitrarily given point, then it also has the property at the next higher point. For the convenience of mechanical induction, this series of hypotheses is represented as a recursive concept (Q X) defined to be (IF(LISTP X)(AND(P(CAR X))(Q(CDR X))) T). In the FOO example, we represent the hypothesis as (AND(LISTP X) (P*2 (CDR X )), where (P*2 X) is defined to be (IF(LISTP X)(AND(p (CAR X)) P*2(CDR 1 X))) T). In the next section, we will show that the hypothesis can automatically be generated from mutually recursive functions by examining their structures. C. Hypothesis Terms Intuitively, hypothesis terms are those terms allowable to be instantiated as hypotheses in the strong induction schemes. These terms are quite powerful. They can implicitly represent a series of induction hypotheses in mechanical induction proof about the properties of mutually recursive concepts. A formal definition of hypothesis terms is described as follows. A subterm is a call of f in the term s if the subterm beginning with the function symbol f occurs in the term s. (P1 x1 . . . xII x”+~ . . . xJ, . . . . (Pd x1 . . . xI1 x~+~ . . . xt) are the hypothesis terms of f,, . . . . fd with P, replacing f,, . . . . fj l<j<cl, -- if 1. f,, . . . . fd are the following mutually recursive functions based on a well-founded relation R and a measure function M of n arguments, (EQUAL(fl xi . . . xn x~+~ . . . x,> body& (EQUAL(f2 xi . . . xn x~+~ . . . xt> body& . . . , (EQUAL(fd xi . . . xn x~+~ . . . x,> bodyd); 2. (PO x1 . . . xn, xn+l . . . xt) is a term; and 3. (P1 x1 . . . xn xn+l . . . xJ, . . . . (Pd x1 . . . xn x~+~ . . . xt) are obtained in the following way. (EQUAL(Pi x1 . . . x,, x~+~ . . . x,) body'& (EQUAL(P2 x1 . . . xn x,,+~ . . . xt> body'& . . . , (EQUAL(Pd x1 . . . xn x~+~ . . . 
x,> body'& where body’i=(HT body,) for l<i<d and HT is -- recursively defined as follows: a. Suppose the term s has the form (ALL-EX(z) v), then (HT s)=(ALL-EX(z)(HT v)). b. Suppose the term s has the form (IF c u v).. Then (HT s)=(IF c (HT u)(HT v)) if the term c is fcfree, l<i$l; (HT s)=(IF (HT c)(HT u)(HT v)) otherwise. C. Suppose the term s is fcfree, l<i<d, then (HT s)=T. a. Suppose s’ is a term obtained by replacing every occurrence of fk (for l_<k<j) as a function symbol in the term s with the symbol P,, and by replacing every occurrence of fk (for j<k<d) as a function symbol in the term s with the symbol P,. Then (HT s)=(AND all calls of P,. for O<i<d in the term s’), if there is more than one call of Pi, or (HT s)= a call of P,. for O_<i<d in the term s’, if only one call exists. Example Q2:To find out the hypothesis terms of FOO and FOOLIST with P, replacing FOO. (EQUAL(P~ L) (IF~LISTP L) (p,#ZDR L)) T)) (EQUAL(P~ L) (IF~LISTP L) (AND (P, (CAR I-1 1 (P, (CDR I-1 > > T)) From the bodies of FOO and FOOLIST, the hypothesis terms (PI T,) 146 Automated Reasoning and (PZ L) are constructed as above. Symbol T comes from step 3(c). The term (AND(P,(CAR L))(P,(CDR L))) in the body of (P, L) is obtained by step 3(d) from the term L))(FOOLIST(CDR L) ) 1 (CONS(FO6 CAR in the body of (FOOLIST L). In step 3 d), t s’ is (CONS(Pu(CAR L)) P,(CDR L))), and (HT s)=(AND all calls of P,. for O<i<d in the tern s 3=(AND(P&CAR L))(P&CDR L))). (P, L) (IF (LISTP L) (P, (CDR L)) T)) A formal description of the generalized induction principle is contained in Appendix I. The key point in the generalized induction principle is to allow hypothesis terms, in addition to (PO x1 . . . xJ, to be instantiated in the induction hypothesis. This extension will make the strong induction forms possible in the hypotheses. The soundness proof of this principle was shown in [Liu 861. The principle extends the weak induction schemes [Bayer 791 [Brown&Liu 851 to include the strong one. While strong induction schemes are shown to have a close relationship to mutually recursive concepts, weak induction schemes are related to the self-recursive concepts. In the next section, we focus on strong induction schemes and interactions between strong and weak schemes. For the pure weak induction schemes, we refer the readers to prior work [Boyer 791 [Brown&Liu 851 [Brown 861 [Liu 861. D. Illustrations of Mixing Induction Hypotheses Once each induction scheme is suggested by any term in the conjecture, we begin to heuristically combine the individual schemes to synthesize the best one for the conjecture. Smooth interactions between induction schemes suggested by self-recursive and mutually recursive concepts are shown below in the synthesis of the final induction scheme. Suppose that we try to prove the conjecture (ALL L(EQUAL (FOOLIST L)(FOO L))). Note that it contains two mutually recursive concepts. Let (PO L) be (EQUAL(FOOLIST L)(FOO L)). (P, L) and (P2 L) are the hypothesis terms of (FOO L), (FOOLIST L) *with P, replacing FOO, FOOLIST. (EQUAL(P~ L) (IF(LISTP L) (PO (CDR L) 1 T)) (EQUAL(P2 L) (IFCLISTP (AND T)) L1 (PO (cm I-1) (PO (CDR Therefore, the induction scheme suggested by (FOO L) is: (ALL L(IMPLIES(AND(LISTP L)(P,, (CDR L)))(Pu L))), and the scheme suggested by (FOOLIST L) is: (ALL L(Ih4PLIES(AND(LISTP L)(AND(Pu(CAR L)) (P,(CDR L))))(P,L))). An interesting thing is shown in this case. Two mutually recursive concepts are supposed to suggest the strong induction schemes. 
However, since both concepts appear in the conjecture, the strong schemes are collapsed into the weak induction schemes. By merging these two induction hypotheses, we provide one induction step and one base case to cover all the relevant recursive aspects as follows. Base case: (ALL L(IMPLIES(NOT(LISTP L)) (PoL>>> Induction step : (ALL L (IMPLIES (AND (LISTP L) (AMD(Po (CAR L)) (P, (CDR L)))) (Po L))) In the second examnle. there are self-recursive and mutuallv recursive concepts in the conjkcture (ALL L(EQUAL(FO0 L)(COPY L))), where COPY L) is defined a~ (IF(LISTP L) (CONS (COPY(CAR L)) COPY(CDR L)) ) L). Let (P, L) be (EQUAL(FO0 L)(COPY L)). Thus, the weak induction scheme suggested by the function (COPY L) is: (ALL L(IMPLIES(AND(LISTP L)(AND (PO (CAR L))(Pu (CDR L)))) (Pa L))), and the strong induction scheme for the function (FOO L) is: (ALL L(IMPLIES(AND(LISTP L)(PZ (CDR L))) (PO L))), where (EQUAL(P~ L) (IFCLISTP L) (AND (PO (CAR L)) (P, (CDR L) > > T)) The final induction scheme is obtained by mixing one scheme and one weak induction scheme as follows. strong induction Base case: (ALL L(IMPLIES(NOT(LISTP L)) (P, L))) Induction step : (ALL L (IMPLIES (AND (LISTP L) (AND (P, (CDR L) > (AND (P, (CAR L) > (P()KDR L))))) (PoL) > > Once the above induction hypotheses are set up, the rest of the proofs will become straightforward. In the research [Bayer 791 [Liu 861, many heuristics are provided to manipulate the induction schemes and formulate the best one. IV. Conclusions A generalized induction principle is provided for the conjectures involved with both self-recursive and mutually recursive concepts. Mechanical induction under the principle could be used as a proof strategy for a theorem prover or logic program interpreter. Two results are shown in this paper for proving properties of recursive concepts: (1) mutually recursive concepts need to suggest strong induction hypotheses, and (2) the relationship between the strong induction scheme and the weak induction scheme in mechanical structural induction. Appendix I: A Formal Description of the Induction Principle Suppose : CA> Po is the term (p*O x1 . . . xn X n+i --* %) with t distinct free variables, l<nlt; (B) 1‘ is a well-founded relation; CC> m is a measure function of n arguments; (D> (p*l x1 . . . xn x~+~ . . . x,1, . . . . (p*d xi . . . X* %+I *** x,> are hypothesis terms of any given mutually recursive functions based on r and m with p*O replacing a subset of Cp*l, . . . . p*d); (E) bl, . . . , bk are non-negative integers; (F) for each i l<i<k, variables zi i, . . ., , =i.bi are distinct and different from x1, . . . , X n# xn+l. . . . I xc; CC> q,, . . . , qk are terms; (H) h,, . . . , h, are positive integers; and (I) for l<i<k and lSjlhls ?,j is a substitution and it is a theorem that (ALL--LIST (x, . . . x,> (ALL--LIST(z, i . . . zi b > ’ i (IMPLIES q, (r(m x1 . . . xn>/si,j(m x1 . . . x,)1)>). Liu and Chang 147 Then (A). (ALL--LIST (x1 . . . x,1 pO) is a theorem if for the base case, (ALL--LIST$ . . . x,) (IMPLIES (AND (NOT (ALL--MI (zi 1 . . . , zl,blw) . . . (NOT(ALL-_E$(zk,i . . . zk b )q,>))' ' k P*) ) is a theorem and for each l<ilk induction step, (ALL-LIST(x, . . . x,> (IMPLIES ~LL--EX,(Z~ 1 . . . zl b 1 , ' i (AND q, p is1 /s~,~ . . . Pi’hi/s, h >I2 l i Po) ) is a theorem. (E), (M--LIST (xi . . . x,1 po) is a theorem if for the base case, &X-LIST(xl . . . x,) (AND (AND (NOT(ALL--M1 (z, i . . . zi b ) q,) > , ’ 1 (hi w-L--~ (z,, l . . . 
zk,b,> q$ > 1 Po) ) is a theorem or for some l<i<k induction step, (EX-LIST(xl . . . xc> is .a theorem. Po) ) We now illustrate an application of this induction principle to prove the conjecture (ALL L(EQUAL L(FO0 L))). The induction is obtained by the following instantiation of this principle. p, is the term (p*O L) defined as (EQUAL L(FO0 L)); (p*l L) and (p*2 L) are the hypothesis terms of FOO and FOOLIST with p*O replacing FOO; r is a well-founded relation PLESSP; m is LENGTH; n is 1; t is 1; k is 1; b, is 0; x1 is L; q, is the term (LISTP L); h, is 1; s1 1 is {<L, (CDR >}; and one theorem required by (I) is: (ALL L(IMPLIES (LISTP (CDR L))(LENGTH L)))). Thus, the base csse and the induction step produced by this induction principle are (ALL L (IMPLIES (NOT (LISTP Lj) (p*O L))) and (ALL L (IMPLIES (LISTP L) (p*2(CDR L))) (p 0 L))). The soundness of (A) and this induction principle has been proved [Liu 861. The proof needs two * important properties that hypothesis terms preserve: (1) They satisfy the function definition principle based on the same R and M, since governing conditions remain unchanged after translation, and (2) Let <Xl . . . Xt> be a t-tuple in the domain of Dt. If (PO Yl . . . Yt) is not 'ALL EX1, . . . . ALL-EXk could be any sequence of mixed quantifiers. 2piJ . . . x x , .*-, piphi are chosen from any member of {(p*O xl n n+l ... x,), (P*l x1 . . . xn x”+l . . . x,), . . . . (P*d x1 . . . xn xo+l . . . x,)). 148 Automated Reasoning false for every t-tuple <Yl . . . yt> smaller than <Xl . . . Xt>, then (Pi in the Yl . . . domain of Dt that is RM- Yt) lli<d should not be false for such t-tuples. RM is the well-founded relation defined on n-tuples by (RM <Zl . . . Zn><Yl . . . Yn>)=(R (M Zl . . . Zn)(M Yl . . . Yn)). [Aubin 761 Aubin, R. Mechanizing Structural Induction, Ph.D. Thesis. The University of Edingburgh , 1976. [Bourbaki 681 Bourbaki, N. Elements of Mathematics Theory of Sets. Addison-Wesley, Reading, 1968. [Boyer 75) Boyer, R.S., and J S. Moore. Proving Theorems about LISP Functions. Journal of ACM 22(l), 1975. [Bayer 791 Boyer, R.S., and J S. Moore. A Computational Logic. New York, Academic Press, 1979. protz 741 Brotz, D. Proving Theorems by Structural Induction, Ph.D. Thesis. Stanford University , 1974. [Brown 861 Brown, F. M. An Experimental Logic Based on the Fundamental Deduction Principle. AI Journal 30(2), 1986. [Brown&Liu 851 Brown, F. and P. Liu. A Logic Programming and Verification System for Recursive Quantificational Logic. Proceedings of IJCAI-85, Los Angeles , 1985. /Burstall 691 Burstall, R. Proving Properties of Programs by Structural Induction. Computer Journal 12(l), 1969. [Cartwright 761 Cartwright, R. A Practical Formal Semantic Definition and Verification System for Typed LISP, Ph.D. Thesis. Stanford University , 1976. [Clark 771 Clark, K. L. and S-A Tarnlaund. A First Order Theory of Data and Programs. IFIP 77, North Holland , 1977. [Hoare 751 Hoare, C.A.R. Recursive Data Structures. International Journal of Computer and In formation Sciences 4(2), 1975. piu 861 Liu, P. A Logic-based Programming System, Ph.D. Thesis. Department of Computer Sciences, The University of Texas at Austin , 1986. References
1987
24
615
A Model of Two-Player Evaluation Functions1 Bruce Abramson and Richard E. Kor@ Abstract We present a model of heuristic evaluation functions for two-player games. The basis of the proposal is that an estimate of the expected-outcome of a game situation, assuming random play from that point on, is an effective heuristic function. The model is sup- ported by three distinct sets of experiments. The first set, run on small, exhaustively searched game- trees, shows that the quality of decisions made on the basis of exact values for the expected-outcome is quite good. The second set shows that in large games, estimates of the expected-outcome derived by randomly sampling terminal positions produce rea- sonable play. Finally, the third set shows that the model can be used to automatically learn efficient and effective evaluation functions in a game-independent manner. I. Introduction: The Problem Heuristic search theorists have studied static evaluation functions in two settings: single-agent puzzles and dual- agent games. In single-agent domains, the task is typically to find a lowest cost path from the initial state to a goal state. The role of the heuristic evaluation function is to estimate the cost of the cheapest such path. This provides a rigorous definition of single-player evaluators, offers an absolute measure of heuristic quality (its accuracy as an estimator), allows any two functions to be compared (the more accurate estimator is the better heuristic), and has spawned a large body of results that relate evaluator accu- racy to both solution quality and algorithmic complexity of heuristic searches. Unfortunately, the meaning of heuristic evaluation functions for two-player games is not as well understood. Two-player evaluators are typically described as estimates of the “worth” [Nilsson, 19801, “merit”, “strength” [Pearl, 19841, “quality”[W ins t on, 19771, or “promise”[Rich, 19831 ‘This research was supported in part by NSF Grant IST 85-15302, an NSF Presidential Young Investigator Award, an IBM Faculty De- velopment Award, and a grant from Delco Systems Operations. 2Department of Computer Science, Columbia University, and Computer Science Department, University of California at Los Angeles 3Computer Science Department, University of California at Los Angeles of a position for one player or the other. The literature is uniformly vague in its interpretation of game evaluation functions. One popular school of thought contends that a static evaluator should estimate a node’s actual minimax value, or the value that would be returned by searching forward from the given position all the way to the termi- nal nodes of the tree, labelling the leaves with their actual outcomes, and then minimaxing the leaf values back up to the original node. Under this definition, the best heuristic is the function that most accurately approximates the min- imax value over all possible game positions. The difficulty with this proposal is that it provides no way of judging the quality of a heuristic, comparing two different evaluators, or learning a heuristic function, because actual minimax values can only be computed by exhaustively searching the entire game tree below the given node. In real games, this is a computationally intractable task for all but end-game positions. Alternatively, the quality of a heuristic can be defined operationally by the quality of play that it produces. This definition allows any two heuristic functions to be com- pared by playing them against each other. 
There are two major drawbacks to this approach. First, it compares entire control strategies, not just evaluation functions. The quality of a program's play can be affected by a number of factors, including backup techniques (minimax is not necessarily optimal when the values are only estimates) and lookahead depth (the relative performance of two functions may be different at different depths), as well as evaluator strength. Second, comparative studies fail to provide an absolute measure of the quality of a heuristic function.

We introduce a new model of two-player evaluators that resolves all of these difficulties. The expected-outcome model, described in section 2, provides a rigorous definition of an evaluator's objective, an absolute standard for gauging its accuracy, and a viable method for performing a priori comparisons. Section 3 outlines a series of experiments that shows that, at least in its most basic form, the model leads to reasonable play in real games. Some conclusions and directions for future research are then given in section 4.

II. Expected-Outcome: The Model

In a broad sense, the purpose of an evaluation function in a two-player domain is to indicate whether a given node on the search frontier will result in a victory. The standard assumption, forwarded by proponents of approximating minimax values, has been that this corresponds to an estimate of the outcome that would be arrived at by perfect play. Our new model is based on a different set of assumptions. We view the actual outcome of a game as a random variable and investigate what the game's payoff would be, given random play by both sides. Although the assumption of random play seems unrealistic, it is important to recall that in a two-player game, evaluation functions are normally applied only at the frontier of the search. By definition, the frontier is the limit beyond which the program cannot gather any further data about the game tree, and in the absence of any other information, random play is the only practical assumption. Furthermore, there is a common belief that any player, including a random one, should find it easier to win from a "strong" position than from a "weak" one. Thus, a technique for determining strong positions for a random player may help indicate strong positions for a perfect one, as well. In any event, our approach stands in stark contrast to the usual one, and the question of its utility is primarily empirical, not intuitive.

Any effective evaluator designed under our assumptions should indicate the expected value of the outcome variable, or the expected-outcome of the given position.

Definition: Expected-Outcome Values. The expected-outcome value of a game-tree node, G, is given by a player's expected payoff over an infinite number of random completions of a game beginning at G, or

    EO(G) = Σ_{leaf=1}^{k} V_leaf P_leaf

where k is the number of leaves in the subtree, V_leaf is a leaf's value, and P_leaf is the probability that it will be reached, given random play. It is important to note that P_leaf is not necessarily equal to 1/k. The probability that a leaf will be reached is one over the product of its ancestors' branching factors; a node with no siblings is twice as likely to be reached as a node with one sibling. Leaves are only equiprobable in trees in which all nodes of equal depth are constrained to have identical branching factors, thereby making all paths equally likely.

Ignoring the issue of plausibility for a moment, this model has a number of attractive features. First, it is precise. Second, it provides an absolute measure of heuristic quality (namely, the accuracy with which it estimates the expected value), hence a means of directly comparing two heuristic functions. Finally, and most importantly, it provides a practical means of devising heuristics: expected values can be approximated by random sampling. Along with their many advantages, of course, expected values (and other statistical parameters) do bear a serious onus: they can be very misleading when population sizes are relatively small. Thus, care must be taken not to rely too heavily on expected-outcome values in end-game play.

Most interesting games generate trees that are too complex and irregular to be discussed analytically. Although it is possible to show that on trees with uniform branching factors and depths expected-outcome functions make optimal decisions, when the uniformity disappears, the guaranteed optimality is lost. Since the ultimate criterion by which an evaluator is judged is its performance in actual competition, we ran three sets of experiments to verify both the rationality of our assumptions and the strength of our model in real games. In the first set, we generated the complete game-trees of tic-tac-toe and 4-by-4 Othello, calculated the exact numbers of wins, losses, and draws beneath every position, and compared the exact expected-outcome function with a well-known standard evaluator for the same game. We found that the quality of the decisions made by expected-outcome was superior to that of the standard evaluators. While these results are encouraging, they are limited to games that are small enough to be searched exhaustively. In the second set of experiments, we used the full 8-by-8 game of Othello. Since this game is too large for exact values to be calculated, we estimated expected-outcome by averaging the values of a randomly sampled subset of the terminal positions beneath the given node. This estimated expected-outcome evaluation was pitted directly (no lookahead) against a standard evaluator, with the result that expected-outcome significantly outplayed the standard. Unfortunately, the cost of implementing the random sampler was prohibitive. In the final set, we attempted to produce an efficient estimator by performing a regression analysis on the expected-outcome estimates returned by the sampler, to automatically learn the coefficients in a polynomial evaluator for Othello. Once again, the results were positive: the learned coefficients played as well as a set of coefficients that had been designed by an expert, even though the learning procedure had no information about the game other than the rules and the values of terminal positions. Taken as a whole, this series of experiments offers strong empirical support for the expected-outcome model.

III. Supporting Evidence

One of the most attractive features of expected-outcome is its domain-independence. The model's reliance on nothing more than a game's rules and outcomes indicates that it should be equally applicable to all two-player games. In addition to being a source of great strength, however, this generality also makes the model somewhat difficult to test thoroughly. Different implementations on different games are quite likely to yield different results.
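Before turning to those experiments, note that the definition above translates almost directly into code. The following minimal sketch is our illustration, not the authors' program; is_terminal, value, and children stand for an assumed game interface:

    def expected_outcome(position, is_terminal, value, children):
        """Exact expected-outcome of a position under uniformly random play.

        Averaging over children at every node weights each leaf by one over
        the product of its ancestors' branching factors, i.e., it computes
        EO(G) = sum over leaves of V_leaf * P_leaf exactly.
        """
        if is_terminal(position):
            return value(position)  # e.g., +1 win, 0 draw, -1 loss
        succ = children(position)
        return sum(expected_outcome(c, is_terminal, value, children)
                   for c in succ) / len(succ)

On anything but a tiny game this exhaustive recursion is exactly as intractable as the complete minimax computation it replaces, which is what motivates the sampling experiments described next.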
This section describes a series of experiments that demonstrate the utility of expected-outcome to at least one class of games, those with finite-depth trees and outcomes drawn from {win, loss, draw}. The requirement of finite-depth trees simply means that the game will eventually terminate. Without this rule, a chess game could, at least in theory, continue indefinitely. Variants of two games that meet both requirements, tic-tac-toe and Othello, were selected for testing. Tic-tac-toe is a game that should be familiar to everyone; Othello, although of growing popularity, may not be. The standard game is played on an 8-by-8 board. The playing pieces are discs which are white on one side and black on the other. Each player, in turn, fills a legal vacant square with a disc showing his own color. Whenever the newly placed disc completes a sandwich consisting of an unbroken straight line of hostile discs between two friendly ones, the entire opposing line is captured and flipped to the color of the current mover. A move is legal if and only if at least one disc is captured. When neither player can move, the one with the most discs is declared the winner. (For a more detailed description, see [Frey, 1980], [Maggs, 1979], [Rosenbloom, 1982].)

A. Decision Quality

The first step in determining a model's theoretical accuracy is investigating its decision quality, or the frequency with which it recommends correct moves. In the case of expected-outcome, the question is how often the move with the largest (or smallest, as appropriate) percentage of win leaves beneath it is, in fact, optimal. Since optimal moves are defined by complete minimax searches (searches that extend to the leaves), their calculation is contingent upon knowledge of the entire subtree beneath them. Thus, for this first set of experiments, fairly small games had to be chosen. Moreover, in order to compare the decision quality of expected-outcome with that of a more standard function, popular games (or variations thereof) were needed. Four games that met both requirements were studied, although only two of them, 3-by-3 tic-tac-toe and 4-by-4 Othello, have game-trees that are small enough to generate entirely. The other two, 4-by-4 tic-tac-toe and 6-by-6 Othello, were chosen because they are small enough for large portions of their trees to be examined, yet large enough to offer more interesting testbeds than their smaller cousins.

For each game studied, every node in the tree (beneath the initial configuration) was considered by four functions: complete-minimax, expected-outcome, a previously studied standard, and worst-possible-choice. The decisions recommended by these evaluators were compared with the optimal move, or the move recommended by minimax, and a record was kept of their performance. Minimax, by definition, never made an error, and worst-possible-choice erred whenever possible. Expected-outcome, unlike complete-minimax, did not back up any of the values that it found at the leaves; its decisions were based strictly on evaluations of a node's successors. Finally, the standard evaluators were taken from published literature and calculated using only static information: the open-lines-advantage for tic-tac-toe [Nilsson, 1980], and a weighted-squares function for Othello based on the one in [Maggs, 1979]. Open-lines-advantage is known to be a powerful evaluator; weighted-squares is less so. Nevertheless, its study does have scientific merit.
Weighted-squares were the first reasonable expert-designed Othello functions, and the more sophisticated championship-level evaluators became possible in large part due to the feedback provided by their performance [Rosenbloom, 1982]. Since the purpose of these experiments was not to develop a powerful performance-oriented Othello program, but rather to test the decision quality of a new model of evaluation functions, a useful comparison can be provided by any well thought out game-specific function, albeit less-than-best.

The results of these experiments were rather interesting and quite positive. Without going into detail, their most significant feature was the evaluators' relative error-frequency: in tic-tac-toe, expected-outcome made roughly one-sixth as many errors as open-lines-advantage, and in Othello about one-third as many as weighted-squares. The basic point made by these experiments is that in all cases tested, expected-outcome not only made fewer errors than the standard functions, but chose the optimal move with relatively high frequency. This indicates that guiding play in the direction of maximum win percentage constitutes a reasonable heuristic. Thus, the expected-outcome model has passed the first test: exact values generally lead to good moves.

B. Random Sampling Strategies

According to the decision quality results, if complete information is available, moving in the direction of maximum win percentage is frequently beneficial. Unfortunately, these are precisely the cases in which optimal moves can always be made. Since probabilistic (and for that matter, heuristic) models are only interesting when knowledge is incomplete, some method of estimating expected-outcome values based on partial information is needed. The obvious technique is random sampling. Expected-outcome values, by their very definition, represent the means of leaf-value distributions. In the second set of experiments, a sampler-based estimate of expected-outcome was pitted against a weighted-squares function in several matches of (8-by-8) Othello. These experiments, like those which investigated decision quality, were designed as pure tests of evaluator strength: neither player used any lookahead. The aim of these tests, then, was to show that sampler-based functions can compete favorably with those designed by experts, at least in terms of their quality of play. As far as efficiency goes, there is no comparison. The sampler was fairly cautious in its detection of convergence to a value; many samples were taken, and as a result, the sampling player frequently required as much as an hour to make a single move. (Convergence was detected by first sampling N leaves and developing an estimate, then sampling an additional N and finding another estimate. If the discrepancy between them was within the tolerable error bounds, the estimate was accepted. Otherwise, another 2N were sampled, and so on, until convergence was detected. For the sampler used in these experiments, the original sample size was N = 16 leaves, and the maximum needed was 1024.) The static function, on the other hand, never required more than two seconds. The time invested, however, was quite worthwhile: in a 50-game match, the sampler crushed its weighted-squares opponent, 48-2.

Veteran Othello players may feel that the number of victories alone is insufficient to accurately gauge the relative strength of two players. Perhaps of even greater significance is the margin of victory, the single most important feature in determining a player's USOA (United States Othello Association) rating [Richards, 1981]. Over the course of 50 games, the weighted-squares total of 894 discs was 1,079 shy of the 1,973 racked up by the sampler.
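The convergence scheme described parenthetically above is easy to sketch. The code below is our reading of it, not the authors' implementation; random_playout is an assumed helper that plays uniformly random legal moves from a position to a leaf and returns the leaf's value, and tol stands in for the paper's unspecified error bounds:

    def sampled_expected_outcome(position, random_playout,
                                 n0=16, n_max=1024, tol=0.05):
        """Estimate expected-outcome by doubling the sample size until two
        successive estimates agree to within tol, or n_max leaves are drawn."""
        values = [random_playout(position) for _ in range(n0)]
        estimate = sum(values) / len(values)
        while len(values) < n_max:
            # Sample as many leaves again as we have so far (N, 2N, 4N, ...).
            values += [random_playout(position) for _ in range(len(values))]
            new_estimate = sum(values) / len(values)
            if abs(new_estimate - estimate) <= tol:
                break
            estimate = new_estimate
        return sum(values) / len(values)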
A statistical analysis of the disc differentials indicates that the sampler should be rated roughly 200 points, or one player class, ahead of the weighted-squares player. These studies show that, efficiency considerations aside, sampler-based functions can compete admirably. It is important, however, to keep the results in their proper perspective. As a demonstration of the world's best Othello evaluator, they are woefully inadequate: the absence of lookahead makes the games unrealistic, the difference in computation times skews the results, and the competition is not as strong as it could be. Their sole purpose was to establish estimated expected-outcome as a function at least on par with those designed by experts, and the data clearly substantiates the claim. Expected-outcome functions, then, do appear to make useful decisions in interesting settings. Given no expert information, the ability to evaluate only leaves, and a good deal of computation time, they were able to play better than a function that had been hand-crafted by an expert. Thus the second challenge has been met, as well: in the absence of perfect information, an expected-outcome estimator made reasonably good decisions.

C. Learning Expected-Outcome Functions

Like most products, evaluation functions incur costs in two phases of their existence, design and implementation. The inefficiency of sampler-based functions is accrued during implementation; their design is simple and cheap, because an effective sampler need only understand the game's rules and be able to identify leaves. Static evaluators, on the other hand, rely on detailed game-specific analyses, frequently at the cost of many man-hours and/or machine-hours. To help reduce these design costs, a variety of automatic tools that improve static evaluators have been developed, the simplest of which attempt to determine the relative significance of several given game features. Techniques of this sort are called parameter learning [Samuel, 1963] [Samuel, 1967] [Christensen and Korf, 1986], and should be applicable to learning the relationship between game features and expected-outcome values. While this reliance on predetermined game features will inevitably limit conformity to the model's ideal, scoring polynomials are the backbone of most competitive game programs, and if done properly, the learned functions should combine the statistical precision and uncomplicated design of sampler-based functions with the implementation efficiency of static evaluators. The next set of experiments involved learning static expected-outcome estimators of just this sort.

To find a member of the weighted-squares family that estimates the expected-outcome value, a regression procedure was used to learn coefficients for the features identified by the original, expert-designed function. Since the exact expected-outcome value is not computable in interesting games, an estimated value had to be used as the regression's dependent variable. Thus, the value that was approximated was not the actual expected-outcome, but rather the estimate generated by the random sampler described in the previous section. The output of the regression led directly to the development of static estimators of the desired form. In addition, the statistical measures of relationship between the independent and dependent variables indicated that the selected game features are reasonable, albeit imprecise, estimators of expected-outcome.
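In outline, the regression step is an ordinary least-squares fit. The sketch below is ours, not the paper's code; features (a position's weighted-squares feature vector) and target (the sampler's expected-outcome estimate for that position) are assumed interfaces:

    import numpy as np

    def learn_coefficients(positions, features, target):
        """Fit a linear evaluator eval(p) = w . features(p) by least squares,
        using sampler-generated expected-outcome estimates as the regression's
        dependent variable."""
        X = np.array([features(p) for p in positions], dtype=float)
        y = np.array([target(p) for p in positions], dtype=float)
        w, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
        return w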
This is directly analogous to the assertion that weighted-squares functions can play up to a certain level, but for championship play, additional factors must be considered [Rosenbloom, 1982].

For the third, and final, set of experiments, four members of the weighted-squares family of Othello evaluators were studied, two of expert design and two learned by regression analysis. (The first expert function was taken directly from [Maggs, 1979], while the second, which was also used in the previous section's random sampling experiments, modified the first to account for my personal experience.) These evaluators differ only in the coefficients assigned to each of the game features. To ascertain the relative strength of the coefficient sets, a tournament was played. Unlike the functions studied in the decision quality and random sampling experiments, all four weighted-squares evaluators are efficiently calculable. This allowed the ban on lookahead to be lifted and more realistic games to be studied. The rules of the tournament were simple. Every pair of functions met in one match, which consisted of 100 games each with lookahead length fixed at 0, 1, 2, and 3. Between games, the players swapped colors. Over the course of 400 games, no evaluator was able to demonstrate substantial superiority over any other. Not only were the scores of all matches fairly close, but the disc differential statistics were, as well. An analysis of the victory margins shows that with probability .975, no two of the functions would be rated more than 35 USOA points apart. Since roughly 200 points (actually, 207 [Richards, 1981]) are necessary to differentiate between player classes, the rating spread is rather insignificant; it should be clear that all four functions are essentially equivalent.

In addition to offering a method of comparing evaluator strength, disc differentials suggest another application of expected-outcome: assign each node a value equal to the expected disc-differential of the leaves beneath it. A fifth weighted-squares function was learned to estimate the expected-outcome of this multi-valued leaf distribution (all outcomes in the range [-64, 64] are possible), and entered into the tournament. Its performance was noticeably stronger than that of the other functions, although not overwhelmingly so, with victory margins between 39 and 145, and ratings 25 to 85 points above its competitors.

Thus, the coefficients learned by the regression analysis procedure are at least as good as those designed by experts. Of course, it is possible to contend that a function's strength is derived primarily from its feature set, not its coefficient set. If this is true, any two members of the same family should perform comparably, and it's not surprising that the new functions competed favorably with the old. To dissipate any doubts that may arise along these lines, some further family members were generated. Each of the four evaluators in the initial tournament played an additional match against a weighted-squares cousin with a randomly generated set of coefficients.
All four random functions were demolished; they rarely won at all, and would be rated at least a player class behind the four that had been intelligently designed. With its strong showing in the tournament, the expected-outcome model has met the third challenge: an efficiently calculable estimator played fairly well.

IV. Conclusions

Our proposed model of two-player evaluation functions, the expected-outcome model, suggests new directions for rethinking virtually every element of game programming. For example, in addition to the obvious benefits of a rigorous and practical definition for evaluators, the model implies a significantly different approach to the programming of two-player games. The standard Shannon Type-A program does a full-width search to a fixed depth and then estimates the values of the nodes at that depth [Shannon, 1950]. The program in the second set of experiments (random sampling) does a full-depth search but only of a subset of the nodes. In a Shannon Type-A strategy, uncertainty comes from the estimates of the positions at the search horizon, whereas in our model, uncertainty is due to sampling error. Furthermore, the new model avoids one of the major disadvantages of all previous approaches, the need for a game-specific evaluation function based on a set of handcrafted, carefully tuned, ad hoc features. In sharp contrast to this reliance on outside expertise, the expected-outcome model requires only well-defined leaf values, the rules of the game, and a game-independent sampling strategy.

It is, of course, unreasonable to expect the initial implementation of any new model, regardless of inherent merit, to match the achievements of thirty-five years of progressive research. Whether expected-outcome will eventually replace minimax as the standard model for game design, or simply augment it by providing a degree of precision to some of its more ambiguous components, remains to be seen. What this paper has shown is that the estimation of expected-outcome functions defines a viable, domain-independent role for two-player evaluation functions. We believe that the new model warrants the serious further study that is currently in progress.

Acknowledgements

We would like to thank Othar Hansson, Andrew Mayer, Dana Nau, and Judea Pearl for providing us with helpful discussions and suggestions.

References

[Christensen and Korf, 1986] Jens Christensen and Richard Korf. A unified theory of heuristic evaluation functions and its application to learning. In Proceedings of the Fifth National Conference on Artificial Intelligence, 1986.

[Frey, 1980] Peter W. Frey. Machine Othello. Personal Computing, 89-90, 1980.

[Maggs, 1979] Peter B. Maggs. Programming strategies in the game of reversi. BYTE, 4:66-79, 1979.

[Nilsson, 1980] Nils J. Nilsson. Principles of Artificial Intelligence. Tioga Publishing Company, 1980.

[Pearl, 1984] Judea Pearl. Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley, 1984.

[Rich, 1983] Elaine Rich. Artificial Intelligence. McGraw-Hill, 1983.

[Richards, 1981] R. Richards. The revised USOA rating system. Othello Quarterly, 3(1):18-23, 1981.

[Rosenbloom, 1982] Paul S. Rosenbloom. A world-championship-level Othello program. Artificial Intelligence, 19:279-320, 1982.

[Samuel, 1963] A.L. Samuel. Some studies in machine learning using the game of checkers. In E. Feigenbaum and J. Feldman, editors, Computers and Thought, McGraw-Hill, 1963.

[Samuel, 1967] A.L. Samuel.
Some studies in machine learning using the game of checkers II: recent progress. IBM J. Res. Dev., 11:601-617, 1967.

[Shannon, 1950] Claude E. Shannon. Programming a computer for playing chess. Philosophical Magazine, 41:256-275, 1950.

[Winston, 1977] P.H. Winston. Artificial Intelligence. Addison-Wesley, 1977.
1987
25
616
Real-Time Heuristic Search: First Results

Richard E. Korf
Computer Science Department
University of California, Los Angeles
Los Angeles, Ca. 90024

Abstract

Existing heuristic search algorithms are not applicable to real-time applications because they cannot commit to a move before an entire solution is found. We present a special case of minimax lookahead search to handle this problem, and an analog of alpha-beta pruning that significantly improves the efficiency of the algorithm. In addition, we present a new algorithm, called Real-Time-A*, for searching when actions must actually be executed, as opposed to merely simulated. Finally, we examine the nature of the tradeoff between computation and execution cost.

Heuristic search is a fundamental problem-solving method in artificial intelligence. For most AI problems, the sequence of steps required for solution is not known a priori but must be determined by a systematic trial-and-error exploration of alternatives. All that is required to formulate a search problem is a set of states, a set of operators that map states to states, an initial state, and a set of goal states. The task typically is to find a lowest cost sequence of operators that map the initial state to a goal state. The complexity of search algorithms is greatly reduced by the use of a heuristic evaluation function, often without sacrificing solution optimality. A heuristic is a function that is relatively cheap to compute and estimates the cost of the cheapest path from a given state to a goal state.

Common examples in the AI literature of search problems are the Eight Puzzle and its larger relative the Fifteen Puzzle. The Eight Puzzle consists of a 3x3 square frame containing 8 numbered square tiles and an empty position called the "blank". The legal operators slide any tile horizontally or vertically adjacent to the blank into the blank position. The task is to rearrange the tiles from some random initial configuration into a particular desired goal configuration. A common heuristic function for this problem is called Manhattan Distance. It is computed by counting, for each tile not in its goal position, the number of moves along the grid it is away from its goal position, and summing these values over all tiles, excluding the blank.

A real-world example is the task of autonomous navigation in a network of roads, or arbitrary terrain, from an initial location to a desired goal location. The problem is typically to find a shortest path between the initial and goal states. A typical heuristic evaluation function for this problem is the air-line distance from a given location to the goal location.

The best known heuristic search algorithm is A* [1]. A* is a best-first search algorithm where the merit of a node, f(n), is the sum of the actual cost in reaching that node, g(n), and the estimated cost of reaching the solution from that node, h(n). A* has the property that it will always find an optimal solution to a problem if the heuristic function is admissible, i.e. never overestimates the actual cost of solution. Iterative-Deepening-A* (IDA*) [2] is a modification of A* that reduces its space complexity in practice from exponential to linear. IDA* performs a series of depth-first searches, in which a branch is cut off when the cost of its frontier node, f(n) = g(n) + h(n), exceeds a cutoff threshold.
The threshold starts at the heuristic estimate of the initial state, and is increased each iteration to the minimum value that exceeded the previous threshold, until a solution is found. IDA* has the same property as A* with respect to solution optimality, and expands the same number of nodes, asymptotically, as A* on an exponential tree, but uses only linear space.

The drawback of both A* and IDA* is that they take exponential time to run in practice. This is an unavoidable cost of obtaining optimal solutions. As observed by Simon [4], however, it is relatively rare that optimal solutions are actually required; rather, near-optimal or "satisficing" solutions are usually perfectly acceptable for most real-world problems.

A related drawback of both A* and IDA* is that they must search all the way to a solution before making a commitment to even the first move in the solution. The reason is that an optimal first move cannot be guaranteed until the entire solution is found and shown to be at least as good as any other solution. As a result, A* and IDA* are run to completion in a planning or simulation phase before the first move of the resulting solution is executed in the real world. This is a serious limitation of these algorithms with respect to real-time applications.

3 Real-Time Problems

In this section we examine several important characteristics of real-time problems that must be taken into consideration by any real-time heuristic search algorithm.

The first characteristic is that in real problems the problem solver must face a limited search horizon. This is due primarily to computational and/or informational limitations. For example, due to the combinatorial explosion of the Fifteen Puzzle, finding optimal solutions using IDA* with Manhattan Distance on a DEC20 required an average of over five hours per problem instance [2]. Any larger puzzle would be intractable. In the case of the navigation problem without the benefit of completely detailed maps, the search horizon (literally, in this case) is due to the informational limit of how far the vision system can see ahead. Even with the aid of accurate maps the level of detail is a limitation. This gives rise to a "fuzzy horizon" where the level of detail of the terrain knowledge is a decreasing function of the distance from the problem solver.

A related characteristic is that in a real-time setting, actions must be committed before their ultimate consequences are known. For example, a chess tournament requires that moves be made within a certain time limit. In the case of navigation, the vehicle must be moved in order to extend the search horizon in the direction chosen.

A final important characteristic is that often the cost of action and the cost of planning can be expressed in common terms, giving rise to a tradeoff between the two. For example, if the goal of the Fifteen Puzzle were to solve it in the shortest possible time, as opposed to the smallest number of moves, and we quantified the time to actually make a physical move relative to the time required to simulate a move in the machine, then in principle we could find algorithms that minimized total solution time by balancing "thinking" time and "action" time.

4 Minimin Lookahead Search

In this section we present a simple algorithm for real-time heuristic search in single-agent problems that takes the above characteristics into account.
It amounts to a special case of the minimax algorithm for two-player games [3]. This should not be surprising since two-player games share the real-time characteristics of limited search horizon and commitment to moves before their ultimate outcome can be known. At first we will assume that all operators have the same cost.

The algorithm is to search forward from the current state to a fixed depth determined by the computational or informational resources available for a single move, and apply the heuristic evaluation function to the nodes at the search frontier. Whereas in a two-player game these values would then be minimaxed up the tree to account for alternate moves among the players, in the single-agent setting, the backed-up value of each node is the minimum of the values of its children, since the single agent has control over all moves. Once the backed-up values of the children of the current state are determined, a single move is made in the direction of the best child, and the entire process is repeated. The reason for not moving directly to the frontier node with the minimum value is to follow a strategy of least commitment, under the assumption that after committing the first move, additional information from an expanded search frontier may result in a different choice for the second move than was indicated by the first search. We call this algorithm minimin search, in contrast to minimax search. (The name is due to Bruce Abramson.)

Note that the search proceeds in two quite different, but interleaved, modes. The minimin lookahead search occurs in a simulation mode, where the postulated moves are not actually executed, but merely simulated in the machine. After one complete lookahead search, the best move found is actually executed in the real world by the problem solver. This is followed by another lookahead simulation from the new current state, and another actual move, etc.

In the more general case where the operators have non-uniform cost, we must take into account the cost of a path so far in addition to the heuristic estimate of the remaining cost. To do this we adopt the A* cost function of f(n) = g(n) + h(n). The algorithm then looks forward a fixed number of moves and backs up the minimum f value of each frontier node. An alternative scheme to searching forward a fixed number of moves would be to search forward to a fixed g(n) cost. We adopt the former algorithm under the assumption that in the planning phase the computational cost is a function of the number of moves rather than the actual execution costs of the moves.

To ensure termination, care must be taken to prevent infinite loops in the path actually traversed by the problem solver. This is accomplished by maintaining a CLOSED list of those states that have actually been visited by an actual move of the problem solver, and an OPEN stack of those nodes on the current path from the start state. Moves to CLOSED states are ruled out, and if all possible moves from a given state lead to CLOSED states, then the OPEN stack is used to backtrack until a move is available to a new state. This conservative strategy prohibits the algorithm from undoing a previous move, except when it encounters a dead end. This restriction will be removed later in the paper.
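As a concrete illustration of the uniform-cost case, here is a minimal sketch. It is our reconstruction, not the paper's code; successors and the heuristic h are assumed interfaces, and at least one legal move is assumed to exist:

    def minimin_value(state, depth, successors, h):
        """Back up the minimum frontier value: unlike minimax, a single
        agent controls every move, so there are no maximizing levels."""
        succ = successors(state)
        if depth == 0 or not succ:
            return h(state)
        return min(minimin_value(s, depth - 1, successors, h) for s in succ)

    def first_move(state, depth, successors, h):
        """Commit only the single best first move, then re-search."""
        return min(successors(state),
                   key=lambda s: minimin_value(s, depth - 1, successors, h))

For non-uniform operator costs, h(state) would be replaced by f = g + h as described above, with g accumulated along the simulated path.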
5 Alpha Pruning

A natural question to ask at this point is whether every frontier node must be examined to find the one with minimum cost, or does there exist an analog of alpha-beta pruning that would allow the same decisions to be made while exploring substantially fewer nodes. If our algorithm uses only frontier node evaluations, then a simple adversary argument establishes that no such pruning algorithm can exist, since determining the minimum cost frontier node requires examining every one.

However, if we allow heuristic evaluations of interior nodes, then substantial pruning is possible if the cost function is monotonic. A cost function f(n) is monotonic if it never decreases along a path away from the initial state. Monotonicity of f(n) = g(n) + h(n) is equivalent to consistency of h(n), or obeying the triangle inequality, a property satisfied by most naturally occurring heuristic functions, including Manhattan Distance and air-line distance. Furthermore, if a heuristic function is admissible but not monotonic, then an admissible, monotonic function f(n) can trivially be constructed by taking its maximum value along the path.

A monotonic f function allows us to apply branch-and-bound to significantly decrease the number of nodes examined without affecting the decisions made. The algorithm, which we call alpha pruning by analogy to alpha-beta pruning, is as follows: In the course of generating the tree, maintain in a variable called α the lowest f value of any node encountered so far on the search horizon. As each interior node is generated, compute its f value and cut off the corresponding branch when its f value equals or exceeds α. The reason this can be done is that since the function is monotonic, the f values of the frontier nodes descending from that node can only be greater than or equal to the cost of that node, and hence cannot affect the move made, since we only move toward the frontier node with the minimum value. As each frontier node is generated, compute its f value as well, and if it is less than α, replace α with this lower value and continue the search.

In experiments with the Fifteen Puzzle using the Manhattan Distance evaluation function, alpha pruning reduces the effective branching factor by more than the square root of the brute-force branching factor (from 2.13 to 1.41). This has the effect of more than doubling the search horizon reachable with the same amount of computation. For example, if the computational resources allow a million nodes to be examined in the course of a move, the brute force algorithm can search to a depth of 18 moves while alpha pruning allows the search to proceed more than twice as deep (40 moves). As in alpha-beta pruning, the efficiency of alpha pruning can be improved by node ordering. The idea is to order the successors of each interior node in increasing order of their f values, hoping to find low cost frontier nodes early and hence prune more branches sooner.

Although the two algorithms were developed separately, minimin with alpha pruning is very similar to a single iteration of Iterative-Deepening-A*. The only difference is that in alpha pruning the cutoff threshold is dynamically determined and adjusted by the minimum value of the frontier nodes, as opposed to being static and set in advance by the previous iteration in IDA*.
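A sketch of the pruned search (again ours, under the assumption that f is monotonic and that interior nodes can be evaluated as they are generated):

    import math

    def minimin_alpha(state, depth, successors, f, alpha=math.inf):
        """Return the minimum frontier f value in this subtree, pruning any
        branch whose interior f value already reaches alpha: monotonicity
        guarantees no better frontier node lies below it."""
        if depth == 0 or not successors(state):
            return min(alpha, f(state))
        for s in successors(state):
            if f(s) < alpha:  # branch still promising; otherwise prune
                alpha = minimin_alpha(s, depth - 1, successors, f, alpha)
        return alpha

    def first_move_alpha(state, depth, successors, f):
        """Move toward whichever child's subtree lowered alpha."""
        best, alpha = None, math.inf
        for s in successors(state):
            v = minimin_alpha(s, depth - 1, successors, f, alpha)
            if v < alpha:
                best, alpha = s, v
        return best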
6 Real-Time-A*

So far, we have assumed that once an action is committed, it is not reversed unless a dead end is encountered, with the primary motivation being the prevention of infinite loops by the problem solver. We now address the question of how to incorporate backtracking when it appears favorable, as opposed to dead-end backtracking, while still preventing infinite loops. The basic idea is quite simple. One should backtrack to a previously visited state when the estimate of solving the problem from that state plus the cost of backtracking to that state is less than the estimated cost of going forward from the current state. Real-Time-A* (RTA*) is an efficient algorithm for implementing this basic strategy.

While the minimin lookahead algorithm is an algorithm for controlling the simulation phase of the search, RTA* is an algorithm for controlling the execution phase of the search. As such, it is independent of the simulation algorithm chosen. For simplicity of exposition, we will assume that the minimin lookahead algorithm is encapsulated within the computation of h(n), and hence becomes simply a more accurate and computationally more expensive way of computing h(n).

In RTA*, the merit of a node n is f(n) = g(n) + h(n), as in A*. However, unlike A*, the interpretation of g(n) in RTA* is the actual distance of node n from the current state of the problem solver, rather than from the original initial state. RTA* is simply a best-first search given this slightly different cost function. In principle, it could be implemented by storing on an OPEN list the h values of all previously visited states, and every time a move is made, updating the g values of all states on OPEN to accurately reflect their actual distance from the new current state. Then at each move cycle, the problem solver selects next the state with the minimum g + h value, moves to it, and again updates the g values of all nodes on OPEN.

The drawbacks of this naive implementation are: 1) the time to make a move is linear in the size of the OPEN list, 2) it is not clear exactly how to update the g values, and 3) it is not clear how to find the path to the next destination node chosen from OPEN. Interestingly, these problems can be solved in constant time per move using only local information in the graph. The idea is as follows: from a given current state, the neighboring states are generated, the heuristic function, augmented by lookahead search, is applied to each, and then the cost of the edge to each neighboring state is added to this value, resulting in an f value for each neighbor of the current state. The node with the minimum f value is chosen for the new current state and a move to that state is executed. At the same time, the next best f value is stored at the previous current state. This represents the estimated h cost of solving the problem by returning to this state. Next, the new neighbors of the new current state are generated, their h values are computed, and the edge costs of all the neighbors of the new current state, including the previous current state, are added to their h values, resulting in a set of f values for all the neighboring states. Again, the node with the smallest value is chosen to move to, and the second best value is stored as the h value of the old current state. Note that RTA* does not require separate OPEN and CLOSED lists, but a single list of previously evaluated nodes suffices.
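The constant-time bookkeeping is easy to miss in prose, so here is a sketch of one RTA* move (ours, not the paper's code; h is assumed to already include any lookahead, and h_table plays the role of the single list of previously evaluated nodes):

    import math

    def rta_star_move(current, neighbors, cost, h, h_table):
        """Execute one RTA* move: go to the neighbor minimizing
        f = edge cost + h, and store the second-best f at the vacated state
        as its new h value (the estimated cost of solving the problem by
        returning to it)."""
        best_f = second_f = math.inf
        best_n = None
        for n in neighbors(current):
            f = cost(current, n) + h_table.get(n, h(n))  # stored h overrides static h
            if f < best_f:
                best_f, second_f, best_n = f, best_f, n
            elif f < second_f:
                second_f = f
        h_table[current] = second_f  # local, constant-time update
        return best_n

Run in a loop from the initial state, this reproduces the back-and-forth behavior of the Figure 1 example below.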
The size of this list is linear in the number of moves actually made, since the lookahead search saves only the value of its root node. Furthermore, the running time is also linear in the number of moves made. The reason for this is that even though the lookahead requires time that is exponential in the search depth, the search depth is bounded by a constant.

Interestingly, one can construct examples to show that RTA* could backtrack an arbitrary number of times over the same terrain. For example, consider the simple straight-line graph in Figure 1, where the initial state is node a, all the edges have unit cost, and the values below each node represent the heuristic estimates of those nodes. Since lookahead only makes the example more complicated, we will assume that no lookahead is done to compute the h values. Starting at node a, f(b) = g(b) + h(b) = 1 + 1 = 2, while f(c) = g(c) + h(c) = 1 + 2 = 3. Therefore, the problem solver moves to node b, and leaves behind at node a the information that h(a) = 3. Next, node d is evaluated with the result that f(d) = g(d) + h(d) = 1 + 4 = 5, and node a receives the value f(a) = g(a) + h(a) = 1 + 3 = 4. Thus, the problem solver moves back to node a, and leaves h(b) = 5 behind at node b. At this point, f(b) = g(b) + h(b) = 1 + 5 = 6, and f(c) = g(c) + h(c) = 1 + 2 = 3, causing the problem solver to move to node c, and leave h(a) = 6 behind at node a. The reader is urged to continue the example to see that the problem solver continues to go back and forth, until a goal is reached. The reason it is not an infinite loop is that each time it changes direction, it goes one step further than the previous time, and gathers more information about the space. This seemingly irrational behavior is produced by rational behavior in the presence of a limited search horizon, and a pathological space.

Unfortunately, the capability of RTA* to backtrack is not exercised by the Fifteen Puzzle with Manhattan Distance as the evaluation function. The reason is that since Manhattan Distance only changes by one in a single move, it can be shown that RTA* will only backtrack at dead ends.

Figure 1: RTA* Example (a straight-line graph with unit edge costs and heuristic values below each node)

7 Solution Quality

In addition to efficiency of the algorithm, the length of solutions generated by minimin lookahead search is of central concern. The most natural expectation is that solution length will decrease with increasing search depth. In experiments with the Fifteen Puzzle using Manhattan Distance, this turned out to be generally true, but not uniformly.

One thousand solvable initial states of the Fifteen Puzzle were randomly generated. For each initial state, the minimin algorithm with alpha pruning was run with search depths ranging from 1 to 30 moves. Moves were made until a solution was found, or a thousand moves had been made, in order to limit overly long solutions, and the resulting number of moves made was recorded. Figure 2 shows a graph of the average solution length over all thousand problem instances versus the depth of the search horizon. The line at the bottom represents 53 moves, which is the average optimal solution length for a different set of 100 initial states. The optimal solution lengths were computed using IDA*, which required several weeks of CPU time to solve the hundred initial states [2]. The overall shape of the curve confirms the intuition that increasing the search horizon decreases the resulting solution cost.
At depth 25, the average solution length is only a factor of two greater than the average optimal solution length. This is achieved by searching only about 6000 nodes per move, or a total of 600,000 nodes for the entire solution. This is accomplished in about one minute of CPU time on a Hewlett-Packard HP-9000 workstation.

However, at depths 3, 10, and 11, increasing the search horizon resulted in a slight increase in the average solution length. This phenomenon was first identified in the case of two-player games and was termed pathology by Dana Nau [5]. He found that for certain artificial games, increasing the search depth resulted in consistently poorer play in some cases. Until now, pathology has never been observed in a "real" game. While the pathological effect is relatively small when averaged over a large number of problem instances, in individual problem instances the phenomenon is much more prominent. In many cases, increasing the search depth by one move resulted in solutions that were hundreds of moves longer.

In an attempt to understand this phenomenon, we performed some additional experiments on decision quality as opposed to solution quality. The difference is that a solution is composed of a large number of individual move decisions. While solution quality is measured by the total length of the solution, decision quality is measured by the percentage of time that an optimal move is chosen. Since the optimal moves from a state must be known to determine decision quality, the smaller and more tractable Eight Puzzle was chosen for these experiments, with the same Manhattan Distance evaluation function.

Figure 2: Solution Length vs. Search Horizon (horizontal axis: search horizon)

In this case, ten thousand solvable initial states were randomly generated. Instead of examining the entire solution that would be generated by the minimin algorithm, only the first move from these states was considered. In each case, the percentage of time that an optimal first move was chosen was recorded over all initial states for each different search horizon. The search horizons ranged from one move to a horizon one less than the optimal solution length for a given problem. Figure 3 shows a graph of error percentage versus search horizon. As the search depth increases, the percentage of optimal moves also increases monotonically. Thus, pathology does not show up in terms of decision quality.

Figure 3: Decision Quality vs. Search Horizon (horizontal axis: search horizon)

If decision quality smoothly improves with increasing search depth, why is solution quality so erratic? One explanation is that while the probability of mistakes decreases, the cost of any individual mistake can be quite high in terms of overall solution cost. This is particularly true in these experiments, where backtracking only occurred when a dead end was encountered.

Another source of error is ties among alternatives. In a situation where moves must be committed based on uncertain information, ties should not be broken arbitrarily. More generally, when dealing with inexact heuristic estimates, two values that are closer together than the accuracy of the function should be considered virtual ties, and dealt with as if they were indistinguishable.

In order to deal with this problem, ties and virtual ties must first be recognized. This means that the alpha pruning algorithm must be changed to prune a branch only when its value exceeds the previous best by the error factor.
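One way to realize this modified cutoff (our sketch; epsilon stands for the evaluator's assumed error bound) is to retain every child within epsilon of the best value seen, treating them as virtual ties to be broken by deeper secondary search:

    def surviving_children(children, f, epsilon):
        """Tolerance-aware pruning: discard a branch only when its value
        exceeds the best seen by more than epsilon, so exact and virtual
        ties survive for a deeper secondary search."""
        scored = [(f(c), c) for c in children]
        best = min(s for s, _ in scored)
        return [c for s, c in scored if s <= best + epsilon]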
This will increase the number of nodes that must be generated. Once a tie is recognized, it must be broken. The most reasonable way to accomplish this is to perform a deeper secondary search on the candidates until the tie is broken. However, this secondary search must also have a depth limit. If the secondary search reaches its depth limit without breaking the tie, a virtual tie may as well be resolved in favor of the lower cost move.

Viewing a heuristic evaluation function with lookahead search as a single, more accurate heuristic function generates a whole family of heuristic functions, one corresponding to each search depth. The members of this family vary in computational complexity and accuracy, with the more expensive functions generally being more accurate. The choice of which evaluation function to use amounts to a tradeoff between the cost of performing the search and the cost of executing the resulting solution. The minimum total time depends on the relative costs of computation and execution, but a reasonable model is that they are linearly related. In other words, we assume that the cost of applying an operator in the real world is a fixed multiple of the cost of applying an operator in the simulation.

Figure 4 shows the same data as Figure 2, but with a horizontal axis that is linear in the number of nodes generated per move, as opposed to linear in the search depth. The curve shows that the computation-execution tradeoff is initially quite favorable, in the sense that small increases in computation buy large reductions in solution cost. However, a point of diminishing returns is rapidly reached, where further significant reductions in solution cost require exponentially more computation. The effect is even greater than it appears, since overly long solutions were arbitrarily terminated at 1000 moves. Different relative costs of computation and execution will change the relative scales of the two axes without altering the basic L-shape of the curve.

Figure 4: Execution Time vs. Computation Time (horizontal axis: nodes per move; dashed line: average optimal solution length)

9 Conclusions

Existing single-agent heuristic search algorithms cannot be used in real-time applications, due to their computational cost and the fact that they cannot commit to an action before its ultimate outcome is known. Minimin lookahead search is an effective algorithm for such problems. Furthermore, alpha pruning drastically improves the efficiency of the algorithm without affecting the decisions made. In addition, Real-Time-A* efficiently solves the problem of when to abandon the current path in favor of a more promising one. Extensive simulations show that while increasing search depth usually increases solution quality, occasionally the opposite is true. To avoid the detrimental effect of virtual ties on decision quality, additional search is required. Finally, lookahead search can be characterized as generating a family of heuristic functions that vary in accuracy and computational complexity. The tradeoff between solution quality and computational cost is initially quite favorable but rapidly reaches a point of diminishing returns.

10 Acknowledgements

The idea of minimin lookahead search arose out of discussions with Bruce Abramson. A similar algorithm was independently implemented by Andy Mayer and Othar Hansson. I'd like to thank David Jefferson for his comments on an earlier draft of this paper.
This research was supported by NSF Grant IST-85-15302, an IBM Faculty Development Award, an NSF Presidential Young Investigator Award, and a grant from Delco Systems Operations.

References

[1] Hart, P.E., N.J. Nilsson, and B. Raphael, A formal basis for the heuristic determination of minimum cost paths, IEEE Transactions on Systems Science and Cybernetics, SSC-4, No. 2, 1968, pp. 100-107.

[2] Korf, R.E., Depth-first iterative-deepening: An optimal admissible tree search, Artificial Intelligence, Vol. 27, No. 1, 1985, pp. 97-109.

[3] Shannon, C.E., Programming a computer for playing chess, Philosophical Magazine, Vol. 41, 1950, pp. 256-275.

[4] Simon, H.A., The Sciences of the Artificial, 2nd edition, M.I.T. Press, Cambridge, Ma., 1981.

[5] Nau, D.S., An investigation of the causes of pathology in games, Artificial Intelligence, Vol. 19, 1982, pp. 257-278.
1987
26
617
Van E. Kelly and Uwe Nonnenmann
AT&T Bell Laboratories
600 Mountain Ave. 3D-418
Murray Hill, New Jersey 07974

ABSTRACT

The WATSON automatic programming system computes formal behavior specifications for process-control software from informal "scenarios": traces of typical system operation. It first generalizes scenarios into stimulus-response rules, then modifies and augments these rules to repair inconsistency and incompleteness. It finally produces a formal specification for the class of computations which implement that scenario and which are also compatible with a set of "domain axioms". A particular automaton from that class is constructed as an executable prototype for the specification. WATSON's inference engine combines theorem proving in a very weak temporal logic with faster and stronger, but approximate, model-based reasoning. The use of models and of closed-world reasoning over "snapshots" of an evolving knowledge base leads to an interesting special case of non-monotonic reasoning.

The WATSON system (named for Alexander G. Bell's laboratory assistant, not the fictitious M.D.) addresses an important issue in applying AI to the early stages of software synthesis: converting informal, incomplete requirements into formal specifications that are consistent, complete (for a given level of abstraction), and executable. For ten years AI research has attempted this task [Balzer 77], but the problem's difficulty is exacerbated by the many possible types of imprecision in natural language specifications, as well as by the lack of a suitable corpus of formalized background knowledge for most application domains.

We have restricted ourselves to a single common style of informal specification: the "scenario". Scenarios are abstracted traces of system behavior for particular stimuli, requiring very modest natural-language technology to interpret. We have also selected a domain -- the design of telephone services -- in which most "common-sense" notions of proper system behavior can be axiomatized in a logic formalism with very weak temporal inference capabilities [DeTreville 84]. The resulting domain axioms, together with the small signaling "vocabulary" of telephones, makes exhaustive reasoning more tractable. These restrictions were justified, allowing us to concentrate on two specific issues:

a. gaining leverage from domain knowledge in generalizing scenarios, and
b. engineering fast hybrid reasoning systems for interactive environments.

In the next section, we summarize WATSON. We then demonstrate how it exploits domain knowledge using a simple case study, examine its hybrid inference engine, compare it to other research, and, finally, outline our future plans for the system.

Figure 1 summarizes WATSON. Scenarios are parsed and generalized into logic rules. These scenarios and rules describe agents (finite-state process abstractions), sequences of stimuli, and assertions about the world whose truth values change dynamically. Omissions and informality in the original scenario may lead to over-generalized, under-generalized, or missing rules. After constructing partial, underconstrained models for each type of agent mentioned in the scenario, WATSON interactively refines the rules and models. It repairs rule contradictions, eliminates unreachable or dead-end model states, infers missing rules, and ascertains that all stimuli are handled in every state of the world.
To complete information missing from its domain knowledge, WATSON asks the user true-false questions about short hypothetical scenarios. The user is never asked to envision multiple situations at once. Finally, WATSON performs an exhaustive consistency test on its refined set of rules. This proves that the refined rules prohibit transitions from consistent to inconsistent world-states for all input stimuli. If these rules are later embedded in a larger set of rules (i.e., one describing more telephone services), and the larger set of rules passes the consistency test, then the original scenarios will still work.

Figure 1: WATSON block diagram (components include scenario parsing, rules, theorem proving, and an early prototype)

There are many possible computations (i.e., agent models) that could implement a given set of scenarios, but the refined rules form a model-independent verification condition for all such computations. (For a more complete discussion of the final consistency test, the generalizer, the detailed models used in the hybrid reasoner, and more case studies, see [Kelly 87].) Different finite-state models for the same rules may require different numbers of state transitions to implement any given rule subset. WATSON computes a minimal (fewest states and transitions) model for the specification, which is useful for early acceptance testing, either by software simulation or by use of a special test-bed with actual telephone equipment.

To highlight how WATSON converts scenarios into a coherent system definition, we begin by analyzing the structure of a scenario, noting its ambiguities and omissions. Next we examine the telephony domain axioms, which provide the backdrop for all our interpretations. Finally we consider one particular anomaly fixed by WATSON.

A. Scenarios, Episodes, and Rules

A single scenario defining a simplified version of "plain old telephone service", or POTS, is given below. ("On-hook" refers to when the telephone handset is resting in its cradle, i.e., "hung-up". "Off-hook" is the opposite state. When "on-hook", the phone is disconnected, except for its ringing circuit. "Ringback" is the sound that a caller hears when a called party rings.)

First, Joe is on-hook, and Joe goes off-hook, then Joe starts getting dial-tone. Next, Joe dials Bob; then Bob starts ringing and Joe starts calling Bob and Joe starts getting ringback. Next, Bob goes off-hook, then Joe gets connected to Bob. Next, Joe goes on-hook, then Joe stops being connected to Bob.

Joe and Bob are agents of the same type, whose implementations must be isomorphic. The stimuli that drive the scenario are "going-on-hook", "going-off-hook", and "dialing"; in our simplified telephone domain, these are the only ways in which a telephone user can possibly affect a telephone instrument. (We omit the stimulus of momentarily "flashing" the phone switch-hook, which is usually recognized by switching systems as different from hanging up and then going off-hook again.) Seven different predicates (e.g., "on-hook", "get-dial-tone", "connected") are used in the preceding scenario to describe the changing state of the world. Assertions constructed using these predicates appear in two different tenses in the scenario: a present tense ("Joe is on-hook") and an immediate-future tense denoting imminent change in the truth-value of the assertion ("Bob starts ringing"), presumably occurring before the next stimulus. A 2-tense logic is thus a natural formalism for capturing this scenario.
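A scenario of this kind can be written down directly as data. The sketch below is our illustration, not WATSON's internal representation; it encodes the first two episodes in the antecedent/stimulus/consequent form described next:

    from dataclasses import dataclass

    @dataclass
    class Episode:
        antecedents: list   # assertions true before the stimulus
        stimulus: tuple     # the driving event
        consequents: list   # (BEGINS/ENDS, assertion) changes afterward

    pots_scenario = [
        Episode(antecedents=[("on-hook", "Joe")],
                stimulus=("goes-off-hook", "Joe"),
                consequents=[("BEGINS", ("dial-tone", "Joe"))]),
        Episode(antecedents=[("dial-tone", "Joe")],
                stimulus=("dials", "Joe", "Bob"),
                consequents=[("BEGINS", ("ringing", "Bob")),
                             ("BEGINS", ("calling", "Joe", "Bob")),
                             ("BEGINS", ("ringback", "Joe"))]),
    ]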
1. Episodic Structure

The scenario is understood as a sequence of four sequential episodes, each consisting of three parts:

- antecedents, assertions known to be true of the agents before a stimulus,
- a stimulus, which may have implicit persistent side-effects (for example, after "Joe goes off-hook", we should know implicitly that Joe is no longer on-hook, but the momentary stimulus "dials" has no persistent side-effects on the state of the world), and
- explicit consequents, or changes in the truth-values of selected assertions after the stimulus.

The consequents of one episode implicitly determine the antecedents of the next.

Each episode represents a single stimulus-response (S-R) cycle for the system, and is mapped into a set of logic rules, one rule per consequent. The antecedents of the episode appear in the present tense and the consequents appear in the immediate-future tense, using the modal operators BEGINS and ENDS. The following six rules, numbered by episode, implement the scenario:

R1.1: ∀x [on-hook(x) ∧ EVENT(goes-off-hook(x)) ⊃ BEGINS(dial-tone(x))]
R2.1: ∀x,y [dial-tone(x) ∧ EVENT(dials(x,y)) ⊃ BEGINS(calling(x,y))]
R2.2: ∀x [∃y [dial-tone(x) ∧ EVENT(dials(x,y))] ⊃ BEGINS(ringback(x))]
R2.3: ∀x,y [dial-tone(x) ∧ EVENT(dials(x,y)) ⊃ BEGINS(ringing(y))]
R3.1: ∀x,y [calling(y,x) ∧ ringback(y) ∧ ringing(x) ∧ EVENT(goes-off-hook(x)) ⊃ BEGINS(connected(y,x))]
R4.1: ∀x,y [connected(x,y) ∧ EVENT(goes-on-hook(x)) ⊃ ENDS(connected(x,y))]

2. Informalities

One must distinguish between the level of abstraction of the scenario (e.g., dialing is considered an atomic operation) and its informality. WATSON does not change a scenario's abstraction level, but rather corrects some of the following anomalies due to informality, e.g.:

- Antecedents may be omitted from episodes, causing over-generalized rules, such as R2.3.
- Consequents may be missing from episodes, leading to missing rules; for instance, it is not stated that Bob stops ringing when he goes off-hook.
- Irrelevant antecedents may be included in an episode, leading to under-generalized rules.
- Causal links among antecedents and consequents are not always made explicit. Coincidence and causality may be confused.
- Specifications for a particular agent type are split over several agents (Joe and Bob), and are not harmonized. For example, both R1.1 and R3.1 have to do with telephones going off-hook: Joe in R1.1 and Bob in R3.1.

B. Domain Axioms

The "common-sense" domain axioms describing telephone services combine several different sorts of information:

1. Axioms and axiom schemas that embed temporal reasoning with 2-tense logic into standard first-order logic (FOL) resolution, for example:
- ∀A ∈ ASSERTIONS [[A ⊃ ¬BEGINS(A)] ∧ [¬A ⊃ ¬ENDS(A)]],
- serialization axioms for stimuli.

2. Declarations for telephone terminology:
- types of agents,
- stimuli and their side-effects,
- predicates and argument types for constructing assertions.

3. Hardware constraints on telephone devices:
- telephones must be on-hook or off-hook, but not both.
2. Anomalies

One must distinguish between the level of abstraction of the scenario (e.g., dialing is considered an atomic operation) and its informality. WATSON does not change a scenario's abstraction level, but rather corrects some of the following anomalies due to informality, e.g.:

- Antecedents may be omitted from episodes, causing over-generalized rules, such as R2.3.
- Consequents may be missing from episodes, leading to missing rules; for instance, it is not stated that Bob stops ringing when he goes off-hook.
- Irrelevant antecedents may be included in an episode, leading to under-generalized rules.
- Causal links among antecedents and consequents are not always made explicit. Coincidence and causality may be confused.
- Specifications for a particular agent type are split over several agents (Joe and Bob), and are not harmonized. For example, both R1.1 and R3.1 have to do with telephones going off-hook: Joe in R1.1 and Bob in R3.1.

B. Domain Axioms

The "common-sense" domain axioms describing telephone services combine several different sorts of information:

1. Axioms and axiom schemas that embed temporal reasoning with 2-tense logic into standard first-order logic (FOL) resolution, for example:
- ∀A ∈ ASSERTIONS [[A ⊃ ¬BEGINS(A)] ∧ [¬A ⊃ ¬ENDS(A)]],
- serialization axioms for stimuli.

2. Declarations for telephone terminology:
- types of agents,
- stimuli and their side-effects,
- predicates and argument types for constructing assertions.

3. Hardware constraints on telephone devices:
- telephones must be on-hook or off-hook, but not both,
- telephones can't ring when off-hook, and can't dial or accept audio services (like dial-tone, busy signal, or ringback) while on-hook,
- "X dials Y" is a null operation unless X has dial-tone.

4. Etiquette rules for telephone services:
- telephones receive at most one audio service at a time,
- telephones should not start to ring unless someone is calling.

While this body of knowledge appears complex, it can be written down compactly (about a dozen axiom schemas, excluding declarations) and in a scenario-independent form. This means not writing FOL axioms directly, but inferring default terminology declarations from the scenario itself (as in [Goldman 77]), and using second-order schema notation extensively.

C. Consistency and Completeness Analysis

1. Types of Corrections Attempted

WATSON applies four main corrections to sets of rules:

a. Repairing inconsistent rules. Rules contradict if they have the same stimulus, their antecedents are compatible, but their consequents are incompatible. The correction usually involves strengthening the antecedents of one or more of the contradictory rules.

b. Finishing incomplete episodes by adding new rules. For instance, in the second episode of the POTS scenario, no rule ends Joe's dial-tone, yet it clearly must end.

c. Eliminating unreachable "states". Some combinations of assertions may be consistent with the domain axioms but never happen in any scenario. For instance, Joe may call Bob and not get a ringback (i.e., if he gets a busy-signal instead), but our scenario does not show this. WATSON will solicit a new scenario that accesses such states, or modify existing rules.

d. Ensuring all stimuli are handled in all states. Suppose Joe hangs up during dial-tone. The domain axioms completely determine which "state" Joe would then be in (on-hook but not ringing). So, WATSON can build this entire episode from first principles. In more complex cases, WATSON cannot determine the exact outcome, and must get help from the user, either by proposing several alternatives or asking for a new scenario.

The procedure is similar for each of these four cases. First the rules and agent models are used to detect potential problems. Then a heuristic search is performed for a "simplest" workable fix. Next, the fix is explained to the user, who is asked for approval. Finally, the rules and models are updated.
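The four corrections share the detect / search / explain / update skeleton just described. Here is a minimal sketch of that control loop; detect, fixes_for, cost, explain, and approved are hypothetical stand-ins for WATSON's detectors, fix generators, simplicity heuristic, paraphraser, and user dialog, not its real interfaces.

```python
# Hypothetical sketch of the four-step correction procedure described above.

def correct(rules, models, detect, fixes_for, cost, explain, approved):
    while True:
        problems = detect(rules, models)          # 1. detect potential problems
        if not problems:
            return rules, models
        candidates = sorted(fixes_for(problems[0], rules, models), key=cost)
        fix = candidates[0]                       # 2. heuristically simplest fix
        explain(problems[0], fix)                 # 3. explain to the user...
        if not approved(fix) and len(candidates) > 1:
            fix = candidates[1]                   # ...who may pick an alternative
        rules, models = fix(rules, models)        # 4. update rules and models
```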
2. Example: Fixing the Inconsistency of R1.1 and R3.1

Consider the contradiction between R1.1 and R3.1. After detecting the inconsistency, WATSON notes that the antecedents of R1.1 are strictly more general than those of R3.1. WATSON attempts to strengthen the antecedents of R1.1 until the antecedents of the two rules become incompatible. The most obvious way, by finding all antecedents of R3.1 not in R1.1, negating them, and conjoining them to R1.1, produces something correct but verbose:

R1.1a: ∀x [on-hook(x) ∧ [¬ringing(x) ∨ ∀y [¬calling(y,x) ∨ ¬ringback(y)]] ∧ EVENT(goes-off-hook(x)) ⊃ BEGINS(dial-tone(x))]

WATSON searches for simpler versions that are "negligibly" stronger than R1.1a. Considering rules involving only a single negated antecedent from R3.1, it finds the following candidates which are simpler than R1.1a, but stronger by varying amounts:

R1.1b: ∀x [on-hook(x) ∧ ∀y [¬ringback(y)] ∧ EVENT(goes-off-hook(x)) ⊃ BEGINS(dial-tone(x))]
R1.1c: ∀x [on-hook(x) ∧ ∀y [¬calling(y,x)] ∧ EVENT(goes-off-hook(x)) ⊃ BEGINS(dial-tone(x))]
R1.1d: ∀x [on-hook(x) ∧ ¬ringing(x) ∧ EVENT(goes-off-hook(x)) ⊃ BEGINS(dial-tone(x))]

It heuristically ranks R1.1d as the simplest, since it doesn't introduce new variables to R1.1. It then tries to show that R1.1d is "virtually" as general as R1.1a by establishing

CWR ⊢ ∀x [ringing(x) ⊃ ∃y [calling(y,x) ∧ ringback(y)]],

where CWR is an expanded set of "closed world rules" constructed on the fly from the scenario rules to support reasoning both forward and backward in time over a single S-R cycle. These contain an implicit assumption that the presently known rules are the only rules. Since according to the known rules there is no way for a phone to ring without another phone initiating a call and getting ringback, this proof succeeds, leading to the conclusion that R1.1d is safe to use.

At this point, WATSON knows it has a "maximally simple" acceptable solution. It now asks the user for approval before replacing R1.1 with R1.1d. It paraphrases R1.1 and R3.1 back into "scenario-ese", describes the contradiction, and asks for permission to replace R1.1 with R1.1d. If WATSON had found two equally desirable solutions, the user would be asked to make the choice.
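The candidates R1.1b-d can be generated mechanically: negate one "extra" antecedent of the more specific rule at a time and conjoin it to the more general rule. A minimal sketch under that reading (antecedents modeled as strings; this reconstructs the search space, not WATSON's actual procedure):

```python
# Illustrative reconstruction of the single-negated-antecedent candidate search.

def negate(literal: str) -> str:
    return literal[1:] if literal.startswith("¬") else "¬" + literal

def strengthenings(general, specific):
    """Candidates that add the negation of one 'extra' specific antecedent."""
    extras = [a for a in specific if a not in general]
    # Prefer candidates introducing no variables absent from the general rule,
    # mirroring WATSON's ranking of R1.1d as simplest.
    extras.sort(key=lambda a: "y" in a)
    return [general + [negate(a)] for a in extras]

r11_ante = ["on-hook(x)"]
r31_ante = ["calling(y,x)", "ringback(y)", "ringing(x)"]
for cand in strengthenings(r11_ante, r31_ante):
    print(" ∧ ".join(cand))
# on-hook(x) ∧ ¬ringing(x)      (R1.1d, ranked first: no new variable y)
# on-hook(x) ∧ ¬calling(y,x)    (R1.1c)
# on-hook(x) ∧ ¬ringback(y)     (R1.1b)
```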
D. Summary of the POTS Case Study

The simplified POTS scenario requires fourteen rules to pass the final consistency check. WATSON can obtain these by rewriting three of the original six and generating the remaining eight from first principles. Four of these eight are required to complete unfinished episodes, and the other four handle telephones that hang up midway through the scenario (unanticipated stimuli). This requires 5 minutes of machine time on a Symbolics 3600 workstation, 96% of which is spent proving theorems. Our home-brew theorem prover runs faster than similar resolution engines (i.e., full FOL, breadth-first or depth-first clause selection) that we have used in the past, but the response time still taxes user patience.

IV. MODEL-BASED REASONING

Not only is theorem proving slow, but WATSON's temporal reasoning is confined to a single stimulus-response cycle of the system. This weak logic was necessitated by exponential blowups when using more powerful, episode-chaining logics (e.g., situation calculi) with the initially under-constrained scenario rules.

Our solution integrates model-based reasoning into WATSON's inference methods. WATSON's most important models are the minimal (fewest states) automatons required to implement each type of agent. Each state corresponds to a particular assignment of truth-values to every known assertion about the agent. State transitions are governed by the logical rules generalized from the scenario. These automatons are initially fragmentary and underconstrained, but evolve into connected, deterministic state transition graphs as WATSON edits the rules. The models are stored in a form that facilitates their use even when incomplete. They consist of a list of states telling what assertions hold in each state, a list of constraints on the possible states of agents both before and after each episode, and groupings of rules according to which state transitions they influence.

Model-based reasoning increases both the strength and speed of WATSON's reasoning. Of the queries posed during the correction/completion stage, about 20% require reasoning over multiple episodes, which cannot be done by theorem proving in 2-tense logic, but can be answered in the model. Another 65% are fully instantiated queries asking whether some property P holds in some model; for these, querying the model is typically 50 times faster than using the theorem prover. The major caveat is incompleteness: a query might fail in WATSON's minimal model, and still hold in some other model. Fortunately, many important properties, such as graph connectivity, hold in minimal models or not at all. In other cases, models can at least filter the set of theorems to be proved. Still, care is needed when interpreting the results of model-based reasoning. In WATSON, such circumspection is presently hard-coded, but explicit automated meta-reasoning about model limitations is on our future research agenda.
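A minimal sketch of the kind of agent model described above: states are truth assignments to the known assertions, transitions are labeled by stimuli, and a multi-episode query becomes a graph search. The encoding and the tiny three-state fragment are assumptions for illustration, not WATSON's storage format.

```python
# Illustrative agent model: states are frozensets of true assertions,
# transitions map (state, stimulus) -> state. A multi-episode query
# ("can state B be reached from state A?") is a simple graph search.
from collections import deque

idle     = frozenset({"on-hook"})
dialtone = frozenset({"off-hook", "dial-tone"})
calling  = frozenset({"off-hook", "calling", "ringback"})

transitions = {
    (idle, "goes-off-hook"): dialtone,   # cf. R1.1
    (dialtone, "dials"): calling,        # cf. R2.1-R2.2
    (dialtone, "goes-on-hook"): idle,    # the kind of rule WATSON generates
    (calling, "goes-on-hook"): idle,     # from first principles
}

def reachable(start, goal):
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            return True
        for (s, _), t in transitions.items():
            if s == state and t not in seen:
                seen.add(t)
                queue.append(t)
    return False

print(reachable(idle, calling))   # True: idle -> dial-tone -> calling
```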
V. NON-MONOTONICITY

There are well-known relationships between reasoning with a minimum Herbrand model and theorem proving with a closed-world assumption in ordinary first-order logic. Either one of these techniques, when applied to an evolving knowledge base, opens the door to non-monotonicity. Similar problems arise in WATSON's 2-tense logic and extended models. For example, if a new scenario provided a rule that would allow a telephone to ring without some other phone calling it and receiving a ringback (say, a line-test feature), that would invalidate the assumption by which we chose R1.1d as the best replacement for R1.1.

WATSON's non-monotonicity is nevertheless simpler than the general case. Most treatments of non-monotonicity assume retractions are forced asynchronously by evidence external to the system (new data). WATSON's problems all arise from internal decisions to apply closed-world reasoning. Thus, non-monotonicity can be minimized (but not eliminated) by careful static ordering of process steps, and by exploiting meta-knowledge about which model properties are stable. For instance, once WATSON has eliminated all unreachable states, it is safe to assume that no presently known state will ever become unreachable. The burden of this meta-reasoning should also be transferred eventually from WATSON's designers to WATSON. For those relatively few determinations vulnerable to later retraction, WATSON applies brute force -- re-validating them all after every rule and model update. Fortunately, the speed of model-based reasoning reduces this overhead. Furthermore, WATSON batches unrelated rule updates together to minimize the frequency of update cycles. The POTS scenario, for example, can be processed in only four update-revalidate cycles.

The general goals of WATSON recall the SAFE project at ISI ([Balzer 77], [Goldman 77]). SAFE attempted to understand an English prose description of a process, inferring the relevant entities mentioned in the process and their key relationships. It then generated an algorithm to implement the process. Of the six types of specification ambiguity corrected by SAFE, four of them, accounting for 88% of those corrections, were artifacts of using fairly unconstrained natural language input. Conversely, the scenario ambiguities corrected by WATSON did not arise in the SAFE case studies, because SAFE's initial specifications were more expansive (100-200 words).

Acquiring Programs from Examples

Several approaches have been used to "learn" programs based on sample traces: the pattern-matching approaches of Biermann & Krishnawamy [Biermann 76] and Bauer [Bauer 79], the language recognizer generators of Angluin [Angluin 78] and Berwick [Berwick 86], and Andreae's NODDY system [Andreae 85]. Of the three, the work of Andreae is much the closest to the spirit of WATSON by its use of explicit domain knowledge. The other approaches attempt domain-independence, entailing that their input examples must be either numerous or meticulously annotated. NODDY, like WATSON, uses its domain knowledge to constrain the generality of the programs it writes, much like the function of negative examples in an example-based learning system. NODDY writes robot programs, and its domain knowledge is solid geometry.

One distinctive feature of WATSON, compared with other systems, is its handling of multi-agent interactions; the others are restricted to single-agent worlds. Another difference is that WATSON produces a specification for a class of computations, not just a single program. The final implementation, adapted to a particular machine architecture, may have much more internal complexity than WATSON's minimal model. The rules form a model-independent post-hoc verification criterion, and guidance for further transformational development.

VI. STATUS AND PLANS

WATSON evolved from PHOAN [DeTreville 84], which provided its axiomatization style, exhaustive consistency test, and FSA synthesis procedure. PHOAN successfully programmed POTS service for an experimental ethernet-based telephone switch [DeTreville 83]. By argument from parentage, we conclude that the same hardware should execute WATSON's code, but this has not yet been verified.

Our WATSON prototype can handle the POTS scenario and several extensions, such as busy-signals, billing, and multi-call contention. We are extending it to cover the "toy telephone" suite of telephone services defined in [IEEE 83]. An important feature of WATSON is its ability to detect unforeseen interactions among different services. The "toy telephone" domain contains several examples of such interactions.

We have previously noted ongoing research issues in meta-reasoning about model errors and non-monotonicity. We would also like to generalize the style of scenario WATSON can accept, i.e., closer to idiomatic English. Therefore, a very flexible user-customizable English parser used to generate FOL database queries [Ballard 86] will soon be adapted for WATSON.

References

[Andreae 85] Peter Andreae, "Justified Generalization: Acquiring Procedures From Examples", MIT AI Lab Technical Report 834, 1985.

[Angluin 78] Dana Angluin, "Inductive Inference of Formal Languages from Positive Data", Information and Control, 1978, Vol. 45, pp. 117-135.

[Ballard 86] Bruce Ballard and Douglas Stumberger, "Semantic Acquisition in TELI: A Transportable, User-Customizable Natural Language Processor", in ACL-24 Proceedings, Association for Computational Linguistics, 1986, pp. 20-29.

[Balzer 77] Robert Balzer, Neil Goldman, and David Wile, "Informality in Program Specifications", in Proceedings of IJCAI-5, 1977, pp. 389-397.

[Bauer 79] Michael A. Bauer, "Programming by Examples," Artificial Intelligence, May 1979, Vol. 12, No. 1, pp. 1-21.
[Berwick 86] Robert C. Berwick, "Learning from Positive-Only Examples: The Subset Principle and Three Case Studies," in Machine Learning: An Artificial Intelligence Approach, Volume II, Michalski, Carbonell, and Mitchell, eds., Morgan Kaufmann, 1986.

[Biermann 76] Alan W. Biermann and Ramachandran Krishnawamy, "Constructing Programs From Example Computations," IEEE Transactions on Software Engineering, Sept. 1976, Vol. SE-2, No. 3, pp. 141-153.

[DeTreville 83] John DeTreville and W. David Sincoskie, "A Distributed Experimental Communications System", IEEE Transactions on Communications, Dec. 1983, Vol. COM-31, No. 12.

[DeTreville 84] John DeTreville, "Phoan: An Intelligent System For Distributed Control Synthesis", ACM SIGSOFT/SIGPLAN Software Engineering Symposium on Practical Software Development Environments, P. Henderson, ed., 1984, pp. 96-103.

[Goldman 77] Neil Goldman, Robert Balzer, and David Wile, "The Inference of Domain Structure from Informal Process Descriptions", USC-ISI Research Report 77-64, October 1977.

[IEEE 83] JSP & JSD: The Jackson Approach to Software Development, IEEE Computer Society, 1983.

[Kelly 87] Van E. Kelly and Uwe Nonnenmann, "From Scenarios To Formal Specifications: the WATSON System", Computer Technology Research Laboratory Technical Report (forthcoming).
On the Expressiveness of Rule-based Systems for Reasoning with Uncertainty

David E. Heckerman and Eric J. Horvitz
Medical Computer Science Group
Knowledge Systems Laboratory
Stanford University
Stanford, California 94305

Abstract

We demonstrate that classes of dependencies among beliefs held with uncertainty cannot be represented in rule-based systems in a natural or efficient manner. We trace these limitations to a fundamental difference between certain and uncertain reasoning. In particular, we show that beliefs held with certainty are more modular than uncertain beliefs. We argue that the limitations of the rule-based approach for expressing dependencies are a consequence of forcing non-modular knowledge into a representation scheme originally designed to represent modular beliefs. Finally, we describe a representation technique that is related to the rule-based framework yet is not limited in the types of dependencies that it can represent.

I Introduction

Original research on expert systems relied primarily on techniques for reasoning with propositional logic. Popular approaches included the rule-based and frame-based representation frameworks. As artificial intelligence researchers extended their focus beyond deterministic problems, the early representation methods were augmented with techniques for reasoning with uncertainty. Such extensions left the underlying structure of the representations largely intact.

In this paper, we examine the rule-based approach to reasoning with uncertainty. Within this context, we describe a fundamental difference between beliefs which are uncertain and beliefs which are held with certainty in the sense of monotonic propositional logic. In particular, we show that beliefs which are certain are more modular than uncertain beliefs. We demonstrate that because of this difference, simple augmentations to the rule-based approach are inadequate for reasoning with uncertainty. We exhibit this inadequacy in the context of the MYCIN certainty factor model [Shortliffe 75], an adaptation to the rule-based approach for reasoning with uncertainty which has seen widespread use in expert systems research. We show that this adaptation does not have the expressiveness necessary to represent certain classes of dependencies that can exist among beliefs held with uncertainty. After demonstrating the limitations of the certainty factor model, we describe a representation technique called belief networks that is not similarly limited in its ability to express uncertain relationships among propositions.

II The MYCIN certainty factor model

In this section, we describe the aspects of the MYCIN certainty factor model that are central to our discussion. The knowledge in MYCIN is stored in rules of the form "IF E THEN H" where E denotes a piece of evidence for hypothesis H. A certainty factor is attached to each rule that represents the change in belief in the hypothesis given the piece of evidence for the hypothesis. Certainty factors range between -1 and 1. Positive numbers correspond to an increase in the belief in a hypothesis while negative quantities correspond to a decrease in belief.
It is important to note that certainty factors do not correspond to measures of absolute belief. This distinction, with respect to certainty factors as well as other measures of uncertainty, has often been overlooked in the artificial intelligence literature [Horvitz 86]. We will sometimes use the following notation to represent the rule "IF E THEN H":

E --CF(H,E)--> H

where CF(H,E) is the certainty factor for the rule.

In the certainty factor model, as in any rule-based framework, multiple pieces of evidence may bear on the same hypothesis and a hypothesis may serve as evidence for yet another hypothesis. The result is a network of rules such as the one shown in Figure 1. This structure is called an inference net [Duda 76].

[Figure 1: An inference net]

The certainty factor model contains a prescription for propagating uncertainty through such an inference net. For example, the CF model can be used to compute the change in belief in hypotheses G and H when A and B are true (see Figure 1). In this paper, we will focus on two types of propagation, parallel combination and divergent propagation. Parallel combination occurs when two or more pieces of evidence impinge on a single hypothesis, as shown in Figure 2(a). In this case, the certainty factors on two rules are combined with the parallel combination function to generate a certainty factor for the hypothetical rule "IF E1 AND E2 THEN H."1 Divergent propagation occurs when one piece of evidence bears on two or more hypotheses as shown in Figure 2(b). In this case, the updating of each hypothesis occurs independently. More generally, if two sub-nets diverge from a common piece of evidence, uncertainty is propagated in each sub-net independently.

[Figure 2: Two types of propagation in the CF model: (a) parallel combination, (b) divergent propagation]

1. The combination function is given in the EMYCIN manual [EMYCIN 79].
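The paper defers the combination function to the EMYCIN manual. For concreteness, the sketch below uses the parallel combination function commonly attributed to MYCIN/EMYCIN; treat the exact formula as an assumption rather than a quotation from [EMYCIN 79].

```python
# Commonly cited MYCIN-style parallel combination of two certainty factors
# (assumed here; the paper itself defers to the EMYCIN manual [EMYCIN 79]).

def combine(x: float, y: float) -> float:
    """Combine the CFs of two rules bearing on the same hypothesis."""
    if x >= 0 and y >= 0:
        return x + y * (1 - x)                      # reinforcing positive evidence
    if x <= 0 and y <= 0:
        return x + y * (1 + x)                      # reinforcing negative evidence
    return (x + y) / (1 - min(abs(x), abs(y)))      # conflicting evidence

print(combine(0.6, 0.4))    # 0.76: two supportive rules reinforce
print(combine(0.6, -0.4))   # 0.33...: conflicting rules partially cancel
```

Note that the combined value does not depend on what else is believed when each rule fires; this order-independence is precisely the modularity assumption examined in the next section.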
III A fundamental difference

Let us now explore a fundamental difference between rules which represent deterministic relationships among propositions and rules which reflect uncertain relationships. Again consider the case of parallel combination as shown in Figure 2(a). Suppose the certainty factor for the rule involving E1 is equal to 1. This corresponds to the situation where E1 proves H with certainty. In this case, E1 also proves H if E2 is already known when E1 is discovered. In other words, the certainty factor CF(H,E1) does not depend on whether or not E2 is known when the rule involving E1 is invoked. Note we are assuming that deterministic beliefs are monotonic, a typical assumption of rule-based frameworks associated with schemes for reasoning with uncertainty. In contrast, suppose the certainty factor for this same rule lies between -1 and 1. This corresponds to the situation where E1 potentially updates the belief in H but does not prove or disprove the hypothesis. In this case, it is reasonable to expect that the certainty factor for the rule may depend on the degree of belief assigned to E2 when the rule is invoked.

The above is an instance of a fundamental difference between rules that are certain and those that are not. We say that deterministic or logical rules are modular while rules reflecting an uncertain relationship are non-modular. We use the term modular to emphasize that rules which are certain stand alone; the truth or validity of a deterministic rule does not depend upon beliefs about other propositions in the net. As mentioned above, the modularity of deterministic rules is a consequence of the assumption of monotonicity. We introduce the term modularity in lieu of monotonicity because we do not wish to confuse the notion of non-modularity, a concept we apply to uncertain beliefs, with non-monotonicity, a concept traditionally reserved for beliefs that are held with certainty.

That indeterministic rules are less modular than deterministic rules is also demonstrated in the case of divergent propagation (see Figure 2(b)). In particular, if E proves H1 with certainty when the status of H2 is unknown, then E will also prove H1 with certainty when H2 is known to be true or false. However, if E does not prove or disprove H1 conclusively, the certainty factor for the rule involving H1 may depend upon the belief assigned to H2 at the time E is discovered.

IV Limitations of the rule-based representation

As a result of this fundamental difference concerning modularity, there are certain classes of dependencies that cannot be represented in a natural or efficient manner within the rule-based framework.2 In this section, we examine two such classes, termed mutual exclusivity and multiple causation, which occur commonly in real-world domains. A set of hypotheses is said to be mutually exclusive and exhaustive when one and only one hypothesis from the set is true. We examine the case where two or more pieces of evidence are relevant to a set of three or more mutually exclusive and exhaustive hypotheses. We will show that parallel combination cannot be used to efficiently represent this situation. Multiple causation occurs when a piece of evidence has two or more independent causal or explanatory hypotheses. We will show that divergent propagation cannot be used to efficiently represent this situation. Rather than present proofs of these results, which can be found elsewhere [Heckerman 86], we will present examples that facilitate an intuitive understanding of the limitations.

2. We note that some classes of dependencies can be represented efficiently. In another paper [Heckerman 86], several of these classes are identified in a probabilistic context. It is shown that if relationships among propositions satisfy certain strong forms of conditional independence, then these relationships are naturally accommodated by the rule-based framework. Unfortunately, such conditions are rarely met in practice.
A. Mutual exclusivity

To illustrate a difficulty with representing mutual exclusivity in a rule-based framework, consider an example that harkens back to simpler days. Suppose you are given one of three opaque jars containing mixtures of black licorice and white peppermint jelly beans. The first jar contains one white jelly bean and one black jelly bean, the second jar contains two white jelly beans, and the third jar contains two black jelly beans. You are not allowed to look inside the jar, but you are allowed to draw beans from the jar, one at a time, with replacement. That is, you must replace each jelly bean you draw before sampling another. Let Hi be the hypothesis that you are holding the ith jar. As you are told that the jars were selected at random, you believe each Hi is equally likely before you begin to draw jelly beans.

It seems natural to represent this situation with the following rules for each hypothesis Hi:

Black --CF(Hi,Black)--> Hi
White --CF(Hi,White)--> Hi

That is, each time a black jelly bean is observed, the belief in each hypothesis is revised using the certainty factors CF(Hi, Black) in conjunction with the parallel combination rule. Beliefs are similarly revised for each white jelly bean observed.

Unfortunately, such a representation is not possible because the modularity of rules imposed by parallel combination is too restrictive. To see this, suppose a black jelly bean is selected on the first draw. In this case, the belief in H3 increases, the belief in H2 decreases to complete falsity, while the belief in H1 remains relatively unchanged. Thus, the certainty factor for the rule "IF Black THEN H1" is close to zero.3 In contrast, suppose a black jelly bean is selected following the draw of a white jelly bean. In this case, the certainty factor for the rule "IF Black THEN H1" should be set to 1 as H1 is confirmed with certainty. As only one certainty factor can be assigned to each rule, it is clear that the above representation fails to capture the dependencies among beliefs inherent in the problem.

3. In another paper [Heckerman 87], a method for calculating numerical values for certainty factors is described. The results of this method are consistent with the intuitive results presented here.

This result can be generalized. It has been shown that parallel combination cannot be used to represent the situation where two or more pieces of evidence bear on a hypothesis which is a member of a set of 3 or more mutually exclusive and exhaustive hypotheses [Johnson 86, Heckerman 86].

We should mention that the above problem can be forced into the rule-based framework. For example, it can be shown that the following set of rules accurately represents the situation for H1:

IF 1st draw Black THEN H1, 0
IF 1st draw White THEN H1, 0
IF 1st draw Black AND Current draw Black THEN H1, -.5
IF 1st draw Black AND Current draw White THEN H1, 1
IF 1st draw White AND Current draw Black THEN H1, 1
IF 1st draw White AND Current draw White THEN H1, -.5

Unfortunately, this representation is inefficient and awkward. The simplicity of the underlying structure of the problem is lost. We note that there are even more pathological examples of inefficient representation. For example, if we add another white and black jelly bean to each jar in the above problem, it can be shown that the number of rules required to represent N draws is greater than N.
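The context-dependence can be checked directly with Bayes' rule. The short script below (our illustration, not part of the paper) computes the posterior over the three jars after different draw sequences; the evidential impact of "Black" on H1 plainly depends on what was drawn before, so no single CF(H1, Black) can exist.

```python
# Bayes-rule check of the jelly bean example (illustrative, not from the paper).
# Jars: H1 = one white + one black, H2 = two white, H3 = two black.
P_BLACK = {"H1": 0.5, "H2": 0.0, "H3": 1.0}   # P(Black | jar); draws are i.i.d.

def posterior(draws, prior=None):
    belief = dict(prior or {"H1": 1/3, "H2": 1/3, "H3": 1/3})
    for color in draws:
        for h in belief:
            likelihood = P_BLACK[h] if color == "Black" else 1 - P_BLACK[h]
            belief[h] *= likelihood
        total = sum(belief.values())
        belief = {h: p / total for h, p in belief.items()}
    return belief

print(posterior(["Black"]))           # {'H1': 0.33.., 'H2': 0.0, 'H3': 0.66..}
print(posterior(["White", "Black"]))  # {'H1': 1.0,  'H2': 0.0, 'H3': 0.0}
# "Black" leaves H1 nearly unchanged on the first draw but confirms it after
# a white draw -- exactly the two incompatible CF values described above.
```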
B. Multiple causation

In discussing another limitation of the rule-based framework, let us move from the simple world of jelly beans to a more captivating situation. Consider the following story from Kim and Pearl [Kim 83]:

Mr. Holmes received a telephone call from his neighbor notifying him that she heard a burglar alarm sound from the direction of his home. As he was preparing to rush home, Mr. Holmes recalled that last time the alarm had been triggered by an earthquake. On his way driving home, he heard a radio newscast reporting an earthquake 200 miles away.

It seems natural to represent this situation with the inference net shown in Figure 3. However, a problem arises in trying to assign a certainty factor to the rule "IF Alarm THEN Burglary." Had Mr. Holmes not heard the radio announcement, the alarm sound would have strongly supported the burglary hypothesis. However, since Mr. Holmes heard the announcement, support for the burglary hypothesis is diminished because the earthquake hypothesis tends to "explain away" the alarm sound. Thus, it is necessary to attach two certainty factors to the same rule; one for the case where Mr. Holmes hears the announcement and another for the case where he does not. As only one certainty factor can be assigned to each rule, the inference net in Figure 3 fails to capture the situation.

[Figure 3: An inference net for Mr. Holmes' situation]

The problem of Mr. Holmes can be generalized. It has been shown that divergent propagation cannot be used to represent the case where a single piece of evidence is caused by two explanatory hypotheses if either of these hypotheses can be updated with independent evidence [Heckerman 86].

As in the jelly bean problem, Mr. Holmes' situation can be forced into a rule-based representation. For example, the case can be represented by writing a rule for almost every possible combination of observations:

IF Phone call AND Announcement THEN Burglary, .1
IF Phone call AND No announcement THEN Burglary, .8
IF No phone call AND Announcement THEN Burglary, -.01
IF No phone call AND No announcement THEN Burglary, -.05
IF Announcement THEN Earthquake, 1
IF Phone call AND No announcement THEN Earthquake, .01
IF No phone call AND No announcement THEN Earthquake, -.01

Unfortunately, this representation is undesirable. In particular, the underlying causal relationships among the propositions are completely obscured. Moreover, the representation will become inefficient as the problem is modified to include additional pieces of evidence. For example, suppose the radio announcement is garbled and Mr. Holmes makes use of many small clues to infer that an earthquake is likely to have occurred. In this case, the number of rules required would be an exponential function of the number of clues considered.
V A more appropriate representation

We will now describe a representation technique that is closely related to the rule-based framework yet is not limited in the types of dependencies among propositions that it can represent. The representation, termed belief networks, has recently become a focus of investigation within the artificial intelligence community [Pearl 86].4 After briefly describing belief networks, we will show how the examples discussed above are represented within the methodology. We will then define a weaker notion of modularity that is more appropriate for uncertain knowledge in the context of belief networks. Finally, we will show how this weaker notion of modularity can facilitate efficient knowledge base maintenance.

A belief network is a two-level structure. The upper level of a belief network consists of a directed acyclic graph that represents the uncertain variables relevant to a problem as well as the relationships among the variables. Nodes (circles) are used to represent variables and directed arcs are used to represent dependencies among the variables. The bottom level represents all possible values or outcomes for each uncertain variable together with a probability distribution for each variable. The arcs in the upper level represent the notion of probabilistic conditioning. In particular, an arc from variable A to variable B means that the probability distribution for B may depend on the values of A. If there is no arc from A to B, the probability distribution for B is independent of the values of A.

To illustrate these concepts, consider once again the jelly bean problem. A belief network for this problem is shown in Figure 4.

[Figure 4: A belief network for the jelly bean problem (levels 1 and 2)]

The two nodes labeled "Identity of jar" and "Color drawn" in the upper level of the belief network represent the uncertain variables relevant to the problem. The tables in the lower level list the possible values for each variable. The arc between the two nodes in the upper level means that the probability distribution for "Color drawn" depends on the value of "Identity of jar." Consequently, the probability distribution for "Color drawn" given in the second level of the network is conditioned on each of the three possible values of "Identity of jar": H1, H2, and H3. Since there are no arcs into the "Identity of jar" node, an unconditional or marginal distribution for this variable is given.

Note that the same jar problem can be represented by a belief network with the arc reversed. In this case, an unconditional probability distribution would be assigned to "Color drawn" and a conditional probability distribution would be assigned to "Identity of jar." This highlights a distinction between inference nets and belief networks. Inference nets require dependencies to be represented in the evidence-to-hypothesis direction. In a belief network, dependencies may be represented in whatever direction the expert is most comfortable.5

As discussed earlier, it is difficult to represent this situation in an inference net because the three hypotheses reflecting the identity of the jar are mutually exclusive. In a belief network, however, this class of dependency is represented naturally. Rather than attempt to list each hypothesis in the upper level, these mutually exclusive hypotheses are moved to the second level and are considered together under the single variable, "Identity of jar."

Now let us reexamine the story of Mr. Holmes to see how a belief network can be used to represent multiple causation. The upper level of a belief network for Mr. Holmes' situation is shown in Figure 5.

[Figure 5: A belief network for Mr. Holmes' situation]

The lower level of the belief network contains value lists and probability distributions as in the previous problem. For example, associated with the nodes "Phone call," "Alarm," "Burglary," and "Earthquake" are the value lists {Received, Not received}, {Sounded, Not sounded}, {True, False}, and {True, False} respectively. Associated with the node "Phone call" are the two probability distributions p(Phone call | Alarm = Sounded) and p(Phone call | Alarm = Not sounded).6

As mentioned earlier, an inference net cannot represent this situation in a straightforward manner because there are two causes affecting the same piece of evidence. However, this dependency is represented naturally in a belief network.

4. We note that the influence diagrams of Howard [Howard 81] and the probabilistic causal networks of Cooper [Cooper 84] are closely related to belief networks.

5. Typically, the directions of arcs in belief networks reflect causal relationships [Pearl 86, Shachter 87].

6. Note that we are using a short-hand notation for probability distributions. For example, p(Phone call | Alarm = Sounded) is an abbreviation for the two probabilities p(Phone call = Received | Alarm = Sounded) and p(Phone call = Not received | Alarm = Sounded).
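A two-level belief network is easy to render as a data structure: a DAG over variables plus, for each variable, a value list and a table of distributions indexed by parent values. A minimal sketch of the jar network under this reading (the numbers follow the jar contents; the encoding itself is illustrative):

```python
# Minimal two-level belief network for the jar problem (illustrative encoding).
# Level 1: "parents" gives the DAG. Level 2: value lists and distributions,
# one distribution per combination of parent values.
network = {
    "Identity of jar": {
        "parents": [],
        "values": ["H1", "H2", "H3"],
        "dist": {(): {"H1": 1/3, "H2": 1/3, "H3": 1/3}},   # marginal: no parents
    },
    "Color drawn": {
        "parents": ["Identity of jar"],
        "values": ["Black", "White"],
        "dist": {                                          # conditioned on jar
            ("H1",): {"Black": 0.5, "White": 0.5},
            ("H2",): {"Black": 0.0, "White": 1.0},
            ("H3",): {"Black": 1.0, "White": 0.0},
        },
    },
}

# Omitting an arc is an assertion of independence: "Color drawn" has exactly
# one parent, so its table has one row per value of "Identity of jar".
print(network["Color drawn"]["dist"][("H3",)])  # {'Black': 1.0, 'White': 0.0}
```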
In this example, the dependency is reflected in the probability distributions for "Alarm." In particular, a probability distribution for each combination of the values of the two variables "Burglary" and "Earthquake" is associated with the "Alarm" variable. That is, the following probability distributions will be included in the lower level of the belief network:

p(Alarm | Burglary = False AND Earthquake = False)
p(Alarm | Burglary = False AND Earthquake = True)
p(Alarm | Burglary = True AND Earthquake = False)
p(Alarm | Burglary = True AND Earthquake = True)

The interaction between the "Burglary," "Earthquake," and "Alarm" variables is completely captured by these probability distributions.

The above example points out that the representation of dependencies in a belief network does not come without increased costs. In particular, additional probabilities must be assessed and computational costs will increase. However, these costs are no greater and typically less than the costs incurred in attempting to represent the same dependencies within the rule-based representation. For example, in the case of the garbled radio announcement discussed above, the belief network approach will generally not suffer the same exponential blow-up which occurs in the inference net representation.
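The "explaining away" effect that defeated the inference net falls out of this representation automatically. The sketch below (illustrative; the numbers are invented, not taken from the paper) enumerates the joint distribution of the Holmes network and shows the posterior probability of a burglary dropping once the radio announcement is also observed.

```python
# Explaining away in the Holmes network, by brute-force enumeration.
# All probabilities are invented for illustration.
from itertools import product

p_b, p_e = 0.01, 0.001                           # p(Burglary), p(Earthquake)
p_a = {(True, True): 0.99, (True, False): 0.95,  # p(Alarm | Burglary, Earthquake)
       (False, True): 0.30, (False, False): 0.001}
p_r = {True: 0.90, False: 0.0001}                # p(Radio announcement | Earthquake)
p_c = {True: 0.80, False: 0.05}                  # p(Phone call | Alarm)

def joint(b, e, a, r, c):
    def f(p, x): return p if x else 1 - p
    return f(p_b, b) * f(p_e, e) * f(p_a[(b, e)], a) * f(p_r[e], r) * f(p_c[a], c)

def p_burglary(**obs):
    num = den = 0.0
    for b, e, a, r, c in product([True, False], repeat=5):
        world = {"b": b, "e": e, "a": a, "r": r, "c": c}
        if all(world[k] == v for k, v in obs.items()):
            w = joint(b, e, a, r, c)
            den += w
            num += w if b else 0.0
    return num / den

print(p_burglary(c=True))          # the call alone makes burglary far more likely
print(p_burglary(c=True, r=True))  # the announcement "explains away" the alarm
```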
VI A weaker notion of modularity

Notice that many of the nodes in Figure 5 are not directly connected by arcs. The missing arcs are interpreted as statements of conditional independence. For example, the absence of a direct arc between "Burglary" and "Phone call" indicates that "Burglary" influences "Phone call" only through its influence on "Alarm." In other words, "Burglary" and "Phone call" are conditionally independent given "Alarm." This would not be true if, for example, Mr. Holmes believed his neighbor might be the thief. Thus, belief networks provide a flexible means by which a knowledge engineer or expert can represent assertions of conditional independence.

The concept that a variable may depend on a subset of other variables in the network is the essence of a weaker notion of modularity more appropriate for representing uncertain relationships. In this section, we define this concept more formally. To define weak modularity, we first need several auxiliary definitions:

1. A node j is a direct predecessor of node i if there is an arc from j to i.
2. A node k is a successor of node i if there is a directed path from i to k.
3. P_i is the set of all direct predecessors of i.
4. S_i is the set of all successors of i.
5. S'_i is the complement of the set of all successors of i, excluding i.

For example, in the belief network for Mr. Holmes' situation,

P_{Phone call} = {Alarm}
S_{Phone call} = ∅
S'_{Phone call} = {Alarm, Burglary, Earthquake, Radio announcement}

Now consider a particular node i. The conditional independence assertion associated with this node is

p(i | S'_i) = p(i | P_i).    (1)

Relation (1) says that if the outcomes of the direct predecessors of a node i are known with certainty, then the probability distribution for node i is independent of all nodes that are not successors of node i. Thus, whenever an arc is omitted from a non-successor of node i to node i, an assertion of conditional independence is being made. It is important to remember that such assertions are under the control of the knowledge engineer or expert. For example, in the belief network for Mr. Holmes, arcs from "Burglary," "Earthquake," and "Radio announcement" to "Phone call" are omitted because it is believed that "Phone call" is independent of these variables once the status of "Alarm" is known.

We identify relation (1) as a weaker notion of modularity more appropriate for uncertain reasoning. Note that (1) is a local notion of modularity; assertions of conditional independence are made about each variable individually. This is in contrast with the modularity associated with inference nets, where straightforward representation of uncertain relationships requires global assumptions of independence [Heckerman 86].

VII Knowledge maintenance in a belief network

As a result of (1) and the fact that the graph component of a belief network is acyclic, it is not difficult to show that the probability distributions found at the second level of the belief network are all that is needed to construct the full joint probability distribution of the variables in the network [Shachter 86]. Formally, if a belief network consists of n uncertain variables i_1, i_2, ..., i_n, then

p(i_1 AND i_2 ... AND i_n) = Π_m p(i_m | P_{i_m})    (2)

where p(i_m | P_{i_m}) is the probability distribution associated with node i_m at the second level of the belief network.
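As a worked instance of (2), the factorization for the five-variable Holmes network reads as follows (this instantiation is ours, but it follows directly from the arcs in Figure 5):

```latex
% Factorization (2) for the Holmes network: B = Burglary, E = Earthquake,
% A = Alarm, R = Radio announcement, C = Phone call.
p(B, E, A, R, C) = p(B)\,p(E)\,p(A \mid B, E)\,p(R \mid E)\,p(C \mid A)
```

Only p(A | B, E), p(R | E), and p(C | A) condition on other variables; these are exactly the second-level distributions discussed above.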
The networks, a representation scheme related to the I ule-based arc from “April fools” to “Burglary” is absent because it is approach. We believe artificial intelligence researchers ~111 assumed that burglars don’t observe this holiday. The absence find belief networks an expressive representation for of an arc from “April fools” to “Earthquake” reflects certain capturing the complex dependencies associated with uncertain beliefs about the supernatural. knowledge. Given the new graph, we see from (2) that the following probability distributions are needed to construct the new joint probability distribution: Acknowledgements We wish to thank Judea Pearl and Peter Hart for discussions p(Phone call 1 April fools AND Alarm) concerning divergence. We also thank Greg Cooper, Michael Wellman, Curt Langlotz, Edward Shortliffe, and Larry Fagan for tneir comments. Support for this work was provided by the NASA-Ames, the National Library of Medicine under grant ROl-LM04529, the Josiah Macy, Jr. Foundation, the Henry J. Kaiser Family Foundation, and the Ford Aerospace Corporation. Computing facilities were provided by the SUMEX-AIM resource under NIH grant RR-00785. p(Alarm 1 Burglary AND Earthquake) p(Radio announcement 1 Earthquake) WuWw) p(Earthquake) Fortunately, all nodes except for “Phone call” have retained the same predecessor nodes and so, by (2), the probability distributions corresponding to these nodes are available from the old belief network (see Figure 5). Only the probability distribution for “Phone call” needs to be reassessed. Figure 6: MJ. Holmes revisited The modification above should be compared with the modification required in a rule-based framework. Because divergent propagation cannot be used to represent multiple causation in this framework, we are limited to an unnatural representation such as constructing a rule for each possible combination of observations. In this representation, modification to include the consideration of April fools results in a doubling in the number of rules. Thus, it is clear that the local modularity property associated with belief networks can help to reduce the burden of knowledge base maintenance. VIII Summary In this paper, we demonstrated that particular classes of dependencies among uncertain beliefs cannot be represented in the certainty factor model in an efficient or natural manner. We should emphasize that, to our knowledge, all uncertainty mechanisms designed as incremental extensions to the rule-based approach suffer similar limitations. Also, we identified a fundamental difference between reasoning with beliefs that are certain and reasoning with beliefs that are uncertain. We demonstrated that rules representing deterministic relationships between evidence and hypothesis are more modular than rules reflecting uncertain relationships. We showed that the limitations of the rule-based approach for representing uncertainty is a consequence of forcing non- modular knowledge into a representation scheme designed to represent modular beliefs. Finally, we described belief References [Cooper 84) Cooper, G. F. NESTOR: A Computer-based Medical Diagnostic Aid that Integrates Causal and Probabilistic Knowledge. Ph.D. Th., Computer Science Department, Stanford University, Nov. 1984. Rep. No. STAN-CS-84-48. Also numbered HPP-84-48. [Duda 761 Duda, R., Hart, P., and Nilsson, N. Subjective Bayesian Methods for Rule-based Inference Systems. Proceedings 1976 National Computer Conference, AFIPS, 1976, pp. 1075-1082. [EMYCIN 791 Van Melle, W. 
[EMYCIN 79] Van Melle, W. EMYCIN Manual. Stanford, 1979.

[Heckerman 86] Heckerman, D. E. Probabilistic Interpretations for MYCIN's Certainty Factors. In Uncertainty in Artificial Intelligence, Kanal, L. and Lemmer, J., Eds., North Holland, 1986.

[Heckerman 87] Heckerman, D. E., and Horvitz, E. J. The Myth of Modularity in Rule-Based Systems. In Uncertainty in Artificial Intelligence, Lemmer, J. and Kanal, L., Eds., North Holland, 1987.

[Horvitz 86] Horvitz, E. J., and Heckerman, D. E. The Inconsistent Use of Measures of Certainty in Artificial Intelligence Research. In Uncertainty in Artificial Intelligence, Kanal, L. and Lemmer, J., Eds., North Holland, 1986.

[Howard 81] Howard, R. A., and Matheson, J. E. Influence Diagrams. In Readings on the Principles and Applications of Decision Analysis, Howard, R. A., and Matheson, J. E., Eds., Strategic Decisions Group, Menlo Park, CA, 1981, ch. 37, pp. 721-762.

[Johnson 86] Johnson, R. Independence and Bayesian updating methods. In Uncertainty in Artificial Intelligence, Kanal, L. and Lemmer, J., Eds., North Holland, 1986.

[Kim 83] Kim, J. H., and Pearl, J. A computational model for causal and diagnostic reasoning in inference engines. Proceedings 8th International Joint Conference on Artificial Intelligence, IJCAI, 1983, pp. 190-193.

[Pearl 86] Pearl, J. Fusion, propagation, and structuring in belief networks. Artificial Intelligence 29, 3 (September 1986), 241-288.

[Shachter 86] Shachter, R. D. Intelligent probabilistic inference. In Uncertainty in Artificial Intelligence, Kanal, L. and Lemmer, J., Eds., North Holland, 1986.

[Shachter 87] Shachter, R., and Heckerman, D. A backwards view for assessment. In Uncertainty in Artificial Intelligence, Lemmer, J. and Kanal, L., Eds., North Holland, 1987.

[Shortliffe 75] Shortliffe, E. H., and Buchanan, B. G. A model of inexact reasoning in medicine. Mathematical Biosciences 23 (1975), 351-379.
Filming a Terrain under Uncertainty Using Temporal and Probabilistic Reasoning

Raymond D. Gumb
Department of Computer Science
University of Lowell
Lowell, Massachusetts

Abstract

We address the problem of interpreting sensor data under uncertainty, using temporal and spatial context to facilitate the identification of objects. We seek to identify the type of an object presented in an ambiguous image by reasoning about conditional probabilities and the possible movements objects can make. A conditional probability (that an object is of a certain type given that some of its properties have been recognized) is used in conflict resolution, and an object is assigned an alternative type when an impossible movement is detected. Think of a map as being a frame and a sequence of frames as being a film. The idea is to construct a consistent and plausible (coherent and highly probable) film in which an object of one type does not mysteriously change into an object of another type.

I. Introduction

In this paper, we describe TEMPRO, a system that employs temporal reasoning and probabilities in conflict resolution.1 TEMPRO focuses on the information management aspects of interpreting sensor data under uncertainty. TEMPRO uses conditional probabilities to order conflicting rules, and diachronic inconsistencies (impossible movements) to trigger the selection of alternative rules.

TEMPRO has been tested in a Monte Carlo simulation. Sensitivity analysis of the experimental results indicates that the system would be less reliable if checks for diachronic consistency were not in place, and both less reliable and more inefficient if the conditional probabilities were ignored in conflict resolution.

The development of TEMPRO was motivated by concerns similar to those motivating such works as [Ferrante, 1985] and [Durfee and Lesser, 1986]. TEMPRO's error correction facilities bear some similarity to the devices of [Ferrante, 1985], which integrates techniques for reasoning about uncertainty and constraint propagation. However, the constraints embedded within TEMPRO are of a temporal as well as of a spatial nature. In the terminology of [Durfee and Lesser, 1986], we consider only reasoning centralized at a single node.

1. This research was supported by grant 95-2931 from Sandia National Laboratories.

The logical and probabilistic features of TEMPRO can be formalized in a natural manner. Systems similar to TEMPRO can be applied whenever conditional probabilities can be ordered using, say, the methods of [Nilsson, 1986] for probabilistic logic (called probabilistic semantics in some of the literature [Leblanc, 1981]). A necessary condition for the application of systems similar to TEMPRO is that universal laws codifying the rules be expressible in a language having a probabilistic semantics. For example, first-order languages and most of the usual first-order intensional languages, such as the one implicit in this paper, have a probabilistic semantics, but, for second-order languages, no probabilistic semantics is known.

TEMPRO can be formalized in terms of the probabilities of alternative Hintikka model systems in a quantified temporal logic with identity and a past tense operator. In formal terms, we construct the most probable Hintikka model system [Leblanc, 1981] (the most probable corrected film) extending a given consistent evolving theory [Gumb, 1978] (a given noisy film).
II. Objectives

TEMPRO is designed to determine the types of objects situated within a two-dimensional world. The two-dimensional world consists of areas laid out in a grid, with zero or more objects contained within an area. Some of the objects are permanent (i.e., have a fixed location), whereas other objects are mobile. The known objects are permanent objects whose existence has been previously established (i.e., prior to the simulation). Permanent objects which are not known and all mobile objects are called unknown objects. An area containing one or more unknown objects is said to be occupied.

During one unit of time, a mobile object can move to an (immediately) adjacent area (in a horizontal, vertical, or diagonal direction) or it can remain stationary, depending upon its type. An inspector is assigned the task of filming the terrain.
The current goal of TEMPRO is to correct errors where, for any occupied area, at most one of its objects can be in error, and, for any object in error, at most one of its properties cannot be identified. Simulation Figure 1 depicts the four phases in the simulation testing TEMPRO’s error correction abilities. In phase 1, the user supplies a (possibly incomplete) map of the permanent ob- jects and other information which the system uses to gener- Figure 1: The Four Phases in the Simulation ate the history and a correct film of the terrain. Errors are introduced at random in phase 2, resulting in a noisy film, and, in phase 3, TEMPRO attempts to eliminate these er- rors, producing a corrected film and a more complete map of the permanent objects. In phase 4, TEMPRO’s cor- rected film is checked for correspondence with the correct film (phase 1) and evaluated with a grade and a perfor- mance index. The grade gives the accuracy of TEMPRO’s corrected film, and the performance index measures TEM- PRO’s efficiency. In the first phase, generating the history of the terrain, the user is asked to supply the following information: 1. the number of time periods (n), 2. the location of the known objects on the terrain, 3. the average number of unknown objects per area, 4. for each type, the absolute a priori probability that one of the unknown objects is of that type, and 5. the absolute probability ing an occupied area. of losing information regard- The information entered in step (2) provides the initial map of the permanent objects. The number of unknown objects is determined by the information given in steps (1) and (3), and the total number of objects is the sum of the known and unknown objects. Using the information given in step (4), the system chooses at random the type of each unknown object, and then each object is placed at random on the grid, which, together with the initial map of permanent objects, gives the initial (time 1) configuration of the objects on the terrain. Proceeding inductively, the next configuration (time 2,. . . , n) of objects on the terrain is obtained by choosing, for each object, one of its possible moves at random. In phase 2, generating noise in a frame, attention is restricted to the inspector’s field of vision. For each point in time t (1 < t < n), the restriction of the configuration of the objects on the terrain to these areas gives the correct frame at time t. A noisy frame is generated from a correct frame by selecting occupied areas (at random) for an error using the error rate specified in step (5) of phase 1, select- ing an unknown object for an error in each selected area, and losing one property of each selected object. Gumb 117 In phase 3, correcting errors using temporal and prob- abilistic reasoning, TEMPRO’s corrective action depends upon the time. First, without using any temporal rea- soning, TEMPRO determines all possible object configu- rations that are compatible with the information provided in the noisy frame. Second, TEMPRO orders the possi- ble configurations on a list using conditional probabilities computed from the information entered by the user in step (4) of phase 1. There is one such list of all possible con- figurations for each time from 1 to n. Third, TEMPRO removes the configuration first on the list, taking this (for the moment at least) to be the corrected frame. If this is time 1 and the list for time 1 is empty, TEMPRO termi- nates, reporting an error in its program logic. 
If this is time 1 and the list for time 1 is not empty, TEMPRO proceeds to time 2. If this is time t, t > 1, and the list for time t is empty, TEMPRO backtracks to time t-1: the configuration first on the list for time t-1 is removed and taken to be the (new) corrected frame. If this is time t, t > 1, and the list for time t is not empty, the first configuration on the list is removed and checked for compatibility with the corrected frame for time t-1. If it is not compatible, it is rejected, and the next configuration on the list is removed, taken to be the corrected frame, and checked. If it is compatible and t < n, TEMPRO proceeds to time t+1. If it is compatible and t = n, TEMPRO reports a map of the permanent objects on the terrain along the inspector's path, and the simulation proceeds to phase 4.

The final corrected film (i.e., the final sequence of corrected frames) and the more complete map of the permanent objects are printed. The more complete map gives the location of the known objects as well as the location of those unknown objects that are judged to be permanent. The final corrected film represents a consistent and plausible (highly probable if not completely correct) evolving theory [Gumb, 1978]. The user is given the option of tracing the corrected frames as they are selected.

In phase four, the correct and corrected frames are compared, and TEMPRO is assigned a grade and a performance index. The grade is the number of errors in the noisy film that were properly corrected in the corrected film divided by the total number of errors in the noisy film. The performance index is an ordered pair (b, r), where b is the number of backtracks and r is the number of rejected configurations. A rough ranking of TEMPRO's performance under various conditions can be had by arranging performance indices in lexicographic order. The number of backtracks b is the first item in the ordered pair constituting the performance index because backtracking debilitates efficiency as well as real-time veracity.

Types of Objects

To illustrate TEMPRO's error correction techniques, we consider the following simple universe: there are only four types of objects (t1, ..., t4) and four properties (p1, ..., p4), which characterize the four types. In Figure 2, note that each type is characterized by two properties.

    Type    Properties
    t1      p1, p3
    t2      p1, p4
    t3      p2, p3
    t4      p2, p4

    Figure 2: Types and their Properties

If information regarding a property of an object is lost in the sensor input, TEMPRO can narrow the object's possible type down to two types. For example, if an unknown object is really of type t1 and property p1 is lost, the inspector can determine that the object is either of type t1 or t3 because the inspector knows that property p3 holds. The information in Figure 2 determines four universal laws (Figure 3) that state that, if one of the four properties holds of an object o, then o is of exactly one of two types. From each universal law, a pair of conflicting TEMPRO rules is extracted as shown in Figure 3. The universal laws are said to codify TEMPRO's rules.

    Universal Law                           Rule Pair
    If p1(X), then t1(X) iff not t2(X).     <t1(X) if p1(X); t2(X) if p1(X)>
    If p2(X), then t3(X) iff not t4(X).     <t3(X) if p2(X); t4(X) if p2(X)>
    If p3(X), then t1(X) iff not t3(X).     <t1(X) if p3(X); t3(X) if p3(X)>
    If p4(X), then t2(X) iff not t4(X).     <t2(X) if p4(X); t4(X) if p4(X)>

    Figure 3: TEMPRO's Rules are Extracted from Universal Laws

Within each pair of rules, conditional probabilities resolve conflicts.
For example, regarding the first pair of rules, if object o is observed to have property p1 and the conditional probability of an object's being of type t1 given that it has property p1 is greater than the conditional probability of its being of type t2 given p1, then object o is taken to be of type t1.

Permanent objects are of type t1. Objects of types t2-t4 are mobile and, during one time period, can remain stationary or move to an adjacent area.

Suppose k errors occur in a noisy frame, and, if k ≥ 1, that the i-th object in error (1 ≤ i ≤ k) is observed to have exactly one property. Then each unknown object can be of one of two possible types. In general, there are 2^k possible configurations of the objects (i.e., possible corrected frames). Each possible configuration is compatible with the information provided in the noisy frame. In a possible configuration (prior to making the compatibility checks), if p_{i1}, ..., p_{ik} are the properties observed of the k unknown objects in error, t_{i1}, ..., t_{ik} are their assumed types, and P(t_{ij}, p_{ij}) is the conditional probability of t_{ij} given p_{ij}, then (assuming independence) we have P(t_{i1}, p_{i1}) × ... × P(t_{ik}, p_{ik}) as the probability of this configuration. The ordering of the possible configurations induced by these probabilities is used in conflict resolution as described earlier.

Each of two frames, when viewed in temporal isolation, might be (synchronically) consistent, but, when viewed in temporal succession, might not be (diachronically) consistent. The possible movements of objects serve to determine compatibility checks for adjacent frames in a film. For example, an unknown mobile object located in area (i, j) has 9 possible movements available to it if it is not on an edge of the grid. The nine areas to which it can move are (i-1, j-1), (i-1, j), (i-1, j+1), (i, j-1), (i, j), (i, j+1), (i+1, j-1), (i+1, j), and (i+1, j+1).

TEMPRO employs three compatibility checks concerning the possible movements of objects in the inspector's field of vision. If the simulation is just beginning and so the time is t = 1, then TEMPRO can make no compatibility checks because there is not (yet) a past frame to provide temporal context. If the time is t ≥ 2, there are three compatibility checks:

1. In each of the 6 areas covered in both the frame for time t-1 and the frame for time t, there must be the same number of permanent (type t1) objects. (The 6 common areas are (-1, t-1), (-1, t), (0, t-1), (0, t), (1, t-1), and (1, t).)

2. For each type from t2 to t4, in the frame for time t-1, the number of objects of that type located in the area (0, t) must be less than or equal to the sum in the frame for time t of the objects of that type in that and adjacent areas (i.e., the areas (-1, t-1), (-1, t), (-1, t+1), (0, t-1), (0, t), (0, t+1), (1, t-1), (1, t), and (1, t+1)).

3. For each type from t2 to t4, in the frame for time t, the number of objects of that type located in the area (0, t-1) must be less than or equal to the sum in the frame for time t-1 of the objects of that type in that and adjacent areas (i.e., the areas (-1, t-2), (-1, t-1), (-1, t), (0, t-2), (0, t-1), (0, t), (1, t-2), (1, t-1), and (1, t)).

If t > 1 and the corrected frame for time t is not compatible with the corrected frame for time t-1, TEMPRO rejects the corrected frame for time t.
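Taken together, the probability ordering and the compatibility checks drive the phase-3 search described earlier. The sketch below is a minimal illustration of that loop, not TEMPRO's Franz LISP implementation; frames are assumed to be dicts mapping areas to lists of type names, and `candidates(t)` is a hypothetical helper returning the probability-ordered configuration list for time t.

    from math import prod

    def configuration_probability(assumed, cond_prob):
        """Probability of one configuration: the product, over the k objects
        in error, of P(assumed type given observed property); independence
        is assumed, as in the paper."""
        return prod(cond_prob[(t, p)] for p, t in assumed)

    def check_permanent(prev, cur, t):
        """Compatibility check (1): the 6 areas visible in both frames must
        hold the same number of permanent (type t1) objects."""
        common = [(r, c) for r in (-1, 0, 1) for c in (t - 1, t)]
        count = lambda f, a: sum(1 for o in f.get(a, []) if o == "t1")
        return all(count(prev, a) == count(cur, a) for a in common)

    def correct_film(candidates, compatible, n):
        """Depth-first selection with chronological backtracking: commit the
        most probable compatible configuration at each time, back up when a
        time's candidate list is exhausted."""
        lists = {t: list(candidates(t)) for t in range(1, n + 1)}
        film = []
        t = 1
        while t <= n:
            if not lists[t]:
                if t == 1:
                    raise RuntimeError("error in program logic")  # as in the paper
                lists[t] = list(candidates(t))   # refill before backing up
                film.pop()
                t -= 1
            else:
                config = lists[t].pop(0)
                if t == 1 or compatible(film[-1], config, t):
                    film.append(config)
                    t += 1
                # an incompatible configuration is simply rejected
        return film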
Consider the noisy film in Figure 4, consisting of the frames for times 1, 2, and 3. In each of the three frames, the inspector (I) is in the middle of the areas in his field of view. A question mark (?) indicates those areas in which information about an object has been lost. The following objects are observed with certainty: a boulder (B) at time 2 in area (1,3), a car (C) at time 2 in area (0,1), and a truck (T) at time 3 in area (1,4). TEMPRO's compatibility checks enable the types of all three unidentified objects (?'s) to be determined:

1. In frame 1, a car (C) is in area (-1,0) because the car at time 2 in area (0,1) must have come from there (compatibility check (3)).

2. In frame 2, a truck (T) is in area (0,3) because it must have moved at time 3 to area (1,4) and, at time 3, a truck is the only object in area (1,4) (compatibility check (2)).

3. In frame 3, a boulder (B) is in area (1,3) because it was there at time 2 (compatibility check (1)).

TEMPRO constructs the corrected film as shown in Figure 5.

[Figure 4: A Noisy Film]
[Figure 5: A Corrected Film]

To facilitate sensitivity analysis, an option is provided enabling the user to run three variants of TEMPRO with the same terrain history and the same noisy film. First, TEMPRO can be run with the standard conflict resolution and compatibility checks as described above (STANDARD). Second, TEMPRO can be run with the standard compatibility checks but with conflict resolution done by inversely ordering each list of possible configurations (REVERSED-PROBABILITIES). Third, TEMPRO can be run with standard conflict resolution but with no compatibility checks (NO-COMPATIBILITY-CHECKS).

Under a variety of conditions, the grades and performance indices achieved by STANDARD have been compared with those for the other two variants of TEMPRO, yielding some insight into the value of using temporal reasoning and conditional probabilities in conflict resolution. In eleven sample runs, STANDARD achieved a grade of .73, whereas REVERSED-PROBABILITIES (NO-COMPATIBILITY-CHECKS) had a grade of .58 (.68, respectively). (A grade of .5 might be expected by chance.) On the average, STANDARD chronologically backtracked one time and rejected 58 configurations, while REVERSED-PROBABILITIES backtracked 10 times as much and rejected 5 times as many configurations. STANDARD's grade advantage over NO-COMPATIBILITY-CHECKS (REVERSED-PROBABILITIES) is more (less) pronounced when the types are, roughly, equally likely. The average grade, number of temporal backtracks, and number of rejected configurations in 93 runs of (STANDARD) TEMPRO (without also running REVERSED-PROBABILITIES and NO-COMPATIBILITY-CHECKS) were .81, .62, and 56. Analysis of these and other runs revealed that:

1. Compatibility check (1) ("Permanent objects never move"), as expected, caught more errors than the other two compatibility checks.

2. Compatibility checks (2) and (3) were more effective when there was a sparse distribution of unknown objects (<.2 expected per area).

3. Conditions (in combination) that overwhelm the compatibility checks (resulting in poor grades and performance indices) are large error rates (>.8 per area), dense unknown object distributions (>2 per area), and a large number of time periods (>10). For example, in one run with an error rate of .9 and an average density of 2 objects per area, TEMPRO received a grade of .76 (respectable) and a performance index of (16,4096) (poor).
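The grade and performance index described above reduce to a few lines of arithmetic. A minimal sketch, with hypothetical function names:

    def grade(noisy_errors, corrected_ok):
        """Fraction of the noisy film's errors that were properly corrected."""
        return corrected_ok / noisy_errors

    def better_performance(index_a, index_b):
        """Performance indices (b, r) compare lexicographically: backtracks
        dominate rejected configurations, since backtracking hurts both
        efficiency and real-time veracity."""
        return index_a < index_b   # Python compares tuples lexicographically

    # Example: STANDARD's average index (1, 58) outranks an index of (10, 290).
    assert better_performance((1, 58), (10, 290))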
Further analysis of TEMPRO's performance (and details of the Franz LISP implementation) can be found in [Gumb, 1986].

One of the most promising system enhancements involves making the resolution of the inspector's sensor variable, so that the inspector's field of view could be carved more finely into as many as, say, 25 small areas instead of the current 9 large areas. The second and third compatibility checks (suitably modified) should become much more effective, and, with the resolution fine enough, the expected number of unknown objects per area might be plausibly restricted to a maximum of one.

The algorithm could be made more efficient by projecting into the future the number of permanent objects in each previously observed area. Regarding extensions of TEMPRO (incorporating, for example, more types and more sophisticated compatibility checks), substantial changes in the underlying algorithm are required to guarantee that, in the general case, TEMPRO will produce the most probable corrected film.

Acknowledgements

Thanks go to Sarah Bottomley and Alex Trujillo for their work on implementing TEMPRO in Franz LISP, Arun Arya and Alex and Sarah for their assistance in preparing [Gumb, 1986], Ric Davis and Rick Craft for their advice on administrative as well as technical matters, Pierre Bieber for his suggestions on the present paper, and Gary Davidson, Christos Katsaounis, and Peter F. Patel-Schneider for lending a hand in the preparation of this paper.

References

[Durfee and Lesser, 1986] E. H. Durfee and V. R. Lesser. Incremental planning to control a blackboard based problem solver. In Proceedings of the Fifth National Conference on Artificial Intelligence, pages 58-64, 1986.

[Durfee and Lesser, 1987] E. H. Durfee and V. R. Lesser. Using Partial Global Plans to Coordinate Distributed Problem Solvers. Technical Report 87-06, Computer and Information Science Department, University of Massachusetts, Amherst, MA, January 1987.

[Ferrante, 1985] R. D. Ferrante. The characteristic error approach to conflict resolution. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence, pages 331-334, 1985.

[Gumb, 1978] R. D. Gumb. Summary of research on computational aspects of evolving theories. ACM SIGART Newsletter, (67):13, 1978.

[Gumb, 1986] R. D. Gumb. Filming a Terrain Under Uncertainty Using Temporal and Probabilistic Reasoning. Technical Report 172, Computer Science Department, NMIMT, Socorro, NM, August 1986.

[Leblanc, 1981] H. Leblanc. Alternatives to standard first-order semantics. In D. Gabbay and F. Guenthner, editors, Handbook of Philosophical Logic, pages 189-274, Reidel, Dordrecht, 1981.

[Nilsson, 1986] N. J. Nilsson. Probabilistic logic. Artificial Intelligence, 28:71-88, 1986.
A Parallel Resolution Procedure Based on Connection Graph

P. Daniel Cheng
Advanced Information Services Inc.
1512 Candletree Dr.
Peoria, IL 61614

J. Y. Juang
Dept. of Electrical Engineering & Computer Science
Northwestern University
Evanston, IL 60201

ABSTRACT

In this paper, we present a new approach towards a parallel resolution procedure which explores another dimension of parallelism in addition to the AND/OR formulation and special hardware constructs. The approach organizes the input clauses of a problem domain into a connection graph. The connection graph is then partitioned, and each partition is worked on by a different processor of a multiprocessor system. These processors execute the resolution procedure independently on their partitions, and exchange intermediate results via clause migrations. Preliminary test results and qualitative assessments of this procedure are also given.

1. Introduction

The resolution procedure has been the basis of automatic theorem-proving and logic inference since its first introduction in 1965 [1]. However, its execution on today's computers is too slow to be effective, primarily due to the long resolution cycle time and exponential nature. Although exponential explosion remains unavoidable, several parallel schemes have been proposed to improve the speed performance of the resolution process. Among them, the most current topic is the approach of AND/OR parallelism. However, because of the impedance of shared variables between AND branches and the small number of OR branches found in most existing programs [2-4], concurrency from the AND/OR parallelism approach is very limited in practice.

In this paper, we propose a new approach towards a parallel resolution procedure which in essence explores another dimension of parallelism in addition to the AND/OR formulation and special hardware constructs. The approach organizes the input clauses of a problem formulation into a connection graph [5]. The connection graph is then partitioned and loaded into multiple processors. These processors execute the resolution procedure independently on their partitions, and exchange intermediate results via clause migrations. The construction of the connection graph is described in Section 2.
A resolution procedure based on this graph is also briefed in this section. In Section 3, we present a parallel model for executing the procedure on multiprocessor systems. The parallel procedure is evaluated in Section 4, and conclusions are drawn in Section 5.

2. Resolution Based on Connection Graph

2.1 Graph Representation

A graph structure of an input clause set is constructed as follows: each literal of the clauses in the input clause set is represented as a node in this graph, and the nodes representing the literals of a clause are grouped together. Unification is then conducted to match every pair of literals which have the same predicate symbol and are complementary in sign. If the unification attempt between two literals succeeds, these two unifiable literals are marked by a link, and the resulting MGU (the most general unifier) is used to label this link. Given the clause set of Figure 1.a, the corresponding graph structure is shown in Figure 1.b.

[Figure 1: An example problem. (a) The input clause set; (b) graph representation of the input clause set.]

The graph representation offers several merits over the plain clause set representation. Among them, the most notable one is the clause matching process, in which clauses unifiable with a key clause are to be identified in each resolution step. Using the plain clause set representation, a set-wide search is needed every time a key clause is presented. Although some efficient data structures can be imposed (e.g., the FPA lists [6]) to restrict the search to relevant clauses only, unification still has to be performed on each candidate clause and is subject to failure. Furthermore, the complexity each time is proportional to the number of clauses at that state. Since the number of clauses grows rapidly during the resolution process, this turns out to be very inefficient. This problem, however, can be eliminated using the graph representation, in which unifiable clauses are dynamically maintained and the associated MGUs are immediately available. For each new clause generated by resolving upon one of the links, clauses possibly unifiable with the resolvent can be easily identified through the links of its parent clauses. No extensive matching is needed and, more importantly, most of the new MGUs of the resolvent's links can be simply obtained through composition of substitutions.

2.2 Resolution on Connection Graph

After the connection graph is constructed, the resolution procedure repeatedly selects a link, resolves upon this link, generates the associated resolvent, and finally inserts this resolvent into the connection graph. This process repeats until a null resolvent is generated or no more resolution is possible. The process is outlined in Figure 2.

    A connection graph is solved if it contains the empty clause.
    To solve a connection graph which does not contain the empty clause:
        if there is a clause containing an unlinked literal
            delete this clause together with its associated links
        otherwise
            select a link
            delete the link and generate the resolvent
            if the resolvent is a tautology
                delete the resolvent
            otherwise
                add the resolvent together with its new links to the graph
        solve the resulting connection graph

    Figure 2: The sequential resolution procedure.

Each resolvent generated inherits the unifiable links from its two parent clauses, and the new MGUs of these links are obtained by the composition of the old MGU and the MGU used in the resolution. Substitution compatibility is checked in the meantime, and incompatible links are not inherited. After the resolvent and its links are generated, the link previously used to conduct the resolution is removed from the two parent clauses. If the resolvent is not an empty clause, it is checked for deletion due to tautology or pure literals. Because tautologies do not positively contribute to the solution of problems, they can be deleted from a set of clauses without affecting the inconsistency. A literal in the resolvent becomes pure when it fails to inherit any link from the parent clauses. A clause containing a pure literal cannot contribute to a refutation because the unlinked literal can never be resolved upon [1]. Either parent clause can also become pure after the removal of the resolution link. These clauses are subsequently deleted from the connection graph.

Deletion of clauses containing pure literals is an important feature of the connection graph proof procedure. In addition to the clause itself, all links connected to its literals must also be deleted from the graph. Deletion of such links, however, may cause literals in other clauses to become unlinked. Thus deletion of clauses can create a chain reaction in which a succession of clauses is deleted from the graph. Deletion of clauses simplifies the connection graph, reduces the search space, and makes it easier to find a solution.
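A small sketch of the link-building step follows, for illustration only. Variables are written '?x'; `unify` is a deliberately minimal unifier over flat argument tuples, and renaming variables apart between clauses is omitted for brevity.

    def unify(args1, args2, subst=None):
        """Unify two flat argument tuples; variables are strings starting
        with '?'. Returns an extended substitution dict, or None on clash."""
        if len(args1) != len(args2):
            return None
        subst = dict(subst or {})
        def walk(t):
            while isinstance(t, str) and t.startswith('?') and t in subst:
                t = subst[t]
            return t
        for a, b in zip(args1, args2):
            a, b = walk(a), walk(b)
            if a == b:
                continue
            if isinstance(a, str) and a.startswith('?'):
                subst[a] = b
            elif isinstance(b, str) and b.startswith('?'):
                subst[b] = a
            else:
                return None
        return subst

    def build_links(clauses):
        """Link every complementary, unifiable pair of literals and label the
        link with the resulting MGU. Literals are (sign, predicate, args)
        triples; clauses are lists of literals."""
        links = []
        for i, ci in enumerate(clauses):
            for j in range(i + 1, len(clauses)):
                for a, lit1 in enumerate(ci):
                    for b, lit2 in enumerate(clauses[j]):
                        if lit1[0] != lit2[0] and lit1[1] == lit2[1]:
                            mgu = unify(lit1[2], lit2[2])
                            if mgu is not None:
                                links.append(((i, a), (j, b), mgu))
        return links

    # E.g., P(x,y) -L(x,y) links to the unit clause L(d,e) with MGU {x:d, y:e}.
    clauses = [[(+1, 'P', ('?x', '?y')), (-1, 'L', ('?x', '?y'))],
               [(+1, 'L', ('d', 'e'))]]
    print(build_links(clauses))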
3. Parallel Resolution on Connection Graph

We take the algorithmic approach of the connection-graph procedure described above in formulating a parallel resolution procedure. For the resulting procedure, we impose no special architecture requirements; therefore, any speed advantages obtained from hardware enhancements can also be incorporated.

3.1 The Parallel Approach

In conventional parallel resolution procedures, clauses are stored in shared memory, and all the processors access the same store to obtain a clause pair (see Figure 3.a). This approach incurs serious memory conflicts, and results in a very long resolution cycle. To reduce resolution cycle time, clauses can be partitioned into smaller subsets. Each is stored in the local memory of a processor, and resolved by the processor in parallel with others (see Figure 3.b). A clause set can be partitioned in such a way that each subset forms a conceptual cluster. Therefore, each processor can concentrate on a concept and keep busy all the time.

[Figure 3.a: A conventional parallel resolution procedure with clauses stored in shared memory. Figure 3.b: Proposed parallel resolution procedure with partitioned clause subsets.]

Nevertheless, a subset so obtained may not always contain sufficient clauses for a successful proof. It has to request necessary clauses from others from time to time as resolution proceeds. The migration of clauses adjusts the partition dynamically so that a proof can be found by one of the participating processors. Thus, clause migration is essentially a robust scheme that explores conceptual clusters automatically. Via clause migrations, a processor in the above procedure conducts resolution virtually on the whole clause set, though it is in fact working only on a small subset of clauses. Thus, this parallel resolution procedure can be seen as a form of virtual resolution mechanism. This allows processors to work on the same search path, which is impossible in the AND/OR tree search procedure. Furthermore, neither shared variables nor synchronization is necessary.

3.2 Initial Graph Partition

In response to the problem decomposition in parallel processing, the first task of this parallel procedure is the decomposition of the initial connection graph. The general rule of problem decomposition in parallel processing is to allow as much parallelism and as little interprocessor communication as possible. However, because of the non-deterministic nature of the resolution process, fully loading each processor with a sufficient workable subtask is not necessarily most productive. Although the communication overhead is our concern, least interprocessor communication alone can't be efficient either. In the context of theorem-proving or logic inference, the resolution process is usually guided by a heuristic in order to get a prompt proof.
Problem decomposition should also follow this discipline, such that each subtask can work effectively and cooperatively, not just fully utilize the processor resources.

In this version of the parallel model, we provide a preliminary scheme of problem decomposition through which the initial connection graph is decomposed into distinct partitions. First of all, we assign each link a preference measure whose value is determined based on the resolution strategy or heuristics in use. For example, if the unit preference strategy is used, the preference measure of a link can be directly set to the number of residual literals of that link. If the set-of-support (SOS) strategy is used, preference measures of links can be placed at three different levels depending on whether both of the linked clauses are supported, only one of them is supported, or neither is supported. These levels can differ by an order of magnitude, with a secondary strategy, e.g., unit preference, ordering the links within a level. Notice that these preference measures can be used in selecting links during the resolution process as well.

After the preference measures are established, an inclusion process is invoked to group clauses starting from some seed clauses. Unit clauses or clauses having support can be used as the seeds and, potentially, one partition will grow from each of the seeds. The inclusion process will run on each partition in turn and allocate one clause to that partition at a time. (Multiple allocation may be desirable in some cases.) During the inclusion process, clauses adjacent to that partition are identified first. The unallocated clause with the best preference measure is then included in that partition. (For the case of multiple inclusion, clauses having the same preference measure can all be allocated.) Contention of clauses, i.e., clauses having the best preference measure but already allocated to other partitions, is also marked and used later to determine the final partition pattern.

[Figure 4: Process of initial partition.]

The basic philosophy behind this inclusion process is to avoid the situation that a clause is allocated to a partition and has no links with any clause of that partition. A clause in this situation is called an isolated clause in this paper. Therefore, only clauses linked with that partition are considered at each step of the inclusion process. The inclusion process terminates when all the input clauses are allocated. Executing this process on the connection graph of the previous example is illustrated in Figure 4, where six unit clauses are used as the seeds.

Each partition thereafter formed can ideally be used as a subtask for the parallel resolution. However, we further analyze the overlapping of clauses between partitions to determine the optimal partition pattern. A basic criterion is that if two partitions have a moderate degree of overlapping and each is small in terms of the number of clauses, we merge these two partitions into one, as illustrated by the merge of two partitions in Figure 4, where finally four partitions are formed (see Figure 5). Partitions with extensive overlap are merged to reduce the communication overhead. Partitions with a small number of clauses are merged in order to maintain a feasible number of clauses in each partition and to avoid processors running out of clauses.
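The inclusion process lends itself to a simple greedy formulation. The sketch below is illustrative only: it assumes a `links` table of the form built in the earlier sketch plus a per-link `preference` score, and elides seed selection and the merge analysis.

    def grow_partitions(num_clauses, links, preference, seeds):
        """Greedy inclusion: each partition in turn absorbs the unallocated
        clause that is linked to it with the best preference score.
        links: list of ((ci, _), (cj, _), mgu); preference: link index -> score.
        """
        neighbors = {}                      # clause -> set of (other, link index)
        for idx, ((ci, _), (cj, _), _) in enumerate(links):
            neighbors.setdefault(ci, set()).add((cj, idx))
            neighbors.setdefault(cj, set()).add((ci, idx))
        partitions = [{s} for s in seeds]
        allocated = set(seeds)
        while len(allocated) < num_clauses:
            progress = False
            for part in partitions:        # round-robin over partitions
                candidates = [(preference[idx], other)
                              for c in part
                              for other, idx in neighbors.get(c, ())
                              if other not in allocated]
                if candidates:
                    _, best = max(candidates)
                    part.add(best)         # never admits an isolated clause
                    allocated.add(best)
                    progress = True
            if not progress:               # remaining clauses link to no partition
                break
        return partitions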
3.3 The Parallel Resolution Procedure

After the initial connection graph is decomposed, each partition is loaded into a different PE of a multiprocessor system for execution. Each of these PEs performs the conventional connection graph proof procedure on its local partition, together with, from time to time, interprocessor communications.

[Figure 5: Initial partition on example clause set.]

Typically, an interprocessor communication need arises when the linkage structures between partitions change. For example, when a resolvent is generated, its new external links are established through interprocessor communication. If a clause is deleted due to pure literals or subsumption, all of its external links are also broken through interprocessor communication. For these communication occasions, a protocol set is designed to handle this work [7]. This protocol is asynchronous, and runs as a child process of the resolution process in the current design. If a hardware module can be built around it, the communication delay can be significantly reduced.

During the resolution process, each PE repeatedly selects a link (based on the preference measures), generates the associated resolvent, and updates the graph structure. This process repeats until an empty clause is generated. The news of an empty clause is immediately broadcast to all other PEs to stop the whole process. This broadcasting is also done through the communication protocol. If there exists no empty clause for the problem, manual interruption is needed.

3.4 Clause Migration

In each cycle of the resolution process, a link is selected from those belonging (completely or partially) to the local partition. If the link selected is an external link, this indicates that the local resolution has run to the point where a remote clause can contribute to the local resolution. In response to this, we provide the clause migration mechanism, through which clauses are transferred between partitions. Through this mechanism, we survive the problem of completeness resulting from the decomposition of the input clauses. Furthermore, if the remote clause is an intermediate result of another partition, we get the intended speedup by having someone else do that derivation.

The generation of isolated clauses is another occasion for clause migration, where all the internal links of a clause are resolved away by local resolution. Since isolated clauses are no longer useful in the local partition, it is desirable to transfer them to other partitions. In determining the destination of an isolated clause, we can migrate the isolated clause to the partition which has the largest number of links with it, or to the partition which has the maximal preference value on the link. The former potentially minimizes the communication overhead afterward, while the latter could be more effective for the whole resolution procedure.

We summarize our introduction of this parallel procedure in the algorithm of Figure 6, where highlighted steps are intended for comparison with the sequential procedure of Figure 2. The rate of clause migrations is considered an overhead in this parallel procedure, and it can be minimized through a proper decomposition in the initial partition stage. The procedure we devise for initial partition is found satisfactory. Also worth mentioning here is the lock procedure embedded in the clause migration mechanism. That is, before a clause can be migrated, all clauses linked with it are locked from being resolved. This avoids losing track of the link structures while clauses are migrating. It also prevents the situation that two clauses are migrating to each other. Although better schemes can be devised to get around this restriction, this is the method used in the current design.
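A minimal sketch of the migration step with the lock discipline just described. The shared-memory threading model and the names (`Partition`, `migrate`) are hypothetical; a real PE would exchange protocol messages rather than share structures.

    import threading

    class Partition:
        def __init__(self, name):
            self.name = name
            self.clauses = set()
            self.locks = {}                 # clause id -> threading.Lock

        def lock_neighborhood(self, clause_id, linked_ids):
            """Lock the migrating clause and everything linked to it so the
            link structure cannot change mid-transfer."""
            ids = sorted({clause_id, *linked_ids})   # fixed order avoids deadlock
            acquired = [self.locks.setdefault(i, threading.Lock()) for i in ids]
            for lock in acquired:
                lock.acquire()
            return acquired

    def migrate(src, dst, clause_id, linked_ids):
        held = src.lock_neighborhood(clause_id, linked_ids)
        try:
            src.clauses.discard(clause_id)
            dst.clauses.add(clause_id)      # external links would be re-labeled here
        finally:
            for lock in reversed(held):
                lock.release()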
[Figure 6: The parallel resolution procedure.]

4. Performance Evaluation

A preliminary performance assessment of this parallel procedure was conducted based on a series of program verification problems suggested by McCharen et al. [8]. The execution of the parallel procedure is emulated by a simplified prototype which creates one logic process to simulate each physical processor. In this prototype, clause migration takes place only when a clause is isolated in a partition. The solution time, in terms of resolution steps, is used as the primary measurement. Each problem is solved several times while slightly varying the number of partitions in order to observe the performance fluctuation under different partition numbers. The solutions with a single partition are used to resemble what would have been obtained from the sequential procedure. The test results are summarized in Table 1. One limiting factor is that, with migration restricted to isolated clauses in the current implementation, a clause needs to exhaust all of its internal links to become isolated and available to other partitions. This may generate a vast number of clauses in the local PE and delay its timely effect on other PEs.

5. Concluding Remarks

We have described a new approach to the parallelism of the resolution procedure for theorem-proving and logic inference. The approach explores another dimension of parallelism in addition to the pipelined architecture constructs and the AND/OR parallelism. The control at the individual clause level also provides us flexibility in incorporating existing resolution strategies developed from the theoretical study of theorem-proving. The formulation of this parallel model does not impose any special hardware requirement, and can thus be easily realized on any multi-computer system or local computer network. Observing that this parallelism is limited only by the number of PEs and the communication support, an ultimate speedup can be achieved when the resolution process is guided by an elaborate strategy.

Secondly, we want to address the innovative idea of clause migration. This clause migration capability supports the effectiveness of resolution strategies, and provides an illusion of virtual resolution even though the clauses are distributed over different sites. That is, as soon as a clause becomes material to the resolution process of one partition, this clause can be made available to that partition immediately without any concern for its residency.

Finally, from this investigation we also identified another advantage of the connection graph representation which we benefit from in the formulation of this parallel procedure. The link structure of the connection graph facilitates our work of clause partition by providing us information about clause interrelation. With these clause interrelations established beforehand, every effort can be made to group relevant clauses into the same partition. Each link itself is also an indicator of how heavily a clause is related to another clause. Thus, communication overhead can be reduced by simply minimizing the number of links between partitions.
With only these links maintained in each partition, conversations between partitions are easily conducted along these links, without either side knowing the other's whole clause set.

Also implied in our presentation are several enhancements to the parallel procedure, like programming each PE to use a different resolution strategy, relaxation of the lock restriction, dynamic partition splitting on heavily-loaded PEs, and finally the complete realization of this parallel procedure on a real multiprocessor system. Those are the major topics of our further research on this parallel approach.

[Table 1 (column headings: Initial, Resolution Steps Taken, Best, Normalized): Results of testing the proposed model on a set of program-verification problems in non-Horn clauses. Note: the speedup of the proposed model over AURA is adjusted by a factor (a > 1) to account for the following two facts: (1) hyper-resolution is used in AURA, which may resolve more than one pair of literals in each step; (2) the resolution cycle of the proposed method is shorter than that of AURA, since no string matching is necessary.]

REFERENCES

[1] J. A. Robinson, "A Machine Oriented Logic Based on the Resolution Principle," JACM, vol. 12, pp. 23-41, Jan. 1965.

[2] S. J. Stolfo and D. Miranker, "DADO: A Parallel Processor for Expert Systems," in Proc. 1984 Int'l Conf. Parallel Processing, pp. 74-82, Aug. 1984.

[3] J. S. Conery and D. F. Kibler, "AND Parallelism and Nondeterminism in Logic Programs," New Generation Computing, vol. 3, pp. 43-70, 1985.

[4] K. Murakami, T. Kakuta, and R. Onai, "Architecture and Hardware System: Parallel Inference Machine," in Proc. Int'l Conf. Fifth-Generation Computer Systems, pp. 18-36, Tokyo, 1984.

[5] R. Kowalski, Logic for Problem Solving, North-Holland, New York, 1979.

[6] E. Lusk and R. Overbeek, "Data structures and control architecture for the implementation of theorem-proving programs," in Proc. 5th Conf. Automated Deduction, pp. 232-249, 1980.

[7] P. Daniel Cheng, A Parallel Theorem Prover Based on Connection Graph, Master's Thesis, Northwestern University, Evanston, Illinois, Dec. 1986.

[8] J. D. McCharen, R. A. Overbeek, and L. A. Wos, "Problems and Experiments for and with Automated Theorem-Proving Programs," IEEE Trans. Comput., vol. C-25, pp. 773-781, Aug. 1976.
Computational Costs versus Benefits of Control Reasoning¹

Alan Garvey, Craig Cornelius, and Barbara Hayes-Roth
Knowledge Systems Laboratory
Stanford University

¹This research was supported by the following grants: NIH Grant RR-00785; NIH Grant RR-00711; Boeing Grant W266875; NASA/Ames Grant NCC 2-274; DARPA Contract N00039-83-C-0136; ONR Contract N00014-86-K-0652. We thank Michael Hewett, M. Vaughan Johnson Jr., Robert Schulman and Jeff Harvey for their work on BB1. We thank Russ Altman, Jim Brinkley, Bruce Duncan, Olivier Lichtarge, John Brugge, and Oleg Jardetzky for their work on PROTEAN. Special thanks to Bruce Buchanan and Ed Feigenbaum for sponsoring the work within the Knowledge Systems Laboratory.

... arrangements of objects to satisfy constraints, which is layered upon the BB1 blackboard control architecture [Hayes-Roth, 1985].

A. The BB1 Blackboard Control Architecture

The BB1 blackboard control architecture provides a uniform mechanism for reasoning about problems and problem-solving actions. Functionally independent knowledge sources (KSs) cooperate to solve problems by recording and modifying solution elements in a global data structure, the blackboard. Domain KSs solve domain problems on the domain blackboard. Control KSs construct control plans for the system's own actions on the control blackboard. All KSs, when triggered, generate knowledge source activation records (KSARs) that compete for scheduling priority.

The BB1 execution cycle has three steps: (a) the interpreter executes the action of the scheduled KSAR, thereby changing the blackboard; (b) the agenda-manager adds KSARs to the agenda for all KSs triggered by the blackboard changes and rates each one against the current control plan; (c) the scheduler chooses the highest-rated KSAR to execute its action next. If it schedules a control KSAR, that KSAR may change the criteria used to rate pending KSARs on subsequent cycles.

Given this architecture, an application system can exploit the full power of the blackboard architecture to construct and follow control plans for its own actions in real time. For example, it can incrementally refine a general strategy as a sequence of specific objectives. It can pursue multiple plans simultaneously. It can integrate opportunistic, goal-driven, and data-driven objectives in its plans [Johnson and Hayes-Roth, 1986]. It can modify, interrupt, depart from, resume, or abandon plans.
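The three-step cycle can be pictured as a simple scheduler loop. The following is an illustrative sketch only, not BB1's actual implementation; the `rate` function and the KSAR attributes standing in for plan-based rating are hypothetical.

    def bb1_loop(agenda, control_plan, rate, trigger_ksars, blackboard):
        """Repeat the (a) execute, (b) trigger-and-rate, (c) schedule cycle.
        rate(ksar, control_plan) -> numeric priority; a hypothetical stand-in
        for rating a KSAR against the current control plan's foci."""
        while agenda:
            # (c) schedule the highest-rated KSAR ...
            ksar = max(agenda, key=lambda k: rate(k, control_plan))
            agenda.remove(ksar)
            # (a) ... execute its action, changing the blackboard ...
            events = ksar.action(blackboard)
            if ksar.is_control:
                control_plan = ksar.new_plan   # may change rating criteria
            # (b) ... and add KSARs for every KS the changes triggered.
            agenda.extend(trigger_ksars(events, blackboard))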
B. The ACCORD Framework

ACCORD provides a domain-independent framework for performing arrangement-assembly tasks. Within ACCORD, a problem-solver defines several partial arrangements, each comprising some of the objects and constraints specified in a problem. It declares one object the anchor and positions other objects (anchorees) relative to it. It reduces the family of legal positions for each anchoree by anchoring it with constraints to the anchor and yoking it with constraints to other anchorees. Eventually, the problem solver integrates multiple partial arrangements with constraints among their constituent objects.

To support arrangement assembly, ACCORD provides: (a) a skeletal concept network in which to define domain-specific objects and constraints; (b) a vocabulary of partial arrangements (e.g., anchor, anchoree); (c) a type hierarchy of assembly actions, events, and states (e.g., do-anchor is-a do-position action); and (d) linguistic templates for instantiating actions, events, and states (e.g., Do-anchor anchoree to anchor in partial-arrangement with constraints).

ACCORD enables an application system to reason about its problem-solving actions and control plans in an interpretable, English-like representation. For example, PROTEAN represents one of its actions as: Do-Anchor Helix-1-1 to Helix-2-1 in PA1 with NOE6. It represents one of its control decisions as: Do-Position Long Helix in PA1 with Strong Constraint. BB1 determines that the action matches the control decision because: Do-Anchor is-a Do-Position action; Helix-1-1 is Long; Helix-1-1 is-a Helix; PA1 is PA1; NOE6 is Strong; NOE6 is-a Constraint. Finally, BB1 translates the action into its executable language of blackboard modifications.

C. The PROTEAN System

1. Knowledge

PROTEAN attempts to identify the three-dimensional conformations of proteins based on a variety of constraints, using four kinds of knowledge. It instantiates ACCORD's concept network with biochemistry objects (e.g., Helix is-a Object) and constraints (e.g., NOE is-a Constraint). It specifies domain KSs that generate feasible problem-solving actions. For example, one KS specifies:

    Trigger:  Did-Position Anchoree (The-Object) in PA (The-PA)
    Context:  For The-Partner in:
                  Includes The-PA Anchoree (The-Partner)
                  Constrains The-Object The-Partner
                      with Constraints (The-Constraints)
    Action:   Do-Yoke The-Object with The-Partner in The-PA
                  with The-Constraints

It specifies a geometry system (GS) [Brinkley et al., 1986] (discussed below) that performs the numerical operations underlying certain actions. It specifies control KSs that generate the control plan in Figure 1 (discussed below).

2. Geometry System

PROTEAN's GS performs two operations. To support anchoring actions, the GS searches space, generating all possible locations for an anchoree (at some resolution) that satisfy the anchoring constraints. Since six parameters specify the position and orientation of an object in space, both the computation time to search space and the number of locations returned increase roughly as the sixth power of the sampling resolution. PROTEAN determines resolution by instructing the GS to: (a) begin searching at low resolution; and (b) repeat the search at progressively higher resolutions until it returns a threshold number of locations. To support yoking and other positioning actions, the GS prunes locations, testing each location against the specified constraints and returning all locations that satisfy them. Since yoking compares each pair of locations for two previously anchored objects, yoking time increases as the twelfth power of the resolution.
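The GS's search has a simple shape: cost grows as the sixth power of resolution for anchoring and the twelfth power for yoking, so resolution is raised only until enough locations survive. A schematic sketch, in which all arguments are hypothetical helpers and values:

    def anchor_search(sample_locations, satisfies, resolutions, threshold):
        """Progressively refine: a resolution r yields on the order of r**6
        candidate locations (three position + three orientation parameters);
        stop as soon as at least `threshold` locations satisfy the anchoring
        constraints."""
        locations = []
        for r in resolutions:                      # e.g., [4, 8, 16]
            locations = [loc for loc in sample_locations(r) if satisfies(loc)]
            if len(locations) >= threshold:
                break
        return locations

    def yoke(compatible, locs_a, locs_b):
        """Yoking tests each pair of locations for two anchored objects, so
        its cost grows as r**6 * r**6 = r**12 in the resolution r; keep only
        locations that survive against some location of the partner."""
        keep_a = [a for a in locs_a if any(compatible(a, b) for b in locs_b)]
        keep_b = [b for b in locs_b if any(compatible(a, b) for a in locs_a)]
        return keep_a, keep_b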
3. Control Plan

PROTEAN's strategy (see Figure 1) comprises a sequence of three sub-strategies, each comprising a sequence or set of foci. (The next section explains the bracketed characters in Figure 1.) During SS1, PROTEAN creates a partial-arrangement, includes objects in it, and orients it around a particular anchor. During SS2, PROTEAN positions structured objects (alpha-helices and beta-strands) by anchoring and yoking them. During SS3, PROTEAN positions all objects (including non-structured coils) by anchoring and yoking them. Whenever PROTEAN generates an intractably large number of locations for an anchoree, it introduces an opportunistic focus (e.g., F8) to restrict (statistically sample) the locations.

    Strategy A: Assemble One Partial-Arrangement
    Sub-Strategy1 (SS1): Define one Partial-Arrangement
    Sub-Strategy2 (SS2): Position Structured-Secondary-Structure
    Sub-Strategy3 (SS3): Position Secondary-Structure
    Focus1 (F1): Create Anyname
    Focus2 (F2): Include Secondary-Structure in PA1
    Focus3 (F3): Orient PA1 about long constraining constrained
        Structured-Secondary-Structure
    Focus4 (F4): Anchor {O1} Structured-Secondary-Structure to Helix1-1
        in PA1 with {C} Constraint-Set
    Focus5 (F5): Yoke several {O1} Structured-Secondary-Structure in PA1
        with {C} Constraint-Set
    Focus6 (F6): Anchor {O2} Random-Coil to Helix1-1 in PA1
        with {C} Constraint-Set
    Focus7 (F7): Yoke several {O3} Secondary-Structure in PA1
        with {C} Constraint-Set
    Focus8 (F8): Restrict Secondary-Structure in PA1 with Constraint-Set

    Figure 1: Basic PROTEAN Control Plan

PROTEAN generated the control plan in Figure 1 for the lac-repressor headpiece protein at low resolution. For other proteins or resolutions, sub-strategies and foci appear and terminate on different cycles.

III. Experimental Manipulations

A. Control Knowledge

We studied four variations on PROTEAN's basic control plan. Strategy A generated the plan in Figure 1. Strategy B introduced the constraint modifier strong at points indicated by {C}. Strategy C introduced object modifiers as follows: long, inflexible, constrained, and constraining at {O1}; constraining at {O2}; and long, inflexible, constraining, and recently-reduced at {O3}. Strategy D introduced all modifiers. For example, here are the four versions of Focus 7:

    Strategy A: Yoke several Secondary-Structure in PA1 with Constraint-Set.
    Strategy B: Yoke several Secondary-Structure in PA1 with Strong
        Constraint-Set.
    Strategy C: Yoke several Long Inflexible Constraining Recently-Reduced
        Secondary-Structure in PA1 with Constraint-Set.
    Strategy D: Yoke several Long Inflexible Constraining Recently-Reduced
        Secondary-Structure in PA1 with Strong Constraint-Set.

Modifiers increase the precision with which the strategy discriminates among competing actions. For example, sentence A gives equal ratings to all actions that yoke secondary-structures in partial-arrangement PA1 with any constraints, while sentence B gives higher ratings to actions that use strong constraints.
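As a sketch of how such modifiers sharpen discrimination, the fragment below rates candidate actions against one focus by counting satisfied modifier predicates. The names and the simple additive weighting are hypothetical, not BB1's actual rating scheme.

    def rate(action, focus):
        """Zero unless the action's verb and partial arrangement match the
        focus, plus one point per modifier the action's object and
        constraints satisfy."""
        if not (action["verb"] == focus["verb"] and action["pa"] == focus["pa"]):
            return 0
        score = 1
        score += sum(1 for m in focus["object_modifiers"]
                     if m in action["object_traits"])
        score += sum(1 for m in focus["constraint_modifiers"]
                     if m in action["constraint_traits"])
        return score

    # Strategy B's Focus 7 prefers yoking actions that use strong constraints:
    focus_b = {"verb": "Do-Yoke", "pa": "PA1",
               "object_modifiers": [], "constraint_modifiers": ["strong"]}
    weak   = {"verb": "Do-Yoke", "pa": "PA1",
              "object_traits": set(), "constraint_traits": set()}
    strong = {"verb": "Do-Yoke", "pa": "PA1",
              "object_traits": set(), "constraint_traits": {"strong"}}
    assert rate(strong, focus_b) > rate(weak, focus_b)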
Domain experts recommend these particular modifiers to favor positioning actions that rapidly reduce the number of locations for each object. While these modifiers don't affect PROTEAN's ultimate solution, they should reduce the number of positioning actions it performs and the cost of later actions. On the other hand, they should increase the cost of each rating action. The question is: does the benefit of performing fewer, more effective positioning actions outweigh the cost of identifying those actions?

B. Proteins

We compared PROTEAN's four strategies on each of two proteins: the lac-repressor headpiece and myoglobin. They differ on: size (51 amino acids vs. 153), number of structured objects (3 vs. 5), number of NOE constraints (17 vs. 21), sequence of amino acids, and pattern of constraints.

IV. Analysis

    Total Symbolic Reasoning Time = F(Total Number of Cycles &
        Number of KSARs per Cycle & Identities of KSARs per Cycle)
    Total Rating Time = F(Total Number of Cycles &
        Number of KSARs per Cycle & Rating Time per KSAR)
    Total GS Time = F(GS Operations Performed & Threshold)

We assess the effects of each knowledge strategy by comparing it to Strategy A as follows:

1. Does the control knowledge affect the identities or number of actions PROTEAN schedules?

2. Do differences in scheduling decisions affect the efficiency (total GS time or total symbolic reasoning time) of PROTEAN's problem solving?

3. Does the control knowledge affect the cost (rating time per KSAR) of PROTEAN's scheduling decisions?

4. Do the combined effects produce a net computational efficiency (total time) in PROTEAN's performance?

V. Results

A. Results for the Lac-Repressor Headpiece

Table 1 shows PROTEAN's performance on the lac-repressor headpiece. Notice that the cost of control knowledge is negligible (.05-.64 second increase in rating time per KSAR) compared to total time (1587-5171 seconds). It does not significantly affect PROTEAN's overall efficiency.

The top panel of Table 1 shows the complete results. As indicated by total time, Strategy B produced a net efficiency compared to Strategy A. Strategy B reduced PROTEAN's total number of actions by nine, thereby reducing both GS and symbolic reasoning times. These effects outweighed the small increase in rating time. By contrast, Strategy C produced a net inefficiency. It reduced the number of actions by one, slightly reducing GS time. But increased symbolic reasoning and rating times outweighed this savings. Strategy D, which included all modifiers, combined Strategy B's reductions in GS and symbolic reasoning times with Strategy C's increase in rating times, producing an intermediate net efficiency.

The middle panel of Table 1 shows the results for sub-strategy SS2. Here PROTEAN anchored and yoked two helices relative to a third, performing exactly the same actions (in slightly different orders) under all four strategies. As a consequence, the knowledge strategies did not affect GS or symbolic reasoning times, but did slightly increase rating times. The strategies did not significantly affect total times.

The bottom panel of Table 1 shows the results for sub-strategy SS3. Here PROTEAN anchored the four coils and yoked all of the anchorees to one another. These results parallel and actually determine the complete results in the top panel of Table 1. Strategy B reduced PROTEAN's total number of actions by nine, thereby reducing GS and symbolic reasoning times.
While slightly increasing rating times, Strategy B produced a net efficiency. Strategy C reduced the number of actions by one, slightly decreasing GS time, but increasing symbolic reasoning time. Increasing rating times as well, Strategy C produced a net inefficiency. Strategy D combined these effects to produce an intermediate net efficiency.

    Table 1: Computational Costs of Four Strategies for Assembling
    the Lac-Repressor Headpiece

                                    A.        B.         C.        D.
                                    No        Constraint Object    All
                                    Modifiers Modifiers  Modifiers Modifiers
    Costs During All Sub-strategies
    Total Time                      4881      4265       5171      4281
    Number of Cycles                87        78         86        78
    GS Time                         3322      2858       3294      2864
    Symbolic Reasoning Time*        1381      1240       1561      1191
    Average KSAR Rating Time        0.35      0.41       0.87      0.93
    Costs During Sub-strategy 2
    Total Time                      2266      2271       2270      2268
    Number of Cycles                7         7          7         7
    GS Time                         2186      2186       2186      2186
    Symbolic Reasoning Time*        71        73         70        68
    Average KSAR Rating Time        0.35      0.40       0.85      0.91
    Costs During Sub-strategy 3
    Total Time                      2215      1587       2518      1635
    Number of Cycles                37        28         36        28
    GS Time                         1136      666        1108      678
    Symbolic Reasoning Time*        928       768        1123      761
    Average KSAR Rating Time        0.34      0.41       0.89      0.95

    All times are in seconds. *This is all symbolic computing time,
    except rating time.

    Table 2: Computational Costs of Four Strategies for Assembling Myoglobin

                                    A.        B.         C.        D.
                                    No        Constraint Object    All
                                    Modifiers Modifiers  Modifiers Modifiers
    Costs During All Sub-strategies
    Total Time                      13930     11985      12460     11816
    Number of Cycles                116       104        111       103
    GS Time                         7278      6898       6275      6304
    Symbolic Reasoning Time*        6018      4506       5500      4795
    Average KSAR Rating Time        0.35      0.42       1.45      1.51
    Costs During Sub-strategy 2
    Total Time                      5062      5327       5410      5100
    Number of Cycles                22        21         23        20
    GS Time                         4383      4699       4595      4453
    Symbolic Reasoning Time*        602       559        687       524
    Average KSAR Rating Time        0.35      0.41       1.70      1.74
    Costs During Sub-strategy 3
    Total Time                      7536      5413       5869      5467
    Number of Cycles                43        32         37        32
    GS Time                         2895      2199       1680      1851
    Symbolic Reasoning Time*        4121      2735       3665      3056
    Average KSAR Rating Time        0.35      0.42       1.20      1.28

    All times are in seconds. *This is all symbolic computing time,
    except rating time.

B. Results for Myoglobin

Table 2 shows PROTEAN's performance on myoglobin. Again, the cost of control knowledge is negligible (.07-1.39 seconds increase in rating time per KSAR) compared to total time (5062-13930 seconds). It does not significantly affect PROTEAN's overall efficiency.

The top panel of Table 2 shows the complete results. As indicated by total time, all three knowledge strategies reduced the number of actions PROTEAN performed, reducing both GS and symbolic reasoning times. These effects outweighed increases in rating times, producing a net advantage in efficiency. As for the lac-repressor headpiece, Strategy B's constraint modifiers were more effective than Strategy C's object modifiers. Here, however, Strategy D's combined modifiers produced the greatest efficiency.

The middle panel of Table 2 shows the results for sub-strategy SS2. Here PROTEAN positioned five structured objects. All three knowledge strategies produced about the same number of actions, but increased GS time. PROTEAN's scheduling records show that PROTEAN performed several specific yoking actions earlier under the knowledge strategies than it did under Strategy A.
At the earlier times, these particular actions required more expensive GS operations than they required later in problem solving, but did not reduce the total number of actions required to solve the problem. These costs, combined with increased rating times, produced net inefficiencies in total time for all three knowledge strategies. We are conducting experiments that combine control knowledge of the cost of positioning actions with current control knowledge of their effectiveness.

The bottom panel of Table 2 shows the results for sub-strategy SS3. Here PROTEAN positioned the five structured objects and four coils. All three knowledge strategies reduced the number of actions PROTEAN performed, substantially reducing GS and symbolic reasoning times. These savings outweighed the increased cost of rating, producing a net efficiency in total time.

C. Effects of Resolution

Table 3 shows PROTEAN's performance on the lac-repressor headpiece during sub-strategy SS3 under all four strategies at each of three resolutions. Because higher resolutions entail more expensive GS computations, the knowledge strategies produce larger net efficiencies. Thus, maximum savings in total time range from 628 seconds at low resolution to 1771 seconds at medium resolution to 4110 seconds at high resolution.

Table 3 also shows an interaction between strategy and resolution. Strategies B and D produced essentially the same effects at all resolutions. They reduced the number of actions PROTEAN performed, thereby reducing GS and symbolic reasoning times. In all cases, these savings outweighed the increased rating times, producing net efficiencies in total time. By contrast, Strategy C produced different effects at different resolutions. At low resolution, Strategy C reduced the number of actions by only one, slightly reduced GS time, increased symbolic reasoning time, and increased rating time. It produced a net inefficiency in total time.
As discussed above, the three knowledge strategies produce substantial efficiencies in PROTEAN’s per- formance largely by producing savings on GS computations. However, the knowledge strategies produce net efficiencies in performance independent of the cost of GS computations. Ta- ble 4 shows net symbolic reasoning time (total time - GS time) for all cases in which the knowledge strategies produced a net overall efficiency. In all cases but one, the knowledge strate- gies produced more efficient symbolic reasoning per se, simply because they allowed PROTEAN to solve problems in fewer problem-solving cycles. A. B. C. D. No Constraint Object All Modifiers Modifiers Modifiers Modifiers Lac-Repressor at Low Resolution Net, Symbolic Reasoning Time 1559 1407 - 1417 Lac-Repressor at Medium Resolution Net Symbolic Reasoning Time 1539 1540 1335 1402 Lac-Repressor at High Resolution Net Symbolic Reasoning Time 1622 1416 1470 1447 Myoglobin at Low Resolution Net Symbolic Reasoning Time 6652 5087 6185 5512 All times are in seconds. Table 4: Net Symbolic Reasoning Time for Cases where Knowledge Strategies Produced Net Efficiency E. Effects of Control on Identified Pro- tein Structures The four strategies examined in these experiments had no ef- fect on the protein structures PROTEAN identified. At a given level of resolution, it identified exactly the same structure for the lac-repressor under all four strategies. Similarly, it iden- tified exactly the same structure for myoglobin under all four strategies. VI. Implications Our results confirm that intelligent control reasoning can induce computational efficiency in AI systems. In particular, we found that: Control knowledge, including object and constraint mod- ifiers, reduces total problem-solving time by reducing the number of actions performed, GS time, and symbolic rea- soning time. The cost of using control knowledge-rating actions against modifiers-is negligible compared to total problem-solving time. Control knowledge is most effective when the scheduler chooses among many possible actions and its choices alter the number or cost of subsequent actions. Constraint modifiers usually reduce GS time more effec- tively than object modifiers. Sometimes the most effective actions entail disproportion- ately expensive GS operations. Control knowledge produces larger savings at higher GS resolutions. Modifiers that measure intermediate solution states may operate more effectively at higher GS resolutions. We plan to incorporate these results into PROTEAN’s knowledge base, so that it can decide which modifiers to use in particular problem-solving situations. We conjecture that PROTEAN’s control knowledge would have similar effects in other “arrangement-assembly systems.” For example, the SIGHTPLAN system[Tommelein et QZ., 19871 designs construction site layouts by assembling arrangements of 114 Automated Reasoning construction areas and equipment in a two-dimensional spatial context. Since SIGHTPLAN also is implemented in BB*, we can easily determine whether object and constraint modifiers analogous to those defined for PROTEAN have similar effects on its efficiency. Speculating more broadly, we conjecture that control knowledge of the sort used in these experiments (modifiers in- serted into focus decisions) would improve the efficiency of any application in which: (a) actions require expensive computa- tions; and (b) h c oice of actions affects the number or cost of subsequent actions. 
The experiments illustrate a method for analyzing the utility of control knowledge. Here we introduced modifiers to the strategic parameters of a basic control plan. In new experiments, we manipulate the structure of the control plan itself. The BB* environment facilitates these experiments. BB1 provides tools for building control plans of any hierarchical/heterarchical complexity. ACCORD provides a language for representing and reasoning about plans. Both BB1 and ACCORD permit modular variations on the form and content of control plans.

In our investigation of control reasoning, we also need to assess costs and benefits at the architectural level. Could another architecture exploit the necessary control knowledge more efficiently than BB1? How do alternative architectures compare on: ease of system development, clarity of knowledge representation, support for explanation capabilities, and support for learning capabilities? We are conducting experiments that address these questions.

Finally, our experience in conducting these experiments argues strongly for experimental investigation of theoretical assertions. Although we thoroughly understand the operation of BB1, ACCORD, PROTEAN, and GS, we could not reliably predict important details of their performance. For example: (a) we mistakenly expected increases in rating time to substantially limit the net advantages of control knowledge; and (b) we still do not fully understand why Strategy C produced net inefficiency at low resolution, but net efficiency at medium and high resolution. Given the inherent complexity of contemporary AI systems and the weakness and potential bias of human efforts to anticipate their behavior, experimental methods must play a key role in our research.

References

[Brinkley et al., 1986] J. Brinkley, C. Cornelius, R. Altman, B. Hayes-Roth, O. Lichtarge, B. Buchanan, and O. Jardetzky. Application of Constraint Satisfaction Techniques to the Determination of Protein Tertiary Structure. Technical Report, Stanford University, 1986.
[Durfee and Lesser, 1986] E. H. Durfee and V. R. Lesser. Incremental planning to control a blackboard-based problem solver. Proceedings of the Fifth National Conference on Artificial Intelligence, pp. 58-64, 1986.
[Erman et al., 1981] L. D. Erman, P. E. London, and S. F. Fickas. The design and an example use of Hearsay-III. Proceedings of the Seventh International Joint Conference on Artificial Intelligence, pp. 409-415, 1981.
[Genesereth and Smith, 1982] M. R. Genesereth and D. E. Smith. Meta-level architecture. Technical Report HPP-81-6, Stanford University, 1982.
[Hayes-Roth, 1985] B. Hayes-Roth. A blackboard architecture for control. Artificial Intelligence Journal, 26:251-321, 1985.
[Hayes-Roth et al., 1986a] B. Hayes-Roth, A. Garvey, M. V. Johnson, and M. Hewett. A Layered Environment for Reasoning about Action. Technical Report KSL-86-38, Stanford University, 1986.
[Hayes-Roth et al., 1986b] B. Hayes-Roth, B. G. Buchanan, O. Lichtarge, M. Hewett, R. Altman, J. Brinkley, C. Cornelius, B. Duncan, and O. Jardetzky. PROTEAN: Deriving protein structure from constraints. Proceedings of the AAAI, 1986.
[Hayes-Roth and Lesser, 1977] F. Hayes-Roth and V. R. Lesser. Focus of attention in the Hearsay-II speech understanding system. Proceedings of the Fifth International Joint Conference on Artificial Intelligence, pp. 27-35, 1977.
[Johnson and Hayes-Roth, 1986] M. V. Johnson and B. Hayes-Roth. Integrating Diverse Reasoning Methods in the BB1 Blackboard Control Architecture.
Technical Report KSL-86-76, Stanford University, 1986.
[Smith and Genesereth, 1985] D. E. Smith and M. R. Genesereth. Ordering conjunctive queries. Artificial Intelligence, 25:171-215, 1985.
[Tommelein et al., 1987] I. D. Tommelein, R. E. Levitt, and B. Hayes-Roth. Using expert systems for the layout of temporary facilities on construction sites. CIB W-65 Symposium, Organization and Management of Construction, Berkshire, U.K., 1987.
edundancies i s’ Avi Dechter Department of Management Science California State University, Northridge, CA 9 1330 and Cognitive Systems Laboratory Computer Science Department University of California, Los Angeles, CA 90024 The removal of inconsistencies from the problem’s representation, which has been emphasized as a means of improving the performance of backhacking algorithms in solving constraint satisfaction prob- lems, increases the amount of redundancy in the problem. In this paper we argue that some solution methods might actually benefit from using an opposing strategy, namely, the removal of redundan- cies from the representation. We present various ways in which redundancies may be identified. In particular, we show how the path-consistency method, developed for removing inconsistencies can be reversed for the purpose of identifying redundancies, and discuss the ways in which redundancy removal can be beneficial in solving constraint satisfaction problems. A binary Constraint Satisfaction Problem (CSP) is concerned with the task of finding either one or all of the n-tuples allowed by a given network of binary constraints (or finding that no such n-tuple exists). A network R of binary constraints defined on a set of variables @I, . . . ,XJ is a set of relations Wij from every variable Xi to every variable Xj. A network of constraints R represents a unique (possibly empty) n-ax-y relation p (i.e., a subset of the space X = dom (Xl) x * * * x dom (X,), where abm(Xi) denotes the domain of Xi) such that an n-tuple t is allowed by p if and only if its projections on all the uni- dimensional and two-dimensional subspaces of X simultane- ously satisfy the binary constraints of the network R. A constraint graph corresponding to a network of con- straints consists of a vertex for each variable and an edge for each binary constraint which is not the universal constraint (i.e., comprising the entire subspace). CSPs are inherently difficult problems to solve and typically are solved using some sort of a backtracking search algorithm. The issue of improving the performance of these algorithms has been on the agenda of researchers in Artificial Intelligence for quite some time (e.g., [Gaschnigl979, Maral- ick1980, Bruynooghel9811 , as many AI tasks can be forrnu- lated as CSPs (e.g., line-drawing analysis [Waltz19751 and reasoning about temporal intervals [Allen19851 ). Observing that there are possibly many equivalent network representa- tions of a given n-ary relation p, attempts were made at finding ways for moving from some initial representation to one which is better suited to be solved by backtracking. A central tieme in the litemture on this sarbjec~ is that of the benefit of removing local inconsistencies from the problem’s 1 This work was supported in part by the National Science l%urdati. Grant #lXR 8501234, and by the Cdifomia State University, Northridge. Artificial Intelligence Center Hughes Research Laboratories, Calabasas, CA 91302 and Cognitive Systems Laboratory Computer Science Department University of California, Los Angeles, CA 90024 representation. Such inconsistencies may be discovd either prior to, QP during, Dechterl986b]. An inconsistency, in general, is a state of affairs where a certain action (i.e., instantiating a variable to a certain value during backtracking) is permitted by one piece of data (con- sidered in isolation), and prohibited by another. 
If the data permitting the action is consulted first, then the algorithm might expend much work based on the assumption that the particular action is globally permitted, only to discover later that this assumption is false. An inconsistency, once discovered, is eliminated by recording the fact that the action is not permitted. The removal of an inconsistency results in a redundancy in the database, namely, a situation whereby the fact that an action is prohibited is expressed by more than one piece of data. Excessive redundancy in the data has its own potential adverse effects on our ability to solve the problem efficiently. First, it often increases the amount of data that has to be stored, and second, it tends to obscure any special structure the problem might have, and which may be exploited by the solution procedure. Of particular importance is the connective structure of the constraint graph, which strongly affects the tractability of the problem. The strength of the relationship between the connective structure of the constraint network and the complexity of its solution is most vividly demonstrated by Freuder's [Freuder 1982] conditions for backtrack-free search. In particular, he shows that if the constraint graph forms a tree, then backtrack-free search can be guaranteed (by performing minor pre-processing).

Thus, if the problem already has some desirable structure, then it might be beneficial to modify the process of obtaining consistency so that the structure is preserved. Furthermore, by eliminating certain types of redundancies, while adding inconsistencies, it may be possible to bring about an improvement in the representation of the problem. This paper is concerned with the issue of manipulating the representation of a given CSP by identifying redundant constraints, namely, constraints whose removal from the network, while changing the connective structure of the problem, does not affect the set of solutions p.

Our treatment of redundancy in networks of constraints relies on a distinction that can be made between the direct constraint Rij between two variables and the constraint induced on them by the rest of the network. The constraint induced on (Xi, Xj) by a subset of the other constraints can be thought of as the relation consisting of all the instantiations of the variable pair (Xi, Xj) which are consistent with that subset of the other constraints. The intersection of all the constraints induced on (Xi, Xj), i.e., the constraint induced by all the constraints except the direct constraint, is called the network-induced constraint, denoted R~ij. The global constraint between Xi and Xj is the intersection of the direct and the network-induced constraints.

An inconsistency in a constraint occurs when some pair of values is permitted by (i.e., is part of) the direct constraint between two variables but prohibited by their network-induced constraint. An inconsistency can be eliminated by simply erasing the value-pair from the direct constraint between the variables. Clearly, such a change does not alter the set of all solutions in any way. When none of its pairs are inconsistent with the network-induced constraint, a direct constraint is said to be explicit.

Definition: A direct constraint Rij is said to be explicit if the remainder of the network does not add any further restrictions on the global constraint between Xi and Xj, i.e., if Rij is contained in R~ij.

A pair of values is permitted by an explicit constraint if, and only if, it is part of at least one solution.
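To make the direct/induced/global distinction concrete, the following minimal Python sketch checks explicitness and redundancy by brute-force enumeration on a toy three-variable network. The encoding is illustrative only; real CSP solvers would never enumerate the full assignment space this way.

    from itertools import product

    domains = {"A": "ab", "B": "ab", "C": "ab"}
    direct = {
        ("A", "B"): {("a", "b"), ("b", "a")},   # A and B must differ
        ("B", "C"): {("a", "b"), ("b", "a")},   # B and C must differ
        ("A", "C"): {("a", "a"), ("b", "b")},   # A and C must agree
    }

    def solutions(cons):
        names = sorted(domains)
        for vals in product(*(domains[v] for v in names)):
            asg = dict(zip(names, vals))
            if all((asg[i], asg[j]) in r for (i, j), r in cons.items()):
                yield tuple(sorted(asg.items()))

    def explicit(i, j):
        """Rij is explicit iff each pair it allows appears in some solution."""
        pairs = {(dict(s)[i], dict(s)[j]) for s in solutions(direct)}
        return direct[(i, j)] <= pairs

    def redundant(key):
        """Rij is redundant iff removing it leaves the solution set unchanged."""
        rest = {k: v for k, v in direct.items() if k != key}
        return set(solutions(rest)) == set(solutions(direct))

    print(explicit("A", "C"), redundant(("A", "C")))   # -> True True

The A-C constraint in this toy network is both explicit and redundant at once, illustrating the observation made below that the two properties can coincide.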
Montanari [Montanari 1974] has shown that if a relation can be represented by a binary network of constraints then there is a unique representation where all the constraints are explicit, called the minimal network of constraints.

In contrast, redundancy in a constraint occurs when a pair of values which is prohibited by the network-induced constraint is also prohibited by (i.e., absent from) the direct constraint between two variables. A redundancy is eliminated by adding the pair to the direct constraint. This change, too, cannot have any effect on the set of all solutions. Of special interest is the case where all the pairs that are prohibited by the direct constraint are already prohibited by the induced constraint. In this case the entire constraint is said to be redundant.

Definition: A direct constraint Rij is said to be redundant if it does not add any further restrictions on the global constraint between Xi and Xj, i.e., if R~ij is contained in Rij.

All redundancies associated with a redundant constraint are eliminated by simply removing the entire constraint from the network.

Improving the representation of a CSP involves two types of operations: (1) eliminating inconsistencies and making each constraint as explicit as possible, and (2) eliminating redundancies by identifying and removing redundant constraints. Although these operations represent two opposing objectives they are, in large part, orthogonal to each other. First, a constraint which is redundant will never become non-redundant as a result of making any other constraint more explicit, because this can possibly only tighten the network-induced constraint and thus make the constraint more redundant. Second, a constraint can never become less explicit as a result of the removal of another, redundant, constraint, since this can possibly only loosen the induced constraint and make the constraint even more explicit.

It is clear that in the "ideal" representation of a CSP the constraint between any two variables should be either explicit or universal (i.e., non-existing). Such networks of constraints are called U-minimal.

Definition: A network of constraints for which a subset U of constraints are universal and all other constraints are explicit is said to be U-minimal.

The properties of U-minimal networks are discussed in [Dechter 1986a]. The task of obtaining a U-minimal representation of a given network of constraints is as elusive as that of finding the minimal network of constraints, because it requires the knowledge of the network-induced constraint for all pairs of variables. For the same reason, the task of deciding whether a given constraint is redundant or not is a difficult one. An approximate method, based on the notion of path-consistency, is presented in the following section.

Montanari suggested that the minimal network of constraints may be approximated by replacing the requirement that each constraint be explicit by a weaker condition, called path consistency.

Definition: A pair of values (xi, xj) is said to be allowed by a path of length m through nodes (Vi = Vk0, Vk1, ..., Vkm-1, Vkm = Vj) if there is a sequence of values (z1, z2, ..., zm-1) such that Rk0k1(xi, z1) and Rk1k2(z1, z2) and ... and Rkm-1km(zm-1, xj).

The definitions of explicit and redundant constraints are given graphical representation in the Venn diagrams of Figure 1. [Figure 1: Explicit and Redundant Constraints.] Observe that a constraint can be explicit and redundant at the same time.
In this case the direct constraint and the network-induced constraint coincide.

Definition: A pair of values (xi, xj) which is allowed by every path from node Vi to node Vj in the complete network R is called path-induced. Otherwise, it is called path-illegal.

Definition: A binary constraint Rij is said to be path-consistent if all of its pairs are path-induced. A network of constraints R is path-consistent if all of its constraints are path-consistent.

The requirement of a constraint being path-consistent is weaker than that of being explicit, because every explicit constraint must be path-consistent, but not every path-consistent constraint is necessarily explicit.

Montanari showed that a pair of values is path-induced if, and only if, it is allowed by all paths of length m = 2. Path consistency algorithms repeatedly check all paths of length m = 2 and remove all path-illegal pairs until no such pairs remain [Montanari 1974, Mackworth 1977].

The task of recognizing all the redundant constraints in a network can be approximated by replacing redundancy with a stronger requirement called path-redundancy.

Definition: A constraint Rij is said to be path-redundant if every path-induced pair is already permitted by it.

The condition of a constraint being path-redundant is stronger than that of being redundant, since every path-redundant constraint must be redundant, but not every redundant constraint is necessarily path-redundant. The definitions of path-consistency and path-redundancy, and their relationships to those of consistency and redundancy, are shown graphically in Figure 2. As the diagrams show, a path-consistent constraint is not necessarily explicit, but a path-redundant constraint must be redundant. [Figure 2: (a) Path-Consistent Constraint; (b) Path-Redundant Constraint.]

A convenient way to check whether a given constraint is path-redundant or not is to consider a set of value-pairs which is guaranteed to contain the path-induced constraint (for example, the Cartesian product of the domains of the two variables involved), and to check the path-legality status of each pair of this set which is not in the direct constraint. The direct constraint is path-redundant if, and only if, all such pairs are path-illegal. This process is, essentially, the reverse of achieving path-consistency, since it can be thought of as the process of adding path-illegal pairs to the direct constraint in an attempt to make it universal. If this attempt is successful, then the constraint is redundant.

An algorithm for determining whether a constraint Rij is path-redundant is given below. The algorithm returns true if the constraint is path-redundant and false otherwise. It examines all the paths of length 2 anchored at nodes i and j, and augments the relation Rij by pairs that are found to be path-illegal. If this augmentation process results in the constraint becoming the universal constraint, then the original constraint is redundant.

    PATH-REDUNDANT(i, j)
    begin
       Uij := dom(Xi) x dom(Xj)
       for each p in Uij - Rij
          if PERMIT(p, (i, j), k) = false for every k, k /= i, j
             then Rij := Rij + {p}
       end
       if Rij = Uij return true
       return false
    end

The procedure PERMIT(p, (i, j), k) returns true if the pair p is permitted for the variable pair (Xi, Xj) by the path (i-k-j), and false (i.e., the pair p is path-illegal) otherwise.

    PERMIT(p, (i, j), k)
    begin
       let Dk be the domain of Xk
       let p be (pi, pj)
       for all v in Dk
          if (pi, v) in Rik and (v, pj) in Rkj return true
       end
       return false
    end

The complexity of PERMIT is O(k), where k is the number of legal values of a variable; a pair is found path-illegal only after it is checked against all paths of length 2, and the complexity of PATH-REDUNDANT is therefore O(nk^3), where n is the number of variables.
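Below is a direct Python transcription of PATH-REDUNDANT and PERMIT, a sketch assuming the same toy network encoding used in the earlier snippet; constraints absent from the dictionary are treated as universal.

    def rel(a, b, cons):
        """Direct constraint oriented from a to b; None means universal."""
        if (a, b) in cons:
            return cons[(a, b)]
        if (b, a) in cons:
            return {(y, x) for (x, y) in cons[(b, a)]}
        return None

    def permit(p, i, j, k, cons, domains):
        """True iff pair p for (Xi, Xj) is allowed by the path i-k-j."""
        pi, pj = p
        rik, rkj = rel(i, k, cons), rel(k, j, cons)
        return any((rik is None or (pi, v) in rik) and
                   (rkj is None or (v, pj) in rkj)
                   for v in domains[k])

    def path_redundant(i, j, cons, domains):
        """True iff every pair absent from Rij is path-illegal."""
        rij = rel(i, j, cons)
        universe = {(a, b) for a in domains[i] for b in domains[j]}
        return all(not all(permit(p, i, j, k, cons, domains)
                           for k in domains if k not in (i, j))
                   for p in universe - rij)

    domains = {"A": "ab", "B": "ab", "C": "ab"}
    direct = {("A", "B"): {("a", "b"), ("b", "a")},
              ("B", "C"): {("a", "b"), ("b", "a")},
              ("A", "C"): {("a", "a"), ("b", "b")}}
    print(path_redundant("A", "C", direct, domains))   # -> True

On the toy network, the A-C constraint is reported path-redundant, agreeing with the brute-force redundancy check given earlier.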
The status of being redundant, for a given constraint, is dependent on the other constraints in the network, and any change in the network may affect this status. In particular, the removal of one redundant constraint, while not changing the set of solutions, may cause other redundant constraints to become non-redundant. Therefore, a set of constraints which are found to be path-redundant in a given network may not be removed simultaneously, but rather in sequence, where the path-redundancy of the constraints in the set needs to be re-examined after the removal of each one of them. Constraints that were non-redundant to start with, or that became non-redundant in the process, need not be checked again, as they cannot become redundant again. The number of constraints that can be removed in this way is dependent on the sequence in which they are considered.

To demonstrate, we considered the task of removing path-redundant constraints from a CSP representation of the 5-queens problem. The task in this problem is to place 5 queens on a 5x5 chessboard so that no queen is on the same row, column, or diagonal as any other. A standard formulation of this problem as a binary CSP associates a variable with each row (e.g., variables A, B, C, D, and E), each of which may be assigned one of five values (say, a, b, c, d, and e) corresponding to the columns. There is a constraint between every pair of variables (for a total of ten constraints), consisting of all pairs of values which are not in direct conflict. The constraint graph of this formulation is the complete graph shown in Figure 3(a). We first performed path consistency on this problem; the results of removing path-redundant constraints with two different orderings are shown in Figures 3(b) and 3(c). [Figure 3: Redundancy removal in the 5-Queen Problem.]

When the removal of any constraint in some set of redundant constraints does not diminish the redundancy of any of the other constraints in the set, we say that they are all independently redundant. Constraints that were made universal by path consistency are independently redundant, since their simultaneous removal will not affect the solution set. Notice, however, that while these constraints are redundant, they are not necessarily path-redundant, and thus the algorithm PATH-REDUNDANT is not guaranteed to recognize their redundancy. On the other hand, it is easy to see that a set of path-redundant constraints which are not adjacent to one another, i.e., no two of them share a common variable, are independently path-redundant. This is so because only paths of length m = 2 are used to determine the path-redundancy of a constraint. The property of a set of constraints being independently redundant is desirable because it alleviates the need to search among all possible orderings of the constraints for some subset of them that may be removed.

The Use of Redundancy Elimination in Problem Solving

Elimination of redundant constraints is expected to be beneficial for solution methods that rely on the connective structure of the problem as depicted by its constraint graph. Backtracking algorithms benefit from consulting the constraint graph in two ways.
First, it provides a simple way of backjumping [Gaschnig 1979, Dechter 1986b]. Backjumping is an improvement of standard backtracking whereby, at a deadend, the algorithm goes back to the first variable which could be the "reason" for the deadend (rather than to the previous variable in the stack, as called for by standard backtracking). A variable which is not connected (directly or indirectly) to the deadend variable cannot possibly be the source of the deadend, and thus it is always safe to jump back to the first available variable which is connected to the deadend variable while pruning the search tree. Second, consulting the constraint graph can reduce the amount of work required for the expansion of nodes in the search tree. This is so because no consistency checks are required between the node being expanded and nodes with which it is not directly connected.

Utilizing these features has resulted in substantial improvements in backtracking performance. The amount of improvement was directly related to the sparseness of the constraint graph [Dechter 1986b]. Thus, the removal of redundant constraints and, consequently, their corresponding edges in the constraint graph has a potential for improving the performance of backjumping. As an example, consider the CSP given in Figure 4(a), consisting of three variables, A, B, and C, all having the same domain {a, b, c}. The constraint between the variables B and C is redundant and may be removed, resulting in the graph of Figure 4(b). [Figure 4: A Problem Exhibiting Redundancy.]

The search performed by a backjumping algorithm which considers the nodes in the order A, B, C is shown in Figure 5(a). If the redundant constraint is removed, then the search is reduced to that shown in Figure 5(b), because the first occurrence of a deadend permits jumping back to variable A. [Figure 5: Backjumping Search on the Problems of Figure 4.]

Removal of redundant constraints, while potentially beneficial, generally results in increasing the search space, which may wipe out its benefits. What is needed, therefore, is a way to identify redundant constraints whose removal will not cause the search space to increase. This can be accomplished by tying the notion of redundancy to the order in which the backtracking algorithm instantiates the variables. Let d = Xi1, ..., Xin be an ordering of the variables.

Definition: A constraint Rij,ik is said to be directional-path-redundant with respect to d if its redundancy can be established by considering only paths of length 2 of the form (Xij - Xim - Xik), for m > max{j, k}.
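Returning to the graph-consulting backjump described at the start of this section, a minimal Python sketch on the three-variable example of Figure 4 follows; the encoding is illustrative, and practical backjumpers maintain conflict information incrementally rather than rescanning the ordering.

    def backjump_target(order, deadend, neighbors):
        """Most recently instantiated variable connected to the deadend one."""
        for v in reversed(order[:order.index(deadend)]):
            if v in neighbors[deadend]:
                return v
        return None    # no connected predecessor: the search is exhausted

    # Figure 4(b): the redundant B-C constraint (and its edge) removed.
    neighbors = {"A": {"B", "C"}, "B": {"A"}, "C": {"A"}}
    print(backjump_target(["A", "B", "C"], "C", neighbors))   # -> 'A'

With the B-C edge removed, a deadend at C jumps directly back to A, skipping B, which is exactly the saving shown in Figure 5(b).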
By contrast, if X4 is also needed to establish the redun- dancy of constraint R23, then the removal of the constraint may cause the algorithm to consider values of X3 that would not be considered had it remained in the network. The notion of directional-path-redundancy has the additional advantage that all the constraints found to be directional-path-redundant with resuect to some direction d. are independently redundant and thus mav be removed simul: taneously. To prove this, it is enough to ihow that there is an order by which the constraints can be removed so that the removal of each constraint cannot possibly interfere with the directional-path-redundancy of the remaining: constraints. For d =Xi, 9 . . .&‘Xi-, any order-such that a const&int Ri,i, , checked before’.any constraint Ri,i- k >j, is 111 >I’ if k <m, l& this pro- . m perty. To see this, refer back to the network of Figure 5,-and assume again that d =X4,X3,X2,X1. Further assume that both constraints RM and RB are directional path-redundant with respect to d. The removal of constraint R*u cannot DOS- sibly interfere with the redundancy of R 23 with respect to X 1. Directional-path-redundancy can be found by slightly modifying algorithm PATH-WEDUN Am. Applying it before backjumping may improve its performance and is guaranteed not to cause it to deteriorate. Another method that could benefit from removing redundancy is the cycle-cutset approach mechter 19871. This method is using the fact that tree-structured CSPs can be solved in linear time by switching to a specialized tree- algorithm whenever the set of variables instantiated by a back- tracking (or a backjumping) algorithm forms a cutset of the constraint graph. The efficiency of this approach depends of the sparseness of the constraint graph, and therefore this method too should perform better if redundant constraints are removed. For example, consider again the equivalent net- works of Figure 3. In order to cut all the cycles in the network of Figure 3(a), a minimum of three variables must be instan- tiated, but it takes the instantiation of only two variables to cut all the cycles in the network of Figure 3(b), and only one vari- able (A or B) to cut all the cycles in Figure 3(c). Researchers in the area of solving constraint satisfaction prob- lems have emphasized the advantages of increasing the amount of redundancy in the network representation of prob- lems. In this paper we point out that some benefits can be obtained by removing redundancies. We extend the idea of path-consistency to enable identifying redundancies and present some ways in which redundancy elimination is useful. The authors would like to earlier version of this paper thank Judea Pearl for reading an and for his thoughtful comments. [Allen1985]Allen, J. F., poral Intervals,” ‘ ‘Maintaining Knowledge about Tem- in Readings in Knowledge Representation, R. J. Brachman H.J. Levesque, Ed. Los Altos, CA: Morgan Kaufman Publishers, Inc., 1985, pp. 509-521. [Bruynooghel981]Bruynooghe, Maurice, “Solving Combina- torial Search Problems by Intelligent Backtracking,” Infoma- don Processing Letters, Vol. 12, No. 1, February 1981. erl986a]Dechter, A. and R. Dechter, “Mnimal Con- Graphs,” UCLA, Computer Science Departme nitive Systams Laboratory, Los Angeles, CA, Tech. 74, December, 1986. pechterl986b]Dechter, R., “Learning While Searching in Constraint Satisfaction Problems,” in Proceedings Philadelphia, PA: August, 1986. -86, IP>echterl987]Dechter, R. and J. 
Pearl, "The Cycle-cutset Method for Improving Search Performance in AI Applications," in Proceedings of the 3rd IEEE Conference on AI Applications, Orlando, FL, February 1987, pp. 224-230.
[Freuder 1982] Freuder, E. C., "A Sufficient Condition for Backtrack-free Search," Journal of the ACM, Vol. 29, No. 1, January 1982, pp. 24-32.
[Gaschnig 1979] Gaschnig, J., "Performance Measurement and Analysis of Certain Search Algorithms," Department of Computer Science, Carnegie-Mellon University, Pittsburgh, PA, Tech. Rep. CMU-CS-79-124, 1979.
[Haralick 1980] Haralick, R. M. and G. L. Elliott, "Increasing Tree Search Efficiency for Constraint Satisfaction Problems," Artificial Intelligence, Vol. 14, 1980, pp. 263-313.
[Mackworth 1977] Mackworth, A. K., "Consistency in Networks of Relations," Artificial Intelligence, Vol. 8, No. 1, 1977, pp. 99-118.
[Montanari 1974] Montanari, U., "Networks of Constraints: Fundamental Properties and Applications to Picture Processing," Information Sciences, Vol. 7, 1974, pp. 95-132.
[Waltz 1975] Waltz, D., "Understanding Line Drawings of Scenes with Shadows," in The Psychology of Computer Vision, P. H. Winston, Ed. New York, NY: McGraw-Hill Book Company, 1975.
Comparing Minimax and Product in a Variety of Games

Ping-Chung Chi1 and Dana S. Nau2
University of Maryland, College Park, MD 20742

Abstract

This paper describes comparisons of the minimax back-up rule and the product back-up rule on a wide variety of games, including P-games, G-games, three-hole kalah, Othello, and Ballard's incremental game. In three-hole kalah, the product rule plays better than a minimax search to the same depth. This is a remarkable result, since it is the first widely known game in which product has been found to yield better play than minimax. Furthermore, the relative performance of minimax and product is related to a parameter called the rate of heuristic flaw (rhf). Thus, rhf has potential use in predicting when to use a back-up rule other than minimax.

I. Introduction

The discovery of pathological games [Nau, 1980] has sparked interest in the possibility that various alternatives to the minimax back-up rule might be better than minimax. For example, the product rule (originally suggested by Pearl [1981, 1984]) was shown by Nau, Purdom, and Tzeng [1985] to do better than minimax in a class of board splitting games.

Slagle and Dixon [1970] found that a back-up procedure called "M & N" performed significantly better than minimax. However, the M & N rule closely resembles minimax. Until recently, poor performance of minimax relative to back-up rules significantly different from minimax has not been observed in commonly known games such as kalah.

This paper presents the following results:

(1) For a wide variety of games, a parameter called the rate of heuristic flaw appears to be a good predictor of how well minimax performs against the product rule. These games include three-hole kalah, Othello, P-games, G-games, and possibly others. This suggests that rhf may serve not only as a guideline for whether it will be worthwhile to consider alternatives to minimax, but also as a way to relate other characteristics of game trees to the performance of minimax and other back-up rules.

(2) In studies of three-hole kalah, the product rule played better than a minimax search to the same search depth. This is the first widely known game in which product has been found to play better than minimax. The product rule still has a major drawback: no tree-pruning algorithm has been developed for it, and no correct pruning algorithm for it can conceivably do as much pruning as the various pruning algorithms that exist for minimax. However, the performance of the product rule in kalah suggests the possibility of exploiting non-minimax back-up rules to achieve better performance in other games.

1 This work has been supported in part by a Systems Research Center fellowship.
2 This work has been supported in part by the following sources: an NSF Presidential Young Investigator Award to Dana Nau, NSF NSFD CDR-85-00108 to the University of Maryland Systems Research Center, IBM Research, and General Motors Research Laboratories.

II. Definitions

By a game, we mean a two-person, zero-sum, perfect-information game having a finite game tree. All of the games studied in this paper satisfy this restriction. Let G be a game between two players called max and min. To keep the discussion simple, we assume that G has no ties, but this restriction could easily be removed. If n is a board position in G, let u(.) be the utility function defined as

    u(n) = 1 if n is a forced win node
           0 if n is a forced loss node.
We consider an evaluation function to be a function from the set of all possible positions in G into the closed interval [0,1]. If e is an evaluation function and n is a node of G, then the higher the value e(n), the better n looks according to e. We assume that every evaluation function produces perfect results on terminal game positions (i.e., e(n) = u(n) for terminal nodes). If m is a node of G, then the depth d minimax and product values of m are

    M(m,d) = e(m)                      if depth(m) = d or m is terminal
           = min_n M(n,d)              if min has the move at m
           = max_n M(n,d)              if max has the move at m

    P(m,d) = e(m)                      if depth(m) = d or m is terminal
           = prod_n P(n,d)             if min has the move at m
           = 1 - prod_n (1 - P(n,d))   if max has the move at m

where n is taken over the set of children of m.

Let m and n be any two nodes chosen at random from a uniform distribution over the nodes at depth d of G. Let UP_e(m,n) (and DOWN_e(m,n)) be whichever of m and n looks better (or worse, respectively) according to e. Thus if e(m) > e(n), then UP_e(m,n) = m and DOWN_e(m,n) = n. If e(m) = e(n), we still assign values to UP_e(m,n) and DOWN_e(m,n), but the assignment is at random, with the following two possibilities each having probability 0.5:

(1) UP_e(m,n) = m and DOWN_e(m,n) = n;
(2) UP_e(m,n) = n and DOWN_e(m,n) = m.

Since e may make errors, exhaustive search of the game tree may reveal that UP_e(m,n) is worse than DOWN_e(m,n), i.e., that u(UP_e(m,n)) < u(DOWN_e(m,n)). In this case, a heuristic flaw has occurred: the evaluation function has failed to give a correct opinion about m and n. The rate of heuristic flaw at depth d, denoted by rhf(d), is defined to be the quantity Pr[u(UP_e(m,n)) < u(DOWN_e(m,n))].
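The two back-up rules can be stated compactly in code. The Python sketch below assumes an explicit tree whose leaves carry static values e(n) in [0,1]; it decrements a depth counter rather than tracking absolute depth, a harmless simplification of the definitions above, and the tree itself is illustrative rather than taken from the paper's figures.

    from math import prod

    def M(node, d, max_moves):
        """Depth-d minimax value; node = (e, children), e in [0, 1]."""
        e, children = node
        if d == 0 or not children:
            return e
        vals = [M(c, d - 1, not max_moves) for c in children]
        return max(vals) if max_moves else min(vals)

    def P(node, d, max_moves):
        """Depth-d product value; e is read as a win probability for max."""
        e, children = node
        if d == 0 or not children:
            return e
        vals = [P(c, d - 1, not max_moves) for c in children]
        return 1 - prod(1 - v for v in vals) if max_moves else prod(vals)

    leaf = lambda e: (e, [])
    tree = (0.0, [(0.0, [leaf(0.4), leaf(0.4)]),    # left child, min to move
                  (0.0, [leaf(0.2), leaf(0.6)])])   # right child, min to move
    print(M(tree, 2, True), P(tree, 2, True))       # -> 0.4 and about 0.2608

Note how the two rules can disagree: minimax backs up the value of the single best line, while product aggregates all children as if they were independent events.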
III. Theoretical Considerations

A. When Rhf is Small

Consider a minimax search terminating at depth d of a game tree. If rhf(d) is small, it is intuitively apparent that this search should perform quite well. The question is whether it will perform better than some other back-up rule. For simplicity, assume that the game tree is binary. Assume further that it is max's move at some node c, and let m and n be the children of c. Let d be the depth of m and n. Then

(1) Pr[u(c)=1] = Pr[u(UP_e(m,n))=1 or u(DOWN_e(m,n))=1].

Thus, since u(.) must be either 0 or 1,

Pr[u(c)=1] = Pr[u(UP_e(m,n))=1] + Pr[u(DOWN_e(m,n)) > u(UP_e(m,n))] which is approximately Pr[u(UP_e(m,n))=1] + rhf(d).

The smallest possible value for rhf(d) is zero. If rhf(d) is close to zero, then from (1) we have Pr[u(c)=1] approximately equal to Pr[u(UP_e(m,n))=1], which says that the utility value of c is closely approximated by the utility value of its best child. But according to the minimax rule, the minimax value of c is the minimax value of the best child. This suggests that in this case one might prefer the minimax back-up rule to other back-up rules.

In the extreme case, rhf(d) = 0. In this case, whenever m and n are two nodes at depth d of G, Pr[u(UP_e(m,n)) < u(DOWN_e(m,n))] = 0. Therefore, since there are only a finite number of nodes at depth d, there is a value k in (0,1) such that for every node m at depth d, u(m) = 1 if and only if e(m) >= k. By mathematical induction, it follows that forced win nodes will always receive minimax values larger than forced loss nodes, so a player using a minimax search will play perfectly.

But if the search is a product rule search rather than a minimax search, then the search will not always result in perfect play. For example, consider the tree shown in Figure 1. [Figure 1: A case where product makes the wrong choice. The root n has children n1 and n2; the four leaves n11, n12, n21, n22 have u = 0, 0, 0, 1 and e = .4, .4, .2, .6, respectively.] By looking at the four leaf nodes, it is evident that rhf = 0 with k = 0.5. Thus, a minimax search at node n must result in a correct decision. However, a product rule search would result in incorrectly choosing the forced loss node n1. This suggests that when rhf is small, the minimax rule should perform better than the product rule.

B. When Rhf is Large

Let m and n be any two nodes at depth d. In general, rhf can take on any value between 0 and 1. But if e is a reasonable evaluation function, and if UP_e(m,n) is a forced loss, this should make it more likely that DOWN_e(m,n) is also a forced loss. Thus, we assume that

Pr[u(DOWN_e(m,n))=1 | u(UP_e(m,n))=0] < Pr[u(DOWN_e(m,n))=1],

whence rhf = Pr[u(DOWN_e(m,n))=1 & u(UP_e(m,n))=0] < Pr[u(UP_e(m,n))=0] Pr[u(DOWN_e(m,n))=1].

Suppose rhf is large. More specifically, consider the extreme case where rhf is approximately Pr[u(UP_e(m,n))=0] Pr[u(DOWN_e(m,n))=1]. Then from (1),

Pr[u(c)=1] is approximately Pr[u(UP_e(m,n))=1] + Pr[u(UP_e(m,n))=0] Pr[u(DOWN_e(m,n))=1].

Thus, if e(UP_e(m,n)) and e(DOWN_e(m,n)) are good approximations of Pr[u(UP_e(m,n))=1] and Pr[u(DOWN_e(m,n))=1], then

Pr[u(c)=1] is approximately e(UP_e(m,n)) + (1 - e(UP_e(m,n))) e(DOWN_e(m,n)) = 1 - (1 - e(UP_e(m,n)))(1 - e(DOWN_e(m,n))),

which is precisely the formula for the product rule given in Section II. This suggests that when rhf is large, the product rule might be preferable.

IV. Empirical Considerations

The arguments given in Section III suggest that minimax should do better against product when rhf is low than it does when rhf is high. To test this conjecture, we have examined five different classes of games. Space does not permit us to state the rules of each of these games here. However, detailed descriptions of these games may be found in the following references: G-games [Nau, 1983], Ballard's incremental game [Ballard, 1983], Othello [Hasagawa, 1977], P-games [Nau, 1982], kalah [Slagle & Dixon, 1969].

A. G-Games

A G-game is a board-splitting game investigated in [Nau, 1983], where two evaluation functions e1 and e2 were used to compare the performance of minimax and product. The product rule did better than minimax when e1 was used, and product did worse than minimax when e2 was used. For our purposes, the significance of this study is this: it can be proven that for every depth d, rhf(d) is higher using e1 than it is using e2. Thus, on G-games, product performed better against minimax when using the evaluation function having the higher rhf. This matches our conjecture.

B. Ballard's Experiments

Ballard [1983] used a class of incremental games with uniform branching factor to study the behavior of minimax and non-minimax back-up rules. One of the non-minimax back-up rules was a weighted combination of the computational schemes used in the minimax and product rules. Among other results, he claimed that "lowering the accuracy of either max's or min's static evaluations, or both, serves to increase the amount of improvement produced by a non-minimax strategy." Since low accuracy is directly related to a high rhf, this would seem to support our conjecture. But since Ballard did not test the product rule itself, we cannot make a conclusive statement.

C. Othello

Teague [1985] did experiments on the game of Othello, using both a "weak evaluation" and a "strong evaluation." The weak evaluation was simply a piece count, while the strong one incorporated more knowledge about the nature of the game. According to Teague's study, minimax performed better than product 82.8% of the time with the strong evaluation, but only 63.1% of the time with the weak evaluation.
It would be difficult to measure the rhf values for Othello, because of the immense computational overhead of determining whether or not playing positions in Othello are forced wins. However, since rhf is a measure of the probability that an evaluation function assigns forced win nodes higher values than forced loss nodes, it seems clear that the stronger an evaluation function is, the lower its rhf value should be. Thus, Teague's results suggest that our conjecture is true for the game of Othello.

D. P-Games

A P-game is a board-splitting game whose game tree is a complete binary tree with random independent assignments of "win" and "loss" to the terminal nodes. P-games have been shown to be pathological when using a rather obvious evaluation function e1 for the games [Nau, 1982], and in this case, the minimax rule performs more poorly than the product rule [Nau, Purdom, and Tzeng, 1985]. However, pathology in P-games disappears when a stronger evaluation function, e2, is used [Abramson, 1985]. It can be proven that e2 has a lower rhf than e1.

Both e1 and e2 return values between 0 and 1, and the only difference between e1 and e2 is that e2 can detect certain kinds of forced wins and forced losses (in which case it returns 1 or 0, respectively). Let m and n be any two nodes. If e2(UP_e2(m,n)) = 0, then it must also be that e2(DOWN_e2(m,n)) = 0. But it can be shown that e2(x) = 0 only if x is a forced loss. Thus u(DOWN_e2(m,n)) = 0, so there is no heuristic flaw. It can also be shown that e2(x) = 1 only if x is a forced win. Thus if e2(UP_e2(m,n)) = 1, then u(UP_e2(m,n)) = 1, so there is no heuristic flaw. Analogous arguments hold for the cases where e2(DOWN_e2(m,n)) = 0 or e2(DOWN_e2(m,n)) = 1. The cases described above are the only possible cases where e2 returns a different value from e1. No heuristic flaw occurs for e2 in any of these cases, but heuristic flaws do occur for e1 in many of these cases. Thus, the rhf for e2 is less than the rhf for e1.

TABLE 1: P-game simulation results

    Search    % wins for minimax    % wins for minimax
    depth     using e1              using e2
      2         51.0%                 52.1%
      3         52.5%                 51.8%
      4         49.9%                 50.3%
      5         50.7%                 49.3%
      6         46.2%                 48.1%
      7         46.7%                 48.4%
      8         44.9%                 48.6%
      9         47.2%                 50.0%

We tested the performance of minimax against the product rule using e1 and e2, in binary P-games of depths 9, 10, and 11, at all possible search depths. For each combination of game depth and search depth, we examined 3200 pairs of games. The study showed that for most (but not all) search depths, minimax performed better against product when the stronger evaluation function was used (for example, Table 1 shows the results for P-games of depth 11). Thus, this result supports our conjecture.

E. Kalah

Slagle and Dixon [1969] state that "Kalah is a moderately complex game, perhaps on a par with checkers." But if a smaller-than-normal kalah playing board is used, the game tree is small enough that one can search all the way to the end of the game tree. This allows one to determine whether a node is a forced win or forced loss. Thus, rhf can be estimated by measuring the number of heuristic flaws that occur in a random sample of games. By playing minimax against product in this same sample of games, information can be gathered about the performance of minimax against product as a function of rhf. To get a smaller-than-normal playing board, we used three-hole kalah (i.e., a playing board with three bottom holes instead of the usual six), with each hole containing at most six stones.
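The sampling estimate of rhf just described is easy to express in code. A Python sketch follows, assuming placeholder functions e (the static evaluation) and u (the exact utility from exhaustive search, feasible for three-hole kalah) supplied by game-specific code not shown here.

    import random

    def rhf_estimate(nodes_at_d, e, u, trials=10000, rng=random.Random(0)):
        """Fraction of sampled node pairs on which e mis-orders the utilities."""
        flaws = 0
        for _ in range(trials):
            m, n = rng.sample(nodes_at_d, 2)
            if e(m) == e(n):
                m, n = rng.sample([m, n], 2)      # break ties at random
            hi, lo = (m, n) if e(m) >= e(n) else (n, m)
            if u(hi) < u(lo):                     # a heuristic flaw
                flaws += 1
        return flaws / trials

The estimate converges to rhf(d) when nodes_at_d holds the nodes at depth d, mirroring the definition of rhf(d) as a probability over randomly drawn node pairs.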
One obvious evaluation function for kalah is the "kalah advantage" used by Slagle and Dixon [1969]. We let ea be the evaluation function which uses a linear scaling to map the kalah advantage into the interval [0,1]. (A preliminary study of rhf [Chi & Nau, 1986] compared minimax to the product rule using ea in three different variants of kalah. That study, which used a somewhat different definition of rhf than the one used here, motivated the more extensive studies reported in the current paper.) If P(m,2) is computed using ea(m), the resulting value is generally more accurate than ea(m). Thus, weighted averages of ea(m) and P(m,2) can be used to get evaluation functions with different rhf values:

    ew(m) = w ea(m) + (1 - w) P(m,2), for w between 0 and 1.

We measured rhf(4), and played minimax against product with a search depth of 2, using the following values for w: 0, 0.5, 0.95, and 1. This was done using 1000 randomly generated initial game boards for three-hole kalah. For each game board and each value of w, two games were played, giving each player a chance to start first. The results are summarized in Table 2.

TABLE 2: kalah simulation results

    w       rhf(4)    % games won    % games won
                      by product     by minimax
    1       0.135       63.4%          36.6%
    0.95    0.1115      55.5%          44.5%
    0       0.08        53.6%          46.4%
    0.5     0.0765      51.2%          48.8%

Note that the lowest rhf was obtained with w = 0.5. This suggests that a judicious combination of direct evaluation with tree search might do better than either individually. This idea needs to be investigated more fully.

Note also that product performs better than minimax with all four evaluation functions. (Table 2 shows results only for search depth 2; we examined depths 2 to 7, and the product rule played better than minimax at all of them, though with less statistical significance at depths 3 and 6.) This suggests that product might be of practical value in kalah and other games. Also, the performance of product against minimax increases as rhf increases. This matches our conjecture about the relation between rhf and the performance of minimax and product.

V. P-Games with Varying Rhf

Section IV shows that in a variety of games, minimax performs better against product when rhf is low than when rhf is high. To investigate the specific relationship between rhf and the performance of minimax versus product, we did a Monte Carlo study of the performance of minimax against product on binary P-games, using an evaluation function whose rhf could be varied easily. For each node n, let r(n) be a random value, uniformly distributed over the interval [0,1]. The evaluation function ew is a weighted average of u and r:

    ew(n) = w u(n) + (1 - w) r(n).

When the weight w = 0, ew is a completely random evaluation. When w = 1, ew provides perfect evaluations. For 0 < w < 0.5, the relationship between w and rhf is approximately linear. For w >= 0.5, rhf = 0 (i.e., ew gives perfect performance with the minimax back-up rule).

In the Monte Carlo study, 8000 randomly generated initial game boards were used, and w was varied between 0 and 0.5 in steps of 0.01. For each initial board and each value of w, two games were played: one with minimax starting first, and one with product starting first. Both players were searching to depth 2. Figure 2 graphs the fraction of games won by minimax against product, as a function of rhf.

Figure 2 shows that minimax does significantly better than product when rhf is small, and product does significantly better than minimax when rhf is large. (The poor performance of minimax when rhf is large also corroborates previous studies which showed that product did better than minimax in P-games using a different evaluation function [Nau, Purdom, and Tzeng, 1985].) Thus, in a general sense, Figure 2 supports our conjecture about rhf. But Figure 2 also demonstrates that the relationship between rhf and the performance of minimax against product is not always monotone, and may be rather complex.
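A sketch of this Monte Carlo setup in Python follows, assuming a P-game generator and an exact utility u(n) are available from code not shown here; the key detail is that the noise value r(n) must be drawn once per node and then held fixed.

    import random

    def make_ew(w, u, seed=0):
        """ew(n) = w*u(n) + (1-w)*r(n), with r(n) drawn once per node."""
        rng, noise = random.Random(seed), {}
        def ew(n):                 # n must be hashable (e.g. a board tuple)
            if n not in noise:
                noise[n] = rng.random()
            return w * u(n) + (1 - w) * noise[n]
        return ew

    # Sweep as in the study: w = 0.00, 0.01, ..., 0.50; for each w, play one
    # minimax-vs-product pair per board. play_pair stands for game-specific
    # code and is not defined here.
    # for w in (i / 100 for i in range(51)):
    #     ew = make_ew(w, exact_utility)
    #     ... play_pair(ew) ...

Caching the noise is what makes ew a well-defined evaluation function rather than a fresh random draw on every visit to the same node.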
[Figure 2: Performance of minimax against product using ew as rhf varies. The fraction of games won by minimax ranges from about 0.533 down to about 0.473 as rhf increases from 0.0 to 0.236, with 0.5 marked as the break-even level.]

VI. Conclusions and Speculations

The results presented in this paper are summarized below:

(1) Theoretical considerations suggest that for evaluation functions with low rhf values, minimax should perform better against product than it does when rhf is high. Our investigations on a variety of games confirm this conjecture.

(2) In the game of kalah with three bottom holes, the product rule plays better than a minimax search to the same search depth. This is the first widely known game in which product has been found to yield better play than minimax.

Previous investigations have proposed two hypotheses for why minimax might perform better in some games than in others: dependence/independence of siblings [Nau, 1982] and detection/non-detection of traps [Pearl, 1984]. Since sibling dependence generally makes rhf lower and early trap detection always makes rhf lower, these two hypotheses are more closely related than has previously been realized.

One could argue that for most real games it may be computationally intractable to measure rhf, since one would have to search the entire game tree. But since rhf is closely related to the strength of an evaluation function, one can generally make intuitive comparisons of rhf for various evaluation functions without searching the entire game tree. This becomes evident upon examination of the various evaluation functions discussed earlier in this paper.

There are several problems with the definition and use of rhf. Since it is a single number, rhf is not necessarily an adequate representation for the behavior we are trying to study. Furthermore, since the definition of rhf is tailored to the properties of minimax, it is not necessarily the best predictor of the performance of the product rule. Thus, the relationship between rhf and the performance of minimax versus product can be rather complex (as was shown in Section V). Further study might lead to better ways of predicting the performance of minimax, product, and other back-up rules.

References

[Abramson, 1985] Abramson, B., "A Cure for Pathological Behavior in Games that Use Minimax," First Workshop on Uncertainty and Probability in AI (1985).
[Ballard, 1983] Ballard, B. W., "Non-Minimax Search Strategies for Minimax Trees: Theoretical Foundations and Empirical Studies," Tech. Report, Duke University (July 1983).
[Chi & Nau, 1986] Chi, P. and Nau, D. S., "Predicting the Performance of Minimax and Product in Game Tree Searching," Second Workshop on Uncertainty and Probability in AI (1986).
[Hasagawa, 1977] Hasagawa, G., How to Win at Othello, Jove Publications, Inc., New York (1977).
[Nau, 1980] Nau, D. S., "Pathology on Game Trees: A Summary of Results," Proc. AAAI-80, pp. 102-104 (1980).
[Nau, 1982] Nau, D.
S., "An Investigation of the Causes of Pathology in Games," Artificial Intelligence, Vol. 19, pp. 257-278 (1982).
[Nau, 1983] Nau, D. S., "On Game Graph Structure and Its Influence on Pathology," Internat. Jour. of Comput. and Info. Sci., Vol. 12(6), pp. 367-383 (1983).
[Nau, Purdom, and Tzeng, 1985] Nau, D. S., Purdom, P. W., and Tzeng, C. H., "An Evaluation of Two Alternatives to Minimax," First Workshop on Uncertainty and Probability in AI (1985).
[Pearl, 1981] Pearl, J., "Heuristic Search Theory: Survey of Recent Results," Proc. IJCAI-81, pp. 554-562 (Aug. 1981).
[Pearl, 1984] Pearl, J., Heuristics, Addison-Wesley, Reading, MA (1984).
[Slagle & Dixon, 1969] Slagle, J. R. and Dixon, J. K., "Experiments with Some Programs that Search Game Trees," JACM, Vol. 16(2), pp. 189-207 (April 1969).
[Slagle & Dixon, 1970] Slagle, J. R. and Dixon, J. K., "Experiments with the M & N Tree-Searching Program," CACM, Vol. 13(3), pp. 147-154 (1970).
[Teague, 1985] Teague, A. H., "Backup Rules for Game Tree Searching: A Comparative Study," Master's Thesis, University of Maryland (1985).
Proof Analogy in Interactive Theorem Proving

T. Boy de la Tour and R. Caferra
LIFIA, BP 68, 38402 St Martin d'Heres Cedex, France
Telex: USMG 980 134F

Abstract

A method is presented to express and use syntactic analogies between proofs in interactive theorem proving and proof checking. Up to now, very few papers have addressed instances of this problem. The paradigm of "propositions as types" is adopted and proofs are represented as terms. The proposed method is to transform a known proof of a theorem into what might become a proof of an "analogous" (according to the user) proposition, namely the one to be proved. This transformation is expressed by means of second order pattern matching (this may be seen as a generalisation of rewriting rules), thus allowing the use of variable function symbols. For the moment, it is up to the user to discover the transformation rule, and the paper deals only with the problem of managing it. We explain the proposed analogy treatment with a fully developed running example.

In looking for a proof of a theorem it is very helpful to find "analogies" with proofs of already proved theorems in order to guide the discovery of the new proof. A typical example which can be found in mathematical texts is the statement "this theorem can be proved as the previous one". This sentence stands for a proof which is analogous to the designated one. But much more "analogy information" may be conveyed by the text. Actually, a larger amount of information seems to be needed if mechanization is considered. Many questions are raised by these simple intuitive observations; the more important are:

1. What does "analogy" mean in this context?
2. How to formalize in some way this analogy (especially in mechanized theorem proving or proof checking)?
3. What is the proof representation adapted to this notion of analogy?
4. At which level of abstraction is analogy useful and manageable?
5. Are we interested in syntactic, or semantic analogies, or both?
6. Are there tools adapted to the handling of this notion of analogy?
7. Is it interesting to modify (or extend) these tools in order to make them more powerful for the treatment of analogy understood with the adopted meaning?

(Partial support for this work was provided by the Centre National de la Recherche Scientifique (PRC Intelligence Artificielle).)

In this paper an attempt is made to partially answer most of these questions:

- Obviously, we cannot answer the first question in any general way, but we propose some kind of "syntactic analogy" and we leave (by now) to the user the task of discovering "analogies". A high level language and some flexibility to formalize (with constraints) these analogies in a nondeterministic way are offered to him.
- The user must formalize the analogies as second order transformation rules corresponding to the transformation from a proof to what is considered (by the user) as an analogous one. We may consider analogous proofs in a large spectrum with two ends: proofs are analogous just because they are proofs, or just because they are the same. But these kinds of analogy, either too general or not general enough, are useless. The adopted analogy must therefore not reach one or the other of these ends. We have chosen to emphasize analogies on the proofs' structure.
- We have decided to adopt the so-called "propositions as types" paradigm, and thus represent a proof as a term the type of which is the proposition being proved.
- We consider that a second order pattern matching algorithm is a good tool to be used in a first approach to syntactical analogy.
- Only syntactic analogies are manageable with the chosen tool.
- Some modification of Huet's second order pattern matching is currently being studied.

To our knowledge analogy has been considered in theorem proving in very few papers (see for example the pioneer work [Kling, 1971] and also [Plaisted, 1981] for the use of abstraction in the resolution method) and we do not know about papers treating analogy in a "proof as term" approach, which is the one chosen in the present work. In [Constable et al., 1985] it is suggested that

<< one can imagine writing very general "transformation tactics" (for details see [Constable et al., 1985]) to construct proofs by analogy to existing proofs >>

but no indication is given about how to tackle this problem.

The structure of the paper is as follows: In section 2 we present some generalities about analogy and explain why we have chosen second order transformation rules. In section 3 we expose the notion of proof as term and introduce the example which we fully develop in section 4 to set out our method. Section 5 basically evokes some problems raised by the chosen approach.

II. Some Remarks About Analogies Between Proofs

The following diagram shows how analogy is treated:

    proof-schema1  --- transformation rule --->  proof-schema2
        |                                             |
        | S: set of substitutions                     |
        | (yielded by the 2nd order                   |
        |  pattern matching algorithm)                |
        v                                             v
    proof1                               proposed-proofs =
    (known proof)                        { s(proof-schema2) | s in S }

In principle, analogies between proofs may be stated a posteriori in the metalanguage (using a (meta-)sentence expressing that a transformed term obtained from proof1 is itself a proof). This sentence can be proved in the metalanguage. But, in everyday mathematics, analogies are used in a nonformal manner. When a mathematician wants to formally use a proof transformation, he does metareasoning and not analogical reasoning. Moreover analogy is intrinsically an uncertain way of reasoning, which, if used, must be checked. The transformation rule inherits this intrinsic (and hazardous) uncertainty (it can denote something which is not always true). In some way, the non-unicity of the solutions of the matching, as explained below, brings a part of this uncertainty.

Three questions arise naturally:

- Why natural deduction oriented?
- Why proof as term?
- Why second order pattern matching?

It is a well known fact that natural deduction is a good formalization of mathematical reasoning (see for ex. [Gentzen, 1969]) and the representation of proofs as terms reflects the abstract proof structure (see for ex. [Constable et al., 1985], [Constable et al., 1986], [deBruijn, 1980], [Miller and Felty, 1986]). We have thus adopted this paradigm in our approach. Proof-representing terms are built from functional constant symbols denoting inference rules and first-order constants denoting axioms.

Having first-order variables in terms allows representation of partial proofs, which means proofs "containing" unproved lemmas (see [Gordon et al., 1979], [Milner, 1985]). That is, these first-order variables range over proof terms. A further generalization will allow us to use variables to denote inference rules or compositions of inference rules (considered as functions). This is obviously not possible if we restrict ourselves to first order terms, where function symbols are all constants.
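Before the example, a toy illustration of the proof-as-term idea may help: terms as nested tuples, with a function that applies a substitution to a proof schema. This first-order Python sketch omits the second order matching (where variables such as f and g range over functions) that the method actually requires; all names are illustrative.

    def subst(term, s):
        """Apply substitution s to a term; variables are strings like '?x'."""
        if isinstance(term, str):
            return s.get(term, term) if term.startswith("?") else term
        head, *args = term          # rule symbols (heads) stay constant here
        return (head, *(subst(a, s) for a in args))

    schema = ("imp-r", ("and-l", ("or-l", "?left_case", "?right_case")))
    s = {"?left_case": ("axiom", "p(a)"), "?right_case": ("axiom", "q(b)")}
    print(subst(schema, s))
    # -> ('imp-r', ('and-l', ('or-l', ('axiom', 'p(a)'), ('axiom', 'q(b)'))))

In the paper's setting, a schema like this is produced by the right hand side of a transformation rule, and the substitutions come from Huet-style second order matching against a known proof term rather than being written by hand.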
III

We adopt the set of inference rules found in [Miller and Felty, 1986], which is a slightly modified version of Gentzen's LK system [Gentzen, 1969]. We list below only the ones we use in the following example:

  axiom:   A → A
  and-l:   from  A, B, Γ → Δ  infer  A ∧ B, Γ → Δ
  or-l:    from  A, Γ → Δ  and  B, Γ → Δ  infer  A ∨ B, Γ → Δ
  imp-l:   from  Γ → A, Δ  and  B, Σ → Π  infer  A ⇒ B, Γ, Σ → Δ, Π
  imp-r:   from  A, Γ → B, Δ  infer  Γ → A ⇒ B, Δ
  all-l:   from  A[t/x], Γ → Δ  infer  ∀x A, Γ → Δ
  some-r:  from  Γ → A[t/x], Δ  infer  Γ → ∃x A, Δ
  thin-l:  from  Γ → Δ  infer  A, Γ → Δ

These inference rules considered as functional constants have polymorphic types. We write t : T to say that t is a well-formed proof term and T is a ground type (i.e. a sequent) which is an instance of the principal type of t. Actually, it does not say more than: t is a proof of T. Well-formedness and type inferencing on terms in polymorphic signatures are quite difficult problems and we shall not discuss them here. In the following we shall assume available a decision procedure for the correctness of such an expression t : T.

In the following, we use a second order pattern matching algorithm from [Huet and Lang, 1978]. This algorithm receives on input a subset of second order λ-calculus which is enough here, and computes a complete and minimal set of unifiers. See [Huet and Lang, 1978] for details.

We are now going to develop an example (from [Miller and Felty, 1986]) to set out the different steps of the proposed method. The starting point is a proof of the sequent

  seq1: → (p(a) ∨ q(b)) ∧ ∀x(p(x) ⇒ q(x)) ⇒ ∃x q(x)

The proof, which we shall call proof1, is the following derivation (listed from the leaves down):

  q(a) → q(a)                                       axiom
  q(a) → ∃x q(x)                                    some-r
  p(a) → p(a)                                       axiom
  p(a) ⇒ q(a), p(a) → ∃x q(x)                       imp-l
  ∀x(p(x) ⇒ q(x)), p(a) → ∃x q(x)                   all-l
  q(b) → q(b)                                       axiom
  q(b) → ∃x q(x)                                    some-r
  ∀x(p(x) ⇒ q(x)), q(b) → ∃x q(x)                   thin-l
  p(a) ∨ q(b), ∀x(p(x) ⇒ q(x)) → ∃x q(x)            or-l
  (p(a) ∨ q(b)) ∧ ∀x(p(x) ⇒ q(x)) → ∃x q(x)         and-l
  → (p(a) ∨ q(b)) ∧ ∀x(p(x) ⇒ q(x)) ⇒ ∃x q(x)       imp-r

proof1 is represented by the term:

  imp-r(and-l(or-l(all-l(imp-l(axiom(p(a)), some-r(axiom(q(a))))),
                   thin-l(some-r(axiom(q(b)))))))

We thus have ⊢ proof1 : seq1. Let us now try to prove the following sequent:

  seq2: → (p(a) ∨ r(b)) ∧ ∀x(p(x) ⇒ q(x)) ∧ ∀x(q(x) ⇒ r(x)) ⇒ ∃x r(x)

Of course, one can prove it without any knowledge about the proof of seq1, but it may be easier to use some information carried by proof1. Moreover, we think that the human reader, having read and understood the proof of seq1, cannot try to prove seq2 without using proof1, at least unconsciously.

The usual way to use a proof is to have it as a subterm of the proof we are looking for. In our example, we can see that this is certainly possible here also, using the lemma:

  → ∀x(p(x) ⇒ q(x)) ∧ ∀x(q(x) ⇒ r(x)) ⇒ ∀x(p(x) ⇒ r(x))

But this actually implies some metalevel reasoning (one can replace a subformula by an equivalent one, etc.), and the whole process of proving metatheorems and using them is a quite difficult and long task. We shall not always want, or be able, to find and prove general results during the mathematical work. Our goal here is to draw closer to informal remarks we can make after a quick analysis of proof1:

1. The last three rules are imp-r(and-l(or-l(...))). They are used to connect and transform the p(a) case and the q(b) case into the right sequent seq1. The only change to prove seq2 will be to add an and-l rule to "connect" the extra hypothesis ∀x(q(x) ⇒ r(x)).

2. On the right hand side of the tree, there is a "quick" reasoning on q(b), which we call g, followed by an application of the thinning. To prove seq2, we shall have to add one thinning.

3. On the left hand side of the tree, we can find the same quick reasoning g, but this time applied to q(a). Then follows something, say k, to get the p(a) case.
At this point of the analysis of proof1 — or, we could better say, at this level of analogy between proof1 and proof2 — we can write a transformation rule:

  f(or-l(k(g(q(a))), i(thin-l(g(q(b))))))
    → f(and-l(or-l(k'(g(r(a))), i(thin-l(thin-l(g(r(b))))))))

where f, g, i, k and k' are second order variables with appropriate types. Of course, one can be more precise in giving a type to these variables, depending on the polymorphic possibilities. As above, we do not discuss this topic. Moreover, Huet's second order pattern matching runs on a slightly restricted second order λ-calculus with simple types (sorts) (see [Huet and Lang, 1978] and also [Bundy, 1983]). We thus cannot (for the moment) use the polymorphic type discipline in the pattern matching, the only consequence of which is to bring more unifiers, the extra ones being useless.

Some remarks concerning this transformation rule:

- We have introduced the variables f and i because we are not only interested in the analogy between proof1 and proof2 (the proof of seq2 we are looking for), but we have in mind a more general analogy.

- The variable k' only appears in the right hand side of the rule, and thus it cannot be instantiated by any unifier resulting from the matching with the left hand side. Therefore, this transformation rule does not produce proof terms, but proof schemas. The free variables appearing in them are to be instantiated by a theorem prover (the type of these instantiations is known, given the sequent to be proved by the schema; instantiating a first order variable is to prove a lemma, instantiating a second order variable is to find a deduction).

The pattern matching applied to proof1 with the left hand side of the rule gives a set of 14 unifiers (we have implemented Huet's matching algorithm in Common Lisp running on a SUN), and thus we obtain 14 terms by applying these unifiers to the right hand side. We do not list them all, as most of them are to be deleted by the type inferencing process (given seq2). In this example, the only remaining term is:

  imp-r(and-l(and-l(or-l(k'(some-r(axiom(r(a)))),
                          thin-l(thin-l(some-r(axiom(r(b)))))))))

and k' must be instantiated with the type

  r(a) → ∃x r(x)  ⟹  p(a), ∀x(p(x) ⇒ q(x)), ∀x(q(x) ⇒ r(x)) → ∃x r(x)

(i.e., k' maps a proof of the left sequent to a proof of the right one). This is of course possible using the unifier:

  ⟨k'; λx. all-l(imp-l(axiom(p(a)), all-l(imp-l(axiom(q(a)), x))))⟩

where x is a first order variable of type r(a) → ∃x r(x). This gives proof2 of type seq2.

Furthermore, we can apply the transformation rule to proof2 to find a proof of seq3, that is:

  → [(p(a) ∨ s(b)) ∧ ∀x(p(x) ⇒ q(x)) ∧ ∀x(q(x) ⇒ r(x)) ∧ ∀x(r(x) ⇒ s(x))] ⇒ ∃x s(x)

For that purpose, the rule must be slightly modified: replacing q by r and r by s. The result of the pattern matching is a set of 21 unifiers, and at the end we obtain the term

  imp-r(and-l(and-l(and-l(or-l(k'(some-r(axiom(s(a)))),
                                thin-l(thin-l(thin-l(some-r(axiom(s(b)))))))))))

and to get proof3, k' is replaced by

  λx. all-l(imp-l(axiom(p(a)), all-l(imp-l(axiom(q(a)), all-l(imp-l(axiom(r(a)), x))))))

This is the more general analogy we were talking about.

Now that we feel more comfortable with the problem of expressing analogies with transformation rules, we may refine the analogy between proof1 and proof2 to get a better transformation rule, i.e. one such that there will be no free variables to be instantiated by a theorem prover. The troublesome part of the previous analogy lies in the third point, where we didn't try to look into the "something" that yields the p(a) case.
But with a further analysis of the proof, we can see how it works. This "something" is built from a repetition of "something else", say h, on p(a), then on q(a), and so on. There is still some fuzziness remaining in that description. If we wanted, we surely could be more precise, and so on until we would find (alone...) the searched proof. We will rather let the computer do these last steps on its own, and thus we don't mind that "something else". Let us write the transformation rule expressing this level of analogy:

  f(or-l(k(h(p(a), g(q(a)))), i(thin-l(g(q(b))))))
    → f(and-l(or-l(k(h(p(a), h(q(a), g(r(a))))), i(thin-l(thin-l(g(r(b))))))))

In this rule, all the free variables in the right hand side are free in the left hand side. That is what we were looking for, but does it work? Does it actually build the searched proof? The only way to know is to try! The pattern matching with proof1 computes 64 unifiers. In the 64 terms then obtained we can find proof2. The corresponding unifier is:

  ((g lambda (x) (some-r (axiom x)))
   (i lambda (x) x)
   (h lambda (x y) (all-l (imp-l (axiom x) y)))
   (k lambda (x) x)
   (f lambda (x) (imp-r (and-l x))))

Therefore, this analogy is correct to get a proof of seq2 from proof1. Moreover, we can say it is complete, as it doesn't leave anything to prove: only proof checking (type inferencing) is needed here. As in the previous analogy, this one can be used to solve some other problems, at least the demonstration of seq3. The pattern matching with proof2, using the rule where we have replaced p, q and r by q, r and s respectively, brings 148 unifiers, among which we find the right one to get proof3:

  ((g lambda (x) (some-r (axiom x)))
   (h lambda (x y) (all-l (imp-l (axiom x) y)))
   (i lambda (x) (thin-l x))
   (k lambda (x) (all-l (imp-l (axiom (p a)) x)))
   (f lambda (x) (imp-r (and-l (and-l x)))))

We now set out the proposed method in an algorithmic way:

1. The user already has ⊢ proof1 : thm1 and he has a formula (or sequent) thm2 he wants to prove, which he thinks is a problem analogous to the solved one.
2. He writes (or uses an already written) transformation rule proof_schema1 → proof_schema2 containing first or second order variables.
3. The matching is done between proof_schema1 and proof1, computing a finite set S of unifiers.
4. The corresponding instances of proof_schema2 are computed. Let T = {σ(proof_schema2) | σ ∈ S}.
5. The proof checking is attempted on every element of T. We then get T' = {t ∈ T | ⊢ t : thm2}.
6. If the terms in T' have free variables, a theorem prover tries to instantiate them all in every t in T'. We then have T'' = {σt | t ∈ T' ∧ σt is ground ∧ ⊢ σt : thm2} (equal to T' if there are no free variables in T').
7. If T'' is empty, the analogy fails. Otherwise it succeeds, and we can choose, for example, the shortest proof in T'' if there are several.

At any step from 3 to 6, a failure test (on emptiness of the computed set) can be added.

We have presented a method which we consider to be a first step towards a partial solution of the problem: "How to formalize a notion as powerful and frequently employed in human mathematical reasoning as proof analogy?" Many problems strongly related to the principal subject of the present work have not been treated here. We are now working on them in order to have a deeper grasp of the ideas evoked in this paper.
The problems are essentially those mentioned in questions 6 and 7 of the Introduction, and they are:

- Some possible modifications of Huet's matching algorithm:
  - Try to take into account full types (to make types represent sequents). It will drastically diminish the number of unifiers, thus increasing the efficiency of the algorithm.
  - Maybe eliminate from the result the constant functions (i.e. λx.t, where x does not appear in t), which are not, a priori, useful in analogy. We do not necessarily need a complete set of unifiers.
  - The matching algorithm works on λ-terms, and we only use a subset of these. Can the language be given more powerful expressive facilities in which to write transformation rules?
- Is it possible to help the user to improve an initially wrong or not quite interesting transformation rule, exploiting failures of the matching algorithm? More generally, is it possible to automatically build and use these rules?
- Is it possible to incorporate the kind of analogy presented in this paper to help (and hopefully guide) the "transformation tactics" presented in works such as those of Constable et al. ([Constable et al., 1985], [Constable et al., 1986])?

We thank Ph. Schnoebelen for useful comments on an earlier draft of this paper.

References

[Boyer and Moore, 1984] Robert S. Boyer and J. Strother Moore. Proof Checking, Theorem Proving and Program Verification. In Automated Theorem Proving: After 25 Years, pages 119-132. Contemporary Mathematics Vol. 29, American Mathematical Society, 1984.

[Bundy, 1983] Alan Bundy. The Computer Modelling of Mathematical Reasoning. Academic Press, 1983.

[Constable et al., 1985] Robert L. Constable, Todd B. Knoblock, and Joseph L. Bates. Writing programs that construct proofs. Journal of Automated Reasoning 1:285-326, 1985.

[Constable et al., 1986] Robert L. Constable et al. Implementing Mathematics with the Nuprl Proof Development System. Prentice-Hall, 1986.

[Davis and Schwartz, 1979] Martin Davis and Jacob T. Schwartz. Metamathematical extensibility for theorem verifiers and proof-checkers. Comp. & Maths. with Appls., 5:217-230, 1979.

[deBruijn, 1980] N. G. deBruijn. A Survey of the project AUTOMATH. In To H.B. Curry: Essays on Combinatory Logic, Lambda Calculus and Formalism, pages 579-606. Academic Press, 1980.

[Gentzen, 1969] Gerhard Gentzen. Investigations into Logical Deduction. In The Collected Papers of Gerhard Gentzen, pages 68-131. North-Holland, 1969.

[Gordon et al., 1979] Michael J. Gordon, Arthur J. Milner and Christopher P. Wadsworth. Edinburgh LCF: A Mechanized Logic of Computation. Lecture Notes in Computer Science 78, Springer-Verlag, 1979.

[Huet and Lang, 1978] Gérard Huet and Bernard Lang. Proving and applying program transformations expressed with second-order patterns. Acta Informatica 11:31-55, 1978.

[Huet, 1986] Gérard Huet. Deduction and Computation. Rapport de Recherche INRIA No. 513, April 1986.

[Kling, 1971] Robert E. Kling. A paradigm for reasoning by analogy. Artificial Intelligence 2:147-178, 1971.

[Knoblock and Constable, 1986] Todd B. Knoblock and Robert L. Constable. Formalized Metareasoning in Type Theory. TR 86-742, Cornell University, March 1986.

[Miller and Felty, 1986] Dale Miller and Amy Felty. An integration of resolution and natural deduction theorem proving. In Proceedings AAAI-86, pages 198-202, Philadelphia, Pennsylvania. American Association for Artificial Intelligence, August 1986.

[Milner, 1985] Robin Milner. The use of machines to assist in rigorous proof.
In Mathematical Logic and Programming Languages, pages 77-88. Prentice-Hall, 1985.

[Plaisted, 1981] David A. Plaisted. Theorem proving with abstraction. Artificial Intelligence 16:47-108, 1981.

[Takeuti, 1975] Gaisi Takeuti. Proof Theory. North-Holland, 1975.
1987
33
625
Reasoning About Exceptions During Plan Execution Monitoring

Carol A. Broverman and W. Bruce Croft
Department of Computer and Information Science
University of Massachusetts
Amherst, Massachusetts 01003

Abstract

In a cooperative problem-solving environment, such as an office, a hierarchical planner can be incorporated into an intelligent interface to accomplish tasks. During plan execution monitoring, user actions may be inconsistent with system expectations. In this paper, we present an approach towards reasoning about these exceptions in an attempt to accommodate them into an evolving plan. We propose a representation for plans and domain objects that facilitates reasoning about exceptions.

I. Interactive planning and exceptional occurrences

Hierarchical planners incrementally develop a plan at different levels of abstraction, imposing linear orderings at each stage of the expansion to eliminate subgoal interactions [Sacerdoti(1977), Tate(1977), Wilkins(1984)]. The execution of the plan's primitive actions must be monitored to ensure success. Exceptions and interruptions are common occurrences, and the planner must react to new information made available during the various stages of plan construction and execution. Existing plans may require modification or new plans may have to be generated.

We are concerned with using a planner as a support tool in a cooperative problem-solving environment such as an office [Broverman and Croft(1985), Croft and Lefkowitz(1984)].1 In such an environment, the planner is not viewed as an omnipotent agent with complete knowledge of the domain and procedures for accomplishing all plan steps. Rather, it aids the user in performing correct and consistent tasks. The operation of the planner depends heavily on interaction with the user in order to allow user control and to draw on the users' domain knowledge. Interactive planners necessarily interleave plan generation and execution, since user actions determine the course of future events.

Previous planners have provided general replanning actions which are invoked in response to problems in the plan resulting from the introduction of an arbitrary state predicate or "fact" [Hayes(1975), Sacerdoti(1977), Wilkins(1985)]. In these systems, the replanning techniques provided do not attempt to reason about failing conditions or possible serendipitous effects of the exception. These methods simply make use of the explicitly linked plan rationale to detect problems and determine what violated goals need to be reachieved. We view this type of replanning as a reactive tactic involving little intelligence, and reserve its use for exceptions generated by external agents.2

To address the problems associated with interactive planning, we propose extending the traditional replanning approach. When a user action deviates from the planner's predictions, the system should exploit available knowledge in an attempt to explain the exceptional behavior. Such a constructive approach is preferred to replanning, since replanning, in this case, would attempt to achieve goals that the user deliberately chose not to pursue.

1This work is supported by the Air Force Systems Command, Rome Air Development Center, Griffiss Air Force Base, New York 13441-5700, the Air Force Office of Scientific Research, Bolling Air Force Base, District of Columbia 20332, under contract F30602-85-C-0008, and by a contract with Ing. C. Olivetti & C.

2The planner attempts to satisfy a number of agents. The user(s) are regarded as internal agents, while agents are considered to be external if the system lacks a model for their behavior (e.g., the real world).
This paper discusses reasoning about exceptional occurrences as an approach towards incorporating exceptions into a consistent plan. In the next two sections, we describe an interactive planner and the elements of our representation which are used to support the reasoning process. We then outline the types of exceptions that can occur and algorithms for handling them, within the context of an example taken from the domain of real estate.

Input to our interactive planner is provided as an abstract goal specification, and the output is a partially or fully expanded procedural net, with partial temporal ordering (similar to other hierarchical planners [Sacerdoti(1977), Tate(1977), Wilkins(1984)]). A procedural net contains goal nodes, action nodes, and phantom nodes (goal nodes which are trivially true), along with links representing the causal structure of the plan. Since complete expansion of the initial goal may require additional information from the user, only action nodes are considered primitive, and thus executable.

We distinguish between those primitive action nodes which the system is able to carry out using available tools (system-executable) and those which must be executed by the user (user-executable). An action node may be both system-executable and user-executable, in which case automation is preferred. An example of an action which may be solely user-executable could be the cancellation of an order; the decision to cancel must be initiated by the user and thus can be modeled as a decision action occurring "offline" [Broverman et al.(1986)]. Transferring information from a purchase request to an order form, however, is a primitive action which may either be performed by the user or automated.

At any point during the planning and execution of a task, an expected-actions list contains the set of user-executable primitive actions which are not preceded by unexpanded goal nodes. This is the set of actions which are predicted by the system to occur next. As each system-executable or user-executable action is performed, the procedural net is expanded further, producing an updated expected-actions list. A user action may be inconsistent with system expectations, in which case it is flagged as an exceptional occurrence.

III. A representation for plans and domain objects

An important part of our approach is a uniform object-based representation of activities, objects, agents and relationships3 [Broverman and Croft(1985)]. An integrated abstraction hierarchy (see Figure 2) combined with a powerful constraint language facilitates the representation and use of more sophisticated knowledge about plans, such as the policies of McDermott [McDermott(1978)]. The reasoning process described in the next section exploits this object-based representation. A similar approach has been used by Alterman [Alterman(1986)] and Tenenberg [Tenenberg(1986)] to represent old plans that are adapted to new situations.

The major features of our representation are taxonomic knowledge, aggregation, decomposition, resources, plan rationale, and relationships. Each of these is defined and illustrated using an example from the domain of house-purchasing, shown in Figures 1 and 2.
Figure 1 depicts a partially expanded procedural net fragment which represents the portion of a house-buying task which remains after a house has been selected for purchase. Figure 2 shows a portion of the domain knowledge relevant to this task.

Any complex entity can be viewed as a composition of several other objects as well as an aggregation of properties. An abstract activity object which can be decomposed into more detailed substeps has a steps property containing a partial ordering of more detailed activity steps. Decomposition of a domain object into other objects is expressed as a set of object types named in a parts property. The aggregation of all properties of either an activity or domain object, including decomposition information, constitutes the object definition.

3In the remainder of the paper, we refer to plan descriptions as activities and objects of the domain simply as objects.

[Figure 1 (graphic omitted): Example procedural net fragment]

  apply:
    steps: (follows <go-to-place <place>> <fill-out-form <application-form>>)
    resources: <application-form>
    effects: pending(<application-form>)
  application-form:
    applicant: ...
    manipulated-by: <apply>
  apply-for-mortgage:
    steps: (follows <go-to-place <bank>> <fill-out-form <mortgage-application-form>>)
  mortgage-application-form (a subtype of application-form)

  Figure 2: Fragment of knowledge base

All entities are represented in a type hierarchy, with inheritance along is-a links between types and their subtypes. Entities inherit the properties and constraints of their supertypes. For example, a mortgage-application-form has various fields which are inherited from the more general form object, and obeys the constraint stating that it can be manipulated by an apply type of activity (inherited from application-form). Activities inherit the preconditions and effects of their supertypes, as well as decomposition information. For example, any apply activity may be decomposed into an activity of type go-to-place followed by fill-out-form. Apply-for-mortgage is a subtype of apply and thus inherits and specializes this decomposition. Apply-for-mortgage also inherits the effect pending(application-form).

An activity has an associated set of effects which are asserted upon its completion. Effects are represented as predicates on domain objects. The goal of the activity is a distinguished main effect and is used for matching during plan expansion. An activity schema also includes a declaration of the types of domain objects it may manipulate. The inverse of this resources property is the manipulated-by property expressed in domain objects to indicate which types of activities may affect them. The union of an activity schema with the descriptions of associated object types provides a rich semantic representation of the domain, incorporating objects and operators.

Causal knowledge is represented by goal properties and purpose links. Goals are of a global nature, in that they relate an activity to a representation of its intent; that is, they state what this activity accomplishes regardless of the context of the current procedural net.
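To make the inheritance behavior concrete, here is a small Common Lisp sketch of the Figure 2 fragment (ours, not the paper's system, which presumably uses a richer frame language with constraints; the table layout and GET-PROPERTY are hypothetical):

```lisp
;; Hypothetical encoding (ours) of the Figure 2 fragment: each type is a
;; plist with an :is-a link; GET-PROPERTY inherits property values up the
;; type hierarchy, so apply-for-mortgage inherits the effects of apply.

(defparameter *types*
  '((apply
     :is-a activity
     :steps ((go-to-place <place>) (fill-out-form <application-form>))
     :resources (<application-form>)
     :effects ((pending <application-form>)))
    (apply-for-mortgage
     :is-a apply
     :steps ((go-to-place <bank>) (fill-out-form <mortgage-application-form>)))))

(defun get-property (type property)
  "Look PROPERTY up on TYPE, climbing :is-a links until a value is found."
  (let ((entry (cdr (assoc type *types*))))
    (when entry
      (or (getf entry property)
          (get-property (getf entry :is-a) property)))))

;; (get-property 'apply-for-mortgage :steps)
;;   => ((GO-TO-PLACE <BANK>) (FILL-OUT-FORM <MORTGAGE-APPLICATION-FORM>))
;; (get-property 'apply-for-mortgage :effects)
;;   => ((PENDING <APPLICATION-FORM>))   ; inherited from apply
```

Under this scheme apply-for-mortgage overrides steps locally while still inheriting effects, matching the specialization described above.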
Purpose links may be placed between two plan substep nodes in both static and dynamic plan representations, to indicate that a substep of a plan produces a state required for the proper execution of a later substep, much like NONLIN's goal structure [Tate(1977)]. The purpose links prove to be particularly important in determining whether or not an exception can easily be incorporated into an existing plan.

Arbitrary relationships may also exist between domain objects. For example, a seller relation may be depicted between an individual and a certain house, expressing the fact that someone is selling a particular house. A special type of relationship which may exist between two objects is a transformation relation, which contains a procedural attachment for producing the correct instance of one type of object associated with the instance of the second object type. For example, the abstract class object address may be related to telephone-number through a special transformation specification which indicates that a phone call using a phone-number may produce the corresponding address.

A user action occurs within the context of predictions made by the system. Exceptions can be generated by unanticipated user actions. Because of the inherent open-endedness of the domain, an unexpected occurrence may in fact be a valid semantic action, not recognized as such because of an inaccurate or incomplete activity description. Referring back to our example depicted in Figures 1-2, we can imagine the following possible scenarios:

(a) Suppose receive-mortgage-approval has occurred. We are expecting an inspect-house action by the user. Instead, the user executes the first step of the close-on-home procedure, go-to-closing-location. This is an instance of a step-out-of-order exception, since this step is expected, but not until later in the plan.

(b) Suppose the purchase-and-sale-agreement has been signed, and the system next expects the user to start carrying out the steps to obtain a mortgage (go-to-bank). Instead, a sell-stock action is taken by the user, generating an unexpected-action exception.

(c) Suppose that while the user is waiting for his mortgage to be approved, his friend from the bank stops in the office and hands him a hard copy of the approval. Since the normal way of receiving approval is in the form of an electronic message, the user simply offers a user-assertion by introducing the predicate approved(mortgage).

(d) Suppose that while executing the fill-out-form substep of the apply-for-mortgage step, the user fills in the address field with a phone-number instead of an address, triggering a constraint violation. This is a case of an expected action, unexpected parameter type of exception, where a static object constraint violation has occurred. Unexpected parameters can result in violations of other types of constraints, such as a static constraint in the activity schema, or a constraint dynamically posted on a domain object by an activity instance.

The above scenarios illustrate the classes of unexpected occurrences which can arise. Actions can be out-of-order or completely unexpected. A user-assertion arbitrarily introduced to the system may have implications for the current plan; a user assertion is modeled as an unexpected action with the assertion as its main effect, and is treated as such. An expected action may occur with an unexpected parameter, resulting in the violation of a static or dynamically posted object constraint, or the violation of a constraint within the plan itself. In the following sections, we develop algorithms for reasoning about the various types of exceptions, and show how each of the above scenarios can be resolved, resulting in a consistent plan.
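As a first approximation, the classification step can be pictured as follows (our sketch; CLASSIFY-ACTION and its argument lists are hypothetical simplifications of the expected-actions machinery described earlier):

```lisp
;; Simplified classifier (ours): compare a user action against the current
;; expected-actions list and the not-yet-enabled remainder of the plan to
;; produce one of the exception classes discussed above.

(defun classify-action (action expected-actions later-actions)
  (cond ((member action expected-actions :test #'equal) :expected)
        ((member action later-actions :test #'equal) :step-out-of-order)
        (t :unexpected-action)))

;; Scenario (a): go-to-closing-location is in the plan but not yet expected.
;; (classify-action '(go-to-closing-location)
;;                  '((inspect-house))
;;                  '((go-to-closing-location) (sign-papers)))
;;   => :STEP-OUT-OF-ORDER
```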
An expected action may occur with an unexpected parameter, resulting in the vio- lation of a static or dynamically posted object constraint, or the violation of a constraint within the plan itself. In the foiiowing sections, we deveiop aigorithms ior reason- ing about the various types of exceptions, and show how each of the above scenarios can be resolved, resulting in a consistent plan. 192 Planning architecture for ~~~~gy-A6,im-l Jy -a--- While this paper focuses primarily on the reasoning process used to handle exceptions, a general architecture designed to accommodate exceptional occurrences is shown in Fig- ure 3. Several of the modules are similar to those described in other hierarchical planners, specifically [ Wilkins( 1985)]. We have extended the basic replanning model to include additional modules (highlighted in Figure 3) to address exception handling. Exceptions are detected by the exe- cution monitor and classified by the exception classifier. Violations in the plan caused by the introduction of an ex- ception are computed by the plan critic. Real-world (not user-generated) exceptions are handled by the replanner. ml-- ---l--f---------L ___- L---- -1--L-1 ?--Z---II--A- AL-L I ne repranning approacn we nave aaopLea IS simuar 60 wit6 of [Wilkins(1985)], where one or more of a set of general replanning actions is invoked in response to a particular type of problem introduced into a plan by an exceptional occurrence. For interactive planning, we extend the set of general replanning actions to include the insertion of a new goal into the plan. The exception analyst applies available domain knowl- edge in an attempt to construct an explanation of an excep- tion. Its primary function is to determine the relationships and compatibility of the actual events to the expected ac- tions, goals and parameters. The particular entity relation- ships investigated by the exception analyst are determined by the type of internal exception. The exception analyst may be triggered by both external and internal exceptions, although it is primarily used for internal exceptions. The paradigm of negotiation [Fikes(1982)] has been Used a;Q a mzOdel for reaching apA agreement; RTTlnTlD ntwnts .aL’L’.sa‘e -b-““” on a method for accomplishing a task. We propose to use negotiation for establishing a consensus among agents who are affected by an exception. The negotiator determines t.hP set nf affortd n0ent.s and IICPC t.ho infnrmntinn nmvic-ld “&IV uvu VI . ..nll” Y “VU ..mb”“‘” -**.A uvvv “IA” 1ALIVA AL-IYUIVII =A v 1 .U”U by the exception analyst to present suggested changes to the original plan. We distinguish between eflecting and aflected agents with regard to the occurrence of an exception. The eflect- ing agent is that agent who has caused the exception. An ufected agent is one whose interests are influenced (either ., . positrveiy or negativeiy) by the exception. Affected agents are those who are “responsible” for the parts of the plan where problems are detected by the plan critic. An exter- nal agent can never be an afected agent, since the system has no modei of an externai agent’s interests or behavior. Using information provided by the exception analyst about relationships between actuai and expected vaiues, the negotiator initiates an exchange between the effecting agent and the affected agents. The negotiator and plan critic execute in a loop in which the plan critic analyzes changes suggested by the negotiator to detect any probiems introduced. 
This loop is exited when no further problems are detected by the plan critic and all affected agents are satisfied. Figure 3: An architecture for a cooperative planner ‘The negotiator aiso directs the acquisition of iniorma- tion from the user, if required, again using a trace of the exception analyst’s search to guide the questioning. Nego- tiation may also be invoked upon the failure of replanning. If the negotiator or repianner produces a consistent expia- nation of the exception, control is returned to the planner to continue plan execution and generation. A successful negotiation can result in a system which has “learned,” that is, the static domain plans may be augmented with knowledge about the exception and thus enhances the sys- tem’s capability to handle future similar exceptions. The behavior 0i the exception anaiyst is guided by some general principles derived from the type of the exceptional occurrence. A step-out-of-order exception, for example, may imply that the user may be attempting a short-cut, while an unexpected action exception may be eventually recognized as an intentional substitution of the unantici- pated action for the expected action. The exception an- alyst performs a controlled exploration throughout the knowledge base which is guided by the current state of the procedural network as well as the type of exception which has occurred. If a number of strategies are possi- ble, the least costly is attempted first. In the following sections, we present algorithms for handling the various example scenarios developed in section 3. A. hen the action taken doesn’t match an expected OE‘12 If a user performs an action which doesn’t have a match on the expected-actions list, the exception classifier is invoked to determine whether this action is entirely unexpected or Broverman and Croft 193 between the object provided as the actual parameter value and the object which was ezpecte& as the parameter value. The exception analyst attempts to establish the following: l The two objects may both be manipulated-by activi- ties which belong to a common activity superclass. If so, they probably are utilized in similar fashions. There may be any number of other re~at~o~s~~~s be- tween the two objects. Specifically, a trunsjormation relationship may link the object provided with the ex- pected object, describing a method to the obtain the expected parameter value. To handle scenario (d), the exception ana- lyst notes that the ~~one-~u~~er object and &dress objects are linked through a trunsjor- nation relationship, specifying that a proce- dure call may be used on the phone number to produce the corresponding address. 4, The discrepancy between the two parameters may re- sult from d~~er~ng quantities of the object type. If so, an excess may or may not be allowable. The semantics associated with the underlying data type are partic- ularly important when handling quantity discrepan- cies, since commonsense reasoning may be required. For example, if the go~to-~u~~ step was supposed to result in withdrawing 50 do~Zars, emerging with 100 may not be problematic, but baking a cake in a 450 degree oven when the recipe calls for 350 degrees may have unsatisfactory results, The two objects may have a common ancestor in the object hierarchy. If so, the exception analyst con- structs the set of features untrue to the expected ob- ject, since the lack of these features in the object ac- tually provided as the parameter value may be prob- lematic. 
This information collected by the exception analyst is used during negotiation to establish whether the exceptional parameter should be allowed. The scope of the knowledge base which may be affected by the exception is dependent on the type of constraint violation which has occurred. Modifications and consequences which may result from a static object constraint violation, for example, are localized to the static knowledge base, while plan constraint violations and dynamic object constraint violations may have more far-reaching consequences for the remainder of the plan.

VII. Status

Implementation of a prototype which incorporates the ideas presented in this paper is currently underway. One of the major aims of this work is to augment domain plans with knowledge acquired during exception handling. We are currently looking at the issue of propagating change in an object-based representation.

References

[Alterman(1986)] Alterman, R. "An adaptive planner", Proceedings of AAAI-86, 65-69, 1986.

[Broverman and Croft(1985)] Broverman, C.; Croft, W.B. "A knowledge-based approach to data management for intelligent user interfaces", Proceedings of VLDB 11, Stockholm, 96-104, 1985.

[Broverman et al.(1986)] Broverman, C.A., Huff, K.E., Lesser, V.R. "The role of plan recognition in design of an intelligent user interface", Proceedings of IEEE Conference on Man, Machine, and Cybernetics, 863-868, 1986.

[Croft and Lefkowitz(1984)] Croft, W.B.; Lefkowitz, L.S. "Task support in an office system", ACM Transactions on Office Information Systems, 2:197-212, 1984.

[Fikes(1982)] Fikes, R.E. "A commitment-based framework for describing informal cooperative work", Cognitive Science, 6:331-347, 1982.

[Hayes(1975)] Hayes, P.J. "A representation for robot plans", Proceedings IJCAI-75, 181-188, 1975.

[McDermott(1978)] McDermott, D.V. "Planning and Acting", Cognitive Science, 2, 1978.

[Sacerdoti(1977)] Sacerdoti, E.D. A Structure for Plans and Behavior, Elsevier North-Holland, Inc., New York, NY, 1977.

[Tate(1977)] Tate, A. "Generating project networks", Proceedings IJCAI-77, Boston, 888-893, 1977.

[Tenenberg(1986)] Tenenberg, J. "Planning with Abstraction", Proceedings of AAAI-86, 76-80, 1986.

[Wilkins(1984)] Wilkins, D.E. "Domain-independent planning: Representation and plan generation", Artificial Intelligence, 22:269-301, 1984.

[Wilkins(1985)] Wilkins, D.E. "Recovering from execution errors in SIPE", SRI International Technical Report 346, 1985.
1987
34
626
Incremental Causal Reasoning

Thomas Dean and Mark Boddy1
Department of Computer Science
Brown University
Providence, RI 02912

Abstract

Causal reasoning comprises a large portion of the inference performed by automatic planners. In this paper, we consider a class of inference systems that are said to be predictive in that they derive certain causal consequences of a base set of premises corresponding to a set of events and constraints on their occurrence. The inference system is provided with a set of rules, referred to as a causal theory, that specifies, with some limited accuracy, the cause and effect relationships between objects and processes in a given domain. As modifications are made to the base set of premises, the inference system is responsible for accounting for all and only those inferences licensed by the premises and current causal theory. Unfortunately, the general decision problem for nontrivial causal theories involving partially ordered events is NP-complete. As an alternative to a complete but potentially exponential-time inference procedure, we describe a limited-inference polynomial-time algorithm capable of dealing with partially ordered events. This algorithm generates a useful subset of those inferences that will be true in all total orders consistent with some specified partial order. The algorithm is incremental and, while it is not complete, it is provably sound.

I. Introduction

We are concerned with the process of incrementally constructing nonlinear plans (i.e., plans represented as sets of actions whose order is only partially specified). A significant part of this process involves some means for predicting the consequences of actions and using these consequences to verify whether or not a given partially constructed plan is likely to succeed. Of course, if by "likely to succeed" we mean that the choices made thus far in constructing the partial plan will not require further revision, then it is obvious that this verification step subsumes the entire process of planning. Usually, by "likely to succeed" we mean something like: given a partially ordered set of tasks and their intended effects, make sure that there is at least one total ordering consistent with the initial partial order such that all of the tasks have their intended effects. At first blush, determining whether a given partially constructed plan satisfies this criterion appears to be a significantly easier problem than the general planning problem. Unfortunately, if the language used to represent plans, tasks, and their effects is sufficiently expressive and we use asymptotic complexity as our measure of difficulty, the problem faced by the temporal reasoning component is just as difficult as the general planning problem. This shouldn't surprise anyone, but neither should it discourage anyone from employing classical planning techniques. It does indicate, however, that we have some way to go in understanding the expressive and computational requirements for effective temporal reasoning systems.

A theory for reasoning about the effects of actions (or, more generally, the consequences of events) we refer to as a causal theory. We will describe a language for constructing causal theories that is capable of representing indirect effects and actions whose effects depend upon the situation in which the actions occur.

1This work was supported in part by the National Science Foundation under grant IRI-8612644 and by an IBM faculty development award.
We will consider two algorithms for reasoning about such causal theories. These algorithms are polynomial-time, incremental, and insensitive to the order in which facts are added to or deleted from the data base. We show that one algorithm is complete for causal theories in which the events are totally ordered, but is potentially inconsistent in cases where the events are not totally ordered. The general problem of reasoning about the effects of actions that are partially ordered and whose effects depend upon the situation in which the actions occur has been shown to be NP-hard [1]. As an alternative to a complete but potentially exponential-time decision procedure, we provide a partial decision procedure that is provably sound. What this means for a planner is that the procedure is guaranteed not to mislead the planner into committing to a plan that is provably impossible given what is currently known. If the decision procedure answers yes, then the condition in question is guaranteed to hold in every totally ordered extension of the current partial order; if the decision procedure answers no, there is a chance that the condition holds in every total order, but to determine this with certainty might require an exponential amount of time or space.

II. Temporal Data Base Management

A temporal data base management system (TDBMS) is used to keep track of what is known about the order, duration, and time of occurrence of a set of events and their consequences. In the rest of this paper, we will be concerned with a particular type of TDBMS called a time map management system or TMM [3]. In the TMM, the classical data base assertion is replaced by the notion of a time token corresponding to a particular interval of time during which a general type of occurrence (a fact or event) is said to be true. For any given fact or event type, the data base (or time map) will typically include many tokens of that type.

The user of the TMM can specify information concerning events that have been observed or are assumed inevitable and information in the form of general rules that are believed to govern the physics of a particular domain. The user can also specify certain conditional beliefs. If the user explicitly states the conditions for believing in certain propositions, the TMM can ensure that those propositions (and their consequences) are present in the data base just in case the conditions are met. This is achieved through the use of data dependencies [7]. In the TMM, the primary forms of data dependency (in addition to those common in static situations) are concerned with some fact being true at a point in time or throughout an interval. In addition, there is a nonmonotonic form of temporal data dependency concerned with it being consistent to believe that a fact is not true at a point in time or during any part of an interval. These forms of temporal data dependency are handled in the TMM using the mechanism of temporal reason maintenance [3]. Language constructs are supplied in the TMM that allow an application program to query the data base in order to establish certain antecedent conditions (including temporal conditions) and then, on the basis of these conditions, to assert consequent predictions. These predictions remain valid just in case the antecedent conditions continue to hold.
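A toy rendering of this dependency behavior (ours; the TMM's actual reason maintenance machinery is far richer) records each prediction's supporters and retracts the prediction when any supporter goes out:

```lisp
;; Toy sketch of temporal data dependencies (ours, not the TMM's API).
;; Each token has a status; a prediction stays IN only while all of the
;; tokens recorded as its supporters are IN.

(defparameter *status* (make-hash-table))      ; token -> :in or :out
(defparameter *supporters* (make-hash-table))  ; prediction -> token list

(defun assert-token (token) (setf (gethash token *status*) :in))
(defun retract-token (token) (setf (gethash token *status*) :out))

(defun assert-prediction (prediction supporters)
  (setf (gethash prediction *supporters*) supporters)
  (setf (gethash prediction *status*) :in))

(defun propagate ()
  "Recompute prediction statuses from their supporters.  Single pass;
chained dependencies would need iteration to a fixed point."
  (maphash (lambda (prediction supporters)
             (setf (gethash prediction *status*)
                   (if (every (lambda (s) (eq (gethash s *status*) :in))
                              supporters)
                       :in
                       :out)))
           *supporters*))
```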
Perhaps the most important and most often overlooked characteristic of a temporal reasoning system is its ability to handle incomplete information of the sort one invariably encounters in realistic applications. For example, we seldom know the exact duration or time of occurrence of most events. Moreover, for those durations and offsets we do know, they are seldom with respect to a global frame of reference such as a clock or calendar. In the TMM, every point is a frame of reference, and it is possible to constrain the distance between any two points simply by specifying bounds, ⟨low, high⟩, on the distance in time separating the two points. By allowing bounds to be both numeric and symbolic, the same framework supports both qualitative and quantitative relationships.

Another important aspect of reasoning with incomplete information has to do with the default character of temporal inference. In general, it is difficult to predict in advance how long a fact made true will persist. It would be convenient to leave it up to the system to decide how long facts persist based upon the simple default rule [9] that a fact made true continues to be so until something serves to make it false. This is exactly what the TMM does. The term persistence is used to refer to an interval corresponding to a particular (type of) fact becoming true and remaining so for some length of time. A fact is determined to be true throughout an interval I just in case there is a persistence that begins before the beginning of I and it can't be shown that the persistence ends before the end of I.

Before we continue our discussion it will help to introduce some notation.

Relations. Let Π be the set of points corresponding to the begin and end of events in a particular temporal data base. We define a function DIST to denote the best known bounds on the distance in time separating two points. Given π1, π2 ∈ Π such that DIST(π1, π2) = ⟨low, high⟩, we have:2

  π1 ≺ π2   ⟺  low ≥ ε                  (π1 is ordered before π2)
  π1 = π2   ⟺  ⟨low, high⟩ = ⟨0, 0⟩      (π1 is coincident with π2)
  π1 ⪯ π2   ⟺  (π1 ≺ π2) ∨ (π1 = π2)    (π1 precedes or is coincident with π2)
  π1 ≺M π2  ⟺  high ≥ ε                 (π1 possibly precedes π2)
  π1 ⪯M π2  ⟺  high ≥ 0                 (π1 possibly precedes or is coincident with π2)

Tokens. We denote a set of time tokens T = {t0, t1, ..., tn} for referring to intervals of time during which certain events occur or certain facts are known to become true and remain so for some period of time. The latter correspond to what we have been calling persistences. For a given token t:

  - BEGIN(t), END(t) ∈ Π.
  - STATUS(t) ∈ {IN, OUT}, determined by whether the token is warranted (IN) or not (OUT) by the current premises and causal theory.
  - TYPE(t) = P, where P is an atomic predicate calculus formula with no variables.
  - DURATION(t) = DIST(BEGIN(t), END(t)).

Types. As defined above, the type of an individual token is an atomic formula with no variables (e.g., (on block14 table42)). In general, any atomic formula, including those containing variables, can be used to specify a type. In describing the user interface, universally quantified variables are notated ?variable-name, the scope of the variable being the entire formula in which it is contained (e.g., (on ?x ?y)). In describing the behavior of the inference system, we will use variables of the form tP to quantify over tokens of type P (i.e., ∀tP ∈ T, TYPE(tP) = P).

2The symbol ε is meant to denote an infinitesimal: a number greater than 0 and smaller than any positive real number.
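Under the simplifying assumption that numeric bounds are recorded for every pair of points that is queried, the relations above translate directly into code (our sketch, with hypothetical names; the TMM's constraint graph also propagates bounds, which we omit):

```lisp
;; Sketch (ours) of the DIST relations: bounds are (low . high) conses in
;; an EQUAL hash table keyed by point pairs; +EPSILON+ plays the role of
;; the infinitesimal from footnote 2.

(defconstant +epsilon+ least-positive-double-float)

(defun make-bounds-table () (make-hash-table :test #'equal))

(defun set-dist (table pi1 pi2 low high)
  (setf (gethash (cons pi1 pi2) table) (cons low high)))

(defun dist (table pi1 pi2)
  ;; assumes a bound is recorded for every pair that is queried
  (gethash (cons pi1 pi2) table))

(defun precedes-p (table pi1 pi2)           ; pi1 is ordered before pi2
  (>= (car (dist table pi1 pi2)) +epsilon+))

(defun possibly-precedes-p (table pi1 pi2)  ; pi1 possibly precedes pi2
  (>= (cdr (dist table pi1 pi2)) +epsilon+))

(defun coincident-p (table pi1 pi2)
  (equal (dist table pi1 pi2) '(0 . 0)))
```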
As we will see in the next section, the TMM allows a user to specify rules (referred to collectively as a causal theory) for inferring additional consequences of the data (referred to as the set of basic facts and notated B). B consists of a set of time tokens and a set of constraints on the amount of time separating pairs of points corresponding to the begin and end of time tokens. Generally, the causal theory remains fixed for a specific application, and a program interacts with the TMM by adding and removing items from B, and by generating queries. A query consists of a predicate calculus formula corresponding to a question of the form "Could some fact P be true over a particular interval I?" An affirmative answer returned by the TMM in response to such a query will include a set of assumptions necessary for concluding that the fact is indeed true. Any assertions made on the basis of the answer to such a query are made to depend upon these assumptions.

The state of a temporal data base is completely defined by a temporal constraint graph (TCG), consisting of the points in Π and constraints between them, and a causal dependency graph (CDG), consisting of dependency structures corresponding to the application of causal rules in deriving new tokens. The TCG and CDG are incrementally modified to reflect changes in the set B.

III. Causal Theories

In the TMM, a causal theory is simply a collection of rules, called projection rules, that are used to specify the behavior of processes. In the following rule, P1 ... Pn, Q1 ... Qm, E, and R designate types, and delay and duration designate constraints (e.g., ⟨ε, ∞⟩):

  (project (and P1 ... Pn (M (not (and Q1 ... Qm)))) E delay R duration)

P1 ... Pn and Q1 ... Qm are referred to as antecedent conditions, E is the type of the triggering event, and R refers to the type of the consequent prediction. The above projection rule states that, if an event of type E occurs corresponding to the token tE, and P1 ... Pn are believed to be true at the outset3 of tE, and it is consistent to believe that the conjunction of Q1 ... Qm is not true at the outset of tE, then, after an interval of time following the end of tE determined by delay, R will become true and remain so for a period of time constrained by duration (if delay and duration are not specified, they default to ⟨0, 0⟩ and ⟨ε, ∞⟩, respectively). In the following, we will be considering a restricted form of causal theory, called a type 1 theory, such that the delay always specifies a positive offset (causes always precede their effects).

We also allow the user to specify rules that serve to disable other rules [11]. Figure 1 shows a standard projection rule R1 and a pair of projection and disabling rules R2 and R3 that replace R1. The rule R3 is further conditioned by the rule R4.

  R1: (project (and P1 ... Pn (M (not (and Q1 ... Qm)))) E R)
  R2: (project (and P1 ... Pn) E R)
  R3: (disable (and Q1 ... Qm) (ab R2))
  R4: (disable (and R1 ... Rn) (ab R3))

  Figure 1: Hierarchically arranged projection and disabling rules

Assuming just the rules R2, R3, and R4, any application of R2 with respect to a particular token t of type E is said to be abnormal with regard to t just in case Q1 ... Qm hold at the outset of t and it is consistent to believe that R3 is not abnormal with regard to t. The nonmonotonic behavior of type 1 causal theories is specified entirely in terms of disabling rules and the default rule of persistence (see Section II). In addition to their usefulness for handling various forms of incomplete information, disabling rules make it possible to reason about the consequences of simultaneous actions. The reader interested in a more detailed treatment of causal theories may refer to one of [5] or [11].

Throughout the rest of this paper we will consider causal theories without disabling rules and consisting solely of simple projection rules.4 The following represents the general form of a simple projection rule:

  (project (and P1 ... Pn) E delay R duration)

In order to support temporal reasoning, there has to be some method or decision procedure for drawing appropriate conclusions from a set of basic facts and a given causal theory. In the TMM, such a procedure is used to generate new time tokens, update the status of existing tokens, and facilitate query processing by determining the truth of facts over specified intervals of time. As far as we are concerned, an inference procedure is fully specified by a criterion for inferring consequent effects from antecedent causes via causal rules, a method for actually applying that criterion (an update algorithm), and a criterion for determining if a fact is true throughout some interval.

Figure 2 shows a criterion for inferring consequent effects which we refer to as weak projection. The criterion is specified as a rule schema (implicitly) quantified over simple projection rules. Figure 3 shows a criterion for determining if a fact is true throughout an interval, which we refer to as weak true throughout; the criterion is specified in terms of a definition of the true throughout predicate TT.

  ∀tE ∈ T: if (STATUS(tE) = IN) and there exist tP1 ... tPn ∈ T such that for all 1 ≤ i ≤ n:
      (STATUS(tPi) = IN) ∧ (BEGIN(tPi) ⪯ BEGIN(tE)) ∧
      ∀t¬Pi ∈ T: (STATUS(t¬Pi) = OUT) ∨ (BEGIN(t¬Pi) ≺M BEGIN(tPi)) ∨ (BEGIN(tE) ≺M BEGIN(t¬Pi))
  then ∃tR ∈ T: (STATUS(tR) = IN) ∧ (DIST(END(tE), BEGIN(tR)) ∈ delay) ∧ (DIST(BEGIN(tR), END(tR)) ∈ duration)

  Figure 2: Weak projection

  ∀π1, π2 ∈ Π: if there exists tP ∈ T such that (STATUS(tP) = IN) ∧ (BEGIN(tP) ⪯ π1) ∧
      ∀t¬P ∈ T: (STATUS(t¬P) = OUT) ∨ (BEGIN(t¬P) ≺M BEGIN(tP)) ∨ (π2 ≺M BEGIN(t¬P))
  then TT(P, π1, π2)

  Figure 3: Weak true throughout

The inference procedure (referred to as naive projection) consisting of weak projection, weak true throughout, and a simple update algorithm for applying weak projection by sweeping forward in time was used in one of the early versions of the TMM. In the following section, we will consider some of the properties of naive projection.

3An alternative formulation described in [6] states that the antecedent conditions of a projection rule must be true throughout the trigger event rather than true just at the outset. Both formulations are supported in the TMM, though we will only be discussing the true-at-the-outset formulation in this paper.

4All of the results mentioned in this paper extend to full type 1 theories (see [4]).
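For intuition, the following toy version of a projection sweep (ours) collapses the DIST machinery to exact time points and ignores clipping by ¬P tokens, but shows the shape of applying a simple projection rule:

```lisp
;; Toy naive projection (our sketch): tokens are plists with a type, a
;; begin time, and an end time; a simple projection rule fires on an event
;; token when every antecedent type holds at the event's begin point.

(defun holds-at (type time tokens)
  "True if some token of TYPE starts at or before TIME and ends after it."
  (some (lambda (tok)
          (and (equal (getf tok :type) type)
               (<= (getf tok :begin) time)
               (> (getf tok :end) time)))
        tokens))

(defun project (rule tokens)
  "Apply RULE = (antecedents event delay result duration) once over TOKENS,
returning the consequent tokens it licenses."
  (destructuring-bind (antecedents event delay result duration) rule
    (loop for tok in tokens
          when (and (equal (getf tok :type) event)
                    (every (lambda (p) (holds-at p (getf tok :begin) tokens))
                           antecedents))
          collect (let ((start (+ (getf tok :end) delay)))
                    (list :type result :begin start
                          :end (+ start duration))))))

;; Sweeping forward in time repeats PROJECT on the growing token set until
;; no new tokens are produced.
```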
The nonmonotonic behavior of type 1 causal theories is speci- fied entirely in terms of disabling rules and the default rule of persistence (see Section II.). In addition to their useful- ness for handling various forms of incomplete information, disabling rules make it possible to reason about the con- sequences of simultaneous actions. The reader interested in a more detailed treatment of causal theories may refer to one of [5] or [ll]. Throughout the rest of this paper we will consider causal theories without disabling rules and consisting solely of simple projection rukes4. The following represents the general form of a simple projection rule: (project (and PI . . . Pn) E delay R duration) In order to support temporal reasoning, there has to be some method or decision procedure for drawing appro- priate conclusions from a set of basic facts and a given causal theory. In the TMM, such a procedure is used to generate new time tokens, update the status of existing tokens, and facilitate query processing by determining the truth of facts over specified intervals of time. As far as we are concerned, an inference procedure is fully specified by a criterion for inferring consequent effects from antecedent causes via causal rules, a method for actually applying that criterion (an update algorithm), and a criterion for deter- mining if a fact is true throughout some interval. Figure 2 shows a criterion for inferring consequent effects which we refer to.,as wealE projection. The criterion is specified as a rule schema (implicitly) quantified over simple projection rules. Figure 3 shows a criterion for determining if a fact is true throughout an interval which we refer to as weak: true throughout. The criterion is specified in terms of a 3An alternative formulation described in [6] states that the an- tecedent conditions of a projection rule must be true throughout the trigger event rather than true just at the outset. Both formulations are supported in the TMM, though we will only be discussing the true-at-the-outset formulation in this paper. 4All of the results mentioned in this paper extend to full type 1 theories (see [4]). 198 Planning VtE E T ((STATUS = IN) A (3tq . ..tp., E T (V 1 5 i 5 n (STATUS(tpi) = IN) A (BEGIN(tp<)< BEGIN( A (Vt,pi E T (STATUS(t,pi) = OUT) v (BEGIN(t,pi) +M BEGIN(tpi)) v (BEGIN +M BEGIN(tqPi)) )))) =s 3tR E T (STATUS = IN) A (DIST(END(tE), BEGIN(tjq)) 2 d&q/) A (DIST(BEGIN(tR),END(tR)) & d?mtiOn) Figure 2: Weak projection V?rlnsL E II 3tp E T (STATUS = IN) A (BEGIN 5 nl) A (Vtv, E T (STATUS(t,p) = OUT) v (BEGIN(t,p) -+%f BEGIN( v (Tz +M BEGIN(tlPi)) ))>) * TT(P, 7n, n2) Figure 3: Weak true throughout definition of the true throughout predicate TZ’. The infer- ence procedure (referred to as naive projection) consisting of weak projection, weak true throughout, and a simple update algorithm for applying weak projection by sweep- ing forward in time was used in one of the early versions of the TMM. In the following section, we will consider some of the properties of naive projection. IV. Completeness and Consistency In order to satisfy ourselves concerning the behavior of an inference system, we need a precise account of what the conclusions computed by that system mean. Such an ac- count should enable us to judge whether or not an inference system has come up with the right set of conclusions. The question we need to ask is: What are the intended models of a set of basic facts and a causal theory? 
As far as we are concerned, a model consists of an assignment of true or false to a particular set of proposi- tions concerning facts spanning intervals of time. Theories about the real world are invariably underconstrained, and a set of basic facts together with a causal theory will gen- erally have many models. We will simplify our analysis by partitioning models into various equivalence classes. The primary source of ambiguity in the TMM arises from the fact that the set of constraints seldom determines a total ordering of the tokens in T. Given that most inferences depend only upon what is true during intervals defined by points corresponding to the begin and end of tokens in T, all that we are really interested in are the classes of mod- els corresponding to the different total orderings consistent with the initial set of constraints. For each total order- ing we can identify a unique set of tokens that intuitively should be IN given a particular causal theory. We start with a set of basic facts B, consisting of a set of tokens TB and a set of constraints CB. The constraints in CB determine a partial order on the begin and end of tokens in TB. For a particular B, there may be a number of total orderings consistent with the constraints in CB. For a given B, a fixed causal theory, and a criterion for inferring consequent effects from antecedent causes (e.g., weak projection), the TMM generates a set of tokens T and a temporal constraint graph (TCG). Given T and the TCG, there are a finite number of statements of the form TT(P, ~1, nz) that are determined as true by the TMM us- ing a particular true throughout criterion (e.g., weak true throughout). The criterion for inferring consequent effects must be applied in a systematic way (essentially using the ordering information to perform a sweep forward in time) to yield results in keeping with our intuitions about causal- ity. The strategy built into the TMM for applying the crite- rion of weak projection with respect to specific tokens and updating the status of tokens already in T makes use of the intuition that you can’t know the effects of a particu- lar event e until you know the consequences of those events preceding e. It should be fairly easy to convince yourself that, in cases in which CB precisely constrains the order of the tokens in TB, the TMM, using weak projection, gener- ates a set T and a TCG such that the statements of the form TT(.P, rl,rz) determined true by the weak true through- out criterion are exactly the ones that we want. We will make use of this to define a working notion of model. Given some B together with a fixed causal theory, for each total ordering consistent with CB, we will say that the set of statements of the form TT(P, ~1, ~2) that are true using weak true throughout and weak projection is a model of B and the underlying causal theory. This set can be thought of as specifying an assignment to just those statements concerned with facts being true over intervals. Actually, the assignment designates a class of models, but we will neglect this to simplify our discussion. We will say that a particular inference procedure is complete for a class of causal theories, if for any set of basic facts and causal theory in that class, the statements of the form TT( P, 7ri,7r2) warranted by the inference procedure include at least those that are true in all models. 
In situations where the set of basic facts does not determine a total order, it is easy to show that the TMM, using naive projection, can end up in a state with IN tokens that allow one to conclude statements of the form TT(P, π1, π2) that are not true in any totally ordered extension. In [4], we prove that the problem of determining if TT(P, π1, π2) is true for a type 1 causal theory, with or without disabling rules, is NP-complete.

In the rest of this paper, we abandon the quest for complete inference procedures and concern ourselves with procedures that are sound. To improve the chances of the TMM warranting only valid statements of the form TT(P, π1, π2), the first thing we will do is strengthen the criterion for belief in a given token. The axioms in Figure 4 determine a set of tokens that are said to be strongly protected. If the set of constraints determines a total ordering, then the set of strongly protected tokens is identical to the set of tokens that are IN, but generally the former is a subset of the latter.

    ∀ t ∈ TB  STRONGLY-PROTECTED(t)

    ∀ t_E ∈ T
      (STRONGLY-PROTECTED(t_E)
       ∧ ∃ t_P1 ... t_Pn ∈ T (∀ 1 ≤ i ≤ n
           STRONGLY-PROTECTED(t_Pi)
           ∧ (BEGIN(t_Pi) ≤ BEGIN(t_E))
           ∧ (∀ t_¬Pi ∈ T
                (STATUS(t_¬Pi) = OUT)
                ∨ (BEGIN(t_¬Pi) ≺ BEGIN(t_Pi))
                ∨ (BEGIN(t_E) ≺ BEGIN(t_¬Pi)))))
      ⇒ ∃ t_R ∈ T
          STRONGLY-PROTECTED(t_R)
          ∧ (DIST(END(t_E), BEGIN(t_R)) ∈ delay)
          ∧ (DIST(BEGIN(t_R), END(t_R)) ∈ duration)

    Figure 4: Strongly protected tokens

Next, we provide a criterion for generating consequent predictions that takes into account every consequence that might be true in any total order, called improbably weak projection. This criterion is shown in Figure 5.

    ∀ t_E ∈ T
      ((STATUS(t_E) = IN)
       ∧ ∃ t_P1 ... t_Pn ∈ T (∀ 1 ≤ i ≤ n
           (STATUS(t_Pi) = IN)
           ∧ (BEGIN(t_Pi) ≤_M BEGIN(t_E))
           ∧ (∀ t_¬Pi ∈ T
                ¬STRONGLY-PROTECTED(t_¬Pi)
                ∨ (BEGIN(t_¬Pi) ≺_M BEGIN(t_Pi))
                ∨ (BEGIN(t_E) ≺_M BEGIN(t_¬Pi)))))
      ⇒ ∃ t_R ∈ T
          (STATUS(t_R) = IN)
          ∧ (DIST(END(t_E), BEGIN(t_R)) ∈ delay)
          ∧ (DIST(BEGIN(t_R), END(t_R)) ∈ duration)

    Figure 5: Improbably weak projection

And, finally, we provide a criterion for true throughout that succeeds only if the corresponding formula will be true in all total orders consistent with the current set of constraints (see Figure 6).

    ∀ π1, π2 ∈ Π
      (∃ t_P ∈ T
         STRONGLY-PROTECTED(t_P)
         ∧ (BEGIN(t_P) ≤ π1)
         ∧ (∀ t_¬P ∈ T
              (STATUS(t_¬P) = OUT)
              ∨ (BEGIN(t_¬P) ≺ BEGIN(t_P))
              ∨ (π2 ≺ BEGIN(t_¬P))))
      ⇒ TT(P, π1, π2)

    Figure 6: Strong true throughout

There is a simple decision procedure for generating all consequences and computing the set of strongly protected tokens. Let T0 = TB, and initially assume that no tokens are strongly protected. Let i = 0. To compute the consequences of Ti, generate the consequent tokens of each token in Ti using the criterion of improbably weak projection. Let Ti+1 be the union of Ti and its consequences. Continue to compute new consequent tokens in this manner, incrementing i as needed until Ti = Ti+1. Set T = Ti. At this point, perform a sweep forward in time (relative to the current partial order) determining for each token in T whether or not it is strongly protected and the status, IN or OUT, of each of its consequents.
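The procedure transcribes almost directly. In the sketch below, `consequences` stands in for improbably weak projection applied to every token and `mark` for the forward sweep assigning protection and IN/OUT status; both are assumptions, since that machinery is not spelled out here.

    def closure(basic_tokens, consequences, mark):
        tokens = set(basic_tokens)            # T0 = TB
        while True:
            new = consequences(tokens)        # improbably weak projection
            if new <= tokens:                 # Ti+1 = Ti: fixed point reached
                break
            tokens |= new
        mark(tokens)                          # sweep forward: protection + status
        return tokens

    # Trivial check: with no rule consequences, T is just TB.
    print(closure({"t1"}, lambda ts: set(), lambda ts: None))

As noted below, the loop need not terminate when causal rules generate new triggering events, so an implementation would bound it or rely on the theory being well behaved.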
In [4], we prove that this decision procedure is sound for a partially ordered set of tokens, and sound and complete for a totally ordered set. In [4], we also describe an incremental update algorithm that has the same soundness and completeness properties as the algorithm described above. This incremental algorithm is such that small changes in B generally result in small amounts of computation. For causal theories in which the consequent predictions of causal rules all correspond to persistences, the worst-case behavior of the incremental algorithm is polynomial in the size of B and the causal theory. If we allow causal rules to generate new tokens corresponding to the occurrence of triggering events, it is easy to construct examples in which T grows without bound. Generally, however, even those causal theories that generate new triggering events turn out to be well behaved.

In a planning system, the incremental algorithm can be used as part of a strategy for coping with complexity; if a query succeeds, the answer can be assured to be true in all totally ordered extensions. If, on the other hand, a query fails and the truth or falsity of the query is critical, the system can choose to expend additional effort in processing the query. In [4] we describe some additional techniques that can be used to improve the accuracy of our decision procedure without sacrificing its performance (e.g., a simple examination of the tokens in T can serve to guarantee the failure of certain queries).
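As a sketch of how a planner might package this strategy (the interface below is assumed, not the TMM's actual one):

    def answer_query(tt_strong, tt_all_orders, critical=False):
        """tt_strong: the cheap, sound strong-true-throughout test;
        tt_all_orders: the exhaustive check over total orderings."""
        if tt_strong():
            return True             # holds in every totally ordered extension
        if critical:
            return tt_all_orders()  # worth the (possibly exponential) effort
        return False                # fast answer; may miss some true queries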
V. Delayed-Commitment Planning

Nonlinear planning [2] has long been considered to have distinct advantages over linear planning systems such as STRIPS [8] and its descendants. One supposed advantage [10] has to do with the idea that, by delaying commitment to the order in which "independent" actions are to be performed, a planner can avoid unnecessary backtracking. Linear planners are often forced to make arbitrary commitments regarding the order in which actions are to be carried out. Such arbitrary orderings often fail to lead to a solution and have to be reversed. By ordering only actions known to interact with one another (i.e., actions whose outcomes depend upon the order in which the actions are executed), the expectation was that nonlinear planners would avoid a lot of unnecessary work.

The problem in getting this sort of delayed-commitment planning to work is that it is often difficult to determine if two actions actually are independent. This is especially so if we are considering a representation of actions sufficiently powerful to represent actions whose effects depend upon context. In order to determine whether or not two actions are independent, it is necessary to determine what the effects of those actions are. Unfortunately, in order to determine the effects of a given action it is necessary to determine what is true prior to that action being executed, and this in turn requires that we know the effects of those actions that precede that action. In general there is no way to determine whether or not two actions are independent without actually considering all of the possible total orderings involving those two actions.

Planning depends upon the ability to predict the consequences of acting. Past planning systems capable of reasoning about partial orders (i.e., nonlinear planners) have either employed weak (and often unsound) methods for performing predictive inference or they have sought to delay prediction until the conditions immediately preceding an action are known with certainty. Delaying predictive inference can serve to avoid inconsistency, but it can also result in extensive backtracking in those very situations that nonlinear planners were designed to handle efficiently.

It is our contention that delayed-commitment planning is of dubious utility. However, the idea of delayed-commitment planning is not the only reason for building planners capable of reasoning about partially ordered events. Most events will not be under a planner's control, and more often than not it will be difficult if not impossible to determine the order of all events with absolute certainty. Reasoning about partially ordered events is likely to play a significant role in future planners.

VI. Conclusions

This paper is concerned with computational approaches to reasoning about time and causality, particularly in domains involving partial orders and incomplete information. We have described a class of causal theories capable of representing conditional effects and the effects of simultaneous actions. We have described a decision procedure for generating predictions warranted by such causal theories. The decision procedure is provably sound, and the resulting conclusions are guaranteed consistent if the underlying causal theory is consistent. If the events turn out to be totally ordered, the procedure is complete as well as sound.

Acknowledgments

This work was instigated in part by an offhand remark made by Kurt Konolige while the first author was visiting SRI International. The different approaches to planning of David Chapman and David Wilkins were both influential in directing our research.

References

1. Chapman, David, Planning for Conjunctive Goals, Technical Report AI-TR-802, MIT AI Laboratory, 1985.
2. Charniak, Eugene and McDermott, Drew V., Introduction to Artificial Intelligence (Addison-Wesley Publishing Co., 1985).
3. Dean, Thomas, and McDermott, Drew V., Temporal Data Base Management, Artificial Intelligence 32 (1987).
4. Dean, Thomas, and Boddy, Mark, Incremental Causal Reasoning, Technical Report CS-87-01, Brown University Department of Computer Science, 1987.
5. Dean, Thomas, An Approach to Reasoning About the Effects of Actions for Automated Planning Systems, Annals of Operations Research (1987).
6. Dean, Thomas, Temporal Imagery: An Approach to Reasoning about Time for Planning and Problem Solving, Technical Report 433, Yale University Computer Science Department, 1985.
7. Doyle, Jon, A truth maintenance system, Artificial Intelligence 12 (1979) 231-272.
8. Fikes, Richard and Nilsson, Nils J., STRIPS: A new approach to the application of theorem proving to problem solving, Artificial Intelligence 2 (1971) 189-208.
9. Reiter, Raymond, A Logic for Default Reasoning, Artificial Intelligence 13 (1980).
10. Sacerdoti, Earl, A Structure for Plans and Behavior (American Elsevier Publishing Company, Inc., 1977).
11. Shoham, Yoav, Chronological Ignorance: Time, Knowledge, Nonmonotonicity and Causation, Proceedings AAAI-86, Philadelphia, PA, AAAI, 1986.
An Investigation into Reactive Planning in Complex Domains

R. James Firby
Department of Computer Science
P.O. Box 2158 Yale Station, New Haven CT 06520

Abstract

A model of purely reactive planning is proposed based on the concept of reactive action packages. A reactive action package, or RAP, can be thought of as an independent entity pursuing some goal in competition with many others at execution time. The RAP processing algorithm addresses the problems of execution monitoring and replanning in uncertain domains with a single, uniform representation and control structure. Use of the RAP model as a basis for adaptive strategic planning is also discussed.[1]

[Footnote 1: The work reported in this paper was supported in part by the Defense Advanced Research Projects Agency under Office of Naval Research contract N00014-85-K-0301.]

I. Introduction

Automatic planning research has been concerned primarily with the generation of a complete list of actions to carry out a given set of goals. For many domains, particularly those created artificially as in a laboratory or on a factory floor, it makes sense to construct a detailed plan well in advance of execution because the situations expected can be anticipated and controlled. However, it is becoming clear that in more dynamic worlds, where agents exist whose actions cannot be anticipated, the situation at execution time cannot be controlled, and detailed plans cannot be built in advance. As one would expect, the solution to this difficulty is to leave some, most, or even all of the planning to take place during execution when the situation can be determined directly. Systems that build or change their plans in response to the shifting situations at execution time are called reactive planners.

The choice of which detailed actions to put in a plan usually depends on the context in which they will be executed. If that context cannot be computed in advance then the actions cannot be chosen appropriately. For example, planning the arm motions for the loading and unloading portions of a delivery task is both pointless and impossible before the cargo and the loading docks have been examined. More generally, having to choose actions at execution time is unavoidable in any domain where there is uncertainty about what will be encountered after an action is executed. Such uncertainty arises when independent agents or processes can change the world, when actions might not work exactly right, or when there are just too many interacting variables involved in predicting the future.

Reactive planning concerns itself with the difficulties of direct interaction with a changing world and must confront many of the outstanding issues from conventional, strategic planning research. In particular, the problems of execution monitoring and low-level replanning cannot be avoided when constructing a reactive planner. The world state must be monitored continually at execution time if actions are to be chosen based on that state. Furthermore, if the system is to adapt to any situation encountered on the way to a goal, selecting the next step in the plan becomes indistinguishable from changing the plan because of a problem with the last step. Problems will make themselves apparent in the new world state, and choosing an appropriate next step will automatically take them into account.

This paper describes an investigation into reactive planning that takes the extreme position of using no prediction of future states at all.
Plan selection is done entirely at execution time and is based only on the situation existing then. This approach was chosen, not because an extreme system would be a good planner by itself, but because reactive plan execution must occur at some level in every system; the static action list generated by previous planners lacks the flexibility to confront the dynamic domains of current interest. By studying the problems of reactive plan execution without the complexities of look-ahead, this study strives to define a form and content for the representation of more adaptive plans. A traditional strategic planner working with this representation should exhibit a more robust behavior than is possible with static action lists.

A. Related Work

A great deal of research has been done in the field of planning, and good reviews of this work exist in [Joslin and Roach, 1986] and [Chapman, 1985]. In general these investigations have examined the problem of constructing a fixed, static plan for a highly predictable world in which no sensory feedback is required at execution time. The problem of verifying that the execution of such a plan unfolds as expected in a less predictable domain was recognized early and discussed by Sacerdoti [Sacerdoti, 1975] (among others), and recently researchers have been attacking the problem with systems that add sensory verification activities to otherwise static plans. Brooks [Brooks, 1982] uses models of domain uncertainty and expected error accumulation to decide where to insert monitoring tasks, while Gini [Gini et al., 1985] uses a model of planner intent to decide when expected situations should be verified. Doyle [Doyle et al., 1986] presents a system for inserting sensory verification tasks into a plan to check that expected world states really hold, and Tate [Tate, 1984] has also cast some light on this subject. All of these systems assume that planning involves putting together only effector actions and that sensor actions should be spliced into the plan afterwards as seems appropriate.

Wilkins [Wilkins, 1985] approaches the problems in plan execution by looking at what to do when execution verification shows that something has gone wrong. His work on error recovery concentrates on defining a plan representation that facilitates determination of the parts of a plan which have been compromised by a failure and therefore need to be rethought. Fox and Smith [Fox and Smith, 1984] have discussed the problems of plan failure and replanning in the context of shop floor scheduling.

Miller [Miller, 1985] assumes that basic plan representations must include sensory operations as well as effector actions. In contrast to adding sensory tasks after plan construction, his model builds explicit sensory tasks into the plan right from the beginning. Miller's system uses a scheduling algorithm to integrate all sensor and effector tasks into a single coherent plan before execution. Except for certain limited types of servo correction, however, execution time verification and replanning are not dealt with after the initial plan has been constructed. The FORBIN planner discussed in [Firby et al., 1985] builds further on this work.

A markedly different approach to planning has been put forward by Chapman and Agre [Chapman and Agre, 1986] based on the idea of concrete situated activity.
Their idea is essentially one of purely reactive planning organized around situation-action like rules. Direct sensory input is used to index structures suggesting possible subsequent actions. Instead of using sensors sparingly to verify constructed plans, sensors must always be active to supply the concrete information on which to base action decisions. Complex activity arises from the continual activation of appropriate actions with no anticipation of the future.

II. The Reactive Action Package

The reactive planner described in this paper is based on the idea of reactive action packages or RAPs. A RAP is essentially an autonomous process that pursues a planning goal until that goal has been achieved. If the system has more than one goal there will be an independent RAP trying to accomplish each one. Each RAP obeys three principles while it is running. First, all decisions of what action to execute next in pursuit of a goal must be based only on the current world state and not on anticipated states. Second, when a RAP finishes successfully, it is guaranteed to have satisfied its goal and to have executed all sensor actions required to confirm that success. Third, should a RAP finish without achieving its goal, it will have exhausted every possible avenue of attack; a RAP will fail only if it does not know any way to reach its goal from the current state.

To adhere to these principles, a RAP planner must come to grips with the problems of execution monitoring and low-level replanning. Execution monitoring is required to maintain an up-to-date current world model. Every action executed must return some form of feedback about its success or failure to ensure that the world model remains an appropriate basis for planning decisions. Furthermore, some RAPs may need to issue sensor operations in addition to this feedback in order to monitor the progress of their actions in more detail. Some form of low-level replanning must also take place within a RAP to ensure that it explores all approaches to achieving its goal before returning failure. The reactive planner described in this paper consists of a RAP execution environment and processing algorithm that exhibits these characteristics. The planner is used to manage a robot.

A. The Task RAP Handler

Each RAP should be thought of as an independent entity, pursuing its goal in competition with the other RAPs in the system by consulting the current world state and issuing commands to alter that state. The RAP execution environment shown in Figure 1 supports this view of RAP execution.

[Figure 1: The RAP Execution Environment - the world model, the hardware interface, and the RAP interpreter with its execution queue]

The world model holds the system's understanding of the current world state, the hardware interface controls communication with the real world, and the RAP interpreter and execution queue provide a mechanism for coordinating competition between RAPs. A RAP waits on the execution queue to be selected by the interpreter for its turn to run. When it does run, a RAP consults the world model and issues commands to the hardware interface. The interface passes those commands on to the robot hardware, interprets feedback, such as sensor reports or effector failures, and makes appropriate changes to the world model. Interleaved RAP execution arises when the running RAP stops and returns to the queue to wait for a subgoal to complete and the interpreter chooses another to run in its place (see Section III).
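A minimal Python rendering of this environment may help fix the picture. Every class and method name below is an assumption made for the illustration; the paper does not specify the system at this level of detail.

    from collections import deque

    class WorldModel(dict):
        """Strictly static store: no forward or backward inference;
        all changes are made explicitly by the interface or by RAPs."""

    class HardwareInterface:
        def __init__(self, world):
            self.world = world
        def execute(self, command):
            # Pass the command to the robot, interpret the feedback, and
            # make the corresponding change to the world model.
            ok = command.run()                    # assumed command protocol
            command.update_world(self.world, ok)
            return ok

    class Interpreter:
        def __init__(self, world, hardware):
            self.queue = deque()                  # the RAP execution queue
            self.world = world
            self.hardware = hardware
        def run(self):
            while self.queue:
                task = self.queue.popleft()       # selection policy elided
                if getattr(task, "primitive", False):
                    self.hardware.execute(task)
                else:
                    task.step(self)               # RAP consults world, may requeue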
An important aspect of this architecture is the relationship between the world model and the hardware interface. The RAP interpreter must strive to run in real time, and therefore all automatic inference within the system must be kept tightly under control. To meet this requirement, the world model remains strictly static: no forward inference is allowed when facts are added or changed and no backward inference is allowed when queries are made. All changes to the model must be handled explicitly by the hardware interface or by the RAPs themselves.

The hardware interface has detailed expectations about the way primitive hardware commands will change the world. It uses this knowledge to interpret the successes and failures returned by actual hardware operations and make appropriate changes to the world model. For example, if a command is issued to grasp a specific object and the hardware returns success, then the interface updates the world model to reflect that the object has been grasped. On the other hand, when the hardware returns some reason for failure, that reason is used to try and straighten out inconsistencies in the world model (i.e., noting that the object is too slippery to grasp, that the gripper is broken, etc.). This requires that enough real world feedback come from the hardware to ensure that the interface can maintain a reasonable model of the true world state.

Although it is fair to expect the hardware interface to keep the world model consistent and up-to-date with respect to facts tied closely to direct sensor feedback data, it is unreasonable to assume that it will infer more abstract truths or initiate goals to explain complex failures. Such abstract properties and failures must be derived or explained by the RAPs themselves. For example, a RAP might be responsible for running the dishwasher. Pushing the start button should start the machine and, given appropriate feedback, the hardware interface can update the world model to reflect that the button was pressed. However, before noting in the world model that the dishwasher is running, the "on" light should be checked as confirmation. This sort of high-level knowledge about dishwashers belongs in the dishwasher RAPs and not in the hardware interface charged with monitoring primitive actions. Thus, the dishwasher starting RAP would issue a command to push the start button, a command to check the "on" light and, if both succeeded, would update the world model to reflect that the dishwasher was running.

This division of labor in the RAP execution environment has two desirable characteristics. First, a natural coupling is made between the world model and the real world through hardware level feedback that occurs irrespective of the commands in any particular RAP. Second, any additional complex inference that is required becomes the responsibility of one or more RAPs and thus falls under the same control mechanisms as any other robot activity.

III. Issues in RAP Execution

As a RAP runs and issues commands, it is doing a real-time search through actual world states looking for a path to its goal. In complex domains where general heuristics for deciding on the applicability of a given command are not well developed, a simple, blind search from state to state can be very inefficient. To limit the search performed, a RAP holds a predefined set of methods for achieving its goal and only needs to choose between these paths rather than construct new ones.
A typical method consists of a partially ordered network of subtasks called a task net, and each subtask in the net is either a primitive command or a subgoal that will invoke another RAP. To allow interleaved RAP execution, a RAP runs by consulting the world state, selecting one of its methods and issuing that method's task net all at once. The RAPs and commands in the task net are added to the execution queue and the running RAP is suspended until they have been completed.

This hierarchical style of RAP execution is achieved with the interpreter algorithm illustrated in Figure 2.

[Figure 2: An Illustration of RAP Execution]

First, a RAP is selected by the interpreter from the RAP execution queue. Selection is based on approaching temporal deadlines and on the ordering constraints placed on RAPs by task nets. If the chosen RAP corresponds to a primitive command it is passed directly on to the hardware. Otherwise the interpreter executes it. As shown in the illustration, each RAP consists of two parts: a goal check and a task net selector. RAP execution always begins with the goal check consulting the world model to see if its goal has already been accomplished. If it has, the RAP finishes immediately with success. Otherwise, the RAP tries to choose an appropriate task net. If no net is applicable in the current situation, the RAP must signal failure, but one usually is and the RAP sends it to the execution queue. At this point the RAP has selected a plan for achieving its goal and must wait to see how things turn out. To wait, the RAP is placed back on the execution queue by the interpreter to run again once its task net has finished. When the RAP comes up again, it executes exactly as before. Thus, a RAP keeps choosing task nets until either its goal is achieved, as determined by its goal check, or the world state rules out every task net that it knows about.

This method of specifying and running RAPs allows for a hierarchical and parallel pursuit of RAP goals, but raises the problem of coordination among the different subRAPs in a task net. If an early member of a task net fails, then it is probably pointless to execute those that follow; the method the task net represents simply isn't working out. To deal with this situation, the system keeps track of task net dependencies and removes all the members of a task net from the queue when any one of them fails to achieve its goal.

Another problem with using task nets is that one might fail without changing the world enough to cause the RAP that spawned it to select a different one. In this situation, the RAP will restart, note that its goal has not been satisfied and choose the same task net over again. If nothing intervenes to change the world in some way, such a loop could continue indefinitely. The best solution to this problem is not obvious and is still an area of active investigation. The current system has an execution-time loop detector that flags any RAP that selects the same task net repeatedly without success. Once flagged as a repeat offender, a RAP is given low priority on the execution queue for a while in hope that the world will change. If that doesn't happen, the RAP is eventually made to fail so its parent can try and choose a different task net for its goal. Allowing this tenacious pursuit of goals is a necessary part of dealing with the problem of unintentional interference between competing RAPs.
A. Interactions Between Task Nets

A classic problem with hierarchical planners is that plans for conjunctive goals can clobber each other unintentionally. This problem manifests itself in the RAP planner when an early RAP in a task net sets up a particular state for some later RAP and a third, independent, RAP or process upsets that state. If execution of the task net were to continue after this type of interference, it is likely that the later RAPs would fail and work would have been done for nothing. A standard technique for preventing such wasted work is to place protections on the states established by early RAPs and prevent their change. However, such protections are too restrictive and difficult to manage within the RAP model of execution. One problem is that enforcing a state prevents many useful interactions. For example, one RAP might pick up a glass to move it to the kitchen, and another will want to put it down temporarily to switch the light off before leaving the room. Enforcing something like (hold glass) until reaching the kitchen would prevent putting the glass down to free up the hand. A second problem with trying to prevent a state change is that it requires looking into the future and thus violates our goal of building a purely reactive planner. To keep a state unchanged requires asking an action whether it will affect that state before it is executed rather than waiting to see what has changed afterwards. Finally, a state cannot be enforced at all if an external agent or process decides to change it.

The RAP planner deals with interference between task nets without protections. Whenever a task net is chosen, the state that each RAP in the net is designed to establish (if there is one) is attached to the RAP it is being established for. These states form a validity check on the later RAPs which can be evaluated at their execution. After a RAP's goal check, the interpreter checks all states attached to it by earlier RAPs to see if they are still true. If they are, then no interference has occurred and execution of the RAP is still appropriate. If not, an assumption has been violated and the RAP fails, causing removal of the task net it belongs to from the execution queue.

B. Uncertainty in the World Model

Another problem that can occur during RAP execution is for the world model to become inconsistent with the state of the real world. This can occur for many reasons including other agents changing the world, simple lack of information about something in the world, or failure to account properly for the evolution of an independent process. Rather than try and deal directly with the uncertainty this causes in the world model, the system just ignores it. When unquestioned faith in the world model results in a primitive command being attempted that cannot possibly succeed, the hardware and hardware interface are supposed to analyze the subsequent failure and correct the world model. For example, if a command is issued to lift a particular rock but it fails because the rock is too heavy, the hardware interface should interpret the failure that way and alter the world model to reflect that the rock is too heavy to lift. Then when the RAP attempting to lift the rock gets around to trying again, it will notice the rock is too heavy and try something else. In this way, the world model is made consistent through corrective feedback from the domain, and replanning required because of previous inconsistency occurs automatically.

C. The RAP Processing Algorithm
In summary, the RAP-based reactive planner described above separates each RAP into three parts: the Goal Check and Task Net Selector, which form the predefined body of the RAP, and a Validity Check, which gets added when the RAP becomes part of a task net. This simple RAP structure is interpreted according to the following algorithm, somewhat reminiscent of the NASL interpreter described in [McDermott, 1976]:

1. Choose a RAP or command to execute from those waiting on the execution queue. If a command is chosen simply pass it on to the hardware and choose again until a RAP comes up.

2. Run the RAP's Goal Check to see if its goal has been achieved. If it has then this RAP is finished and should return success.

3. Run the RAP's Validity Check if it has one. If the test fails then a task net assumption has been violated, this RAP is no longer appropriate and it should finish returning failure.

4. Run the RAP's Task Net Selector to choose a task net to achieve its goal starting from the current world model. If no appropriate task net is known then the goal of this RAP cannot be achieved and the RAP should finish returning failure.

5. Place the subtasks from the selected net on the execution queue so that they will be run in accordance with the orderings placed on them by the net.

6. Put the RAP back on the execution queue to be run after its task net has finished executing.

7. Go to (1) to choose another RAP.
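Transcribed directly into Python, with RAP objects assumed to expose goal_check, validity_check, select_task_net, succeed and fail (names invented for the illustration), the loop becomes:

    def interpret(queue, world, send_to_hardware):
        while queue:
            item = queue.pop(0)                   # step 1 (real selection also
            if getattr(item, "primitive", False): #   weighs deadlines/orderings)
                send_to_hardware(item)
                continue
            rap = item
            if rap.goal_check(world):             # step 2
                rap.succeed()
                continue
            if not getattr(rap, "validity_check", lambda w: True)(world):
                rap.fail()                        # step 3
                continue
            net = rap.select_task_net(world)      # step 4
            if net is None:
                rap.fail()
                continue
            queue.extend(net.subtasks)            # step 5 (ordering constraints elided)
            queue.append(rap)                     # step 6: rerun once the net is done
                                                  # step 7: the while loop continues

Appending the parent RAP after its subtasks only approximates "run after its task net has finished"; a faithful implementation would track task net completion and failure dependencies explicitly.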
IV. Summary and Conclusions

The idea of purely reactive planning as typified by the RAP model described in this paper has one obvious shortcoming: it cannot deal effectively with problems that require thinking ahead. Making plan choice decisions based only on the current world state precludes identification and prevention of impending detrimental situations. This can cause pursuit of the planner's goals to be inefficient, unsuccessful, or even dangerous. Some inefficiencies occur because interactions between competing task nets are only repaired and not prevented during RAP execution. Also, RAPs cannot take expected states like rain into account, so choices like leaving the umbrella behind because it's sunny can be short sighted and cause unnecessary trips back once the rain starts. Poor management of scarce resources is a similar inefficiency that can prevent otherwise successful plans from working. Finally, not being able to look ahead can cause disasters which should be preventable, like carrying an oil lantern downstairs to look for a gas leak.

Strategic planning, or looking ahead into the future, is required to detect inefficiencies and unhappy situations before they occur. Given the RAP model of reactive planning, a strategic planner's job would be to put constraints on RAP behavior before execution to either prevent or encourage specific situations. Such constraints might take the form of ordering RAPs on the execution queue or forcing certain RAPs to make particular task net choices. For example, left on its own a RAP might elect to not pick up an umbrella because it is sunny, but the strategic planner, knowing that it will rain, could force the RAP to choose a task net that included taking the umbrella.

In summary, the purely reactive RAP planner discussed in this paper has several important features. It is extremely adaptive and hence tolerant of uncertain knowledge introduced by the actions of other agents or by the inherent complexity of the domain. In addition, by studying the domain feedback required to support RAP processing, the role of execution monitoring has been clarified and integrated as a natural part of the planning and execution environment. Similarly, issues of plan failure and replanning are subsumed by a single, uniform RAP processing algorithm. Finally, it is suggested that strategic planners would achieve more flexibility in many domains by assuming RAP-based plan execution and generating RAPs constrained only as required to prevent serious inefficiencies and dangerous situations.

References

[Brooks, 1982] Rodney Brooks. Symbolic Error Analysis and Robot Planning. Technical Report AI Memo 685, MIT, September 1982.

[Chapman, 1985] David Chapman. Planning for Conjunctive Goals. Technical Report TR 802, MIT Artificial Intelligence Laboratory, 1985.

[Chapman and Agre, 1986] David Chapman and Philip E. Agre. Abstract reasoning as emergent from concrete activity. In Workshop on Planning and Reasoning about Action, Portland, Oregon, June 1986.

[Doyle et al., 1986] Richard Doyle, David Atkinson, and Rajkumar Doshi. Generating perception requests and expectations to verify the execution of plans. In Proceedings of the Fifth National Conference on Artificial Intelligence, AAAI, Philadelphia, PA, August 1986.

[Firby et al., 1985] R. James Firby, Thomas Dean, and David Miller. Efficient robot planning with deadlines and travel time. In Proceedings of the IASTED International Symposium, Advances in Robotics, IASTED, Santa Barbara, CA, May 1985.

[Fox and Smith, 1984] Mark S. Fox and Stephen Smith. The role of intelligent reactive processing in production management. In 13th Meeting and Technical Conference, CAM-I, November 1984.

[Gini et al., 1985] Maria Gini, Rajkumar S. Doshi, Sharon Garber, Marc Gluch, Richard Smith, and Imran Zualkenian. Symbolic Reasoning as a Basis for Automatic Error Recovery in Robots. Technical Report TR 85-24, University of Minnesota, July 1985.

[Joslin and Roach, 1986] David E. Joslin and John W. Roach. An Analysis of Conjunctive-Goal Planning. Technical Report TR 86-34, Virginia Tech, 1986.

[McDermott, 1976] Drew V. McDermott. Flexibility and Efficiency in a Computer Program for Designing Circuits. PhD thesis, Dept. of Electrical Engineering, MIT, September 1976.

[Miller, 1985] David P. Miller. Planning by Search Through Simulations. Technical Report YALEU/CSD/RR 423, Yale University Department of Computer Science, 1985.

[Sacerdoti, 1975] Earl D. Sacerdoti. A Structure for Plans and Behavior. Technical Report SRI Project 3805, Stanford Research Institute, 1975.

[Tate, 1984] Austin Tate. Planning and Condition Monitoring in a FMS. Technical Report AIAI TR 2, University of Edinburgh, Artificial Intelligence Applications Institute, July 1984.

[Wilkins, 1985] David E. Wilkins. Recovering from execution errors in SIPE. Computational Intelligence, 1(1), February 1985.
ON STRATIFIED AUTOEPISTEMIC THEORIES

Michael Gelfond
Computer Science Department
The University of Texas at El Paso
El Paso, TX 79968

Abstract

In this paper we investigate some properties of the "autoepistemic logic" approach to the formalization of common sense reasoning suggested by R. Moore in [Moore, 1985]. In particular we present a class of autoepistemic theories (called stratified autoepistemic theories) and prove that theories from this class have unique stable autoepistemic expansions and hence a clear notion of "theoremhood". These results are used to establish the relationship of Autoepistemic Logic with other formalizations of non-monotonic reasoning, such as the negation as failure rule and circumscription. It is also shown that "classical" SLDNF resolution of Prolog can be used as a deductive mechanism for a rather broad class of autoepistemic theories.

Key words and phrases: common sense reasoning, autoepistemic logic, negation as failure rule, non-monotonic reasoning.

1. Introduction

In this paper we will investigate some properties of the "autoepistemic logic" approach to the formalization of common sense reasoning suggested by R. Moore in [Moore, 1985]. This approach is based on ideas from [McDermott and Doyle, 1980] and [McDermott, 1982] and is meant to capture the "nonmonotonicity" of common sense reasoning, i.e., the ability of a reasoning agent to withdraw some of his conclusions when new evidence is presented. Moore concentrates on the type of reasoning which can be interpreted as reasoning about an agent's knowledge or belief and uses modal logic (namely the notion of autoepistemic theory) to formalize this type of reasoning. Let us review some of the basic notions of his approach.

By an autoepistemic theory T we mean a set of formulae in the language of propositional calculus augmented by a belief operator L, where Lf is interpreted as "f is believed" for any formula f. A formula in this language is called irreducible if it is an atom or begins with L. It is easy to see that each formula can be represented in exactly one way as a propositional combination of irreducible subformulas. The language of an autoepistemic theory T is the set of all propositional combinations of the irreducible components of the formulas from T. By Cn(T) we will denote the set of all formulas in the language of T which follow from T by propositional calculus. Obj(T) will stand for the set of all objective formulae from Cn(T) (i.e., the formulae of Cn(T) which do not contain the belief operator L).

Definition 1. (Moore) A set of formulae E(T) is a stable autoepistemic expansion of T if it satisfies the following condition:

    E(T) = Cn(T + {Lp : p is in E(T)} + {¬Lp : p is not in E(T)})

Moore shows that stable expansions contain all and only those formulae which are true in every interpretation of formulae from the language of T which satisfies T and makes Lp true for every formula p in the extension. The notion of a stable autoepistemic expansion of a theory T plays a major role in Moore's formalization of autoepistemic logic: it describes the set of beliefs of a rational agent with a set of premises T. The agent is rational in the sense that he believes in all and only those facts which are based on evidence rooted in his premises or in the stability condition. If this expansion is unique then it can be viewed as the set of theorems which follow from T in autoepistemic logic.

EXAMPLE 1. Consider the autoepistemic theory T = {¬Lp → q}.
Let us informally investigate the construction of E(T). An agent with the set of premises T does not have any evidence in favor of p, and hence p does not belong to his set of beliefs E(T). Therefore ¬Lp is in E(T) (due to the stability condition), and hence q is in E(T) (due to the agent's ability to reason), and the only objective formulae belonging to E(T) are those from Cn(q). To construct formulae which express the agent's beliefs about objective statements we have to add to E(T) all formulae of the form Lf where f is in Cn(q) and formulae of the form ¬Lf if f is not in Cn(q). In a similar way we can construct the agent's beliefs about his beliefs about objective formulae, etc. It is easy to see that the resulting E(T) is the only stable expansion of T. This construction as well as the proof of the uniqueness of E(T) will be discussed in detail in Section 2.

Example 1 also illustrates the nonmonotonic nature of autoepistemic logic. The agent's present state of knowledge forces him to conclude q. But if new information about p becomes available this conclusion can be withdrawn, which reflects the nonmonotonicity of this form of reasoning.

Unfortunately, as was recognized by Moore, a theory T may have more than one stable expansion or even no consistent stable expansion at all. To see why let us look at the following examples from [Moore, 1985]:

EXAMPLE 2. Let T = {¬Lp → p}. The theory T has no consistent stable expansion. Informally: we have no evidence for p, hence we conclude that ¬Lp, which leads us to p and therefore Lp. Contradiction.

EXAMPLE 3. Let T = {¬Lp → q, ¬Lq → p}. It is easy to see that T has two stable expansions: E_1 with an objective part Cn(q) and E_2 with an objective part Cn(p).

This raises an important question of characterization of autoepistemic theories with unique stable expansions (i.e., a clear notion of "theoremhood"). This question was first addressed in [Marek, 1986]. His results immediately imply the following theorem:

THEOREM 1. (Marek) Any consistent objective theory T (i.e., a consistent theory without the belief operator) has a unique stable expansion E(T).

In the first part of this paper we will generalize this result and give sufficient conditions which guarantee the existence of a unique stable expansion for a much broader class of theories T. Theories from this class will be called stratified autoepistemic theories. Informally the notion is based on requiring the presence of a certain hierarchy of predicates defined by a theory T which allows the use of formulae of the form Lf on level k of this hierarchy only if f itself is fully defined on the lower levels.

The second part is devoted to the investigation of the relationship between the "autoepistemic logic" formalization of common sense reasoning and the alternative formalization based on the "negation as failure" rule used in logic programming. We start with a review of the definition of stratified logic programs and their semantics [Apt et al., 1986], [Van Gelder, 1986] and then show that stratified logic programs can (in some precise sense) be interpreted in terms of belief.
This, together with results from [Lifschitz, 1986] and [Gelfond and Przymusinska, 1986] establishing the relationship between circumscription, autoepistemic logic and stratified logic programs, shows that in the presence of a suitable hierarchy of definitions in a knowledge base different formalizations of nonmonotonicity in common sense reasoning essentially coincide. Another important consequence of this result is that it gives us a feasible deductive procedure we can use to characterize theorems of a broad class of autoepistemic theories.

To give a flavor of the techniques used to prove these results we include the complete proof of Theorem 2. Complete proofs of other results will be published elsewhere.

2. Stratified Autoepistemic Theories

By literals we mean formulae of the forms p, ¬p, Lf, ¬Lf where p is a propositional letter and f is an objective formula. Literals which contain the belief operator L will be called autoepistemic while those without L will be called objective. From now on we will restrict our attention to autoepistemic theories consisting of clauses of the form S → V where S is a list of literals and V is a list of atoms (both S and V can be empty).

DEFINITION 1. An autoepistemic theory T is called stratified if there is a partition T = T_0 + ... + T_n such that:

(i) T_0 is objective (possibly empty);

(ii) clauses with empty conclusions do not belong to T_k where k > 0;

(iii) if a propositional letter p belongs to the conclusion of a clause in T_k then the literals p and ¬p do not belong to T_0, ..., T_{k-1}, and the literals Lf and ¬Lf where f contains p do not belong to T_0, ..., T_k.

We will say that the degree of a propositional letter p is k, and write D(p) = k, if p belongs to the conclusion of a clause in T_k. If there is no such clause then the degree of p is 0. (It is obvious that if an autoepistemic theory is stratified then every propositional letter p has exactly one degree.) The degree of an objective formula f is the maximum degree of its propositional letters. It is easy to see that the theories from Examples 2 and 3 are not stratified, while the theory from Example 1 is stratified with T_0 = {} and T_1 = {¬Lp → q}.

We will start with a construction of the stable expansion of T. The idea is to first build the objective core of such an expansion and then to apply Marek's construction to it. Such an objective core is built gradually by expanding the corresponding layers of the stratified theory T. More precisely:

    K_0 = Cn(T_0).

    K_{m+1} = Cn(K_m + {Lp : D(p) = m & p in K_m}
                   + {¬Lp : D(p) = m & p not in K_m} + T_{m+1}).

The following simple lemmas capture important properties of this construction.

LEMMA 1. Any model M_m of K_m can be expanded to a model M_{m+1} of K_{m+1}.

Proof. Let M_{m+1} = M_m + {Lp : D(p) = m & p in K_m} + {q : D(q) = m+1}. It can be easily seen from the definition of stratified autoepistemic theories that M_{m+1} is indeed a model of K_{m+1}.

LEMMA 2. (a) If the theory K_m is consistent then K_{m+1} is consistent. (b) K_{m+1} is a conservative extension of K_m.

Proof. Follows immediately from Lemma 1.

Now we can construct a stable expansion E of T.

DEFINITION 2. Let K = K_n where T_n is the last layer of the partition of T. K is consistent and hence, in virtue of Theorem 1, there is a unique stable expansion of Obj(K). Let us denote it by E.

To show that E is indeed a stable expansion of T we need the following lemma.

LEMMA 3. For any objective formula f of degree m, f in Obj(K_m) iff f in E.

Proof. The only if part is obvious.
To prove the if part it suffices to notice that f in E implies f in Obj(K) (see Theorem 2 from [Marek, 1986]) and hence, by clause (b) of Lemma 2, we have that f is in Obj(K_m).

THEOREM 2. Any consistent and stratified autoepistemic theory T has a stable expansion E(T).

Proof. To show that E is a stable expansion of T we have to prove that E satisfies the following condition:

    (1) E = Cn(T + {Lf : f in E} + {¬Lf : f not in E}).

Let us denote the set on the right side of this equation by R. From the definition of E we have that

    (2) E = Cn(Obj(K) + {Lf : f in E} + {¬Lf : f not in E}),

and hence it remains to show that R = E.

(a) To show that E is in R let us prove first that for any m, K_m is in R. We will use induction on m. The base is obvious and the inductive step follows immediately from Lemma 3. Now it suffices to notice that, by the definition of K, Obj(K) is in R.

(b) To show that R is a subset of E we will prove that every model of E is a model of R. Suppose it is not the case and there is a model M of E which is not a model of R. Let U = (S → V) be a clause from T of the lowest degree m such that V is not empty and M(S) = True and M(V) = False. It is easy to see that such a clause always exists and its premise S must contain autoepistemic literals (otherwise U would be in Obj(K) and false in M, which is impossible). Suppose that the first such literal is Lq. Since E is complete w.r.t. autoepistemic literals (i.e., E ⊢ Lq or E ⊢ ¬Lq) and M(Lq) = True, we have that q is in E. The theory T is stratified, therefore the degree of q is less than m; by Lemma 3 we have that q is in K_{m-1} and hence Lq is in K_m. Now we can eliminate Lq from S and obtain a clause U_1 which belongs to K_m and fails in M. If the first autoepistemic literal in S is ¬Lq it can be eliminated in exactly the same manner. By repeating this process we will eventually obtain a clause U_r which is objective, belongs to K_m and fails in M, which contradicts our assumption. Hence R is a subset of E. Q.E.D.

THEOREM 3. E is the only stable expansion of T.

3. Stratified Logic Programs

We will start by recalling the notion of a stratified logic program (for the propositional case) and its semantics (see [Apt et al., 1986], [Van Gelder, 1986]). By a logic program we mean a collection of clauses of the form S → p where S is a (possibly empty) list of literals and p is a propositional letter. Logic programs are used to answer queries of the form l_1 ∨ ... ∨ l_n where l_1, ..., l_n are literals. In this process the negation as failure rule is used, which makes the precise definition of the notion of an answer to a query Q somewhat difficult to come up with. Recently several researchers independently suggested the characterization of a class of logic programs for which this notion allows elegant and clear semantics.

DEFINITION 3. A logic program LP is called stratified if there is a partition LP = T_0 + ... + T_n such that if a propositional letter p belongs to the conclusion of a clause in T_k then p does not belong to T_0, ..., T_{k-1} and ¬p does not belong to T_0, ..., T_k.

Stratifiability is a condition on the use of negation in a logic program. Intuitively it forbids the use of negation on formulas which are not completely defined.

EXAMPLE 4. It is easy to see that the program p, p → q, ¬q → r is stratified with T_0 = {p, p → q} and T_1 = {¬q → r}.

The notion of an answer to a query for a stratified program LP is based on the following definition:
DEFINITION 4. Consider a sequence of theories ELP_k (where ELP stands for an "extension of logic program") such that

    ELP_0 = CWA(T_0) + {¬p : there is no clause in LP with conclusion p};

    ELP_{k+1} = CWA(ELP_k + T_{k+1});

    ELP = ELP_n;

where CWA(T) is Reiter's Closed World Assumption applied to a theory T (see [Reiter, 1978]), i.e., CWA(T) = T + {¬p : T ⊬ p}.

PROPOSITION 1. For any stratified theory LP = T_0 + ... + T_n, ELP is consistent and has a unique model M.

This model is intended to represent the universe described by LP. (It can be shown that this model is exactly the "canonical" model of LP defined in [Apt et al., 1986].) The notion of "an answer to a query Q" is defined as follows:

DEFINITION 5. We will say that the query Q in LP has a positive answer, and write LP ⊨ Q, iff Q is true in M. Otherwise the answer to Q is negative.

4. The Relationship

To investigate the relationship between logic programs and autoepistemic theories we will need a suitable mapping I from the propositional language in which logic programs are written into the corresponding language with the belief operator L.

DEFINITION 6. For any propositional formula f, I(f) is the formula obtained from f by replacing every occurrence of every negative literal ¬p in f by the negative autoepistemic literal ¬Lp. For any logic program LP, I(LP) = {I(S) : S in LP}.

THEOREM 4. For any stratified program LP there is a unique autoepistemic expansion E of I(LP).

THEOREM 5. For any stratified program LP and for any query Q, LP ⊨ Q iff E ⊢ I(Q).

REMARK. The following corollary of Theorem 5 establishes the relationship between autoepistemic theories and prioritized circumscription (see [McCarthy, 1986]).

COROLLARY. For any stratified program LP and any query Q we have CIRC(LP; P_1 > ... > P_n) ⊨ Q iff E(LP) ⊢ I(Q).

Proof follows immediately from Theorem 3.1 of [Lifschitz, 1985], the definition of ELP and Theorem 5. (On the relationship of circumscription and logic programs see also [Lifschitz, 1986] and [Przymusinski, 1986].)
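For the propositional case, Definition 4 can be read as a bottom-up evaluation, computed stratum by stratum. The Python sketch below is one such reading, not Gelfond's formulation (which is proof-theoretic): clauses are assumed to be given per stratum as (body, head) pairs, negative body literals are written ('not', p), and CWA is realized by treating any atom not derived in lower strata as false.

    def eval_stratum(lower, clauses):
        """Least fixed point of one stratum's clauses; negated atoms are
        already fully determined by the lower strata (stratification)."""
        model = set(lower)
        changed = True
        while changed:
            changed = False
            for body, head in clauses:
                if head in model:
                    continue
                ok = True
                for lit in body:
                    if isinstance(lit, tuple):        # ('not', p): CWA lookup
                        ok = lit[1] not in model
                    else:
                        ok = lit in model
                    if not ok:
                        break
                if ok:
                    model.add(head)
                    changed = True
        return model

    def canonical_model(strata):
        model = set()
        for clauses in strata:
            model = eval_stratum(model, clauses)
        return model

    # Example 4: T0 = {p, p -> q}, T1 = {not q -> r}
    strata = [[([], 'p'), (['p'], 'q')],
              [([('not', 'q')], 'r')]]
    print(canonical_model(strata))   # the set {'p', 'q'}; r fails since q holds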
5. Conclusion

Moore's formalization of autoepistemic logic is based on the notion of a stable autoepistemic expansion of a theory T, which represents a possible set of beliefs of a rational agent with a set of premises T. If such an expansion of a theory T is unique then it can be viewed as the set of theorems derivable from T in autoepistemic logic. In this paper we introduced the notion of a stratified autoepistemic theory and showed that such theories have unique stable expansions. We believe that this result is of some importance not only because it clarifies the notion of "theoremhood" in autoepistemic logic but also for the following reasons:

(a) Like many other nonmonotonic reasoning systems, autoepistemic logic was presented non-constructively. Neither the semantic basis nor the syntactic realization of this semantics provided a mechanism for arriving at the theorems of a given autoepistemic theory. We use the notion of stratification to show that "classical" SLDNF resolution of Prolog can be used as such a mechanism for a rather broad class of autoepistemic theories. This result also allows us to interpret the behavior of systems based on (propositional) Prolog in terms of belief and suggests possible directions in which Prolog can be extended to autoepistemic theories.

(b) The results of this paper suggest that a designer of a knowledge system based on autoepistemic logic may find it rewarding, both conceptually and computationally, to restrict himself to stratified autoepistemic theories (very much as a designer of a traditional software system may find it rewarding to restrict himself to traditional data structures such as stacks, trees, etc.). It is possible that other syntactically described classes of autoepistemic theories more suitable for some types of applications will be discovered. But, in my judgement, to make the autoepistemic approach really practical we have to first extend it to allow quantification.

(c) Autoepistemic logic is based on an intuition rather different from those used for the development of other formalisms of this sort such as the negation as failure rule or circumscription. We believe that a better understanding of the relationship between different forms of non-monotonic logic is essential for further development. It may help to single out areas of applicability of these methods, find their limitations and even eventually lead to the discovery of deeper underlying principles of non-monotonic reasoning.

6. Acknowledgements

I am grateful to Vladimir Lifschitz and Halina Przymusinska for numerous discussions on the subject of this paper.

7. References

[Apt et al., 1986] Apt, K., Blair, H., and Walker, A., Toward a Theory of Declarative Knowledge. In: Preprints of Workshop on Foundations of Deductive Databases and Logic Programs, 1986.

[Gelfond and Przymusinska, 1986] On the Relationship between Circumscription and Autoepistemic Logic. Proceedings of the International Symposium on Methodologies for Intelligent Systems, 1986.

[Lifschitz, 1985] Closed-World Databases and Circumscription. Artificial Intelligence 27, 1985.

[Lifschitz, 1986] On the Declarative Semantics of Logic Programs with Negation. In: Preprints of Workshop on Foundations of Deductive Databases and Logic Programs, 1986.

[Moore, 1985] Semantical Considerations on Nonmonotonic Logic, Artificial Intelligence 25(1), 1985.

[Marek, 1986] Stable Theories in Autoepistemic Logic (preprint), 1986.

[McCarthy, 1986] Applications of Circumscription to Formalizing Common Sense Reasoning, Artificial Intelligence 28, 1986.

[McDermott and Doyle, 1980] Nonmonotonic Logic I, Artificial Intelligence 13, 1980.

[Przymusinski, 1986] On the Semantics of Stratified Deductive Databases. In: Preprints of Workshop on Foundations of Deductive Databases and Logic Programs, 1986.

[Reiter, 1978] On Closed-World Databases. In: Logic and Databases (H. Gallaire and J. Minker, Eds.), 1978.

[Van Gelder, 1986] Negation as Failure Using Tight Derivations for General Logic Programs, Third IEEE Symposium on Logic Programming, 1986.

[McDermott, 1982] Nonmonotonic Logic II: Nonmonotonic Modal Theories, J. ACM 29, 1982.
The Qualification Problem

Matthew L. Ginsberg and David E. Smith¹
The Logic Group
Computer Science Department
Stanford University
Stanford, California 94305

Abstract
In this paper, we propose a solution to McCarthy's qualification problem [10] based on the notion of possible worlds [3, 6]. We begin by noting that existing formal solutions to qualification seem to us to suffer from serious epistemological and computational difficulties. We present a formalization of action based on the notion of possible worlds, and show that our solution to the qualification problem avoids the difficulties encountered by earlier ones by associating to each action a set of domain constraints that can potentially block it. We also compare the computational resources needed by our approach with those required by other formulations.

I. Introduction
A. The problem
An important requirement for many intelligent systems is the ability to reason about actions and their effects on the world. There are several difficult problems involved in automating reasoning about actions. The first is the frame problem, first recognized by McCarthy [11]. The difficulty is that of indicating all those things that do not change as actions are performed and time passes. The second is the ramification problem (so named by Finger [2]); the difficulty here is that it is unreasonable to explicitly record all those things that do change as actions are performed and time passes. The third problem is called the qualification problem. The difficulty is that the number of preconditions for each action is immense. In a previous paper [4], we presented a computationally effective means for solving the frame and ramification problems. In this paper we extend this method to deal with the qualification problem.

A familiar example of the qualification problem, due to McCarthy, is the "potato in the tailpipe" problem. One precondition to being able to start a car involves having the key turned in the ignition, but there are many others. For example, there must be gas in the tank, the battery must be connected, the wiring must be intact, and there can't be a potato in the tailpipe. It would hardly be practical to check all of these unlikely qualifications each time we were interested in using the car.

To describe the qualification problem more formally, we will use a simple situation calculus to talk about the world. Let the predicate holds(p, s) indicate that the proposition p holds in the state s. We also denote by p(a) the preconditions of an action a, and by c(a) the consequences of the action a given that the preconditions hold. An action can now be characterized by an axiom or axioms of the following form:

holds(p(a), s) → holds(c(a), do(a, s)),

where do(a, s) refers to the new situation that arises after the action a has been performed. The qualification problem is that there are a great many preconditions and qualifications appearing in the complete precondition p(a). It is difficult to enumerate them all, and computationally intractable to check them all explicitly.

This overall problem consists of three distinct difficulties:

1. The language or ontology may not be adequate for expressing all possible qualifications on the action a,
2. It may be infeasible to write down all of the qualifications for a even if the ontology is adequate, and
3.

¹Both authors supported by DARPA under grant number N00039-C-0033 and by ONR under grant number N00014-81-K-0004.
It may be computationally intractable to check all of the qualifications for every action that is considered.

In this paper we will be concerned only with the second and third of these issues -- how to conveniently express qualifications and how to reason with them in a computationally tractable way. We will not consider the problem of recognizing or recovering from qualifications that cannot be described within the existing ontology or language of a system.

B. The default approach
There has been a recent resurgence of interest in problems of commonsense reasoning about actions and their consequences. Several authors [7, 8, 9, 13] have suggested that the qualification problem can be effectively addressed by grouping together all of the qualifications for an action under a disabled predicate. This predicate is then assumed false by default in any particular situation. For example, given an action a with explicit preconditions p(a), explicit consequences c(a) and additional qualifications q(a), we could write

holds(p(a), s) ∧ ¬disabled(a, s) → holds(c(a), do(a, s))
holds(q(a), s) → disabled(a, s),

together with the default rule

M¬disabled(a, s) / ¬disabled(a, s).

In other words, if the action's preconditions hold in state s, and the action is not provably disabled, then the consequences will hold in the state resulting from the execution of the action. The advantage of this approach is that a system does not need to reason about all of the obscure qualifications that might prevent each action. They can be assumed to be false, unless the contrary has been shown by some form of forward inference.

Figure 1: Move A to B's location. Figure 2: The dumbbell problem. Figure 3: The blocked dumbbell problem.

Unfortunately, there are some serious difficulties with this approach. Consider a simple blocks world consisting of a floor with two blocks on it, as shown in Figure 1, and a single operation move(b, l) that moves the block b to location l. One qualification on this action is that the intended destination for a move operation must be vacant. We might express this as:

holds(on(x, l), s) → disabled(move(y, l), s). (1)

If x is in some location l, the action of moving y to that location is disabled.

Now suppose that we complicate matters by allowing blocks to be connected together as shown in Figure 2. (We will henceforth refer to this as the dumbbell problem.) If we try to move the block A to the location occupied by B, B moves also, and will therefore not be in the way when A arrives. In this case, the fact that B is in the way is not a qualification on the action. So we need to modify (1) to become:

on(x, l) ∧ ¬connected(x, y) → disabled(move(y, l)), (2)

indicating that an object at the destination of an intended motion disables that action unless it is connected to the object being moved. (We have dropped the situation variable in (2) in the interests of simplicity.)

The "blocked dumbbell" problem shown in Figure 3 requires that we introduce still more qualifications on the move operator. Now the presence of C blocks the action, since B is unable to move to its new location. We have to modify (2) to produce something like:

on(z, l′) ∧ connected(x, y) ∧ ¬connected(y, z) ∧ induced-position(y, l′, move(x, l)) → disabled(move(x, l)).
(3)

This axiom states that a move action will be disabled if an object connected to the object being moved is prevented from reaching its new location.

The increased complexity is a consequence of the fact that the disabling rules (2) and (3) need to anticipate the ramifications of the move action, but the possible ramifications become increasingly numerous and complicated as the complexity of the domain increases.

In addition to epistemological problems, this complexity leads to computational difficulties. As the number of ramifications grows, it becomes impractical to forward chain on the direct results of an action in order to determine all of the subsequent actions that may be disabled. We will see in Section IV-A that a backward chaining approach to this problem is also intractable.

C. Approach
In the examples above, the move operation always failed because there was something in the way. It would therefore seem that we should be able to derive the above qualifications from more general constraints on the world. In the blocks world, one of the domain constraints is that an object cannot be in two places at once; another domain constraint is that no two objects can ever be in the same place at the same time. We could state these formally as:

on(x, l) ∧ l ≠ l′ → ¬on(x, l′)
on(x, l) ∧ x ≠ z → ¬on(z, l). (4)

If we try to move a block to a location that is already occupied, the resulting world will be in contradiction with the domain constraint (4). We conclude that the action cannot be performed. A similar argument can be made for the potato in the tailpipe problem. In this case, it is inconsistent for an engine to be running with a blocked exhaust. It follows that a car with a blocked exhaust cannot be started.

Unfortunately, there is a serious flaw in these arguments. The trouble is that we have not distinguished between things that an action can change (ramifications) and things that prevent it from being carried out (qualifications). In our blocks world example, it may very well be that a block in the way will defeat a move operation. On the other hand, it might be the case that the robot arm is sufficiently powerful that any block in its way simply gets knocked aside. Given only the domain constraint, we have no way of knowing which is the case.

The same is true for the potato in the tailpipe problem. Given a car with a potato in its tailpipe, how are we to know whether turning the key in the ignition will have no effect, or will blow the potato out of the tailpipe? Surely a potato in an exhaust nozzle of the space shuttle would not prevent it from taking off, but nowhere have we provided any information distinguishing the two cases.

The problem is essentially this: given that the results of an action may include arbitrary inferential consequences of the stated results, we need to distinguish legitimate qualifications for an action from possible ramifications of the action. One solution to this problem is to explicitly identify, for each potential ramification of an action, whether or not it can act to qualify the action in question. Unfortunately, the number of potential ramifications of an action grows exponentially with the complexity of the domain [4], so that any approach to the formalization of action that requires the exhaustive enumeration of all of an action's ramifications will become computationally intractable when dealing with complex domains.
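To see the fragility concretely, here is a small sketch of ours (the function names and world representation are our illustrative inventions, not the authors' code) that implements the disabling rules (1) and (2) directly. Every new domain feature forces another edit, and rule (3) would force the test to anticipate ramifications:

```python
# Sketch of the exhaustive "disabled" style for the move operator.
# A world is a dict block -> location; `connected` is a set of pairs.

def connected_pair(connected, x, y):
    return (x, y) in connected or (y, x) in connected

def disabled_move(world, connected, block, dest):
    for other, loc in world.items():
        if other == block or loc != dest:
            continue
        # Rule (1): something at the destination disables the move ...
        if not connected_pair(connected, other, block):
            return True       # ... unless, per rule (2), it is connected.
        # Rule (3) would have to be added here: is `other` itself blocked
        # at its induced position? Answering that means re-deriving the
        # ramifications of the move inside the disabling test.
    return False

world = {"A": "l1", "B": "l2"}
print(disabled_move(world, set(), "A", "l2"))           # True  (B in the way)
print(disabled_move(world, {("A", "B")}, "A", "l2"))    # False (dumbbell case)
```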
The approach we will take to this problem is to indicate, for each possible action, which subset of the domain constraints can potentially block the action. In our blocks world example, the domain constraint that no two things can be in the same place at the same time qualified the failing move operations. In the car example, the constraint about exhaust blockages leads to the qualification.

D. Organization
In the next two sections, we present a formalization of the informal approach discussed in Section I-C, and go on to show that this approach does indeed give intuitively correct answers when dealing with a variety of qualified and unqualified actions. In Section IV we briefly compare the computational requirements of the possible worlds approach with those of existing descriptions, such as a QA-3 type planner using monotonic situation calculus [5] or a system using a default approach such as that described in Section I-B.

II. Possible worlds
The approach to qualification that we are proposing builds upon our earlier work on the frame and ramification problems [4]. We will review that work very briefly here; the essential idea is to take the result of an action to be the nearest possible world in which the explicit consequences of the action hold.

To formalize this, suppose that we have some set S of facts describing the condition of our world before taking an action a with consequences C(a). Now after the action a is taken, we need to add the facts in C(a) to our world description S; the difficulty is that the simple union S ∪ C(a) may be inconsistent. In order to avoid this difficulty, we consider consistent subsets of S ∪ C. In other words, we define a possible world for C in S to be any consistent subset of S ∪ C. Now note that the nearness of a particular subset to the original situation described by S is reflected by how large the subset is: if C ⊆ T1 ⊆ T2 ⊆ S ∪ C, T2 is at least as close to S as T1 is. This leads us to define the nearest possible world for C in S to be a maximal consistent subset of S ∪ C. Note that maximal here refers to set inclusion, as opposed to cardinality, so that a subset of S ∪ C is maximal if it has no consistent superset in S ∪ C.²

There is one additional subtlety that we need to consider. Specifically, there will often be facts that will always hold, so that we want to only consider subsets of S ∪ C that contain them. Domain constraints such as (4) often have this property; we can expect (4) to hold independent of the modifications we might make to our world description. We cater to this formally by supposing that we have identified some set P containing these protected facts.

Definition 1. Assume given a set S of logical formulae, a set P of the protected sentences in our language, and an additional set C. A nearest possible world for C in S is defined to be any subset T ⊆ S ∪ C such that C ⊆ T, P ∩ S ⊆ T, T is consistent, and such that T is maximal under set inclusion subject to these constraints.

In general, we will have no use for possible worlds other than the nearest ones, and will therefore refer to the nearest possible worlds for C in S simply as possible worlds for C in S.

²This definition originally appears in [1]. It is shown in [3] to be equivalent to ideas appearing earlier in Reiter's default logic [12].
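Definition 1 lends itself to a direct, if exponential, computation. The sketch below is ours (the function names and the consistency interface are assumptions, not the paper's): it enumerates the maximal consistent subsets of S ∪ C that contain C and the protected facts P ∩ S, given a domain-supplied consistency test. For the qualification test developed in the next section, one would pass a consistency test that ignores the constraints in Q(a), and then check those constraints against each returned world.

```python
# Brute-force sketch of Definition 1: nearest possible worlds for C in S.
# `consistent` must be monotone downward (subsets of consistent sets are
# consistent), which holds for ordinary logical consistency.

from itertools import combinations

def possible_worlds(S, C, P, consistent):
    required = set(C) | (set(P) & set(S))
    if not consistent(required):
        return []                          # no possible world at all
    optional = list(set(S) - required)
    worlds = []
    # Try larger retained subsets first; a candidate is maximal iff it is
    # not strictly contained in an already-accepted world.
    for k in range(len(optional), -1, -1):
        for kept in combinations(optional, k):
            w = required | set(kept)
            if consistent(w) and not any(w < v for v in worlds):
                worlds.append(w)
    return worlds

# Toy facts ('on', block, loc); this consistency test ignores Q(move) --
# shared locations are allowed -- but still forbids a block in two places.
def consistent(facts):
    blocks = [b for t, b, l in facts if t == 'on']
    return len(blocks) == len(set(blocks))

S = {('on', 'A', 'floor'), ('on', 'B', 'floor')}
for w in possible_worlds(S, {('on', 'A', 'B')}, set(), consistent):
    print(sorted(w))    # A's old location is dropped; then check Q(move)
```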
III. Qualification and possible worlds
A. Manipulating domain constraints
To describe qualification in this framework, we will describe an action a using a precondition p(a), a consequence set C(a), and a qualification set Q(a). The qualification set contains those domain constraints that can qualify the success of the action.

As an example, we can characterize the move operator as follows:

p(move(b, l)) = clear(b)
C(move(b, l)) = {on(b, l)}
Q(move) = {on(x, l) ∧ x ≠ z → ¬on(z, l)}. (5)

The precondition is simply that the block being moved be clear, and the consequence is to relocate the block at the destination of the move operation. The qualification set consists of the single domain constraint stating that only one block can be in any particular location at any given time. In other words, move actions will be qualified if something gets in their way.

To determine whether or not an action will succeed, we first remove the domain constraints in Q from our world description, and only then do we construct the nearest possible world in which the consequences of the action hold. If the domain constraints are violated in all of the worlds so constructed, the action is qualified; if there is some world in which none of the domain constraints is violated, the action succeeds (and the corresponding world is the result of the action).

For move actions, this involves computing the consequences of the action assuming that any number of blocks can occupy the same location. If no two objects coincide in the resulting world, the action succeeds: nothing got in the way.

B. Examples
We now examine a series of potentially qualified actions, and show that this definition does indeed give us the desired result in all cases.

Consider first the simple example shown in Figure 4a. The initial state is given by:

* on(A, floor)
on(B, floor)
on(C, B). (6)

Figure 4: A qualified action (move A to B's location; the domain constraint is violated).

To construct the possible world for on(A, B), we must remove the fact that A is on the floor, since the domain constraint indicating that A can be in only one place at a time is not in the qualification set Q(move). But we do not need to remove the fact that B is also on top of C, since the domain constraint that A and B cannot coincide is not being considered. The resulting world is shown in Figure 4b, where the * labelling (6) indicates that it has been removed from our world description. The domain constraint (4) is violated in this world, and the action is therefore qualified.

As a second example, consider the dumbbell problem, which is repeated in Figure 5a. The initial state is given as:

* on(A, l1)
* on(B, l2)
connected(A, B).

Figure 5: An unqualified action (the domain constraint remains intact).

We also need axioms describing the connected predicate. We might have³:

connected(x, y) ∧ on(x, l1) → on(y, l2) (7)
connected(x, y) ∧ on(x, l2) → on(y, l3). (8)

We assume that the axioms describing connection and the fact connected(A, B) are all protected. Even in the absence of the domain constraint saying that A and B cannot both be located at l2, on(B, l2) is inconsistent with the consequence on(A, l2) because of the domain constraint (8) describing the effect of the connection between A and B. Thus (4) continues to hold, and the action is not qualified. The result is given by:

on(A, l2)
connected(A, B).

Using (8), we can now derive on(B, l3) from these two facts, so that B's new location is a ramification of the move action. See Figure 5.

³An alternative formulation would describe the connected predicate arithmetically, assigning a numeric position to objects in our domain.
We are using the description given only for reasons of simplicity.

Figure 6: The blocked dumbbell (move A onto B's location with C in the way; the domain constraint is violated).

In the blocked dumbbell problem (Figure 6a), the initial description is:

* on(A, l1)
* on(B, l2)
on(C, l3)
connected(A, B).

As above, B must move when A does, since the two blocks are connected. But C need not be dislodged if we ignore the domain constraint in Q(a): the only reason it has to move is that it cannot remain at B's implied destination. Thus the domain constraint is violated in the resulting world and, as depicted in Figure 6, the action fails.

The pulley problem shown in Figure 7a is somewhat different. Here, moving A toward B causes B to move toward A, and a ramification of the action is to introduce a qualification. The action should fail.⁴ The initial state is given by:

* on(A, l1)
* on(B, l2)
pulley(A, B).

If we denote by l4 the location halfway between l1 and l2, the axioms describing the pulley system are:

pulley(x, y) ∧ on(x, l1) → on(y, l2) (9)
pulley(x, y) ∧ on(x, l4) → on(y, l4). (10)

Ignoring the domain constraint stating that blocks cannot coincide, the possible world relocating A halfway between l1 and l2 removes the facts marked with a * above; the domain constraint (4) is violated in this world, since the physics of the pulley system implies that both blocks must be located at l4.

⁴As with the "self-fulfilling" dumbbell problem, this sort of "self-defeating" action poses severe problems for earlier descriptions of qualification.
Assuming that most actions are not qualified, so that the examination of each of the X2”” qual- ifications is a necessary overhead to the investigation of the successful action, it, follows that,: 216 Planning Theorem exhaustive 2 The computational overhead required by an approach to qualification is given by: B. The inferential approach The approach to qualification presented in Section III de- scribes qualifications not in terms of disabling conditions, but by using a “qualification set”. In principle, it might be as difficult to describe the qualification set as to list all of the disabling conditions; in practice, however, it appears that a simple qualification set (such as that in (5)) will often correspond to all of the disabling conditions. The power of our approach to qualification is that it enables us to take advantage of this simplicity of description.5 We will say that a domain is uniform if a single qualification set for each action generates all of the disabling conditions for it. In a uniform domain, we address the qualification problem by identifying, for each action a, which of the Kur domain constraints6 are in Q(a). This will require us to list as many as Ka2r domain constraints in the various Q(u)‘s, although it is likely that a domain constraint will only be in Q(u) if it involves a relation symbol appearing in u’s consequence set C(u). In general, therefore, we can expect to need to list at most ZKUT domain constraints in order to describe Q(u) for each action, where z is the num- ber of consequences in a typical C(a) (i.e., the number of “direct” consequences of the action). The additional time needed to investigate the action is that needed to check whether or not the domain con- straints in &(a) are violated in the possible world con- structed. We demonstrated in [4] that this is given by nm(t -t-t,), where t, M t and n is the number of constraints in Q(u). This gives us: Theorem 3 In a uniform domain, the computational re- quirements of the inferential approach to qualification are given by: Comparing this with theorem 2, we see that it is the inferential approach that does not suffer from an exponen- tial deterioration in performance as the domain becomes increasingly complex. 5%ollam has argued that our method also takes its power from L a partial order on possible worlds, but this is not the case. As can be seen from the examples in Section III-B, the met.hod remains romput at ionally effective if the partial order used is simply that given by set inclusion. ‘Lye are using T here to represent the number of relation sylubols in our domain. See [d]. We would like to thank the Logic Group for providing, as ever, a cooperative and stimulating - and demanding - environment in which to work. We would like to. specifi- cally thank Vladimir Lifschitz, John McCarthy, Drew Mc- Dermott and Yoav Shoham for many useful discussions; it, was Vladimir who introduced the dumbbell problem dur- ing a discussion at the Timberline planning workshop. PI PI PI VI 151 PI PI PI PI WI WI PI WI R. Fagin, J. Ullman, and M. Vardi. On the seman- tics of updates in databases. In Proceedings Second ACM Symposium on Principles of Database Systems, pages 352-365, Atlanta, Georgia, 1983. J. J. Finger. Exploiting Constraints in Design Synihe- sis. PhD thesis, Stanford University, Stanford, CA, 1987. M. L. Ginsberg. C ounterfactuals. Artificial Inttlli- gence, 30:35-80, 1986. M. L. Ginsberg and D. E. Smith. Reasoning about ac- tion I: A possible worlds approach. 
In Proceedings of the 1987 Workshop on Logical Solutions to fhe Frame Problem, Lawrence, Kansas, 1987. C. C. Green. Theorem proving by resolution as a basis for question-answering systems. In B. Meltzer and D. Mitchie, editors, Machine Intellzgence 4, pages 183-205, American Elsevier, New York, 1969. D. Lewis. Counterfactuals. Harvard University Press, Cambridge, 1973. V. Lifschitz. Formal theories of action. In Proceed- ings of the 1987 Workshop on Logical Sollltzons to the Frame Problem, Lawrence, Kansas, 1987. J. McCarthy. Applications of circumscription to for- malizing common sense knowledge. Artificial Intelli- gence, 28:89-l 16, 1986. J. McCarthy. Circumscription - a form of non- monotonic reasoning. Artificial Intelligence, 13:27- 39, 1980. J. McCarthy. Epistemological problems of artifi- cial intelligence. In Proceedings of the Fifrh Inter- national Joint Conference on Artificial Intelligence, pages 1038-1044, Cambridge, MA, 1977. J. McCarthy and P. J. Hayes. Some philosophical problems from the standpoint of artificial intelligence. In B. Meltzer and D. Mitchie, editors, Machine In- telligence 4, pages 463-503, d Ter.jcan Elsevier, New York, 1969. R. Reiter. A logic for default reasoning. Artificial Intelligence, 13:81-132, 1980. Y. Shoham. Chronological ignorance. In Proceedings of the Fifth National Conference on Artzficza?i Inlelli- gence, pa.ges 389-393, 1986. Ginsberg and Smith 217
Simple Causal Minimizations for Temporal Persistence and Projection

Brian A. Haugh
Martin Marietta Laboratories
1450 South Rolling Road
Baltimore, Maryland 21227

Abstract
Formalizing temporal persistence and solving the temporal projection problem within traditional non-monotonic logics is shown possible through two different approaches, neither of which requires special minimization techniques. Minimizing potential causes is shown to yield a type of temporal persistence that is useful for the temporal projection problem, although it differs significantly from the ordinary conception of temporal persistence. A conception of determined causes is then developed whose minimization does yield the results preferred by ordinary temporal persistence. Finally, previous approaches to formalizing temporal persistence using chronological minimizations are shown inadequate for certain classes of scenarios, which causal minimizations formalize correctly.

I. Introduction
A. Temporal Persistence
Temporal persistence of facts (i.e., their presumed continuation through time in the absence of contrary information) was introduced to the AI community by Drew McDermott as an important part of formalizing planning (McDermott, 1982). It contributes to a solution of the "frame problem" (i.e., the determination of what facts will continue to hold after the occurrence of some sequence of events) in automatic planning systems (McCarthy and Hayes, 1969). Applying McDermott's original conception of persistence, referred to here as "the ordinary conception of temporal persistence," enables the deduction that all previously holding facts continue to hold (for their persistence period) unless explicit causal rules (or other provable facts) entail their cessation. Other, non-standard, conceptions of the conditions under which facts are presumed to persist are possible, and will be shown to be preferable under certain circumstances.

B. Temporal Projection
Temporal persistence may also contribute to solutions of a narrower problem, the "temporal projection problem," which has been recently described as:

"... given an initial description of the world (some facts that are true), the occurrence of some events, and some notion of causality (that an event occurring can cause a fact to become true), what facts are true once all the events have occurred?" (Hanks and McDermott, 1986, p. 330)

While this definition could be more explicit, it appears to define the temporal projection problem as narrower than the frame problem since (in the context of its use) it seems intended to admit to an initial description only facts that hold in some initial state, excluding facts that hold prior or subsequent to that state. Ordinary temporal persistence has been proposed as a basis for the solution to both these problems. Thus, we examine various alternatives for formalizing temporal persistence to determine whether they accurately model its ordinary conception or provide a reasonable basis for a solution to either of these two problems.

C. Difficulties in Logical Formulation
Early AI papers on temporal persistence¹ expressed optimism that its non-monotonic features would eventually be adequately modeled by some version of a non-monotonic logic. It was recently realized (Hanks & McDermott, 1985) that formalizing this concept was more difficult than it appeared, due to unacceptable models associated with certain obvious formulations.
Furthermore, the intended interpretation of temporal persistence seemed relatively obvious, and procedural implementations had already been developed. Thus, it was argued that ordinary non-monotonic logics (e.g., the NML of McDermott and Doyle, 1980; the circumscription of McCarthy, 1980, 1986; or the default logic of Reiter, 1980) were inherently incapable of formalizing an important type of ordinary inference, casting doubt on the suitability of logic as a foundation for continuing work in artificial intelligence (Hanks and McDermott, 1985, 1986). In response to these arguments we describe how useful notions of temporal persistence can be formalized by minimizing certain causal relations using ordinary non-monotonic logics.

II. Causal Minimizations
A. Informal Scenario Description
The simple shooting scenario presented in Hanks and McDermott (1986) is used to illustrate how different proposed logical formulations achieve the temporal persistence properties important to solving the problems of temporal projection. In this idealized scenario, a gun is loaded and, after a brief wait, is fired at someone. Furthermore, there is a causal rule asserting that if the gun is shot while loaded, someone will die. Temporal persistence is required to ensure that the gun remains loaded and the intended result obtains.

B. Situation-Calculus Formalization
A situation-calculus type formalism (McCarthy, 1968) is used here, although it differs in two respects from typical logics of this type. First, causal relations are represented (after McDermott, 1982), and we use result relations (e.g., Result(c, s, s′)) rather than result functions (e.g., result(c, s)), since relations are easier to restrict than functions when minimizations are being performed. Causal relations also seem required for supporting the intended causal minimizations, since truth-functional representations of causality will hold in many circumstances in which they do not express causal relations. The causal predicate used is of the form Causes(precondition1, cause1, effect1), and is informally interpreted as: when precondition1 holds in a state, then any event of type cause1 occurring in that state causes the effect effect1 to hold in the situation that results.

Scenario Axioms
The particular known facts are represented in this formulation by the axioms:

A1) T(Alive, S0)
A2) Result(Load, S0, S1)
A3) Result(Wait, S1, S2)
A4) Result(Shoot, S2, S3)

The general causal relations are expressed by:

A5) Causes(True, Load, Loaded)
A6) Causes(Loaded, Shoot, not(Alive))

where

A7) ¬(Load = Wait ∨ Load = Shoot ∨ Wait = Shoot ∨ Loaded = True ∨ Loaded = not(Alive))

¹For example, McDermott (1982), p. 122.
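Axioms A1-A7 are small enough to explore mechanically. The sketch below is our encoding (names are illustrative, and it deliberately omits the clipping-explanation requirement introduced in the axioms that follow): it enumerates every truth assignment to the fluents that satisfies A1 and the causal rules, and shows why something more is needed, since models in which the gun spontaneously becomes unloaded survive and both outcomes for Alive at S3 remain.

```python
from itertools import product

SITS = ["S0", "S1", "S2", "S3"]
EVENTS = {("S0", "S1"): "Load", ("S1", "S2"): "Wait", ("S2", "S3"): "Shoot"}
CAUSES = [("True", "Load", "Loaded"),          # A5
          ("Loaded", "Shoot", "not(Alive)")]   # A6
FLUENTS = ["Alive", "Loaded"]
PAIRS = [(f, s) for f in FLUENTS for s in SITS]

def holds(val, p, s):
    return True if p == "True" else val[(p, s)]

def effect_ok(val, eff, s2):                   # the causal axiom, specialized
    if eff.startswith("not("):
        return not val[(eff[4:-1], s2)]
    return val[(eff, s2)]

def models():
    for bits in product([False, True], repeat=len(PAIRS)):
        val = dict(zip(PAIRS, bits))
        if not val[("Alive", "S0")]:           # A1
            continue
        if all(effect_ok(val, e, s2)
               for (s1, s2), ev in EVENTS.items()
               for p, c, e in CAUSES
               if ev == c and holds(val, p, s1)):
            yield val

print({m[("Alive", "S3")] for m in models()})  # {False, True}
```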
General Temporal Causal Axioms
A set of axioms is required to define the basic temporal and causal relations. The condition that a fact has just ceased being true (clipped) is defined:

T1) Clipped(f, s) <=> (∃c, s′) [Result(c, s′, s) & T(f, s′) & ¬T(f, s)]

That a prior fact has become false (changed) is defined:

T2) Changed(f, s) <=> (∃s′) [Before(s′, s) & T(f, s′) & ¬T(f, s)]

where

T3) Before(s′, s) <=> (∃c) Result(c, s′, s) ∨ (∃c, s′′) (Before(s′, s′′) & Result(c, s′′, s))
T4) Before(s′, s) => ¬Before(s, s′)

One special feature of this formulation is that it requires clippings of facts to have immediate explanations in terms of applicable causal laws², expressed by:

T5) Clipped(f, s) => (∃p, c, s′) [Causes(p, c, not(f)) & T(p, s′) & Result(c, s′, s)]

where

T6) T(not(f), s) => ¬T(f, s)
T7) f = not(not(f))
T8) T(True, s)

The consequence that the effect holds whenever the cause occurs and the preconditions of a causal relation hold is expressed by:

T9) Causes(preconditions, cause, effect) => (∀s, s′) [(T(preconditions, s) & Result(cause, s, s′)) => T(effect, s′)]

C. Minimizing Potential Causes
1. Distinguishing Preferred Models Causally
Examination of the alternative models of the shooting scenario reveals several significant distinctions between them, apart from the fact that clipping is minimized in the preferred models.

²As suggested by (Hayes, 1971).
Inequivalence to Temporal Persistence The results expected of ordinary temporal persistence cannot be obtained by minimizing potential causes when minor modifications are made to the shooting scenario to include a potential causal explanation for the gun. being unloaded. For example, suppose someone tried to unload the gun, and would have, if he had known how, while waiting; that is, we add the following axiom: A8) Cause(Knows, Wait, not(Loaded)) Here we assume that there is no information about the truth of the precondition (“Knows”) in causal relation A8, so that we cannot conclude that the unloading was successfully performed. This modified set of axioms admits models in which “T(Knows, Sl)” is false, and in which “Loaded” persists through S2 and S3. Such models are the preferred ones by the ordinary notion of temporal persistence, which requires that provable facts (e.g., “Loaded”) persist when possible (barring conflicts with other persistences). Even 3After completing this analysis, we learned of a similar ap- P roach to causal minimization developed by Vladimir Lifschitz Lifschitz, 1987 amined cases. 1 that appears to achieve the same results in the ex- ifschits’s approach also includes a computationally efficient method for performing the required minimizations as well as other types of default causal reasoning, although it does not ap- pear as readily extensible to more expressive temporal formal$ms. l-laugh 219 though there is some conflict of persistences in this case, it is clear that the ordinary notion of persistence requires the persistence of “Loaded” in this case just as it did in the original. If no attempt were made to shoot the gun, then there would be no conflict of persistences, and ‘LLoaded” unquestionably ought to persist when the ordi- nary conception is in force. Whether or not someone attempts to fire the gun subsequently should have no bearing on whether “Loaded” persisted previously, an intuition well illustrated by the original scenario. Furth- ermore, this persistence result is obtained by application of chronological minimization of clipping (Kautz, 1986, Lifschitz, 1986), the most successful previous approach to formalizing temporal persistence. Minimizing qtential causes, however, does not lead to the results reqL.-ed by temporal persistence in these cases. Whether or not the gun is actually unloaded, Potential-cause(Knows, Wait, not(Loaded), S2) will be true; hence, minimizing potential causes cannot distin- guish the preferred model, and will not accurately model the ordinary notion of temporal persistence. However, we must ask whether the ordinary notion of persistence is the preferred inference procedure in such circumstances. 4. Minimizing Potential Causes Preferred Suppose that a potential assassin knew that an attempt would be made to unload the gun, but had no idea whether the attempt would be successful. Surely it would be foolhardy to proceed with the expectation of a loaded gun, simply because there was no definite proof of its unloading. Knowledge of the occurrence of a potential cause for the clipping of a fact provides reasonable grounds for doubting its persistence. It seems more rea- sonable in such circumstances to consider the persistence of potentially clipped facts as uncertain, and (in planning contexts) to make plans for both alternatives that ensure the desired effects e.g., unloading attempt). !I? 
checking the gun after the his strategy corresponds to minim- ization of potential causes, and entails that after the attempted unloading, a loaded condition can no longer be deduced, only the disjunction “T(Loaded, S2) v T(not(Loaded), S2)” can be derived. Thus, minimization of potential causes actually pro- vides a better model of commonsense temporal reasoning in the cases examined than does ordinary temporal per- sistence or the chronological minimization of clipping. Whether this advantage persists in arbitrary temporal projection scenarios requires further investigation. In any case, this inquiry has created a new perspective on how persistence may best be modeled by planning systems. Since ordinary temporal persistence might still be useful, we have continued to pursue its formalization. Il. Minimizing Determined Causes I. Initial Conception of Determined Causes To use causal minimizations for ordinary temporal per- sistence requires a causal concept that is more discrim- inating than that of potential cause. We observe that in the revised scenario, the preferred clippin differs in that its precondition (“Loaded” “, (of “Alive”) must either hold or have changed previously. In contrast, the precon- dition change a “Knows”) for ‘clipping “Loaded” need not have if it is false, since it may have always been false. Thus, we may distinguish the preferred model by minim- izing causes whose preconditions could not have always been false. In all models, such “determined causes” either have true preconditions or their preconditions have previously changed to false, i.e.: M2) Determined-cause(p,c,e,s) < = > [Causes(p,c,e) & (3s’) [ Result( c, s’, s) & (T(P) s’) v Chwes(p,s’))l] Such causes are “determined” in the sense that the axioms of the system taken together with ordinary tem- poral persistence will determine the truth value of their preconditions, unlike merely potential causes, whose preconditions may be of indeterminate truth value. Minimizing determined causes will then favor as minimal the models of the latest scenario preferred by ordinary persistence, since the attempt at unloading will not be a determined cause in all models. More formally, the wffs Determined-cause(True, Load, Loaded, SO) and Determined-cause(Loaded, Shoot, not(Alive), S,Y) are true in all models of the scenario, while Determzned-cause Wait, not(Loaded), Sl) PI, is false in some models in w h ich the attempt at unloading is unsuccessful. Thus, all models minimal in determined causes are ones in which the gun remains loaded, as required by temporal per- sistence. The method by which minimizing determined causes works in other cases is interesting to observe. Whenever something qualifies as a determined cause by virtue of a change in the truth value of its precondition, models in which the precondition retains its last provable value are preferred, since any further change would require an additional cause. Thus, determined causes in such cases will not be effective in the minimal models unless their preconditions are presumed to be true assuming ordinary persistence, as intended. 2. Determined Causes in Causal Chains This conception of determined cause needs modification to account for causal chains, in which the precondition of one causal relation is a result of another causal relation. 
For example, a simple causal chain can be described: Al’) T(A, SO) & -T(B,SO) & --T(C, SO) A2’) Result(Wait, SO, Sl) A3’) Result(E1, Sl, S2) A4’) Result(E2, S2, S3) A5’) Causes(A, El, B) A6’) Causes(B) E2, C) In such situations, ordinary temporal persistence prescribes that A persists through the “Wait,” enabling El to cause B, which in turn enables E2 to cause C. How- ever, event E2 will not be always be a determined cause of C under our latest definition, since there are models in which A is clipped and C remains false without changing. This results in indifference between models in which A is clipped and A persists, since when A is clipped, there is an extra determined cause for that clipping, and when A persists, there is an extra determined cause for the change in C. Thus, the definition of determined cause must be modified to also apply when the precondition of a cause is the result of a determined cause or its persistence, making it recursive, as follows: M3) Determined-cause(p,c,e,s) < = > [Causes(p,c,e) & ($‘)[Result( c, s’,s) & (T(p) s’) v Changes(p,s’) v &“,pl,cl)(Before(s”,s’) & Determined-cause(pl,cl,p,s”)))]] This new definition will now handle our chaining example properly: since E2 will be a determined cause of C in all models, any models with A clipped will not be minimal in determined causes; hence, A is not clipped. The recursive nature of the new definition also ensures that the conse- quences of any length chain of determined causes will also be determined causes. All such determined causes will be potential causes. But, when there is no information bearing on the truth of the preconditions of a potential cause, it will not be a 220 Planning determined cause, allowing the preference of models favoring the persistence of known facts over the truth of unknown preconditions. Thus, we have isolated a con- ception of determined cause whose ordinary minimization yields the results entailed by the ordinary conception of temporal persistence in all the varied situations con- sidered so far. However, because of the many ways in which different persistences may interact and conflict, the adequacy of this conception must be considered provi- sional upon further investigations. . Gircumscription Proofs To illustrate our assertion that the causal minimizations discussed could be achieved in any ordinary non- monotonic logic, we here sketch an approach to perform- ing these minimizations using McCarthy’s circumscrip- tion techniques (McCarthy 1986). Circumscription works by supplementing a theory with a set of circumscription axioms that entail the minimization of. an identified predicate’s extension. Variable circumscription I Perlis and Minker, 1986 is a simplified form of formu a cir- cumscription (MC d arthy, 1986) in which the circumscrip- lfiymaxloms are specified by a schema of the followmg 0 : [A[Z,‘..., Z,] 8L (x)(Zox => P,x)] = > (Y)(P,Y => Z,Y) where A is the original theory specified as a conjunction of all its axioms; A]P,,.,.,P,] is the same conjunction of axioms identified as a function of certain predicates that appear in them; P, is the original predicate to be minim- ized; PI,..., P are other predicates in the theory A that are allowed tz vary along with the minimization predicate Pa; and A[Z,,...,Z,] is the result of substituting the for- mulas Zs,..., Zn for the original predicates P,,,...,P, in the theory A. 
Proving the intended results of minimizing our causal relations (e.g., Potential-cause) using circumscrip- tion requires choosing other, intimately connected, rela- tions (PI,..., P,) to vary, so that the original theory under appropriate substitutions, A[Z,,...,Z,], can be proven. We review how such a proof proceeds for circumscribing Potential-cause in the original shooting scenario. When the circumscription axiom schema for this case is instan- tiated, choosing T, Result, Causes, Clipped, Changed, and Before as the P,,...,P, to vary, we get: {[A[Z,,...,z,] & (‘C/p,c,w>( Z&v,e,s) = > Potential-cause(p,c,e,s)] = > (t/P’,c’,e’,s’)(Potential-cause(p’,c’,e’,s’) = > Z&P ,c’ ,e’ 4))) The theory A in our original scenario is the conjunction of our axioms Al-A7 and Tl-T9. The substitution, Z,, for Potential cause p,c,e,s), should be a formula that uniquely identifies t ii e minimal set of potential causes in this situation, as follows: [ (p = True & c = Load & e = Loaded & s = Sl) v (P = Loaded & c = Shoot & e = not(Alive) & s = S3)] Any quadruple <p,c,e,s> satisfying this Z, obviously satisfies the Potential-cause predicate in this scenario. The substitutions for the other varying predicates should also specify their minimal extension in the preferred models. For T(f,s) we may substitute the Z,: [(f=Alive & (s=SO v s=Sl v s=S2)) v (f=Loaded & ( s = so v s=Sl v s=s2 v s=S3)) v (f=not(Alive) & s=S3) v (f=True & (s= so v s=Sl v s=s2 v s=S3)) ] For Result(c,s,s’) we many substitute the Z,: [(c = Load & s = SO & s’ = Sl) v (c = Wait & s = Sl & s’ = S2) v (c = Shoot & s = S2 & s’ = S3) ] For Causes(p,c,e) we may substitute the Zz: [(p = True & c = Load & e = Loaded) v (p = Loaded & c = Shoot & e = not(Alive)) ] For Clipped(f,s) we may substitute the Z4: [ f = Alive & s = S3 ] For Changed(f,s) we may substitute the Z,: [ f = Alive & s = S3 ] For Before(s) s’) we may substitute the Zg: its = so & (9’ = Sl v 9’ = s2 v 9’ = S3)) v (9 = Sl & (9’ = s2 v s’ = S3)) v (s = s2 & 9’ = S3)] These substitutions define an interpretation of their predicates which can be proven to satisfy the original axioms, although there is not space here to complete the proof. When this is done, it is possible to prove the antecedent of the circumscription axiom, leading to t’he result that the potential causes specified by Z, are the only ones that exist, which then allows proving that the loaded state of the gun persists, and the shooting is effective as intended. Although our knowledge of the preferred result assisted the derivation in this example, the circumscrip- tive axiom schema provides the basis for such proofs whether or not one knows how to choose the most useful substitutions for the varying predicates. Still, our exam- P le illustrates the validity of criticisms of circumscription e.g., Hanks and McDermott, 1986) which emphasize the absence of any efficient general procedure for applying it. Whether an efficient method can be developed for apply- ing circumscriptions to such causal minimizations remains a topic for further research. However, we have, nevertheless, demonstrated that such ordinary non- monotonic techniques can achieve the results desired for the temporal projection problem. Finally, the prospects for efficient non-monotonic techniques for causal minimi- zations have been brightened by recent independent development (Lifschitz, broad class of cases. 1987) of such techniques for a . Chronological Minimization A. 
IV. Chronological Minimization
A. Background
A chronological minimization of a time-indexed predicate is roughly defined as one that prefers admitting an instance that occurs later in time over one that occurs earlier in time. Chronological minimization (either of clipping or of knowledge) has been widely advocated for formalizing temporal persistence (Kautz, 1986; Lifschitz, 1986; Shoham, 1986). However, while it has been found to yield the desired results in temporal projection problems similar to our original shooting example, we have discovered that when there is incomplete knowledge of the initial state, it may not lead to the conclusions preferred by commonsense. Now it will be argued that, in certain cases, such chronological minimization cannot fully formalize the ordinary conception of temporal persistence either, so that its domain of application is further restricted. Although this argument addresses chronological minimization of clipping -- because it has been more clearly formulated than such a minimization of knowledge -- there is good reason to believe that chronological minimization of knowledge will also suffer from similar limitations.
since our formulation of the above counterex- a , ample is not characteristic of such problem descriptions. @. Inadequacy for Frame Problem Our counterexample schema above is not an example of a temporal projection problem because it specifies indefinite factual information about times other than the initial state. However, situations fitting our schema may easily arise in ordinary planning contexts, which include no specific information about the future. For example, our axioms (Al’ ’ - A4”) might be specified in a context in which the current state is S4, which was the result of some event in S3. It might be known in S4 that either A or B was clipped earlierlas indicated), due to some other fact known to hold in S4 that is incomnatible with both A and B persisting. Since such incomplete knowledge of the present and the past is characteristic of most real- world planning domains, chronological minimization will not provide the results preferred by ordinary temporal persistence (or commonsense temporal projection, for that matter) in many realistic planning situations; hence, D. Inadequacy for Temporal Projection Forced choice clipping situations like those just described can arise even within scenarios fitting the narrowly con- ceived problem of temporal projection. Even if the initial state is fully specified, certain sets of general causal rela- tions may lead to a forced choice of dipping one of two facts, neither of which has anv notential causal explana- tion.’ Consider situations in which two facts, A and-B, are initially known to be true, but the continued persistence of both in circumstances where certain events occur would lead to contradictory causal results: Al”‘) T(A, SO) & T(B, SO) & -T(C) SO) & yT(D1, SO) & -T(D2, SO) A2”‘) Result(Wait, SO, Sl) A,“‘) Result(E1, Sl, S2) A,“‘) Result(E2, S2, S3) A,“‘) Causes(A) El, C) A6”‘) Causes(B, E2, Dl) A,“‘) Causes(C) E2, D2) A,“‘) T(D1, s) => lT(D2, s) Here, the persistence of A would cause C to hold in Sl which would cause Dl to hold in S3, while the persistence of B would cause D2 to hold in S3. Thus, if A and B were both to persist, Dl and D2 would both hold in S3, which is impossible by A8”‘. Hence, the preconditions to one of the general causal relations A6”’ and A7”’ must be false in S2. Since temporal persistence provides no basis for choice here, only the disjunction (-T(B) S2) v -T(C) S2) follows. If C is false in S2, then the precondition, A, to the causal law supporting it, must also be false (i.e., -T(A) Sl)). Th us, temporal persistence supports only the conclusion that either A is clipped in Sl or B is clipped in S2. Since chronological minimization favors the later clipping of B (for no good reason), it does not model tem- poral persistence in such temporal projection situations. This temporal projection schema may be instan- tiated for an autonomous vehicle, Robbie, in a factory domain using the following informal definitions: A =df John can lock out Robbie’s forward gears. B =df Robbie’s reverse gears are locked out. C =df Robbie’s forward gears are locked out. Dl =df Robbie is observed moving forward. D2 =df Robbie is observed moving in reverse. Wait =df Robbie waits. El =df John tries to lock out Robbie’s forward gears. E2 =df Robbie moves. The causal laws are thus informally interpreted: A,“‘) If John is able to lock out Robbie’s forward gears, and attempts to do so, he will succeed. A,“‘) If Robbie’s forward gears are locked out, its move- ment will cause it to be observed moving in reverse. 
A7”‘) If Robbie’s reverse gears are locked out, its move- ment will cause it to be observed moving forward. In the initial conditions, reverse gears are locked out, forward gears are not locked out, John is able to lock the forward gears, and no robot movement is observed. Thus, we would ordinarily expect the robot to be unable to move in S2, except that we are assuming that move- ment occurs. Hence, one of the gears must not be locked, but there is no basis for deciding which one. If reverse is 222 Planning unlocked, then in accord with persistence, we assume that its previously locked state was clipped in S2. If forward is unlocked, then the precondition (John’s ability to lock it) for its presumptive cause clipped in Sl. \ Chronologica John’s attempt) must be minimization of clipping clearly entails that the reverse gear must have been un- locked, while minimizing determined causes (or potential causes) allows either alternative since neither is a deter- mined cause in all models. This quite ordinary sort of situation uncovers a problematic assumption in the chronological minimiza- tion approach. It assumes that one can start with an ini- tial state and a planned set of events and sweep forward in time allowing all facts to persist until the result of some event clips them. However, this model is clearly inadequate for handling conflicting results; temporal backtracking must be allowed in order to accommodate the adjustments required by such conflicts. Although causal minimizations are, thus, superior to chronological minimization of clipping in this type of scenario, other conflicting result scenarios might conceiv- ably create problems for our approach as well. This remains an area for further investigation. V. Further Research Much work remains to extend the simple formulation presented here to handle variable length persistences, more complex facts as preconditions and effects, multiple simultaneous events of the same type, densely ordered time, durations of states and events, event-event causa- tion and default causal generalizations. Our continuing work (Haugh, 1987) is aimed at a temporal causal logic supporting all these capabilities. VI. Summary The conclusions lows: reached here may be summarized as fol- 1) 2) 3) 4) 5) 6) Contrary to Hanks and McDermott (1985, 1986), minimizations using ordinary non-monotonic logics can handle the temporal projection problem. Minimizing potential causes leads to the results pre- ferred by commonsense in all the examined instances of the temporal projection problem. Minimizing determined causes models the ordinary notion of temporal persistence better than chrono- logical minimization of clipping. Ordinary temporal persistence does not yield the conclusions preferred by commonsense in certain cases of incomplete initial knowledge. Chronological minimization of clipping does not pro- vide an adequate basis for solving the temporal pro- jection problem nor for modeling temporal per- sistence in a variety of cases. Therefore, ordinary minimizations of potential and determined causes better formalize temporal projec- tion and temporal persistence, respectively, than the chronological minimizations previously advocated. Acknowlledgements I would like to thank Donald Perlis for helpful dis- References Allen, J. 1984. “Towards a General Theory of Action and Time.” Artificial Intelligence, 24 (2): 123-154. Hanks, S. and McDermott, D. 1985. Temporal Reason- ing and Default Logics. YALEU/CSD/RR #430. 
Hanks, S. and McDermott, D. 1986. "Default Reasoning, Nonmonotonic Logics, and the Frame Problem." AAAI-86. Palo Alto: Morgan Kaufmann Publishers, Inc., pp. 328-333.

Haugh, B. 1987. Non-Monotonic Formalisms for Commonsense Temporal-Causal Reasoning. Ph.D. Dissertation, Department of Philosophy, University of Maryland, College Park. (In preparation.)

Hayes, P. 1971. "A Logic of Actions." In D. Michie and B. Meltzer (eds.), Machine Intelligence 6. Edinburgh: Edinburgh University Press.

Kautz, H. A. 1986. "The Logic of Persistence." AAAI-86. Palo Alto: Morgan Kaufmann Publishers, Inc., pp. 401-405.

Lifschitz, V. 1986. "Pointwise Circumscription: Preliminary Report." AAAI-86. Palo Alto: Morgan Kaufmann Publishers, Inc., pp. 406-410.

Lifschitz, V. 1987. "Formal Theories of Action." In Brown, F. (ed.), The Frame Problem in Artificial Intelligence: Proceedings of the 1987 Workshop. Los Altos: Morgan Kaufmann Publishers, Inc.

McCarthy, J. 1968. "Programs with Common Sense." In Minsky, M. (ed.), Semantic Information Processing. Cambridge: The MIT Press, pp. 403-418.

McCarthy, J. 1980. "Circumscription - A Form of Non-Monotonic Reasoning." Artificial Intelligence, 13:27-39.

McCarthy, J. 1986. "Applications of Circumscription to Formalizing Common-Sense Knowledge." Artificial Intelligence, 28:89-116.

McCarthy, J. and Hayes, P. 1969. "Some Philosophical Problems from the Standpoint of Artificial Intelligence." In D. Michie and B. Meltzer (eds.), Machine Intelligence 4. Edinburgh: Edinburgh University Press.

McDermott, D. 1982. "A Temporal Logic for Reasoning about Processes and Plans." Cognitive Science, 6:101-155.

McDermott, D. and Doyle, J. 1980. "Non-Monotonic Logic I." Artificial Intelligence, 13:41-72.

Perlis, D. and Minker, J. 1986. "Completeness Results for Circumscription." Artificial Intelligence, 28:29-42.

Reiter, R. 1980. "A Logic for Default Reasoning." Artificial Intelligence, 13:81-132.

Shoham, Y. 1986. "Chronological Ignorance: Time, Nonmonotonicity, Necessity and Causal Theories." AAAI-86. Palo Alto: Morgan Kaufmann Publishers, Inc., pp. 389-393.
1987
39
631
Achieving Flexibility, Efficiency, and Generality in Blackboard Architectures

Daniel D. Corkill, Kevin Q. Gallagher, and Philip M. Johnson
Department of Computer and Information Science
University of Massachusetts
Amherst, Massachusetts 01003

Abstract

Achieving flexibility and efficiency in blackboard-based AI applications are often conflicting goals. Flexibility, the ability to easily change the blackboard representation and retrieval machinery, can be achieved by using a general purpose blackboard database implementation, at the cost of efficient performance for a particular application. Conversely, a customized blackboard database implementation, while efficient, leads to strong interdependencies between the application code (knowledge sources) and the blackboard database implementation. Both flexibility and efficiency can be achieved by maintaining a sufficient level of data abstraction between the application code and the blackboard implementation. The abstraction techniques we present are a crucial aspect of the generic blackboard development system GBB. Applied in concert, these techniques simultaneously provide flexibility, efficiency, and sufficient generality to make GBB an appropriate blackboard development tool for a wide range of applications.

I. Introduction

Blackboard architectures, first introduced in the Hearsay-II speech understanding system from 1971 to 1976 [Erman et al., 1980], have become popular for knowledge-based applications. The interest in the generic blackboard control architecture of BB1 [Hayes-Roth, 1985] is but one example of the increasing popularity of blackboard architectures. The blackboard paradigm, while relatively simple to describe, is deceptively difficult to implement effectively for a particular application. As noted by Nii [Nii, 1986], the blackboard model with its knowledge sources (KSs), global blackboard database, and control components does not specify a methodology for designing and implementing a blackboard system for a particular application.

Historically, most blackboard-based systems have been built from scratch, implementing the blackboard model according to the criteria that appeared most appropriate for the particular application. Some implementations were built for execution efficiency, with considerable effort placed on providing fast insertion and retrieval of objects on the blackboard. The KSs and control components in these implementations were so tied to the underlying blackboard database that making modifications to the blackboard structure or insertion/retrieval strategies was difficult. Other implementations were designed with flexibility in mind. These applications were built on top of a general-purpose blackboard database retrieval facility (for example, a relational database system [Erman et al., 1981]). While these implementations could be restructured relatively easily, their inefficiency in accessing objects on the blackboard made them slow. Finally, a few implementations were simply built in a hurry, with little effort toward achieving either flexibility or efficiency.

This research was sponsored in part by the National Science Foundation under CER Grant DCR-8500332, by a donation from Texas Instruments, Inc., by the Defense Advanced Research Projects Agency, monitored by the Office of Naval Research under Contract NR049-041, and by the National Science Foundation under Support and Maintenance Grant DCR-8318776.
In this paper, we concentrate on the two conflicting issues of flexibility and efficiency of blackboard systems. We show that by appropriately hiding information between three phases of blackboard system development - blackboard database specification, application coding (KSs and control components), and blackboard database implementation - it is possible to achieve both flexibility and efficiency. This principle of blackboard data abstraction is an integral design principle of the generic blackboard development system GBB [Corkill et al., 1986]. Abstraction also makes GBB sufficiently general for use in a wide range of applications. Although we describe the benefits of blackboard abstraction in the context of GBB, these abstractions are appropriate for any blackboard development environment.

II. Flexibility and Efficiency

Flexibility in a blackboard system is the ability to change the blackboard database implementation, the insertion/retrieval strategies, and the representation of blackboard objects without modifying KS or control code, and vice-versa. Flexibility is important for two reasons. First, the application writer's understanding of the insertion/retrieval characteristics and the representation of blackboard objects may be uncertain and therefore subject to change as the application is developed. Second, even after a prototype of the application has been completed, the number and placement of blackboard objects as the application is used may differ from the prototype. This again requires changes to the blackboard representation in order to achieve the desired level of performance. Therefore, it is important that the blackboard implementation provides enough flexibility to allow these changes without significant changes to the KSs, the control code, or to the blackboard database implementation machinery. With sufficient flexibility it is possible to actually "tune" the blackboard representation to the specific characteristics of the application.

Efficiency in the insertion and retrieval of blackboard objects is an equally important design goal. Typically, improving the execution efficiency of blackboard systems is achieved through improvements to the quality and capability of the control components. Reducing the number of "inappropriate" KSs that are executed (by making more informed scheduling decisions) can significantly reduce the time required to arrive at a solution. Making appropriate control decisions should never be neglected in the development of an application. In this paper, however, we assume that a high-quality control component and high-quality KSs will be written by the application implementer. We will focus on the remaining source of execution inefficiency - the cost of inserting and retrieving objects from the blackboard.

A. The Need for Blackboard Database Efficiency

Why are we placing such an emphasis on the efficiency of the blackboard database? In addition to inserting new hypotheses on the blackboard, KSs perform associative retrieval to locate relevant hypotheses that have been placed on the blackboard by other KSs. This need for KSs to locate appropriate information on the blackboard is often overlooked in casual discussions of blackboard-based systems. A KS is typically invoked by one or more triggering stimulus objects.
The KS then looks on the blackboard to find other objects that are "appropriately related" to the stimulus object. Each KS thus spends its time:

1. retrieving objects from the blackboard based on their "location" on the blackboard;
2. performing computations using existing objects (to determine new blackboard objects to create);
3. creating and placing these new objects onto the blackboard.

The ratio of items 1 and 3 over item 2 defines the amount of time the KS spends interacting with the blackboard versus the amount of time the KS spends performing computations. The larger this interaction/computation ratio is, the more that blackboard efficiency issues will dominate performance. The ratio of item 1 over item 3 defines the read/write ratio of blackboard interactions for the KS. This ratio can be used to aid the selection of blackboard implementation and retrieval strategies.
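As a toy illustration with invented numbers (not measurements reported in this paper): a KS that spends 3 ms retrieving, 1 ms inserting, and 2 ms computing per invocation has an interaction/computation ratio of (3 + 1) / 2 = 2 and a read/write ratio of 3 / 1 = 3, so two-thirds of its running time is blackboard interaction, and a retrieval-oriented locator strategy would pay off most for it.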
Note that associative retrieval is central to the blackboard paradigm. Associative retrieval is used to provide anonymous communication among KSs by allowing KSs to look for relevant information on the blackboard rather than receiving the information via direct invocation by other KSs. Yet the blackboard provides more than this anonymous communication channel among KSs. Objects on the blackboard often have significant latency between the time they are placed on the blackboard and the time they are retrieved and used by another KS. If it were not for this latency, the blackboard could be "compiled away" into direct calls among KSs by a configuration-time compiler. This latency in blackboard objects indicates that the blackboard also serves as a global memory for the KSs. Objects are held on the blackboard to be used when and if they are needed by the KSs. Without the blackboard each KS module would have to maintain its own copy of objects received from other modules. Whether the memory is globally shared (on the blackboard) or private, an efficient means of scanning the remembered objects is still required.

The amount of time a KS spends creating and scanning for objects versus performing other computations (the interaction/computation ratio) varies greatly between different applications and even between different KSs in a single application. Of course, the greater this ratio the more significant the efficiency of the blackboard implementation becomes. Experience with the Hearsay-II speech understanding system [Erman et al., 1980] and the Distributed Vehicle Monitoring Testbed (DVMT) [Lesser and Corkill, 1983] demonstrates that blackboard performance has a significant effect on system performance in these applications.

If the underlying hardware provided true associative retrieval, these efficiency issues would become irrelevant and the implementer would only need to write the application KSs and control code. However, the present hardware situation requires that the associative retrieval of blackboard objects be simulated in software by appropriate retrieval strategies on the blackboard database.

B. Basic Blackboard Operations

Before we continue, it is useful to describe in more detail the blackboard operations that are typically required to support an application.

Insertion: When a blackboard object is created, it must be placed onto the blackboard. Placement onto the blackboard involves creating one or more locators, pointers that are used to retrieve the object. In the simplest situation where blackboard objects are merely pushed onto a list, the single locator is the list pointer. With retrieval strategies supporting efficient retrieval of objects based on complex criteria, multiple locators are used. These locators are determined based on attribute values of the object.

Merging: When placing an object onto the blackboard, it can be important to determine if an "identical" object already exists on the blackboard. The semantics of identity depend on the application, but an example is two hypotheses created by different KSs that differ only in their belief attribute. Often it is desirable that hypotheses on the blackboard be unique; that is, no identical hypotheses be created on the blackboard. Instead, the two hypotheses should be merged into a single blackboard object that reflects the two by merging their belief attributes into a single attribute value in the existing hypothesis. Merging can be handled in two ways. One approach is to have all KSs avoid creating identical hypotheses by checking for an existing hypothesis before creating a new one. If an existing hypothesis is found, its attributes are updated by the KS. The second approach builds an application-specific merging capability into the basic blackboard object insertion machinery.

Retrieval: Retrieval involves searching the blackboard for objects that satisfy a set of constraints specified in a retrieval pattern. Retrieval can be broken down into two steps. The first step determines a set of locators (based on the retrieval pattern) that contain pointers to potentially desirable objects. The second step eliminates those candidates from the first step that do not satisfy the constraints of the retrieval pattern. Since this elimination process can be computationally expensive, an efficient retrieval strategy is one where the first step substantially reduces the number of candidates. In order to implement an efficient, yet flexible, retrieval strategy the constraints must be expressed declaratively so that they may be examined by the blackboard implementation machinery to determine the appropriate set of locators to use in the retrieval.

Deletion: Deleting an object from the blackboard requires removing it from the locators which point to it. Since other blackboard objects may contain links pointing to the deleted object, these links must also be found and eliminated. For example, if links are maintained as bidirectional pointers (as is the case in GBB), deleting these links is simply a matter of traversing all links from the deleted object and then eliminating the inverse links.

Repositioning: If the attributes that determine the object's locators (such attributes are termed indexing attributes) are modified, the locators may also need to be changed (deleting some and adding others) to maintain consistency in the blackboard database. In many applications, all indexing attributes are static - only the values of the other attributes (such as belief) are allowed to change. Domains involving objects that move over time, however, are examples of situations where the positioning of objects may need to be modified during the course of problem solving.

A. The Unstructured Blackboard

A simplistic approach to building a blackboard application is to represent each blackboard level as an unstructured list of the objects residing on that level. KSs add a new object to the blackboard by simply pushing it onto the appropriate list. Retrieval is performed by having the KS scan the list for objects of interest.
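A minimal Common Lisp sketch of this unstructured scheme (invented names; this is not GBB code) makes the shifted cost visible - insertion is a single push, and every retrieval is a full scan filtered by KS-supplied code:

(defparameter *level-objects* '())   ; one blackboard level as a bare list

(defun insert-object (object)
  ;; Insertion: push onto the level's list; no locators to maintain.
  (push object *level-objects*))

(defun retrieve-objects (predicate)
  ;; Retrieval: scan the entire level; the KS supplies the filter.
  (remove-if-not predicate *level-objects*))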
This approach only appears to be simple, as there is no work in implementing the blackboard implementation machinery (global variables serve quite nicely). Actually, all the effort has been shifted into the KSs. Each KS must worry about the entire retrieval process, and since each object on the blackboard level must be tested for appropriateness, the KS must perform this test as efficiently as possible. Each KS may also need to worry about merging blackboard objects; avoiding the creation of a blackboard object that is semantically equivalent to an existing object. If merging is not performed, KSs must consider the possibility that semantically equivalent objects may be retrieved from the blackboard. Insertion, deletion, and repositioning of blackboard objects must also be directly handled by the KSs as well.

B. The General-Purpose Kernel

In this approach, a general-purpose blackboard database facility is provided to the KS and control component implementers. The facility supports blackboard object retrieval based on the attributes of the objects. In its most general form, all attributes of the objects may be used as retrieval keys (for example, blackboard objects may be stored in a relational database). The application implementers retrieve objects by writing queries in the retrieval language. This approach provides a very flexible development environment, but the unused generality of the blackboard database implementation poses severe time/space performance penalties.

C. The Customized Kernel

As noted above, the use of a general-purpose retrieval strategy for all blackboard applications is a source of inefficiency. Retrieval of blackboard objects in a particular application may be made significantly faster using a specialized retrieval mechanism. Furthermore, retrieval of different classes of blackboard objects within a single application may be best achieved using different retrieval strategies. One solution is to custom-code the appropriate retrieval strategy for each situation. In this approach an insertion/retrieval kernel is written that is tailored to the situations that arise in a particular application. When a KS needs to locate blackboard objects, it invokes kernel functions to perform an initial retrieval from the blackboard and then uses procedural "filters" to identify which returned objects are actually of interest. This approach is significantly more efficient than the general-purpose approach when the kernel functions significantly prune the number of blackboard objects that need to be filtered by the KS. However, it poses a number of disadvantages:

- A new customized kernel must be written to suit the different insertion/retrieval characteristics of each application.
- If the kernel is found to be inappropriate to the application, due to incorrect intuition during the initial design or to changing application characteristics, it must be rewritten.
- The KS code is directly coupled to the particular kernel. The code must be written with the knowledge of which attributes are matched by the kernel code and which attributes must be filtered by the KS. Changing the kernel attributes requires rewriting the KSs.
- The kernel code is tied to the blackboard representation. Changes to the blackboard representation require modifications to the kernel code.
- The KS and kernel code is tied to the structure of blackboard objects. Changes to the representation of attributes require code modifications.
In short, although the custom-coded kernel approach can provide efficient insertion and retrieval of blackboard objects, that efficiency comes at the cost of inflexibility to changes in the KS and control code and to changes in the blackboard and object representation.

By appropriately combining a number of blackboard data abstraction techniques, it is possible to "have your cake and eat it too" with respect to flexibility and efficiency. The generic blackboard development system GBB [Johnson et al., 1987] provides the application implementer and blackboard database administrator with distinct, abstract views of the blackboard. Developing an application using GBB involves three separate, but interrelated phases:

blackboard & blackboard object specification: This phase involves describing the blackboard structure (the blackboard hierarchy), the structure of each blackboard level, the attributes associated with each class of blackboard objects (called units in GBB), and the mapping of units onto blackboard levels (called spaces in GBB).

application coding: This phase involves writing KSs and control code in terms of the blackboard and blackboard object specifications. Application code deals with the creation, deletion, retrieval, and updating of units. Retrieval is specified by patterns based on the structure of the relevant blackboard space(s).

blackboard database implementation specification: This phase involves specifying the blackboard database implementation and retrieval strategies. The locator data structures appropriate for the particular characteristics of the application are specified in this phase. These specifications are also made in terms of the blackboard structure and unit specifications.

By maintaining an abstracted view of the blackboard, the details of decisions made in each of the three phases can be hidden until they are combined in GBB's code generation facility.

A. Abstracting the Blackboard

In GBB, each blackboard space is a highly structured n-dimensional volume. Space dimensionality provides a metric for positioning units onto the blackboard in terms that are natural to the application domain. Units are viewed as occupying some n-dimensional extent within the space's dimensionality.

For example, in a speech understanding system, one of the dimensions of a blackboard space could be utterance time. In the domain of vehicle tracking, a space might contain the dimensions sighting time, x-position, and y-position. In GBB, such dimensions are termed ordered. Ordered dimensions use numeric ranges which support the concept of one unit being "nearby" another unit along that dimension. In the speech understanding domain, this allows a KS to extend a phrase by retrieving words that begin "close in time" to the phrase's end time.

GBB also supports enumerated dimensions. An enumerated dimension consists of a fixed set of labeled categories. For example, in the vehicle tracking domain a space might also have the enumerated dimension "classification" corresponding to a set of vehicle types.

Space dimensionality is a key means of abstracting the blackboard database. It provides information hiding by allowing the application code to create and retrieve units according to the dimensions of spaces, without regard to the underlying implementation of the blackboard structure. Dimensional references, however, contain enough information when combined with information about the structure of the blackboard to allow efficient retrieval code to be generated.

Here is an example of the space definitions from the DVMT application that specifies the time, x-position, and y-position dimensions discussed above (as well as a sensory event classification dimension):

(define-spaces (PT PL VT VL GT GL ST SL)
  :UNITS (hyp)
  :DIMENSIONS ((time :ORDERED *bb-time-range*)
               (x :ORDERED *bb-x-range*)
               (y :ORDERED *bb-y-range*)
               (event-class :ORDERED *bb-event-class-range*)))

B. Abstracting Unit Insertion

When a unit is created in GBB, it is inserted on the blackboard based on the unit's attributes. There are two decisions to be made when inserting a unit on the blackboard. The first is what space or spaces to store the unit on, and the second is the location of the unit within the n-dimensional volume of each space. The definition of each unit includes the information required to make these two decisions based on the values of the unit's attributes. This insulates the KS code from the details of the blackboard structure. For example, the KS code does not need to know which attributes and dimensions are actually used to create locators for the unit. Thus changes in the blackboard structure do not necessitate changing KS code.

Here is an example of the hypothesis unit class definition from the DVMT application:

(define-unit (HYP (:NAME-FUNCTION generate-hyp-name)
                  (:INCLUDE basic-hyp-unit))
  :SLOTS ((belief 0 :TYPE belief)
          (event-class 0 :TYPE event-class)
          (level nil :TYPE symbol)
          (node 0 :TYPE node-index)
          (time-location-list 0 :TYPE time-location-list))
  :LINKS ((supported-hyps (hyp supporting-hyps)
           :UPDATE-EVENTS (supported-hyp-event))
          (supporting-hyps (hyp supported-hyps)
           :UPDATE-EVENTS (supporting-hyp-event)))
  :DIMENSIONAL-INDEXES ((time time-location-list)
                        (x time-location-list)
                        (y time-location-list)
                        (event-class event-class))
  :PATH-INDEXES ((node node :TYPE :label)
                 (level level :TYPE :label))
  :PATHS ((t ('node-blackboards node 'hyp level))))

The dimensional indexes define how attributes semantically specify the positioning of hypothesis units onto the dimensionality of a space. (The details of which attributes are actually used in locator construction are specified in the unit-space mapping discussed in Section E.) These specifications include the information required for destructuring when highly structured attribute values are used for unit positioning. Path indexes specify the space(s) on which created units are to reside. A unit is simply created by supplying its attributes:

(make-hyp :NODE *current-node-number*
          :LEVEL bb-level
          :TIME-LOCATION-LIST time-location-list
          :EVENT-CLASS event-class
          :BELIEF computed-belief)

C. Abstracting Unit Retrieval

GBB's basic unit retrieval function, find-units, permits a complex retrieval to be specified in its pattern language. This declarative pattern language provides an abstraction over the blackboard database. A find-units pattern consists of an n-dimensional retrieval specification for particular classes of units on a blackboard space. This means that the KS code need only specify the desired classes of units, the spaces on which to look, and the values for the dimensions. We will present an example of unit retrieval shortly.

D. Abstracting the Blackboard Path

Specifying a blackboard space in KS and control code is another area where data abstraction is important. In GBB, the blackboard is a hierarchical structure composed of atomic blackboard pieces called spaces. In addition to being composed of spaces, a blackboard can also be composed of other blackboards (themselves eventually composed of spaces). This hierarchy is a tree where the leaves are spaces and the interior and root nodes are blackboards. Units are always stored on spaces; GBB's blackboards simply allow the implementer to organize the set of spaces in the system. At a conceptual level, the space upon which to store the unit is specified by the sequence of nodes traversed from a root blackboard node through all intermediate blackboard nodes to the leaf space node. This sequence, which unambiguously specifies a space, is called the blackboard/space path. In addition, blackboards and spaces can be replicated, which creates multiple copies of blackboard subtrees. These copies of the blackboard structure are disambiguated by qualifying the replicated blackboard or space with an index.

In the original design of GBB, the blackboard path was directly specified in find-units. Even here, the lack of abstraction caused difficulty in modifying the blackboard structure without modifying the application code. For example, consider the DVMT application where the basic data blackboard consists of eight spaces (the abstraction levels SL, GL, VL, PL, ST, GT, VT, and PT). Using a very simple control shell for initial prototyping of the KSs, the blackboard structure might consist of a single blackboard containing the eight levels and another blackboard containing the scheduling queues. Later on, however, a more complicated control shell might be desired which contains a separate goal blackboard on which goal processing activities are performed. The goal blackboard mirrors the structure of the data blackboard, and contains eight corresponding spaces. Specifying complete blackboard/space paths makes such a transition cumbersome, because each call to find-units must be changed to reflect the new blackboard/space paths.

To eliminate this problem, GBB now provides an abstract path specification mechanism which allows blackboard/space paths to be specified relative to other paths, to another space instance, or to the spaces on which a unit instance resides. For example, the path to a stimulus hypothesis's space is coded as:

(make-paths :UNIT-INSTANCES stimulus-hyp)

The path to the ST level of a hyp in the DVMT application can be coded as:

(change-paths (make-paths :UNIT-INSTANCES stimulus-hyp)
              '(:CHANGE-RELATIVE :UP st))

where :UP indicates to move up one level in the blackboard/space hierarchy and st indicates to move back down to the ST space. The path to a corresponding goal space given a hypothesis unit in the DVMT application would be coded as:

(change-paths (make-paths :UNIT-INSTANCES stimulus-hyp)
              '(:CHANGE-SUBPATH hyp goal))

The following call to find-units illustrates the use of abstraction in unit retrieval:
(find-units 'hyp
  ;; We look on the same space as the 'stimulus-hyp'
  (make-paths :UNIT-INSTANCES stimulus-hyp)
  `(:AND
    ;; Check for adjacent (in time) hypotheses within
    ;; the maximum velocity range of vehicle movement
    (:PATTERN-OBJECT (:INDEX-TYPE time-location-list
                      :INDEX-OBJECT ,(hyp$time-location-list stimulus-hyp)
                      :DISPLACE ((time 1))
                      :DELTA ((x ,*max-velocity*) (y ,*max-velocity*)))
     :ELEMENT-MATCH :within)
    ;; Check event class for frequency within
    ;; *max-frequency-shift* of stimulus-hyp
    (:PATTERN-OBJECT (:INDEX-TYPE event-class
                      :INDEX-OBJECT ,(hyp$event-class stimulus-hyp)
                      :DELTA ((event-class ,*max-frequency-shift*)))
     :ELEMENT-MATCH :within)))

E. Specifying the Implementation Machinery

Specifying how locators are to be constructed from unit attribute values is made by defining a mapping for each unit class onto each blackboard space. The mapping is specified in terms of the dimensionality of the space. For example, here is a simple implementation of the levels in the DVMT application where only the time dimension is used for locator construction (the other dimensions are checked during the filtering step of the retrieval process):

(define-unit-mapping (hyp) (pt pl vt vl gt gl st sl)
  :INDEXES (time)
  :INDEX-STRUCTURE ((time :SUBRANGES (:START :END (:WIDTH 1)))))

To add other dimensions into the locator structure, only the mapping declaration need be changed. Here is the same definition implementing a locator strategy for time and x-y position:

(define-unit-mapping (hyp) (pt pl vt vl gt gl st sl)
  :INDEXES (time (x y))
  :INDEX-STRUCTURE ((time :SUBRANGES (:START :END (:WIDTH 1)))
                    (x :SUBRANGES (:START :END (:WIDTH 10)))
                    (y :SUBRANGES (:START :END (:WIDTH 16)))))

The parentheses in the :INDEXES value in the above example indicate that the locators for the time dimension are to be implemented as a single vector and the locators for the x and y dimensions are to be grouped into a two-dimensional array. Without the extra level of parentheses, three vectors of locator structures would be implemented.

F. Abstracting the Control Interface

In GBB, the control interface is separated from the blackboard database implementation by viewing changes to the blackboard as a series of blackboard events. Control components are then defined to be triggered on particular events. An important capability for constructing generic control shells is the definition of basic units (such as basic-hyp) that can be included in the definition of application units. GBB's unit inclusion mechanism (see the definition of the HYP unit in Section B) allows event handling to be appropriately inherited to the including unit's definition. The application implementer does not need to know the details of the event handling machinery in specifying blackboard units, and different control shells can be substituted without changing the unit definitions.
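As a rough Common Lisp sketch of the event-triggering idea (invented names and a deliberately simplified design, not GBB's actual machinery), the database layer can dispatch to registered control handlers after each blackboard change:

(defparameter *event-handlers* (make-hash-table :test #'eq))

(defun on-event (event-type handler)
  ;; Register a control-component handler for one kind of event.
  (push handler (gethash event-type *event-handlers*)))

(defun signal-event (event-type &rest args)
  ;; Called by the database layer after a change; unit definitions
  ;; and KSs never see these dispatch details.
  (dolist (handler (gethash event-type *event-handlers*))
    (apply handler args)))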
Summary

Blackboard database abstraction is an appropriate implementation goal for all the reasons typically associated with data abstraction. In this paper, we have described how information hiding abstractions can be combined to permit a blackboard implementation system to simultaneously provide flexibility, efficiency, and generality. These abstractions are:

1. Viewing blackboard levels (spaces) as structured n-dimensional volumes, blackboard objects (units) as occupying some extent within a space's n dimensions, and retrieval patterns as constrained volumes within a space's dimensions.
2. Extracting the information determining a unit's dimensional extent and the space(s) on which the unit is to be placed (the blackboard path) directly from the values of the unit's attributes and from the general (class) definition of the unit.
3. Specifying the constraints of a retrieval pattern relative to the attribute values of another (stimulus) unit.
4. Specifying the blackboard path for unit retrieval relative to the path of another (stimulus) unit or relative to a particular space instance.
5. Separating control machinery from the blackboard database implementation via the use of blackboard events to trigger control activities.
6. Separating the three phases of blackboard system development (blackboard and unit definition, application and control coding, and blackboard implementation specification), but combining the product of each phase in a code generation facility to produce an efficient, customized implementation.

These abstractions are implemented in the current release of GBB, and our initial experience using these information hiding abstractions indicates that they work well at providing flexibility, efficiency, and generality in the development of blackboard-based AI applications.

References

[Corkill et al., 1986] Daniel D. Corkill, Kevin Q. Gallagher, and Kelly E. Murray. GBB: A generic blackboard development system. In Proceedings of the National Conference on Artificial Intelligence, pages 1008-1014, Philadelphia, Pennsylvania, August 1986. (Also to appear in Blackboard Systems, Robert S. Engelmore and Anthony Morgan, editors, Addison-Wesley, in press, 1987.)

[Erman et al., 1981] Lee D. Erman, Philip E. London, and Stephen F. Fickas. The design and an example use of Hearsay-III. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence, pages 409-415, Tokyo, Japan, August 1981.

[Erman et al., 1980] Lee D. Erman, Frederick Hayes-Roth, Victor R. Lesser, and D. Raj Reddy. The Hearsay-II speech-understanding system: Integrating knowledge to resolve uncertainty. Computing Surveys, 12(2):213-253, June 1980.

[Hayes-Roth, 1985] Barbara Hayes-Roth. A blackboard architecture for control. Artificial Intelligence, 26(3):251-321, July 1985.

[Johnson et al., 1987] Philip M. Johnson, Kevin Q. Gallagher, and Daniel D. Corkill. GBB Reference Manual. Department of Computer and Information Science, University of Massachusetts, Amherst, Massachusetts 01003, GBB Version 1.00 edition, March 1987.

[Lesser and Corkill, 1983] Victor R. Lesser and Daniel D. Corkill. The Distributed Vehicle Monitoring Testbed: A tool for investigating distributed problem solving networks. AI Magazine, 4(3):15-33, Fall 1983. (Also to appear in Blackboard Systems, Robert S. Engelmore and Anthony Morgan, editors, Addison-Wesley, in press, 1987, and in Readings from AI Magazine 1980-1985, in press, 1987.)

[Nii, 1986] H. Penny Nii. Blackboard systems: The blackboard model of problem solving and the evolution of blackboard architectures. AI Magazine, 7(2):38-53, Summer 1986.
1987
4
632
Caroline Hayes
Robotics Institute, Carnegie Mellon University
Pittsburgh, Pa.

Abstract

The Machinist program extends domain-dependent planning technology. It is modeled after the behavior of human machinists, and makes plans for fabricating metal parts using machine tools. Many existing planning programs rely on a problem-solving strategy that involves fixing problems in plans only after they occur. The result is that planning time may be wasted when a bad plan is unnecessarily generated and must be thrown out or modified. The Machinist program improves on these methods by looking for cues in the problem specification that may indicate potential difficulties or conflicting goal interactions, before generating any plans. It plans around those difficulties, greatly increasing the probability of producing a good plan on the first try. Planning efficiency is greatly increased when false starts can be eliminated. The Machinist program contains about 180 OPS5 rules, and has been judged by experienced machinists to make plans that are, on the average, better than those of a 5-year journeyman. The knowledge that makes the technique effective is domain dependent, but the technique itself can be used in other domains.

This research was sponsored by Cincinnati Milacron and Chrysler.

I. Introduction

Machinist is a planning program that works on machining problems, and produces feasible plans for manufacturing individual metal parts. Machining is the art of producing metal parts using a variety of power tools to shape the metal. It is a highly skilled task requiring 10 to 15 years to become fairly accomplished. The program works by first scanning the problem specification (a set of shapes to be cut in a metal block, and some information on raw material, dimensions, etc.) for cues or patterns that indicate potential problems. It also looks for other types of patterns that provide salient information: what set of tools and processes can be used for specific cuts, as well as information on the details and restrictions on those processes. Using this information as the building blocks, the program constructs a plan for producing the part.

This approach is more efficient than traditional planning methods for domains that have many interactions between the goals. Traditional planners typically work by first generating a plan, then using "critics" to check the resulting plan for problems and correct them [Sussman 75, Sacerdoti 75]. The critic method uses much more time in generating and fixing bad plans.

The ideas for Machinist's planning technique are taken from observations of the behavior of human machinists. Protocol analysis was used to collect this information. The resulting program consists of about 180 OPS5 rules, and it runs on a DEC-20, a UNIX VAX, and a SUN workstation.

The main emphasis of this paper is to explain the program's planning methods and to examine how these methods can be used in other domains. The way in which this planning technique is implemented is domain dependent: the ability to identify a goal interaction efficiently by looking at a problem specification requires intimate knowledge about that problem domain. This knowledge, in the form of patterns which identify interactions, together with operators that tell how to avoid the interactions, takes many years for the expert to build up and years for the knowledge engineer to extract. As used here, a pattern together with an associated composite operator will be referred to as a macro-operator. Unfortunately, the planner must have these macro-operators to find these interactions in complex domains; otherwise the search would be tremendous. This does not lend hope for domain-independent planners to be successful in large domains, but perhaps we must reconcile ourselves to the fact that efficiency may require expertise [Sussman 75].

II. Interactions

A major problem that the machinist confronts in planning is interactions between the different features that are cut into the part. Cutting one feature first may make it difficult or impossible to cut subsequent ones. One can view the collection of features as subgoals to be achieved in the machining plan. The difficulty in making a plan is finding an order in which none of the subgoals interferes too seriously with achieving the others.
There are five features that need to be cut into this part: three holes, an angle, and a shoulder (a shoulder is any ledge-like shape cut out of a side). The part is. represented in the program as a rectangular block from which features are subtracted. The block of metal that it will be made from, the stock, is saw cut and irregular on all sides. Figure 2: A part with 5 features: three holes, a shoulder, and an angle ~--~~~~~-~~-5,25 --~~~~-~--~~-~> Figuie 3: The stock from which the part will be made: saw cut on all sides The first task to be done is that the program must identify the problems and interactions that occur in the part. This gets the program oriented to the basic structure and difficulties of the problem. Macro-operators are used to identify the interactions and produce the corresponding restrictions that they cause. In this part there are three interactions. The first is between Hole 3 and the angle. If the angle is made first it will interact with the hole, by causing the drill bit to slip on the slanted surface. This will make the hole placement inaccurate, as shown in the previous section II. The restriction that this interaction puts on the plan is that Hole 3 must be made before the angle. The second interaction is between Hole 3 and the shoulder: the hole must be made before the shoulder. If the shoulder is made first, the part will be too thin and floppy when it is clamped to cut the hole. The result of the third interaction is that the angle must be made before the shoulder, for similar reasons. These three interactions: Hole 3 before Angle, Hole 3 before Shoulder, Angle before Shoulder, all restrict the order in which the features can be cut. They can be put Hayes 225 together into one interaction graph (shown in figure 4). Each arrow represents one interaction. a. Drill Hole 3 I Hole 3 before shoulder b. Mill Angle Angle gfore Shoulder Ho1e 2 c. Mill Shoulder Figure 4: Interactibn Graphtthe order in which the features may be cut The next task is to retrieve a squaring graph from memory. A squaring graph outlines all methods for getting the raw material into a square and accurate shape with the minimum waste of material. It represents the constraints on the order in which each of the sides may be “squared off.” It serves as a framework from which the feature constraints can be hung. The squaring graph for this example is shown in figure 5. In each step, the shaded surfaces will be machined smooth. Steps that are shown side by side as branches in the graph can be done in either order: it does not matter which side of a branch is done first. side Set-up C Set-up D Set-up E Figure 5: The Squaring Graph for squaring up a block that is sawn on all sides We now have a graph showing the orders in which the features can be produced, and a graph showing the orders in which the sides may be cut. Each graph represents a separate set of constraints on the plan. The two must be merged with as much overlap between the steps as possible, so that we get a compact sequence. The more overlap the better, because the plan will be more concise. The merging the two graphs is shown in figure 6. Observe that between the Interaction Graph and the Squaring Graph there are 8 steps, but in the final plan there The Final 4. a. 7. c Figure 6: Merging the Interaction Graph with the Squaring Graph are only 7. This is because we were able to combine step b from the Interaction Graph with Set-up E from the Interaction Graph. 
The details on the processes by which squaring plans are chosen and the two graphs are merged is described in wayes 871. After producing the plan, the program does not go through the final verification phase as the human does. If all problems and goal interactions have been properly identified, the plan wiZZ be correct and the verification step unnecessary. However, the program would obviously be more robust if it used a verification step as the human does. It is not always possible to identify all problems beforehand: neither the machinist nor the program can have a complete set of patterns to identify absolutely all possible problems and goal conflicts. Therefore, the plans produced will not always be good the first time: there needs to be some sort of a safety net to catch problems that initially escape notice. Human machinists also use a “critic” approach, to check the final plan for errors. They may reorder steps, or replan to fix th;m. Future versions of the Machinist program will also be able to do this. Out of the 180 productions that comprise this system: 10 productions identify feature interactions and construct the feature interaction graph, 39 identify other problem’s and generate constraints not caused by interactions, 13 choose the squaring graph, 44 merge the interaction graph with the 226 Planning squaring graph, 11 generate the final plan from the merged constraint graphs, and 63 enter and check data, infer missing data, group features, push and pop goals, etc. The first two categories which identify interactions and generate constraints, are the ones that have the most room to grow. Productions can be added to these two categories, greatly increasing the range of parts that the system can handle, while the rest of the system remains the same. How much do the heuristics implemented by these rules cut down the search space? There are several categories of heuristics used by the program: feature interactions, squaring graphs, and graph merging. If the total effect of all the heuristics on the example used in this paper is taken together, we find that they reduce the number of plans that must be examined by a minimum factor of 1,663,200 compared to search using no heuristics. Let us now consider only the feature interaction heuristic by itself. For the example part there are 5 features but only 3 interactions. For this case, the feature interaction heuristic alone cuts down the number of plans that must be examined by a factor of 10. If we look at a more complicated example taken from [Hayes 873 that has 14 features and 5 interactions, the feature interaction heuristic cuts down the number of plans examined by a factor of 630,000. Essentially, the more features and the more interactions there are, the more difficult it is to find a good plan. The problem is not that the search space gets larger as more interactions are added, it is that the density of good solutions in that space goes down. The machinist’s knowledge of feature interactions helps him to zero-in on only those good solutions. The program was tested against four machinists at various experience levels: two second year apprentices, one third year apprentice, and one journeyman with 5 years experience including the apprenticeship. Each of these subjects was asked to create a machining plan for the same series of three parts. Each part was apparently simple, but contained difficulties when examined more closely. 
The program was tested against four machinists at various experience levels: two second-year apprentices, one third-year apprentice, and one journeyman with 5 years experience including the apprenticeship. Each of these subjects was asked to create a machining plan for the same series of three parts. Each part was apparently simple, but contained difficulties when examined more closely. Their resulting plans were judged by two very experienced machinists, each having more than 15 years experience. The average ratings given to each of the four subjects and the program are shown in figure 7. The program's average performance was better than that of the apprentices or the journeyman. In fact, Machinist 1 declared the program's plan for Part III to be "Almost the perfect plan. Whoever did this is a man after my own heart."

Figure 7: Average Plan Rating for Each Subject

The judging was done in the following way: for each of the three parts there were five plans generated, one from each of the four young machinists, and one from the program. All information indicating who (or what) created the plan was removed, and the plans were presented to the two experienced machinists. Independently, they ordered each set of five plans, rating them from best to worst. The best plans were given a score of 5, and the worst, 1. The machinists' ratings agreed exactly for 8 of the plans, differed by 1 point for 3, and by more than one for 4. However, neither machinist felt that the other was wrong in his ratings. Both felt that the plans which they rated differently were actually very close in quality.

VI. Related Work

Many pieces of this planning process have been described before, but not as one cohesive method. Virtually all of the planners referenced in this paper recognize the importance of goal interactions in planning, but their method of dealing with this problem is different than Machinist's. Typically they do not foresee problems in the problem specification and avoid them. Instead they make plans with mistakes in them and use critics to recognize and correct them after the fact [Sussman 75, Sacerdoti 75]. Time is wasted fixing and replanning.

TWEAK [Chapman 85] and GARI [Descotte 81] both work by successively adding constraints to the description of the solution. The interaction and squaring graphs used by Machinist are also constraints, but Machinist's advance over this approach is to obtain the constraints as the result of feature interactions.

A number of chess strategy planners use macro-operators. They use patterns associated with plans to make search more efficient. Interestingly, many of them have been modeled, at least indirectly, from human players. Berliner and Campbell [Berliner 83], and De Groot [De Groot 65] all use some variant of this method, but none of them seem to consider the effect of goal interactions on planning.

There are only a few programs that take goal interactions into account before attempting a plan. One of the earliest, Tate's [Tate 76] planner for house construction, does take interactions into account before it makes a plan.
This of interactions over and over again for a large class of research was funded by Cincinnati Milacron and Chrysler. problems. Wilenski’s planner, PANDORA [Wilensky 801, and to some extent Wilkins planner, SIPE [Wilkins 841 specifically look for goal interactions before planning (which is a great advance in domain independent planning). However, since it iS domain independent, it can not make use of domain knowledge (in the form of patterns) to help identify goal interactions quickly and to find a way around them. Consequently, its performance on complex tasks such machining problems would be impractically slow. Chef [Hammond 861 is a planner that generates recipes for Chinese cooking. It is one of the few planners that looks at the problem description for cues to potential problems and interactions. However, it does not use the interaction information to generate the plan as Machinist does but only to retrieve and modify plans. This is a good approach for many problems but it will not do for machining. Small differences in the shape or size of a part may make ‘big differences in the plan-so it is not good enough to index a past plan for a part that looks similar, and modify it. The plans may have so little similarity that it is easier to construct a new plan from scratch. VII. Conclusion The difference between Machinist and other planners is that it has all of the following properties together: 1. a pre-planning step in which it scans the References [Berliner 831 Berliner, Hans: Murray Campbell. Using Chunking to Solve Chess Pawn Endgames. Technical Report CMU-CS-83-122, Carnegie Mellon University, April, 1983. [Carbone Carbonell, Jaime G. Subjective Understanding, Computer h4odels of Belief Systems. UMI Research Press, Ann Arbor, Michigan, 1981. PHD Thesis, Yale University, 1979. [Chapman 851 Chapman, David. Planning for Conjunctive Goals. Technical Report 802, Massachusetts Institute of Technology, May, 1985. De Groot 651 De Groot, A. D. Thought and Choice in Chess. Mouton & Co., The Hauge, Netherlands, 1965. pescotte 811 Descotte, Y., J. Latombe. GARI: A Problem Solver that Plans how to Machine Mechanical Parts. Proceedings of IJCAI :766-772,1981. pammond 861 Hammond, Kristian J. CHEF: A Model fo Case-Based Planning. AAAI-86 :267-271,1986. [Hayes 871 Hayes, Caroline C. Planning in the Machining Domain: Using Goal Interactions to Guide Search. Master’s thesis, Mellon College of Science, Carnegie Mellon University, April, 1987. problem specification for signs of possible goal interactions, [Scacerdoti 751 Sacerdoti, Earl D. The Nonlinear Nature of Plans. IJCAI4 :206-214,1975. 2. macro-operators to identify goal interactions, and to suggest ways to restrict the plan so as to avoid them, [Stefik 811 Stefik, Mark. Planning and Meta- Planning (MOLGEN: Part 2). Artzjicial InteZZigence 16(2):141-170,1981. 3. a plan constructed from collected information rather then a plan that is indexed from memory and modified. In particular, the pre-planning identification of problem areas can greatly increase planning efficiency within a particular domain. The macro-operators that identify problem areas and suggest solutions are the key to planning efficiency. The set of operators used must be domain.dependent, but the general strategy can be applied to other domains. [Sussman 751 Sussman, Gerald J. A Computer Model of Ski22 Acquisition. American Elsevier Publishing Company, New York, 1975. MIT AI Technical Report TR-297, August 1973. 
Acknowledgements I would like to extend a special thanks to Jim Dillinger, Dan McKeel, Ken Pander, Steve Klim, Dave Belotti, and pate 761 Tate, Austin. Project Planning Using a Hierarchic Non-Linear Planner. Technical Report D.A.I. Research Report No. 25, University of Edinburgh, University of Edinburgh, August, 1976. [Wilensky 801 Wilensky, Robert. Me&planning. AAAI :334-336,198O. [Wilkins 841 Wilkins, David E. Domain-independent Planning: Representation and Plan Generation. Artzjicial Intelligence :269-301,1984. 22% Planning
Compiling Plan Operators from Domains Expressed in Qualitative Process Theory

John C. Hogge
Qualitative Reasoning Group
Department of Computer Science
University of Illinois at Urbana-Champaign

Abstract

The study of Qualitative Physics has concentrated on expressing qualitatively how the physical world behaves. Qualitative Physics systems accept partial descriptions of the world and output the possible changes that can occur. These systems currently assume that the world is left untouched by human or robot agents, limiting them to certain types of problem solving. For instance, a state-of-the-art qualitative physics system can diagnose faulty electrical circuits but cannot construct plans to rewire circuits to change their behavior. This paper describes an approach to planning in physical domains and a working implementation which integrates Forbus' Qualitative Process Engine (QPE) with a temporal interval-based planner. The approach involves compiling QPE expressions describing a physical domain into a set of operators and rules of the planner. The planner can then construct plans involving processes, existence of individuals, and changes in quantities. We describe how the compilation is performed, the types of derivable plans, and current limitations in our approach.

1 Introduction

The study of Qualitative Physics has concentrated on expressing qualitatively how the physical world behaves. Qualitative Physics systems accept partial descriptions of the world and output the possible changes that can occur. These systems currently assume that the world is left untouched by human or robot agents, limiting them to certain types of problem solving. For instance, a state-of-the-art qualitative physics system can diagnose faulty electrical circuits but cannot construct plans to rewire circuits to change their behavior.

This paper describes an approach to planning in physical domains and a working implementation which integrates a particular qualitative physics system, the Qualitative Process Engine (QPE) [Forbus, 86], with a planner (TPLAN) based on [Allen and Koomen, 83]. The implementation, called the Operator Compiler, accepts QPE expressions describing a physical domain and compiles a set of TPLAN operators for achieving goals that require processes to occur. For instance, given the definition for a liquid flow process, the Operator Compiler outputs an operator for creating liquid flows. This operator can solve goals matching the effects of liquid flow, such as the increased amount of liquid in a container.

The Operator Compiler could prove useful in applications, due to its integration of two powerful systems. QPE envisions what can happen in the world from various states, while TPLAN plans changes in the world. Furthermore, once a domain physics has been constructed and debugged using QPE, adding planning capabilities requires little work. The user can design the physics without worrying about the task of planning. Formalizing the rest of the domain (such as an agent's possible actions) then requires some use of the physics vocabulary (to relate actions to processes, for instance).

Sections 2 and 3 describe aspects of QPE and TPLAN relevant to understanding what the Operator Compiler does. The Operator Compiler is presented in section 4, followed by discussion in section 5.

2 Introduction to QPE

QPE is an implementation of Qualitative Process Theory (QPT) [Forbus, 84].
We use the term QPT to refer to the language with which one models physical processes, ignoring other aspects such as the deductions it sanctions. Examples of processes are liquid flow, heat flow, and boiling. QPT provides a syntax for describing individuals in a domain and for expressing how processes become active and how they affect individuals. Examples of individuals are containers, fluid paths, liquid sources, quantities of individuals, and processes. Like other qualitative physics theories, QPT models quantities as a partial order and uses symbolic, rather than numeric values. Thus, inequalities such as (GREATER-THAN quantity1 quantity2) are meaningful, but measured amounts are irrelevant. Quantities have two components: amount (signified by A) and derivative (signified by D).

QPT defines a process as a collection of five components: individuals involved in the process, preconditions (outside of QPT's knowledge) on the process, quantity conditions (inequalities), relations asserted during the process, and influences the process puts on quantities. Figure 1 shows a typical process definition.

Process: (LIQUID-FLOW ?src-can ?dst-can ?liq)
Individuals: (CONTAINER ?src-can)
             (CONTAINER ?dst-can)
             (CONTAINED-LIQUID (CL ?liq ?src-can))
             (FLUID-PATH (FP ?src-can ?dst-can))
Preconditions: (VALVE-OPEN (FP ?src-can ?dst-can))
Quantity Conditions: (GREATER-THAN (A (PRESSURE ?src-can))
                                   (A (PRESSURE ?dst-can)))
Relations: (QUANTITY FLOW-RATE)
           (Q= FLOW-RATE (- (PRESSURE ?src-can) (PRESSURE ?dst-can)))
Influences: (I+ (AMOUNT-OF (CL ?liq ?dst-can)) (A FLOW-RATE))
            (I- (AMOUNT-OF (CL ?liq ?src-can)) (A FLOW-RATE))

Figure 1: Definition of Process Liquid Flow

3 Introduction to TPLAN

TPLAN is an implementation of the temporal interval-based planner described in [Allen and Koomen, 83]. TPLAN keeps a database of facts qualified by intervals over which they hold. The planner runs on top of a time logic described in [Allen, 83], which maintains temporal relationships between intervals. Table 1 shows the possible values a relation can have.

Table 1: Seven Possible Values of Interval Relations and their Inverses

  Value  Description   Inverse  Description
  :<     before        :>       after
  :M     meets         :MI      met by
  :O     overlaps      :OI      overlapped by
  :S     starts        :SI      started by
  :F     finishes      :FI      finished by
  :D     during        :DI      encloses
  :=     equals        :=       equals

TPLAN adopts the following syntactic conventions:
1. Intervals are denoted by symbols starting with "$".
2. Variables are denoted by symbols starting with "?".
3. The temporal relation between two intervals is expressed as a disjunction and written as a list. For instance, (:< :>) means "is before or after."
4. Facts are paired with the interval over which they hold. Thus, (ON A TABLE) $INTERVAL1 denotes a fact (ON A TABLE) which holds over $INTERVAL1.
5. Two facts paired with the same interval is equivalent to assigning them separate intervals constrained to be (:=).

These syntactic conventions are used in operator and rule definitions. We describe each in turn.
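To make the conventions concrete, here is a minimal Python illustration (our own, not part of TPLAN) of items 3 and 4; the relation values come from Table 1, while the function and variable names are hypothetical.

```python
# Interval relation values and their inverses, following Table 1.
INVERSE = {
    ":<": ":>", ":>": ":<",    # before / after
    ":M": ":MI", ":MI": ":M",  # meets / met by
    ":O": ":OI", ":OI": ":O",  # overlaps / overlapped by
    ":S": ":SI", ":SI": ":S",  # starts / started by
    ":F": ":FI", ":FI": ":F",  # finishes / finished by
    ":D": ":DI", ":DI": ":D",  # during / encloses
    ":=": ":=",                # equals
}

def invert(disjunction):
    """Invert a disjunctive relation such as (:< :>), written as a list."""
    return [INVERSE[v] for v in disjunction]

# Convention 4: a fact is paired with the interval over which it holds.
fact = (("ON", "A", "TABLE"), "$INTERVAL1")

print(invert([":<", ":M"]))  # -> [':>', ':MI'], i.e. "is after or met by"
```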
3.1 Operators

An operator defines an action the agent can perform to change the world. TPLAN adopts the model of action presented in [Allen and Koomen, 83], with temporally qualified patterns describing operator preconditions and effects. For instance, Figure 2 defines an operator PICKUP which can be applied to an object if it is clear and resting on something. PICKUP's effects clear the object's old location. The constraints field constrains the temporal relations among facts unifying with the preconditions and effects.

OPERATOR pickup
PRECONDITIONS: (clear ?object) $clear-object
               (on ?object ?surface) $on
EFFECTS: (pickup ?object ?surface) $pickup
         (clear ?surface) $clear-surface
CONSTRAINTS: $clear-object (:M) $pickup
             $on (:O) $pickup
             $on (:M) $clear-surface

Figure 2: Definition of Operator PICKUP

3.2 Rules

Rules model temporal laws of the domain. TPLAN uses rules as backward chaining operators for solving goals, as well as forward chaining, temporally constrained inference rules. Thus, if we express QPT processes as rules, we can both infer processes and plan for their occurrence.

RULE
Antecedents: (ON ?x ?y) $on-xy
             (ON ?y ?z) $on-yz
Temporal Conditions: Exists (INTERSECTION $on-xy $on-yz) called $intersection
Consequents: (OVER ?x ?z) $intersection
Consequent Constraints:

Figure 3: A Temporally Qualified Inference Rule

Rule definitions are similar to operator definitions, with antecedents behaving like operator preconditions, consequents behaving like operator effects, and consequent constraints behaving like operator constraints. The additional field, temporal conditions, places preconditions on the temporal relations among facts matching the antecedents. Our time logic supports temporal intersections, allowing us to inhibit a rule until antecedents are known to intersect (meaning their relation is a subset of (:S :SI :F :FI :D :DI :O :OI :=)) and to assert consequents over their intersection. Figure 3 demonstrates these features. Given (ON A B) and (ON B C) whose intervals intersect, (OVER A C) is asserted over their intersection.

4 The Operator Compiler

Qualitative physics systems reason about situations containing fixed sets of individuals. For instance, given a pot containing water resting on a burner, QPE can envision what might happen depending on the relative temperatures of the water and the burner. However, QPE cannot envision what might happen if we move the pot off of the burner at some point unless we attempt to model agent actions in process definitions. While QPE does support such modeling (implemented as additional assumptions), the support is temporally weak and multiplies the size of the envisionment of possible world states by the number of consistent action combinations.

The Operator Compiler is an attempt to avoid this combinatorics by modeling actions within TPLAN, whose search through possible states is more constrained than QPE's envisionment. The Operator Compiler approach involves three steps:
1. The user models in QPT the individuals and physical processes of the domain.
2. The Operator Compiler analyzes the model of step 1 and compiles operators and rules for constructing plans involving processes and individuals.
3. The user models in TPLAN the actions an agent can perform in the domain and the temporal laws of the domain. These actions must be made relevant to the output of step 2.

For example, assume we want to model a kitchen and generate plans involving liquid flows. In step 1 we model a liquid flow process and individuals such as containers, liquid sources, and valved fluid paths. In step 2, the Operator Compiler would output operators and rules which include the means by which a liquid flow is initiated. In step 3 we construct TPLAN rules that describe a simple Blocks World geometry for the kitchen and relate the geometry to the process descriptions.
For instance, one rule establishes an exterior fluid path from faucets to any container underneath it. We then model actions relevant to the geometry and the physics. For instance, one geometric operator would permit us to move objects. Other operators would allow us to modify certain quantities in the physics, such as the valve position of faucets.

The vocabulary produced by these three steps can now be used to solve specific planning scenarios. For example, we could assert container POT1 initially on the counter, liquid source FAUCET having a non-zero amount of water, and liquid drain SINK underneath FAUCET and have TPLAN solve any of the following problems:
1. Increase the amount of water in POT1
2. Increase the pressure in POT1
3. Fill the sink with water

These problems may seem trivial. However, our running examples require complex plans such as the one traced in Figure 4, since our QPT domain model is detailed. Figure 4 shows examples of the backward-chaining use of rules generated by the Operator Compiler. One rule is used to initiate a liquid flow to solve the initial goal. Later, a rule is used to infer that the faucet is a container if it is a liquid source. Such rules are simple to construct from QPT expressions because of QPT's notion of causality. Since QPT processes are encoded as a set of preconditions and effects, creating a rule for achieving a process is straightforward. Similarly, QPT enforces a causal ordering on all quantity changes, permitting compilation of simple operators for influencing quantities.

"OC" marks rules and operators constructed by the Operator Compiler
"USER" marks rules and operators constructed by the user
Subgoals are indented under the solution which introduces them

GOAL: Increase the amount of water in the liquid state in POT1
SOLUTION: Apply OC rule that increases POT1's amount of water by creating a liquid flow from some source and fluid path. Over the course of planning, "some source" is unified with FAUCET. The rest of the trace assumes this.
  GOAL: Make POT1 a container
  SOLUTION: Unify this goal with an initial given.
  GOAL: Make FAUCET a container
  SOLUTION: Apply OC rule which says liquid sources are containers.
    GOAL: Make FAUCET a liquid source
    SOLUTION: Unify this goal with an initial given.
  GOAL: Get some water into container FAUCET
  SOLUTION: Unify this goal with an initial given.
  GOAL: Find a fluid path from FAUCET to POT1
  SOLUTION: Apply an OC rule which says exterior fluid paths are fluid paths.
    GOAL: Find an exterior fluid path from FAUCET to POT1
    SOLUTION: Apply a USER rule which says that an exterior fluid path exists when a container is under FAUCET.
      GOAL: Make POT1's location be underneath FAUCET
      SOLUTION: Apply USER operator to move POT1 from the counter to underneath FAUCET.
  GOAL: Make FAUCET's pressure be greater than POT1's pressure
  SOLUTION: Unify this goal with an initial given.
  GOAL: Make the fluid path's valve position be open
  SOLUTION: Apply USER operator for opening FAUCET's valve.

Figure 4: Trace of a Plan for Adding Water to POT1
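The goal/solution recursion visible in the Figure 4 trace can be sketched with a toy backward-chainer. The Python fragment below is our own illustration, not TPLAN itself: it ignores intervals and temporal conditions entirely, and its predicates and function names are hypothetical.

```python
GIVENS = {("CONTAINER", "POT1"), ("LIQUID-SOURCE", "FAUCET")}

# Each rule maps a consequent pattern to the antecedent subgoals that achieve
# it, e.g. the compiled "liquid sources are containers" rule used in the trace.
RULES = [(("CONTAINER", "?x"), [("LIQUID-SOURCE", "?x")])]

def match(pattern, goal):
    """Return variable bindings if `pattern` unifies with `goal`, else None."""
    if len(pattern) != len(goal):
        return None
    bindings = {}
    for p, g in zip(pattern, goal):
        if p.startswith("?"):
            bindings[p] = g
        elif p != g:
            return None
    return bindings

def substitute(goal, bindings):
    return tuple(bindings.get(t, t) for t in goal)

def solve(goal, depth=0):
    print("  " * depth + "GOAL: %s" % (goal,))
    if goal in GIVENS:
        print("  " * depth + "SOLUTION: unify with an initial given")
        return True
    for head, body in RULES:
        bindings = match(head, goal)
        if bindings is not None:
            print("  " * depth + "SOLUTION: apply rule for %s" % (head,))
            return all(solve(substitute(g, bindings), depth + 1) for g in body)
    return False

solve(("CONTAINER", "FAUCET"))  # succeeds via the liquid-source rule
```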
Sections 4.1 and 4.2 describe the primary expressions of QPT and their compilation into TPLAN rules. Section 4.3 describes rules and operators applicable to all QPT domain models.

4.1 Compilation of Entity Definitions

A QPT entity definition is an expression of the form (DEFENTITY <pattern> <consequents>) where <consequents> are asserted in every situation in which <pattern> is true. Thus, if we have the following definition

(DEFENTITY (CONTAINER ?x)
  (HAS-QUANTITY ?x VOLUME)
  (GREATER-THAN (A (VOLUME ?x)) ZERO))

and assert (CONTAINER POT1) in certain situations, QPE's inference engine will assert the following in those situations:

(HAS-QUANTITY POT1 VOLUME)
(GREATER-THAN (A (VOLUME POT1)) ZERO)

Expressing a DEFENTITY as a TPLAN rule is simple. The antecedent and consequents are left unchanged, except that the consequents hold over the same time interval as the antecedent. In the above example, (HAS-QUANTITY POT1 VOLUME) and (GREATER-THAN (A (VOLUME POT1)) ZERO) would hold over the interval over which (CONTAINER POT1) holds. For TPLAN the above DEFENTITY would be expressed as:

RULE
Antecedents: (CONTAINER ?x) $c
Consequents: (HAS-QUANTITY ?x VOLUME) $c
             (GREATER-THAN (A (VOLUME ?x)) ZERO) $c

Since TPLAN treats rules as backward chaining operators as well as forward chaining inference rules, TPLAN could use the above rule to achieve goals which unify with any of the consequents. This scheme is used on all consequents except for qualitative proportionality assertions. For instance, the following definition

(DEFENTITY (CONTAINED-LIQUID ?x)
  (HAS-QUANTITY ?x LEVEL)
  (QPROP (LEVEL ?x) (AMOUNT-OF ?x)))

gives contained liquids a quantity LEVEL which increases when the AMOUNT-OF contained liquid increases and decreases when AMOUNT-OF decreases. Encoding the QPROP as a rule consequent would not tell the planner that it can increase a contained liquid's LEVEL by increasing the AMOUNT-OF contained liquid (and likewise for decreasing the LEVEL). Thus, the above DEFENTITY is instead compiled into the following rules.

RULE
Antecedents: (CONTAINED-LIQUID ?x) $cl
Consequents: (HAS-QUANTITY ?x LEVEL) $cl

RULE
Antecedents: (CONTAINED-LIQUID ?x) $cl
             (INCREASING (AMOUNT-OF ?x) ?cause) $inc
Temporal Conditions: Exists (INTERSECTION $cl $inc) called $int
Consequents: (INCREASING (LEVEL ?x) ?cause) $int

RULE
Antecedents: (CONTAINED-LIQUID ?x) $cl
             (DECREASING (AMOUNT-OF ?x) ?cause) $dec
Temporal Conditions: Exists (INTERSECTION $cl $dec) called $int
Consequents: (DECREASING (LEVEL ?x) ?cause) $int

The first rule is the result of extracting the QPROP from the DEFENTITY, while the second and third rules encode the extracted QPROP. The second rule says that if a contained liquid exists during $cl, its AMOUNT-OF is increasing over $inc, and $cl and $inc intersect, then the level is increasing over the intersection. The third rule is interpreted similarly.
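The QPROP-to-rules translation just described is mechanical. As a rough sketch (ours, not the Operator Compiler's actual code, with hypothetical field names and a simple dictionary encoding of rules):

```python
def qty(name):
    return (name, "?x")

def compile_qprop(entity, dependent, driver):
    """For (QPROP (dependent ?x) (driver ?x)) inside a DEFENTITY for `entity`,
    emit one temporally qualified rule per direction of change."""
    rules = []
    for change in ("INCREASING", "DECREASING"):
        rules.append({
            "antecedents": [((entity, "?x"), "$cl"),
                            ((change, qty(driver), "?cause"), "$chg")],
            "temporal": "Exists (INTERSECTION $cl $chg) called $int",
            "consequents": [((change, qty(dependent), "?cause"), "$int")],
        })
    return rules

for rule in compile_qprop("CONTAINED-LIQUID", "LEVEL", "AMOUNT-OF"):
    print(rule)
```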
QPE supports a variety of qualitative proportionality expressions (QPROP, QPROP-, and Q=) which are handled in similar fashion. The method employed in handling these expressions in DEFENTITY is also used to handle their occurrence in process definitions, which are covered in the next section.

4.2 Compilation of Process Definitions

The antecedents of a process definition (DEFPROCESS) are its individuals, preconditions, and quantity conditions, while the consequents are its process form (such as the PROCESS field of Figure 1), relations, and influences. A DEFPROCESS specifies that in every situation in which facts matching the antecedents hold, the consequents are asserted. This is expressed in TPLAN as a rule which asserts the process form over the temporal intersection of the antecedents. For instance, in Figure 5 the liquid flow process holds over $int, the temporal intersection of the antecedent intervals ($c1, $c2, $fp, $cl, $vo, and $gt).

RULE Process-Liquid-Flow
Antecedents: (CONTAINER ?src-can) $c1
             (CONTAINER ?dst-can) $c2
             (FLUID-PATH (FP ?src-can ?dst-can)) $fp
             (CONTAINED-LIQUID (CL ?liq ?src-can)) $cl
             (VALVE-OPEN (FP ?src-can ?dst-can)) $vo
             (GREATER-THAN (A (PRESSURE-DIFFERENCE (FP ?src-can ?dst-can))) ZERO) $gt
Temporal Conditions: Exists (INTERSECTION $c1 $c2 $fp $cl $vo $gt) called $int
Consequents: (LIQUID-FLOW ?src-can ?dst-can ?liq) $int
             (QUANTITY (LOCAL-QUANTITY FLOW-RATE (LIQUID-FLOW ?src-can ?dst-can ?liq))) $int
             (INCREASING (AMOUNT-OF (CL ?liq ?dst-can))
                         (A (LOCAL-QUANTITY FLOW-RATE (LIQUID-FLOW ?src-can ?dst-can ?liq)))) $int
             (DECREASING (AMOUNT-OF (CL ?liq ?src-can))
                         (A (LOCAL-QUANTITY FLOW-RATE (LIQUID-FLOW ?src-can ?dst-can ?liq)))) $int

Figure 5: Operator Compiled From LIQUID-FLOW Process Definition of Figure 1

The temporal constraints on relations and influences depend on whether they refer to any local quantities of the process. While local quantities hold only during the process, other quantities may exist before, during, and after the process. Thus, relations and influences involving local quantities are asserted over the intersection of antecedents, while all others are asserted over unique intervals containing the intersection. Each of the effects in Figure 5 refers to the local quantity FLOW-RATE; thus, they hold over $int.

As with DEFENTITY, we extract all qualitative proportionality assertions from the DEFPROCESS and create rules which express each of them. Thus, the Q= assertion of Figure 1 would be compiled into separate rules.

4.3 Universal QPE Rules

QPE encodes a set of universal rules applicable to all physical domains. For instance, value X cannot be both equal to Y and greater than Y at the same time. This can be expressed as the following TPLAN rule:

RULE
Antecedents: (equal-to ?x ?y) $=
             (greater-than ?x ?y) $>
Consequents: $= (:< :> :M :MI) $>

The Operator Compiler enforces consistency during planning by including such rules in its compilation of QPT domains. Also included are a set of operators for planning changes in quantities. Section 4.1 described the encoding of qualitative proportionalities, in which rules of the form "If X is increasing then Y is increasing" are output for each qualitative proportionality. These rules allow us to construct operators which achieve inequalities between two quantities by achieving an increase or decrease in one of them. For instance, if we are given that quantity X is greater than quantity Y, we can solve the goal of making X equal to Y by either decreasing X or increasing Y. The latter is encoded as follows:

OPERATOR
Preconditions: (greater-than ?x ?y) $>
               (increasing ?y ?cause) $increasing
Consequents: (equal-to ?x ?y) $=
Constraints: $> (:M) $=
             $increasing (:M :FI :DI :O) $=

As an example of planning changes in quantities, suppose that we are given a faucet and stoppered sink containing water. Our goal is to make the water level be greater than SOME-LEVEL. Assume that our domain includes a liquid flow process and a qualitative proportionality stating that a contained liquid's water level increases when the amount of water increases. Figure 6 traces the plan which solves our goal.

GOAL: (GREATER-THAN (LEVEL SINK-WATER) SOME-LEVEL)
SOLUTION: Apply an inequality change operator which achieves (GREATER-THAN ?x ?y) by increasing ?x.
  GOAL: (INCREASE (LEVEL SINK-WATER) ?cause)
  SOLUTION: Apply QPROP rule which says (LEVEL ?x) increases when (AMOUNT-OF ?x) increases and ?x is a contained liquid.
    GOAL: (CONTAINED-LIQUID SINK-WATER)
    SOLUTION: Unify with initial given.
    GOAL: (INCREASE (AMOUNT-OF SINK-WATER))
    SOLUTION: Apply liquid flow operator with destination SINK and source FAUCET.

Figure 6: Trace of a Plan for Changing a Quantity
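The inequality-change strategy behind the first step of the Figure 6 trace can be paraphrased in a few lines of Python. This is our own sketch of the idea (all names are hypothetical), not code from the paper.

```python
def achieve_inequality(state, goal):
    """Reduce a goal (greater-than x y) to alternative quantity-change
    subgoals, as the section 4.3 operators do."""
    _, x, y = goal
    if goal in state:
        return []                          # the goal already holds
    # Naively, either increasing x or decreasing y could achieve the goal.
    return [[("increasing", x)], [("decreasing", y)]]

state = set()  # nothing known about the two quantities yet
goal = ("greater-than", ("LEVEL", "SINK-WATER"), "SOME-LEVEL")
print(achieve_inequality(state, goal))
# -> [[('increasing', ('LEVEL', 'SINK-WATER'))], [('decreasing', 'SOME-LEVEL')]]
```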
5 Discussion

We have described an Operator Compiler which solves simple planning problems in physical domains. The compiler handles most syntactic features of QPT. It has been used to compile rules from QPT models of liquid flow, heat flow, and boiling which have allowed generation of simple plans (such as that shown in Figure 4) involving processes, individuals, and changes in quantities. This section describes its current limitations.

The Operator Compiler's primary shortcoming is its overly optimistic strategy for solving goals involving changes in quantities. The strategy makes several naive assumptions:
1. Any positive influence on a quantity causes it to increase, while any negative influence on a quantity causes it to decrease.
2. Any quantity inequality can be achieved by increasing or decreasing one of the two quantities.

Under these assumptions, if a liquid flow into a sink positively influences the water level and a liquid drain negatively influences the water level, assumption #1 fools the planner into thinking the water level is both rising and lowering. Furthermore, assumption #2 fools the planner into thinking the water level will become greater than the sink's maximum level and also lower than the minimum level, assuming that these inequalities are introduced as goals during planning.

These assumptions are made by the operators described in section 4.3 for achieving inequalities. Although they are not valid, they do solve simple problems involving changes in quantities. For instance, given some water draining out of a sink and a goal of filling the sink, the planner can use the inequality operators to generate a plan which turns on the faucet. The solution is partial since nothing specifies that the flow rate must be greater than the drain rate. While such rate ambiguities could be resolved through QPE simulation, many problems involving rates and quantity changes are too complex for our compiled rules, since a rule's effects on a quantity depend on the context. Unfortunately, formulating context-dependent rules and operators is beyond TPLAN's capabilities.

One limitation of the Operator Compiler arises from its dependence on QPT. The compiler is bound by the limits to which QPT can be used to model the physical world. For instance, QPT currently does not provide a mechanism for modeling sets of interacting individuals. (Process definitions explicitly state which individuals affect the process.) Such improvements to QPT would require additional complexity in the compiler and planner.

6 Acknowledgements

Many thanks go to Ken Forbus for providing direction and useful suggestions on this research. Ken Forbus and Brian Falkenhainer gave helpful comments on this document. Discussions with Ken Forbus, Brian Falkenhainer, Barry Smith, and John Collins were helpful in constructing the QPT domain models used in the examples. The Office of Naval Research supported this project through Contract No. N00014-85-K-0225.

References

[Allen, 83] Allen, J.F., "Maintaining Knowledge about Temporal Intervals", Communications of the ACM, vol. 26, pp. 832-843.

[Allen and Koomen, 83] Allen, J.F. and Koomen, J.A., "Planning Using a Temporal World Model", Proceedings of the Eighth International Joint Conference on Artificial Intelligence, pp. 741-747.

[Forbus, 81] Forbus, K.D., "Qualitative Reasoning about Physical Processes", Proceedings of the Seventh International Joint Conference on Artificial Intelligence, August, 1981.

[Forbus, 84] Forbus, K.D., "Qualitative Process Theory", Artificial Intelligence 24, 1984.

[Forbus, 86] Forbus, K.D., "The Qualitative Process Engine", Technical Report UIUCDCS-R-86-1288, University of Illinois, Department of Computer Science, December, 1986.
Models of Axioms for Time Intervals

Peter Ladkin
Kestrel Institute
1801 Page Mill Road
Palo Alto, Ca 94304-1216

Abstract

James Allen and Pat Hayes have considered axioms expressed in first-order logic for relations between time intervals [AllHay85, AllHay87.1, AllHay87.2]. One important consequence of the results in this paper is that their theory is decidable [Lad87.4]. In this paper, we characterise all the models of the theory, and of an important subtheory. A model is isomorphic to an interval structure INT(S) over some unbounded linear order S, and conversely, INT(S), for an arbitrary unbounded linear order S, is a model. The models of the subtheory are similar, but with an arbitrary number of copies of each interval (conversely, all structures of this form are models). We also show that one of the original axioms is redundant, and we exhibit an additional axiom which makes the Allen-Hayes theory complete and countably categorical, with all countable models isomorphic to INT(Q), the theory of intervals with rational endpoints, if this is desired. These results enable us to directly compare the Allen-Hayes theory with the theory of Ladkin and Maddux [LadMad87.1], and of van Benthem [vBen83].

1 Introduction

The Interval Calculus

The representation of time by means of intervals rather than points has a history in philosophical studies of time ([Ham71, vBen83, Hum79, Dow79, Rop79, New80]). James Allen defined a calculus of time intervals in [All83], as a representation of temporal knowledge that could be used in AI. We call this the Interval Calculus. Allen investigated constraint satisfaction in the Interval Calculus, and use of the Calculus for representing time in the context of planning [All84, AllKau85, PelAll86]. Allen and Pat Hayes in [AllHay85, AllHay87.1, AllHay87.2] reformulated the calculus as a formal theory in first-order logic.

Our interest in this representation of time stems from our belief that it is more in keeping with common sense use of temporal concepts to represent time by means of intervals, than to use the mathematical abstraction of points from the real number line (op. cit.). The Interval Calculus is particularly amenable to treatment by the methods of mathematical logic [LadMad87.1, Lad87.2, Lad87.4], since it is complete, countably categorical (i.e. there is a unique countable model, up to isomorphism), decidable, and admits elimination of quantifiers (i.e. every first-order formula is equivalent to a quantifier-free formula), although it is NP-hard [VilKau86]. We shall show below that the Allen-Hayes reformulation is a strictly weaker theory than the Interval Calculus.

Overview of the Results

Allen and Hayes [AllHay85] introduced their axioms as a first-order logical formulation of the theory of intervals, guided by [All83]. We investigate their axioms in the slightly different form in which they are presented in [AllHay87.1]. Let T_AH be the Allen-Hayes theory, i.e. the set of formulas that are consequences of the axioms. We present a complete categorisation of the models of T_AH. This enables us, via results in [LadMad87.1], to directly compare the strengths of the various first-order theories of intervals in [vBen83, AllHay85, LadMad87.1], and further to show that T_AH is decidable [Lad87.4].

In this section, we survey the technical results described in this paper. First we show that one axiom (Existential M5) is redundant.
We then characterise the models of T_AH and the important subtheory T_SUB by considering certain syntactic definitions and their properties. We introduce 'points' as a definable equivalence relation on pairs of intervals (the term 'intervals' just refers to objects in the model). (Rather than develop a theory of pairs within the axioms, we use a syntactically definable relation with four interval arguments to define the equivalence relation on pairs of intervals). We call the equivalence classes pointclasses.

We show that pointclasses are linearly ordered by a definable relation (which again has to be a relation on four intervals rather than on pairs of intervals), as a consequence of the axioms. We associate to each interval two pointclasses, representing the 'ends' of the interval, and show these pointclasses are unique, for a given interval. We show that one axiom (M4) guarantees also that there is a unique interval corresponding to a given pair of pointclasses. T_SUB does not contain M4. In fact, T_SUB with the addition of M4 gives T_AH (see below).

We can now show that the pairs of (ordered) distinct elements from an arbitrary unbounded linear order S, a structure which we call INT(S), forms a model of T_AH, and conversely that any model of T_AH is of the form INT(S), for some unbounded linear order S. When the axiom M4 is dropped, there may be an arbitrary number of intervals with given endpoint-classes, and we show that the models of T_SUB are characterised by two parameters:
- the (unbounded) linear ordering of the pointclasses
- for each pair of pointclasses, the number of different intervals with that pair as the 'endpoints'.

Finally, we show how to complete the Allen-Hayes axioms by adding an axiom N1, so that they have INT(Q), the rational intervals, as the only countable model up to isomorphism, if this is desired.

The results of this paper are essential for the proof of decidability of T_AH. However, the result and proof are beyond the scope of this paper. We refer the reader to [Lad87.4].

What We Now Know

We indicate briefly here what is known concerning the various interval theories. We do not have the space to include a detailed comparison, but the interested reader may find one in the longer version of this paper, along with proofs of the results in the technical section [Lad87.3].

Van Benthem considered first-order theories of intervals, first proved the countable categoricity of Th(INT(Q)) (the full first-order theory of rational intervals) [vBen83] and indicated an axiomatisation in [vBen84]. Ladkin and Maddux [LadMad87.1] formulated the Interval Calculus as a relation algebra in the sense of Tarski [JonTar52, Mad78], and associated with the algebra a first-order theory that they proved countably categorical, complete and decidable. It is a consequence of results in [LadMad87.1] on the interdefinability of the primitive relations that the formulations of van Benthem and Ladkin-Maddux define the same theory, even though they appear radically different - the theory of intervals over an unbounded, dense, linear order. Ladkin proved that the theory admits elimination of quantifiers, and exhibited an explicit decision procedure, making use of the Ladkin-Maddux extension of Allen's constraint satisfaction algorithm, and the quantifier elimination procedure, in [Lad87.4].
We show in this paper that the Allen-Hayes axioms define precisely the theory of intervals over an unbounded linear order, not necessarily dense. Hence this theory is logically weaker than Th(INT(Q)). Since the addition of N1 to the Allen-Hayes axioms assures density, this gives yet another axiomatisation of Th(INT(Q)).

Of course, logically weaker entails more models, which is what Allen and Hayes intended. They wanted the intervals over the integers, INT(Z), as a possible model of their theory, as well as INT(Q). The weaker theory is still decidable, but does not admit elimination of quantifiers [Lad87.4]. So it all fits together very nicely and everyone should live happily ever after ....

Terminology

We assume that the reader has familiarity with the basic notions of first-order logic and model theory, as in [ChaKei73, ManWal85]. We include some reminders here. The only non-standard concept we use is that of an atransitive binary relation.

The language of time interval theories, in the Allen-Hayes version, has a single primitive binary relation symbol || for meets. Since all other relations may be defined from this in the Interval Calculus [LadMad87.1], it suffices to use this simple language. All our definitions below will assume this language.

A theory T is a set of sentences that is closed under deduction. An axiomatisation of a theory T is a recursive set of sentences S such that T is the set of deductive consequences of S. T is axiomatisable if it has an axiomatisation. A structure is a set of objects U, along with a binary relation ||_U. We denote such a structure by (U, ||_U). A model of a theory T is a structure such that all of the sentences in T are true in it. The class of all models of T is denoted Mod(T). The theory of the model M is the set of all sentences that are true in M, and is denoted by Th(M). Th(M) is complete (by construction). Note that M is a model for Th(M).

A function θ : M_1 → M_2 is a homomorphism of models (M_1, ||_1) and (M_2, ||_2) if and only if (∀x, y ∈ M_1)(x ||_1 y ↔ θ(x) ||_2 θ(y)). An isomorphism is a one-to-one, onto homomorphism. Two models are isomorphic iff there is an isomorphism between them. A theory T is countably categorical iff all countable models are isomorphic, i.e. there is only one countable model, up to isomorphism.

A binary relation R (written infix) is atransitive iff (∀p, q, r)(pRq & qRr → ¬(pRr)); an ordering iff it is irreflexive, asymmetric and transitive; an unbounded ordering iff it is an ordering, and also satisfies (∀p)(∃q)(pRq) & (∀p)(∃q)(qRp); a linear ordering iff it is an ordering and linear.

The following facts from model theory are relevant. A theory which is countably categorical is also complete. An axiomatisable, countably categorical theory is also decidable. The theory of unbounded dense linear orders is countably categorical. All countable models of the theory of unbounded dense linear orders are isomorphic to the rational numbers with the natural ordering, (Q, <). Finally, there are uncountably many non-isomorphic countable models of the theory of unbounded linear orderings.

2 The Allen-Hayes Theory T_AH

The Allen-Hayes axioms for T_AH are motivated by considering intuitive properties of the relation meets over intervals from a linear order such as Q or Z. The intuitive definition of meets is given by the picture below:

[picture omitted: two intervals drawn end to end, the first meeting the second]

We give the formal definition in terms of intervals as pairs-of-points over some arbitrary linearly-ordered domain S.
- (a, b) is an interval if and only if a < b
- (a, b) meets (c, d) if and only if b = c
- INT(S) is the set of intervals on S, with the thirteen natural binary relations definable from the ordering on S

Note in particular that there is no question of intervals being sets of points, and therefore no issue as to whether they include endpoints or not. Intervals are just pairs of points, and an endpoint is just one of these points. It does turn out that the class of open, closed, and half-open (at either end) intervals on the rationals is also countably categorical, and we can provide an extension of the Allen-Hayes axioms that have this structure as the only countable model, up to isomorphism [Lad87.5].

We give the Allen-Hayes axioms without much commentary, and refer the interested reader to [AllHay85, AllHay87.1, AllHay87.2] for further motivation. The theory T_AH is axiomatised by M1 - M5; equivalently, as we shall show, by M1 - M4. The theory T_SUB is axiomatised by M1 - M3 only, omitting M4. We use the symbol || for meets. The axioms are:

M1: (∀p, q, r, s)((p||q & p||s & r||q) → r||s)
which is intended to make the 'meeting-places' unique

M2: (∀p, q, r, s)((p||q & r||s) → (p||s ⊕ (∃t)(p||t||s) ⊕ (∃t)(r||t||q)))
where ⊕ is exclusive or, i.e. precisely one of the alternatives must hold. This axiom is intended to linearly order the meeting places

M3: (∀p)(∃q, r)(q||p||r)
which is intended to ensure that the intervals are unbounded at either end of the time line

M4: (∀p, q, r, s)((p||q||s & p||r||s) → q = r)
which is to ensure that there are unique intervals with particular given 'endpoints'

M5, Functional Form: (∀p, q)(p||q → (∃r, s)(r||p||q||s & r||(p+q)||s))
which is intended to guarantee the existence of a 'union' interval of two meeting intervals.

M5, Existential Form: (∀p, q)(p||q → (∃r, s, t)(r||p||q||s & r||t||s))

In fact, the axiom existential M5 already follows from M1 - M3 (below).

Existential M5 versus Functional M5

The operator + in Functional M5 may be introduced by Skolemisation in any given model of the axioms with Existential M5, i.e. such a model may be augmented with the addition of a function so that it becomes a model of Functional M5 [ChaKei73]. We therefore prefer to use Existential M5, since Functional M5 leads to technical difficulties which we prefer to avoid ([Lad87.3]). For example it dirties our tidy language .....

The Axiom M4

There are techniques for obtaining models of T_AH from models of T_SUB. The relation of 'having the same endpointclasses as' is an equivalence relation that preserves the primitive relation meets, and therefore any model of T_SUB has a homomorphic image that is a model of T_AH, obtained by 'factoring through' the equivalence relation, i.e. by identifying objects iff they have the same equivalence class [ChaKei73]. However, the models of T_SUB are not the intended models of the interval theory, since in general they may have different intervals with identical endpoints. Hence even though M4 is dispensable from the point of view of model theory, we need it to pick out precisely the intended models.

Technical Results

We present the definitions of 'points' in a model of the Allen-Hayes axioms, and analyse the models of the axioms. The following lemma is due to Allen and Hayes:

Lemma 1: The relation || is irreflexive, asymmetric and atransitive.

The next lemma shows that axiom M5 is dispensable:

Lemma 2: The axiom Existential M5 is a consequence of the axioms M1 - M3.
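As a quick sanity check on the reconstructed axioms, one can build INT(S) for a small finite linear order and test M1, M2 and M4 by brute force (M3, unboundedness, necessarily fails on a finite order). The following Python fragment is our own illustration and is not part of the paper.

```python
from itertools import product

S = range(5)
INT = [(a, b) for a in S for b in S if a < b]   # intervals as pairs a < b

def meets(p, q):
    return p[1] == q[0]                          # (a,b) meets (c,d) iff b = c

def m1():
    return all(meets(r, s)
               for p, q, r, s in product(INT, repeat=4)
               if meets(p, q) and meets(p, s) and meets(r, q))

def m2():
    for p, q, r, s in product(INT, repeat=4):
        if meets(p, q) and meets(r, s):
            alts = [meets(p, s),
                    any(meets(p, t) and meets(t, s) for t in INT),
                    any(meets(r, t) and meets(t, q) for t in INT)]
            if sum(alts) != 1:                   # exactly one must hold
                return False
    return True

def m4():
    return all(q == r
               for p, q, r, s in product(INT, repeat=4)
               if meets(p, q) and meets(q, s) and meets(p, r) and meets(r, s))

print(m1(), m2(), m4())  # -> True True True
```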
The next lemma shows that the function introduced in Functional M5 is dispensable. (This is just the theorem of Function Introduction in [ManWal85], known as Skolemisation to model theorists [ChaKei73].)

Lemma 3 (Skolemisation): Every model of the axiom M5 in the existential form may be extended (by adding a function) to a model of the axiom M5 in the operator form.

We define the four-argument predicate that generates the equivalence relation on pairs of meeting intervals. Define Equiv(p, q, r, s) if and only if p||q & r||s & p||s. We use the notation [p, q] for the pair of intervals p and q, whenever p||q. The notation thus includes an implicit assertion of ||. We shall write Equiv(p, q, r, s) as [p, q] ~ [r, s]. Using our notation, we could define Equiv(p, q, r, s) by the biconditional: [p, q] ~ [r, s] if and only if p||s. Technically, the notation [p, q] is only a convenience, and assertions involving terms of this form and ~ are just shorthand for assertions involving the 4-ary relation Equiv. The next lemma uses this shorthand.

Lemma 4 (~ is an Equivalence Relation):
(a) [p, q] ~ [p, q]
(b) [p, q] ~ [r, s] → [r, s] ~ [p, q]
(c) [p, q] ~ [r, s] & [r, s] ~ [u, v] → [p, q] ~ [u, v]

We call the equivalence classes pointclasses, and we denote the equivalence class of [p, q] by [[p, q]]. They will represent the 'points' in any model of the axioms T_AH. Define the 4-ary relation PointLess(p, q, r, s) as follows: PointLess(p, q, r, s) if and only if p||q & r||s & (∃t)(p||t||s). (PointLess is heterological; that is, it's not a pointless relation.) We denote PointLess(p, q, r, s) by the rather more perspicuous notation [[p, q]] ≺ [[r, s]]. This notation is also just a convenience.

Lemma 5 (≺ is linear): ≺ linearly orders the equivalence classes of ~.

Theorem 1 (Models I): Given an arbitrary unbounded linear order < on a set S, the intervals of S, INT(S), form a model of T_AH under the definition of || given earlier. Furthermore, the ordering ≺ on equivalence classes of meeting intervals is isomorphic to the ordering < on S.

Sketch of Proof: If two intervals meet, they have a member of S in common. It's easy to check that the equivalence classes have the same member of S associated with each pair in the class, and that each member of S is associated with an equivalence class. To construct the required isomorphism, map [(a, b), (b, c)] to b. It is easy to see that ≺ on the classes is preserved as < on S. End of Sketch.

Corollary 1: There are uncountably many countable models of the axioms T_AH.

We shall show that the models of the theorem are the only models of T_AH. We accomplish this by characterising the models of T_SUB, in such a way that the models of M4 are homomorphic images of these.

Lemma 6 (Endpointclasses): For any p, there are unique equivalence classes P1 and P2 such that (∀q)(q||p → [q, p] ∈ P1) & (∀q)(p||q → [p, q] ∈ P2).

Summarising what we have so far: associated with any object p in a model for T_SUB is a unique pair of equivalence classes. All intervals which meet p are included in some pair in one equivalence class, as are all intervals which are-met-by one of those. In the other are included in some pair all intervals which are-met-by p, and all intervals which meet one of those. The equivalence classes are linearly ordered. Given any model M of T_SUB, form the set M' of pairs of equivalence classes of meeting intervals under ~, and using the linear order ≺, form the intervals, and the meets relation on these by using the standard definition for pairs from a linearly ordered set. Call the resulting model INT(M), the interval structure of M.
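Concretely, over INT(S) the Equiv classes are exactly the interior points of S. The following Python fragment (ours, not the paper's) computes the pointclasses for a small order and exhibits Theorem 1's correspondence.

```python
S = range(4)
INT = [(a, b) for a in S for b in S if a < b]

def meets(p, q):
    return p[1] == q[0]

# Pairs [p, q] with p || q, and Equiv as the text defines it.
pairs = [(p, q) for p in INT for q in INT if meets(p, q)]

def equiv(pq, rs):
    (p, _), (_, s) = pq, rs
    return meets(p, s)        # [p,q] ~ [r,s] iff p || s (given p||q and r||s)

classes = []                  # partition the meeting pairs into pointclasses
for pq in pairs:
    for c in classes:
        if equiv(pq, c[0]):
            c.append(pq)
            break
    else:
        classes.append([pq])

# Each pointclass corresponds to one shared meeting place in S.
print(sorted(cls[0][0][1] for cls in classes))  # -> [1, 2]
```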
We can now state and prove our main result categorising the models of T_AH. All of them are isomorphic to their interval structures.

Theorem 2 (Models II): INT(M) is a homomorphic image of M, and is a model of T_AH. Furthermore, if M is a model of T_AH, they are isomorphic.

Sketch of Proof: The mapping is p ↦ ([[q, p]], [[p, r]]) for any q, r that meet, respectively, are-met-by p. It's easy to check that the relation || is preserved by this mapping, and that the mapping is onto. Since this is the only primitive in the theory, this suffices for the homomorphism. To show isomorphism if M4 is true in M, note that if p, p' ↦ ([[q, p]], [[p, r]]), then q||p' and p'||r and hence p = p', so the map is one-to-one. End of Sketch.

Since the interval structures INT(M) are homomorphic images of each model M of T_SUB, it follows that to discover the structure of models of T_SUB, it suffices to look at the kernel of the homomorphism, which in each case is the equivalence relation

p ≈ q if and only if (∃r, r', s, s')(([[r, p]], [[p, r']]) = ([[s, q]], [[q, s']])).

This is the equivalence relation of 'having-the-same-endpoints-as', and it's easy to check that the same intervals meet p as meet q, and the same intervals are-met-by p as are-met-by q, when p ≈ q. Hence the number of intervals in each ≈ equivalence class may be chosen independently for each equivalence class. This may be stated more precisely in the following way: Let endpoints(p) be the pair ([[r, p]], [[p, r']]). Let MULTI-INT(M) consist of the pairs (endpoints(p), p), with the relation || defined as (endpoints(p), p) || (endpoints(q), q) if and only if p||q. It's easy to check that p||q if and only if endpoints(p) || endpoints(q).

Lemma 7: MULTI-INT(M) is isomorphic to M. The isomorphism is defined by p ↦ (endpoints(p), p).

Another way of constructing MULTI-INT(M) is simply by taking INT(M) and, for each (a, b) ∈ INT(M), adding an element ((a, b), p) for each p such that (a, b) = endpoints(p). This is summarised in the following theorem.

Theorem 3 (Models III): The models of T_SUB are completely characterised by (a) the linear ordering ≺ on the equivalence classes of ~; (b) the number of elements in each equivalence class of ≈.

Sketch of Proof: Given a model of the form MULTI-INT(M), we define a model M' with the elements ((a, b), β) for each β < α, where α is the cardinality (number) of the p such that endpoints(p) = (a, b). Define || on this model the same way as in MULTI-INT(M). We construct an isomorphism between the two models. End of Sketch.

We have completely characterised the models of T_SUB, and the models of T_AH.

Extending the Theory

We now give an axiom N1 that, added to T_AH, gives Th(INT(Q)). Thus this axiom completes the theory T_AH.

N1: (∀p, q, r, s)(PointLess(p, q, r, s) → (∃x, y)(PointLess(p, q, x, y) & PointLess(x, y, r, s)))

N1 expresses the density of the ordering ≺ on pointclasses. Translating it into the ≺ notation should make this clear.

Theorem 4 (Completion): The theory axiomatised by M1 - M4, N1 is countably categorical, with all countable models isomorphic to INT(Q), and hence is Th(INT(Q)).
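Over the rationals, the density N1 asserts is witnessed constructively by midpoints. The fragment below is our own illustration, using the PointLess reading given above (over INT(Q) the two meeting places can simply be compared), and is not material from the paper.

```python
from fractions import Fraction

def pointless(p, q, r, s):
    """[[p,q]] < [[r,s]]: the first meeting place precedes the second.
    Over INT(Q) this agrees with the existential definition in the text."""
    return p[1] < r[1]

def density_witness(p, q, r, s):
    """Produce meeting intervals (x, y) with [[p,q]] < [[x,y]] < [[r,s]]."""
    mid = (p[1] + r[1]) / 2          # a midpoint exists because Q is dense
    return (p[1], mid), (mid, r[1])

p, q = (Fraction(0), Fraction(1)), (Fraction(1), Fraction(3))
r, s = (Fraction(3), Fraction(5)), (Fraction(5), Fraction(6))
x, y = density_witness(p, q, r, s)
assert pointless(p, q, x, y) and pointless(x, y, r, s)
print(x, y)  # -> the intervals (1, 3) and (3, 5), with Fraction endpoints
```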
3 Summary

We have characterised the models of the Allen-Hayes axioms for time intervals, as structures of intervals over an arbitrary unbounded linear order. The characterisation shows that the Allen-Hayes axioms serve the purposes for which they were introduced. The characterisation has enabled a direct comparison of the different first-order theories of intervals. The Allen-Hayes theory is incomplete, which was intended, and is weaker than the Ladkin-Maddux-van Benthem theory. We indicated how to complete the Allen-Hayes theory. We have noted that both the Allen-Hayes theory, and the stronger complete theory, are decidable.

Acknowledgements

We thank Roger Maddux and Pat Hayes for much lively discussion, and Cordell Green, Director of Kestrel Institute, for giving me time to think about all this.

References

All83: Allen, J.F., Maintaining Knowledge about Temporal Intervals, Comm. A.C.M. 26 (11), November 1983, 832-843.

All84: Allen, J.F., Towards a General Theory of Action and Time, Artificial Intelligence 23 (2), July 1984, 123-154.

AllKau85: Allen, J.F. and Kautz, H., A Model of Naive Temporal Reasoning, in Hobbs, J.R. and Moore, R.C., editors, Formal Theories of the Commonsense World, Ablex, 1985.

AllHay85: Allen, J.F. and Hayes, P.J., A Commonsense Theory of Time, in Proceedings IJCAI 1985, 528-531.

AllHay87.1: Allen, J.F. and Hayes, P.J., Short Time Periods, to appear, Proceedings of the 10th International Joint Conference on Artificial Intelligence, Milano, 1987.

AllHay87.2: Allen, J.F. and Hayes, P.J., A Commonsense Theory of Time: The Longer Paper, Technical Report, Dept. of Computer Science, University of Rochester, to appear.

ChaKei73: Chang, C.C. and Keisler, H.J., Model Theory, North-Holland, 1973.

Dow79: Dowty, D.R., Word Meaning and Montague Grammar, Reidel, 1979.

HalSho86: Halpern, J.Y. and Shoham, Y., A Propositional Modal Logic of Time Intervals, in Proceedings of the Symposium on Logic in Computer Science 1986, 279-292, IEEE Computer Society Press, 1986.

Ham71: Hamblin, C.L., Instants and Intervals, Studium Generale (27), 1971, 127-134.

Hum79: Humberstone, I.L., Interval Semantics for Tense Logic: Some Remarks, J. Philosophical Logic 8, 1979, 171-196.

JonTar52: Jonsson, B. and Tarski, A., Boolean Algebras with Operators II, American J. Mathematics (74), 1952, 127-162.

Lad86.1: Ladkin, P.B., Time Representation: A Taxonomy of Interval Relations, Proceedings of AAAI-86, 360-366, Morgan Kaufmann, 1986; also available in Kestrel Institute Technical Report KES.U.86.5.

Lad86.2: Ladkin, P.B., Primitives and Units for Time Specification, Proceedings of AAAI-86, 354-359, Morgan Kaufmann, 1986; also available in Kestrel Institute Technical Report KES.U.86.5.

Lad87.1: Ladkin, P.B., Specification of Time Dependencies and Synthesis of Concurrent Processes, Proceedings of the 9th International Conference on Software Engineering (March 1987), Monterey, Ca, IEEE, 1987; also available as Kestrel Institute Technical Report KES.U.87.1.

Lad87.2: Ladkin, P.B., The Completeness of a Natural System for Reasoning with Time Intervals, to appear, Proceedings of the 10th International Joint Conference on Artificial Intelligence, Milano, 1987; also available as Kestrel Institute Technical Report KES.U.87.5.

Lad87.3: Ladkin, P.B., Models of Axioms for Time Intervals (the longer paper), Kestrel Institute Technical Report KES.U.87.4.

Lad87.4: Ladkin, P.B., Deciding First-order Statements About Time Intervals, forthcoming Kestrel Institute Technical Report.

Lad87.5: Ladkin, P.B., Including Points in Interval Axioms, in preparation.

LadMad87.1: Ladkin, P.B. and Maddux, R.D., The Algebra of Convex Time Intervals: Short Version, Kestrel Institute Technical Report KES.U.87.2.
LadMad87.2: Ladkin, P.B. and Maddux, R.D., Constraint Propagation in Interval Structures, forthcoming Kestrel Institute Technical Report.

Mad78: Maddux, R.D., Topics in Relation Algebras, Ph.D. Thesis, University of California at Berkeley, 1978.

ManWal85: Manna, Z. and Waldinger, R., The Logical Basis for Computer Programming, Vol. 1: Deductive Reasoning, Addison-Wesley, 1985.

New80: Newton-Smith, W.H., The Structure of Time, Routledge Kegan Paul, 1980.

PelAll86: Pelavin, R. and Allen, J.F., A Formal Logic of Plans in Temporally Rich Domains, Proceedings of the IEEE 74 (10), Oct 1986, 1364-1382.

Rop79: Roper, P., Intervals and Tenses, Journal of Philosophical Logic 9, 1980.

vBen83: van Benthem, J.F.A.K., The Logic of Time, Reidel, 1983.

vBen84: van Benthem, J.F.A.K., Tense Logic and Time, Notre Dame Journal of Formal Logic 25 (1), Jan 1984.

VilKau86: Vilain, M. and Kautz, H., Constraint Propagation Algorithms for Temporal Reasoning, Proceedings of AAAI-86, 377-382, Morgan Kaufmann, 1986.
Localized epresent at ion and Planni g Methods for Parallel Domains Amy L. Lansky* David S. Fogelsong** Abstract The primary goal of this paper is to examine the role that locality plays in domain representation and reasoning. In par- ticular, we focus on three uses of this structuring concept. First, we show how a localized specification methodology can be used to define domain properties and impose constraints only within relevant regions of activity. Second, by viewing certain types This paper presents a general method for structuring domains that is based on the notion of locality. We consider a localized domain description to be one that is partitioned into regions of activity, each of which has some independent significance. The of locations as regions of causal effect, locality can be used as a way of addressing the frame problem. Such use of locality has already been recognized in the AI literature [6,7], but has not been extensively explored. Finally, the localization struc- use of locality can be very beneficial for domain representation and reasoning, especially for parallel, multiagent domains. We show how localized domain descriptions can alleviate aspects of the frame problem and serve as the foundation of a planning technique based on localized planning spaces. Because domain constraints and properties are localized, potential interactions among these search spaces are fewer and more easily identified. ture of a domain can provide heuristics for problem-solving in that domain. We present a localized planning technique that partitions both the plan representation and the planning search space according to the structure of the domain. Because con- straints are localized, there are far fewer interactions between regional search spaces and, when they do exist, they are more readily identified. While the containment of such interactions is a goal of many existing planning systems [16,18,19,20], most do not localize domain descriptions sufficiently. As a result, the task of determining and coping with interactions is extremely 31 Introduction The use of hierarchy is a well-recognized representational tech- nique, not only in AI but in computer science as a whole. Such representations facilitate our understanding of domain descrip- tions and make it possible to use “divide-and-conquer” problem- solving techniques. However, hierarchical descriptions are only expensive. The basis of the work described in this paper is GEM (Group Element Model) [8,9,10,11], a model that explicitly represents regions of activity in the manner we have described - i.e., it al- lows them to be defined and grouped together in arbitrary ways and associated with ports of interaction. Besides its use of do- main structure, GEM is unusual in being an event-based (rather than state-based) framework. Domains are described strictly in terms of the events that occur within regions of activity and the causal and temporal relationships between those events. Domain properties are described by first-order temporal logic constraints that limit a domain’s potential behaviors. Several domain rep- resentations other than our own have been proposed that make use of events and event relationships [1,4,13,14]. However, GEM differs from most of these in having a purely event-based domain model (where state descriptions are derived from past event be- haviors), as well as in its emphasis on event localization. one of the many possible ways of subdividing a domain. 
Depend- ing on how one plans to use or view a domain representation, an appropriate decomposition might include regions that overlap, form disjoint sets, or take on any other structural configuration - they need not necessarily form hierarchies. In this paper we discuss a more comprehensive manner of structuring domains, one that utilizes the notion of locality. By “locality” we mean a very general notion of structure or decom- position. A localized domain description is considered to be one that is partitioned into regions of activity, each of which has some form of independent significance. Regions may be com- posed of related subregions of activity to form hierarchies or any other kind of structural configuration. The rest of this paper is organized as follows. In Section 2 we present a brief overview of the GEM model. In Section 3 we discuss the influence of locality on the frame problem. Finally, in Section 4, we describe GEMPLAN, a localized planner based on the GEM representation. 2 Model and Specification Language The GEM specification framework was designed for the descrip- tion of domains with intrinsic parallelism. It has been used not only for AI applications, but also for concurrent program specifi- cation and verification. In this section we shall try to suggest the general flavor of the domain model and specification language; Since we are particularly interested in representing parallel domains, it is also important to account for the potential in- teractions among regions. To help deal with this problem, we introduce the use of ports - well-defined region-boundary loca- tions in which interactions can take place. The notion of a port has been used extensively in the design as well as in some CAD systems [15]. of distributed systems *Artificial Intelligence Center, SRI International, 333 Ravenswood Av- enue, Menlo Park, California, and the Center for the Study of Language and Information, Stanford University. **Computer Science Department, Stanford University. This research has been made possible by the Office of Naval Research, under Contract N00014-85-C-0251, and by the National Science Foundation, under Grant IST-8511167. The views and conclusions contained in this paper are those of the author and should not be interpreted as necessarily representative of the official policies, either expressed or implied, of the Office of Naval Research, NSF, or the United States government. 240 Planning From: AAAI-87 Proceedings. Copyright ©1987, AAAI (www.aaai.org). All rights reserved. - (d) * (4 ==+ VW4 ==+ (w-0 &(U, eE1) ~(b, ell) E(c, eZ2) E(d, e/2) 4ek 9) +Q s> Figure 1: A World Plan much more rigorous and complete definitions can be found else- where [8,9,10,11]. While these previous papers have emphasized GEM’s use of event-based temporal-logic constraints, this paper focuses on GEM’s structuring capabilities. GEM’s underlying domain model is constructed in terms of eventsi that are localized into regions of activity. Event in- stances may be interrelated by three kinds of relations: the temporal order =+-, the causal relation w (modeling a direct causal relationship between event instances), and a simultane- ity relation + (modelling the necessary simultaneity of event instances). GEM utilizes two kinds of regions: elements and groups. Elements are the most basic type of region. Every event must belong to some element, and all events belonging to the same element must be temporally totally ordered - i.e., elements represent regions of sequential activity. 
Elements (and their constituent events) may then be clustered into groups. Each group represents a region of causally encapsulated activity. The causal laws associated with groups are described in detail in Section 3.

GEM's domain model can be viewed as a two-tiered structure. The upper level consists of world plans (see Figure 1). Every world plan consists of a set of events, elements, and groups, and their interrelationships. Each event in a world plan models a unique event or action occurring in the world domain, each relation or ordering relationship models a relationship between domain events, and each element or group models a logical region of activity.² World plans are meant to convey known information about a domain. This information may be incomplete in the sense that some potential relationships are left undetermined. For example, if =>(e1,e2) is true in a world plan, e1 must always occur before e2 in every behavior of the domain. However, if no relationship exists between the two events, they might occur in either order or even simultaneously.

The lower tier of the GEM world model contains the set of potential behaviors or executions permitted by a world plan. These executions must conform to all the relationships established in the world plan - in essence, they represent its possible "completions." For example, the world plan in Figure 1 can be executed in three possible ways:

  Execution 1:  1st a   2nd b    3rd c   4th d
  Execution 2:  1st a   2nd c    3rd b   4th d
  Execution 3:  1st a   2nd b,c  3rd d

Note that, in the third execution, b and c occur simultaneously. Although we know that one of these world executions may occur, we cannot assume that any one of them actually does. All three executions are thus part of the lower level world model for this world plan. In GEM, these execution sequences of a world plan are modeled as linear sequences of histories - i.e., as a set of history sequences. Each history a may be viewed as a "state" that encompasses not only the state of the world at some given moment, but everything that has occurred up to that moment - i.e., it is a snapshot of past behavior.³ The GEM history sequences for the world plan in Figure 1 are as follows:

  Execution 1:  a0 ai aj al am
  Execution 2:  a0 ai ak al am
  Execution 3:  a0 ai al am

where a0 is the empty history, ai is the history with just event a, aj is a history in which a has been followed by b (but not c), ak contains a followed by c (but not b), al contains events a, b, and c, and am includes all four events.⁴

Given this underlying world model, domains are described by GEM specifications. Each specification consists of a set of constraints that limit the allowed executions or behaviors of that domain. A given world plan W is considered to satisfy a set of domain constraints if every one of its history sequences (i.e., executions) satisfies every constraint in the set. The task performed by the GEMPLAN planner is to construct world plans that attain some stated goal and satisfy all of a domain's constraints.

Just as elements and groups model the structural aspects of a domain, they also serve as the structural components of the GEM specification language. Each specification is composed of a set of element and group declarations.

¹While they are often considered distinct, we use the terms event, event instance, and action interchangeably.
²We assume here that all events are atomic. The model is expanded to include nonatomic events elsewhere [8].
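Before turning to the specification language proper, note that the "completions" of a world plan like Figure 1's can be enumerated mechanically. The following sketch does so under stated assumptions: the event set and orderings are taken from the reconstructed Figure 1, a step is modeled as a set of simultaneous events, and the code is only an illustration of the notion of completion, not GEM's implementation.

```python
from itertools import chain, combinations

# Assumed Figure 1 orderings: a precedes b and c; b and c precede d.
EVENTS = {"a", "b", "c", "d"}
PRECEDES = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")}

def ready(done):
    # Events not yet executed whose temporal predecessors have all occurred.
    return {e for e in EVENTS - done
            if all(p in done for (p, q) in PRECEDES if q == e)}

def steps(candidates):
    # Any nonempty subset of ready (hence mutually unordered) events
    # may occur simultaneously as one step.
    c = sorted(candidates)
    return chain.from_iterable(combinations(c, k) for k in range(1, len(c) + 1))

def executions(done=frozenset(), prefix=()):
    if done == EVENTS:
        yield prefix
        return
    for step in steps(ready(done)):
        yield from executions(done | set(step), prefix + (step,))

for ex in executions():
    print(ex)   # prints exactly the three completions, e.g. (('a',), ('b', 'c'), ('d',))
```

Running this produces the three executions listed above, including the one in which b and c occur simultaneously.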
Each element is associated with a set of event types (the types of events that may occur at the element). Each element and group may also be associated with a set of first-order linear-temporal-logic constraints. These constraints are localized, applying only to those events occurring within the element or group in which they are defined. Every specification also includes a set of default constraints imposed by the element/group structure of the domain itself. These default "locality" constraints and their effect on the frame problem are discussed in Section 3.

Figure 2 illustrates a sample specification and a possible world plan for a cooking-class domain. The cooking class is described as a group consisting of a set of kitchen subgroups, all of which share a teacher element. Each kitchen also contains a set of student elements and an oven element. Typical constraints that might be used in this domain include rules regarding individual student behavior, limitations on the use of the oven in each kitchen, requirements for student cooperation on certain tasks, or descriptions of appropriate reactions to the teacher's instructions. Figure 2 also illustrates the use of GEM's region type definition and instantiation mechanism.⁵ In the full GEM model, region-type inheritance and refinement may also be used [8].

  Student = ELEMENT TYPE
    EVENTS Prepare(cake), OvenRequest
    CONSTRAINTS :
  END Student
  student1 = Student ELEMENT
  student2 = Student ELEMENT
  student3 = Student ELEMENT

  Oven = ELEMENT TYPE
    EVENTS Bake(cake)
    CONSTRAINTS :
  END Oven
  oven1 = Oven ELEMENT
  oven2 = Oven ELEMENT

  teacher = ELEMENT
    EVENTS Instruct()
    CONSTRAINTS :
  END teacher

  Kitchen = GROUP TYPE (teacher, {s}: SET OF Student, o: Oven)
    CONSTRAINTS :
  END Kitchen
  kitchen1 = Kitchen GROUP (teacher, {student1, student2}, oven1)
  kitchen2 = Kitchen GROUP (teacher, {student3}, oven2)

  cookingclass = GROUP (kitchen1, kitchen2)
    CONSTRAINTS :
  END cookingclass

Figure 2: A Cooking Class Domain (in the accompanying world-plan diagram, a polygon denotes a group, a circle an element, and a dot an event; the diagram shows the cookingclass group containing kitchen1 and kitchen2).

The constraints associated with elements and groups are written as first-order temporal-logic formulas which are then applied to history sequences. Temporal logic has a well-defined semantics and has been used extensively in concurrency theory. While we shall not discuss the details of the logic here, we shall illustrate briefly how quite complex properties can be described. GEM's temporal operators may be applied to sequences that go forward in time (using the operators [] (henceforth), <> (eventually), O (next), and P U Q (P until Q)) as well as backwards in time ([-] (before), <-> (until now), and P S Q (Q back to P)). For instance, when past behavior dictates the course of future behavior, we would typically use a constraint of the form: P implies []Q. This may be read: "if P holds for the events in some history, then, for every history which follows in the history sequence, Q must hold."

³The reader should be warned that the term history has been used by others in different ways - for example, to denote a particular sequence of states.
⁴One way of representing the possible history sequences of a world plan is as a branching tree of histories: a0 is followed by ai, which branches to aj (then al, then am), to ak (then al, then am), and directly to al (then am). This corresponds to the branching tree of states used by McDermott; a chronicle corresponds to a history sequence [13].
⁵Event, group, and element instances are denoted in lowercase; types are capitalized.
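Before continuing with example constraints, the region structure of Figure 2 can be rendered as ordinary data. The sketch below is a hypothetical encoding: the Element and Group classes, and the idea of attaching a constraints list to each region, are stand-ins for GEM's declaration syntax, not a faithful implementation.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Element:
    name: str
    event_types: tuple = ()

@dataclass
class Group:
    name: str
    members: list                                     # elements and subgroups
    constraints: list = field(default_factory=list)   # localized to this region

teacher  = Element("teacher", ("Instruct",))
students = [Element(f"student{i}", ("Prepare", "OvenRequest")) for i in (1, 2, 3)]
ovens    = [Element(f"oven{i}", ("Bake",)) for i in (1, 2)]

kitchen1 = Group("kitchen1", [teacher, students[0], students[1], ovens[0]])
kitchen2 = Group("kitchen2", [teacher, students[2], ovens[1]])
cooking_class = Group("cookingclass", [kitchen1, kitchen2])

def regions_containing(region, x, path=()):
    # All groups (outermost first) whose direct membership contains x.
    if not isinstance(region, Group):
        return
    if x in region.members:
        yield path + (region.name,)
    for m in region.members:
        yield from regions_containing(m, x, path + (region.name,))

print(list(regions_containing(cooking_class, teacher)))
# [('cookingclass', 'kitchen1'), ('cookingclass', 'kitchen2')]
# The teacher is shared: it belongs directly to both kitchen subgroups.
```

Note how the shared teacher element falls out naturally: regions need not nest disjointly, matching the paper's point that decompositions need not be hierarchies.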
This constraint form is commonly used for priority requirements as well as for many other naturally occurring domain properties. A simple first-come-first-served requirement for use of an oven might be:

  (For all ovenreq1, ovenreq2 : OvenRequest, k : Kitchen)
    ovenreq1 in k AND ovenreq2 in k AND ovenreq1 => ovenreq2
      implies [] [ serviced(ovenreq2) implies serviced(ovenreq1) ]

In other words, if two oven requests in the same kitchen occur in some order, they must be serviced in that order. The notation serviced(e) is an abbreviation for a specific event formula that is true of histories in which an oven request has been fulfilled.⁶ (An executable toy rendering of this check is sketched below, after the locality constraints.) An example of an eventuality constraint is the following: occurred(ovenreq) implies <> serviced(ovenreq). That is, if an oven request occurs, it must eventually be serviced. Backwards temporal operators may be used to describe event preconditions. For example, justoccurred(e) implies [-] precondition(e) may be read "if e has just occurred then precondition(e) must hold in the preceding history."

3 Locality and the Frame Problem

Probably the most significant and best-understood aspect of the frame problem is what Georgeff [6] has called the combinatorial problem, i.e., how to state which properties remain unaffected by actions. In a recent paper, he shows how independence axioms can be used to solve this and other related problems. Assuming that one is able to state which events are independent of which properties, Georgeff offers a general-purpose law of persistence that guarantees that properties will remain unaffected by independent events. One of the undeveloped aspects of Georgeff's theory is exactly how to specify independence axioms. He suggests domain-structuring techniques as a possible solution. Hayes [7] has also suggested that domain structure can be used as a way of delineating frame axioms. We now show how GEM's use of locality can achieve precisely this objective.

GEM specifications are associated with the following implicit constraints imposed by the structure of a domain:

- All events belonging to the same element must be totally ordered temporally. For instance, in our cooking class scenario, each oven can have only one item baking in it at a time. Elements are often used to model limited resources which, by their very nature, are constrained to support only one action at a time.

- Groups are used to represent regions whose boundaries limit inward causal effect.⁷ For example, in the cooking domain, the actions of students belonging to different kitchens cannot be related causally. However, within each kitchen, the actions of the teacher, the students, and the oven may be causally interrelated. In addition, because the teacher belongs to all kitchens, causal interactions between kitchens may propagate through the teacher.

⁶As discussed elsewhere [8], GEM state descriptions are built strictly in terms of formulas on events. This way of defining state descriptions does not result in any loss of expressiveness and maintains the purity and usefulness of event-based descriptions. Indeed, priority properties such as these are much more awkward to describe in formalisms that are based strictly on state.
⁷Thus, the effects of events are assumed to range freely unless they are explicitly blocked. While we could have made a group wall limit outward access as well as inward, we have found this one-way "wall" to be more useful. The effect of a two-way wall can be simulated by enclosing more regions within groups.
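As promised above, here is a hand-rolled check of the first-come-first-served oven constraint against one history sequence. The encoding is assumed purely for illustration: a history is the set of events that have occurred so far, request order is read off the step at which a request first appears, and serviced(r) is modeled as an extra event. None of this is GEM's actual constraint machinery.

```python
def first_occurrence(seq, event):
    for t, history in enumerate(seq):
        if event in history:
            return t
    return None

def fcfs_satisfied(seq, requests, serviced):
    # For every pair of requests: if r1 temporally precedes r2, then at every
    # history in the sequence, serviced(r2) implies serviced(r1).
    for r1 in requests:
        for r2 in requests:
            t1, t2 = first_occurrence(seq, r1), first_occurrence(seq, r2)
            if t1 is None or t2 is None or not t1 < t2:
                continue  # r1 => r2 does not hold in this execution
            for history in seq:
                if serviced(r2) in history and serviced(r1) not in history:
                    return False
    return True

svc = lambda r: ("serviced", r)
seq = [set(), {"req1"}, {"req1", "req2"},
       {"req1", "req2", svc("req1")},
       {"req1", "req2", svc("req1"), svc("req2")}]
print(fcfs_satisfied(seq, ["req1", "req2"], svc))   # True: serviced in order
```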
One exception to the group rule is the use of ports: "holes" in the group boundary. If an event is a port for a group g, that event can be affected by other events outside g. For example, we might declare certain student actions as kitchen ports. By doing so, these student actions may be affected by everyone in the cooking class. Let us assume that the atomic formula port(e,g) is true for every event e that has been declared a port of group g. Moreover, suppose that e1 belongs to element el1, and e2 to element el2. Then the formal constraint on the causal relation imposed by group structure may be described as follows:

  maycause(e1,e2) implies [ access(el1,el2) OR (port(e2,g) AND access(el1,g)) ]

We define access(x,y) to be true if any of the following holds: (1) x and y belong directly to the same group, or (2) y is not contained within any group, or (3) y is "global" to x. We say that y belongs directly to a group g if it is explicitly declared as one of the components of group g. We consider y to be global to x if there is some surrounding group g' such that y belongs directly to g' and x is indirectly contained within g'. For instance, if we added a door element directly to the cooking class group, it would be global and thus accessible by every student.

Group structure is a natural way of defining event independence in a general manner.⁸ Suppose a given property P is associated with activity in group R. If event e has no causal access to activity in R (i.e., NOT(exists f, f in R)[maycause(e,f)]) or, alternatively, e has no causal access to any event in R that can influence P, then we can assert that e is independent of P. For example, since the entire cooking class lacks ports, no activity outside the class can influence properties of the class. By helping to define independence in a succinct and well-defined fashion, group structure helps solve the combinatorial problem. Elements also help address the combinatorial problem because they limit potential forms of parallelism within a domain. Restriction of the oven to sequential use, for example, ensures the persistence of certain oven properties.

Strictly speaking, of course, events also influence one another by virtue of the explicit constraints associated with domain regions. Depending on the nature and scope of constraints, actions within certain regions may or may not violate the constraints of other regions. This is precisely the advantage of GEM's localization of constraints; if a given region's constraint is known to be satisfied, the introduction of a new event at some other disjoint region can do nothing to violate that constraint. The GEMPLAN planner takes direct advantage of this guaranteed noninterference property.

Of course, depending on the structure of elements and groups, noninterference cannot always be guaranteed: activity occurring within a particular region R might violate a more global constraint associated with a larger region containing R. For example, if the cooking class group as a whole were associated with constraints and properties that pertain to, or might be affected by, events of any student, then all students could affect one another by interfering with these "global" requirements. However, if a strict and more structured specification-writing methodology is adhered to, localization can be made tighter. For example, if the constraints associated with the cooking class as a whole are restricted to apply only to actions that each kitchen or student makes explicitly accessible (e.g., port events), then event interactions can be well delineated and kept under control. This kind of constraint localization also helps to ensure that subplans generated by localized planning procedures will not interfere with each other, even at a more global level.

4 Localized Planning Method

In this section we describe the GEMPLAN multiagent planning system. Written in Prolog on a Sun 3/50, it has already been used to generate multiagent solutions to blocks-world problems. Some of GEMPLAN's important characteristics are the construction of synchronized plans through the satisfaction of first-order temporal-logic constraints; the use of localized plan representations and localized planning search spaces; and an adaptable, table-driven mechanism for guiding the planning search that can make explicit use of noninterference among localized constraints.⁹

GEMPLAN's task is to construct a world plan (i.e., a set of partially ordered, localized events) all of whose executions satisfy a given set of domain constraints and achieve some stated goal.¹⁰ Given an initial world plan (possibly empty), the planner repeatedly chooses a domain constraint, checks to see whether the constraint is satisfied and, if it is not, either backtracks to an earlier decision point in the planning process or goes ahead and modifies the world plan so that the constraint will be satisfied. From a conceptual standpoint, the planning process may be viewed as a search through a tree (see Figure 3). At each node of the tree is stored a representation of the currently constructed world plan. When a node is reached during the planning search, a constraint is checked. To satisfy it, the search space branches for each of the possible ways of repairing or fixing the world plan. These "fixes" may involve the addition of new events, elements, groups, or event interrelationships.

In this paper, we shall concentrate on describing GEMPLAN primarily from an architectural point of view, stressing its localization of the planning search process. However, since the development of constraint satisfaction algorithms is one of our key research objectives, it merits some brief discussion here. Because of the intractability of solving arbitrary first-order temporal-logic constraints, we decided that a good initial approach to the constraint satisfaction problem would be to use predefined fixes for common constraint forms. This approach is similar to Chapman's idea of cognitive cliches - i.e., utilizing a set of specialized theories that are common to many domains, rather than trying to solve for the most general theory [2]. The current GEMPLAN system can satisfy the constraints used in the blocks-world domain: event prerequisites, constraints based on regular-expression patterns of events, the maintenance of state-based preconditions,¹¹ and nonatomic-event expansion (into patterns of events). The planner also includes a facility for accumulating constraints on the values of unbound event parameters. We intend to add several constraint forms in the future, including various kinds of priorities, mutual exclusion, and simultaneity. This will enable GEMPLAN to handle more sophisticated forms of synchronization than other existing planners. To solve propositional constraints, we hope to utilize the algorithms conceived by Manna and Wolper [12] and implemented by Stuart [17].

⁸Dynamic restructuring of groups and elements is also utilized in an expanded version of the GEM model [10]. However, we do not use it in this paper or in the current version of GEMPLAN.
⁹The current planner, however, does not make use of ports; it only takes advantage of the localization of constraints and the limitations imposed by group/element structure.
¹⁰The stated goal of the world plan is viewed as one of the constraints to be satisfied.
¹¹The algorithm used is equivalent to an implementation of Chapman's truth criterion [3].
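The check-and-fix cycle just described can be caricatured in a few lines. The sketch below is a deliberately simplified, assumption-laden rendering: constraints are Boolean predicates, fixes are generators of modified plans, and plans are plain sets of labeled events. It is not GEMPLAN's Prolog implementation, and the toy constraint is invented for illustration.

```python
def plan(world_plan, constraints, fix_gen, depth=0, limit=12):
    # Depth-first version of the cycle: find an unsatisfied constraint,
    # branch on each candidate fix, backtrack on dead ends.
    if depth > limit:
        return None
    broken = next((c for c in constraints if not c(world_plan)), None)
    if broken is None:
        return world_plan                        # all constraints satisfied
    for fixed in fix_gen(broken, world_plan):    # each fix yields a modified plan
        result = plan(fixed, constraints, fix_gen, depth + 1, limit)
        if result is not None:
            return result
    return None                                  # backtrack

# Toy constraint: every bake event needs a matching prepare event.
def needs_prepare(p):
    return all(("prepare", i) in p for (kind, i) in p if kind == "bake")

def fix_gen(constraint, p):
    for (kind, i) in sorted(p):
        if kind == "bake" and ("prepare", i) not in p:
            yield p | {("prepare", i)}           # fix: add the missing event

print(plan(frozenset({("bake", 1), ("bake", 2)}), [needs_prepare], fix_gen))
# frozenset with both bake events and both added prepare events
```

Localization enters this picture by restricting which constraints must be rechecked after a fix, which is exactly what the following architectural discussion addresses.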
The most important feature of GEMPLAN's system architecture is its partitioning of the planning search space and plan representation in a way that reflects the group/element structure of a domain. For each element and group there exists a local search tree and plan representation. The overall planning system may be viewed as a set of mini-planning systems, one for each region. In accordance with the structure of a domain, more global regions have access to their subregions' plans and search spaces. It is these "parent-child" connections that form the glue with which the entire planning system is tied together.

The plan descriptions associated with GEMPLAN tree nodes are built by using inheritance: only events and relations that represent changes from the parent node plan are stored. The entire plan at each node may thus be derived by following the plan inheritance chain, accumulating plan modifications along the way. This inheritance representation is not only compact, but is also well suited to localized plan representation; the same inheritance scheme can be used for consolidating local plan information to form more global plan descriptions. Each group plan is described as the union of a set of plans for each of its composite regions (which will include all local event occurrences and relationships resulting from the satisfaction of local constraints), along with any relations that are added by virtue of the global constraints.

[Figure 3: Localized Search Trees for the Cooking Class Domain - a kitchen1 search tree (with a node N1 at which the "Prepare -> Bake" constraint is checked) branching into a student1 search tree in which a Prepare event is added.]

As an illustration, consider the search trees depicted in Figure 3 - one for the kitchen1 group of the cooking class, the other for its student subelement, student1. Let us assume that we have reached the node labeled N1 for kitchen1, and that a set of Bake events has already been inserted into the world plan. At this point, we imagine that the following global kitchen1 constraint is checked:

  (For all bake(cake):Bake)(Exists prepare(cake):Prepare) prepare(cake) ~> bake(cake)

In other words, each baking event must have been enabled by a student event that prepares a cake. Moreover, we also assume that another constraint allows each cake preparation to enable or cause only one baking event. If the first constraint has already been satisfied (i.e., all Bake events already have a corresponding Prepare event), the planner will move on to some other kitchen1 constraint. If this is not the case, however, there are two ways to proceed. First, there may be an existing Prepare event that could be used - i.e., a cake has been prepared by some student, but has no corresponding Bake event. In this case, a causal relationship would be added between the existing Prepare event and the lone Bake event. The other fix is to generate a new Prepare event involving one of the students. This choice is illustrated in Figure 3 as a branch to the student1 search space. At this point, the search space for student1 is resumed where it had left off (in a state where all its internal constraints had been satisfied), the new Prepare event is added, and the student's local constraints are rechecked. After student1's constraints have been satisfied, control returns to the kitchen1 search space. Note that no rechecking of the local constraints for any other student, the teacher, or the oven, is necessary, since these could not possibly be affected by a student1 event. However, some global kitchen1 constraints may have to be rechecked as a result of this change.

The actual order in which constraints and fixes are applied is determined by a plan search table for each local region. This table can be set up by a user to define quite flexible kinds of search. The table provides three types of information: (1) the order in which to apply constraints, (2) the order in which to try constraint fixes, and (3) when and where to backtrack. The particular constraint, fix, or backtracking scheme chosen at any point in time is context-sensitive - it can be determined by the particular situation at hand. Whenever backtracking occurs, the node left behind is still retained for possible later exploration. The search can thus use a mixture of depth- and breadth-oriented exploration, depending on the strategy determined by the table. If a user does not supply domain-specific search information, a default depth-first search strategy is used: the constraints and fixes within each region are chosen in a given order, chronological backtracking is used when a plan cannot be fixed, and planning halts when either no new options can be explored or all constraints have been checked successfully.

The use of the GEMPLAN search table has proved to be a quite powerful and flexible means of guiding the planning process. It can easily be constructed to take advantage of a domain's locality properties. When fixes modify the structure of a plan, rechecking for consequent interference with other constraints can be limited to those regions and constraints that could be affected. In contrast, most existing planning systems (which may also use "divide-and-conquer" methodologies) do not localize the domain description sufficiently and therefore cannot exploit the resulting properties of noninterference.

In addition, most planning systems search the plan space in a fairly rigid way - typically, local expansion to a uniform level of description followed by interaction analysis. In contrast, the GEMPLAN planning search can be flexibly tuned. Depending on the structure of a domain, the nature of its constraints, and the content of the search table, the search can be quite distributed and loosely coupled for regions with weak interdependencies, but tightly coupled when regions interact strongly. The table can also be set up to focus the search in prescribed ways. Researchers developing the ISIS scheduling system [5] have found resource- and agent-focused search to be useful for job-shop scheduling. In GEMPLAN, the search can be focused on any region, as long as the user has specified domain structure, regional constraints, and the search table appropriately.
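The inheritance scheme for plan nodes described above lends itself to a simple delta representation. The following sketch is an illustrative reconstruction (the class and field names are invented, not GEMPLAN's), showing how a full plan can be recovered by walking the parent chain.

```python
class PlanNode:
    """A search-tree node storing only its delta from the parent plan."""
    def __init__(self, parent=None, added_events=(), added_relations=()):
        self.parent = parent
        self.added_events = set(added_events)
        self.added_relations = set(added_relations)

    def full_plan(self):
        # Derive the whole plan by accumulating deltas along the chain.
        events, relations = set(), set()
        node = self
        while node is not None:
            events |= node.added_events
            relations |= node.added_relations
            node = node.parent
        return events, relations

root = PlanNode(added_events={"bake1"})
n1 = PlanNode(root, added_events={"prepare1"})
n2 = PlanNode(n1, added_relations={("prepare1", "causes", "bake1")})
print(n2.full_plan())
# ({'bake1', 'prepare1'}, {('prepare1', 'causes', 'bake1')})
```

The same accumulation step can consolidate a group plan from its subregions' plans, which is the "union of composite plans" idea in the preceding paragraph.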
5 Conclusions

This paper has presented an event-based formalism, GEM, for representing parallel, multiagent domains. GEM can explicitly describe arbitrary forms of domain structure as well as complex constraints on events and their interrelationships. We have shown how the behavioral limitations associated with GEM's structuring mechanisms (elements and groups) can be used to delineate event or property independence. We demonstrated how these implicit structural constraints, along with the localization of explicit constraints and the use of ports, can help solve aspects of the frame problem.

We have also presented the GEMPLAN planning architecture, which directly partitions plan representation and the planning search space according to the group/element structure of a domain. By employing a table-driven search mechanism, the planning process can be guided to take advantage of the locality and interactional properties of a domain. We have used this system to construct parallel solutions to several blocks-world problems; it is our intention to extend its application to more complicated scheduling domains in the near future.

Acknowledgments

We would like to thank those readers and critics who have helped to improve the quality of this paper: Michael Georgeff, Martha Pollack, David Wilkins, Mark Drummond, Steven Rubin, Marc Vilain, and Savel Kliachko.

References

[1] Allen, J.F. "Towards a General Theory of Action and Time," Artificial Intelligence, Vol. 23, No. 2, pp. 123-154 (1984).
[2] Chapman, D. "Cognitive Cliches," AI Working Paper 286, MIT Laboratory for Artificial Intelligence, Cambridge, Massachusetts (April 1986).
[3] Chapman, D. "Planning for Conjunctive Goals," Masters Thesis, Technical Report MIT-AI-TR-802, MIT Laboratory for Artificial Intelligence, Cambridge, Massachusetts (1985).
[4] Drummond, M.E. "A Representation of Action and Belief for Automatic Planning Systems," in Reasoning About Actions and Plans, Proceedings of the 1986 Workshop at Timberline, Oregon, M.P. Georgeff and A.L. Lansky (editors), Morgan Kaufman Publishers, Los Altos, California, pp. 189-211 (1987).
[5] Fox, M.S. and Smith, S.F. "ISIS - A Knowledge-Based System for Factory Scheduling," Expert Systems, the International Journal of Knowledge Engineering, Volume 1, Number 1, pp. 25-49 (July 1984).
[6] Georgeff, M.P. "Many Agents are Better Than One," in The Frame Problem in Artificial Intelligence, Proceedings of the 1987 Workshop, F. Brown (editor), Morgan Kaufman Publishers, Los Altos, California (1987).
[7] Hayes, P.J. "The Frame Problem and Related Problems in Artificial Intelligence," from Artificial Intelligence and Human Thinking, pp. 45-59, A. Elithorn and D. Jones (editors), Jossey-Bass, Inc. and Elsevier Scientific Publishing Company (1973).
[8] Lansky, A.L. "A Representation of Parallel Activity Based on Events, Structure, and Causality," Technical Note 401, Artificial Intelligence Center, SRI International, Menlo Park, California (1986); also appearing in Reasoning About Actions and Plans, Proceedings of the 1986 Workshop at Timberline, Oregon, M.P. Georgeff and A.L. Lansky (editors), Morgan Kaufman Publishers, Los Altos, California, pp. 123-160 (1987).
[9] Lansky, A.L. "A 'Behavioral' Approach to Multiagent Domains," in Proceedings of 1985 Workshop on Distributed Artificial Intelligence, Sea Ranch, California, pp. 159-183 (1985).
[10] Lansky, A.L. "Specification and Analysis of Concurrency," Ph.D. Thesis, Technical Report STAN-CS-83-993, Department of Computer Science, Stanford University, Stanford, California (December 1983).
[11] Lansky, A.L. and Owicki, S.S. "GEM: A Tool for Concurrency Specification and Verification," Proceedings of the Second Annual ACM Symposium on Principles of Distributed Computing, pp. 198-212 (August 1983).
[12] Manna, Z. and Wolper, P. "Synthesis of Communicating Processes from Temporal Logic Specifications," ACM Transactions on Programming Languages and Systems, 6 (1), pp. 68-93 (January 1984).
[13] McDermott, D. "A Temporal Logic for Reasoning About Processes and Plans," Cognitive Science 6, pp. 101-155 (1982).
[14] Pelavin, R. and Allen, J.F. "A Formal Logic of Plans in Temporally Rich Domains," Proceedings of the IEEE, Special Issue on Knowledge Representation, Volume 74, No. 10, pp. 1364-1382 (October 1986).
[15] Rubin, S.M. Computer Aids for VLSI Design, Addison-Wesley, Reading, Massachusetts (1987).
[16] Sacerdoti, E.D. A Structure for Plans and Behavior, Elsevier North-Holland, Inc., New York, New York (1977).
[17] Stuart, C. "An Implementation of a Multi-Agent Plan Synchronizer Using a Temporal Logic Theorem Prover," IJCAI-85, Proceedings of the Eighth International Joint Conference on Artificial Intelligence, Los Angeles, California (August 1985).
[18] Tate, A. "Goal Structure, Holding Periods, and 'Clouds'," in Reasoning About Actions and Plans, Proceedings of the 1986 Workshop at Timberline, Oregon, M.P. Georgeff and A.L. Lansky (editors), Morgan Kaufman Publishers, Los Altos, California, pp. 267-277 (1987).
[19] Vere, S.A. "Planning in Time: Windows and Durations for Activities and Goals," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-5, No. 3, pp. 246-267 (May 1983).
[20] Wilkins, D. "Domain-independent Planning: Representation and Plan Generation," Artificial Intelligence, Vol. 22, No. 3, pp. 269-301 (April 1984).
A Model for Concurrent Actions having Temporal Extent

Richard N. Pelavin
Philips Laboratories
North American Philips Corporation
Briarcliff Manor, N.Y. 10510

James F. Allen
Computer Science Department
University of Rochester
Rochester, N.Y. 14607

Abstract

In this paper we present a semantic model that is used to interpret a logic that represents concurrent actions having temporal extent. In an earlier paper [Pelavin and Allen, 1986] we described how this logic is used to formulate planning problems that involve concurrent actions and external events. In this paper we focus on the semantic structure. This structure provides a basis for describing the interaction between actions, both concurrent and sequential, and for composing simple actions to form complex ones. This model can also treat actions that are influenced by properties that hold and events that occur during the time that the action is to be executed. Each model includes a set of world-histories, which are complete worlds over time, and a function that relates world-histories that differ solely on the account of an action executed at a particular time. This treatment derives from the semantic theories of conditionals developed by Stalnaker [Stalnaker, 1968] and Lewis [Lewis, 1973].

I. Introduction

One of the most successful approaches to representing events and their effects in Artificial Intelligence has been situation calculus [McCarthy and Hayes, 1969]. In this logic, an event is modeled by a function from situation, i.e., instantaneous snapshot of the world, to situation. This function captures the state changes produced by the event in different situations. A deficiency of this representation is that simultaneous events cannot be directly modeled; one cannot describe the result produced by two events initiated in the same situation (see, however, Georgeff [Georgeff, 1986], who extends and modifies situation calculus so this can be done). Another deficiency is that situation calculus does not capture what is happening while an event is occurring. Thus, one cannot directly treat events that are affected by conditions that hold during execution, such as the event "sailing across the lake", which can occur only if the wind is blowing while the sailing is taking place.

This paper describes work done in the Computer Science Department at the University of Rochester. It was supported in part by the National Science Foundation under grant DCR-8502481, the Air Force Systems Command, Rome Air Development Center, and the Air Force Office of Scientific Research under contract number F30602-85-C-0008. This contract supports the North East Artificial Intelligence Consortium (NAIC).

Allen [Allen, 1984] and McDermott [McDermott, 1982] have put forth logics that represent simultaneous events and events with temporal extent. Allen develops a linear time model based on intervals, i.e., contiguous chunks of time. McDermott describes a branching time model where a set of instantaneous states are arranged into a tree that branches into the future. McDermott uses the term "interval" to refer to a convex set of states that lie along a branch in the tree of states. In both logics, an event is equated with the set of intervals over which the event occurs. Properties, which capture static aspects of the world, are treated in a similar manner. Each property is equated with the set of intervals over which the property holds. Simultaneous events can be described by stating that two events occur over intervals that overlap in time.
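The interval-set encoding just described is easy to picture with concrete data. In the sketch below, intervals are (start, end) pairs on a number line purely for illustration; the cited logics treat intervals abstractly, and the event names are invented.

```python
# Events as the sets of intervals over which they occur (Allen/McDermott style).
def overlaps(i, j):
    return i[0] < j[1] and j[0] < i[1]

sailing = {(0, 10)}              # intervals over which "sailing" occurs
wind    = {(0, 4), (3, 12)}      # intervals over which "wind blowing" holds

# Two occurrences are (at least partly) simultaneous if their intervals overlap:
print(any(overlaps(i, j) for i in sailing for j in wind))   # True
```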
One can also describe the properties that hold and the events that occur while some event takes place. Although these logics overcome some of the deficiencies of situation calculus, they are not adequate for reasoning about actions and forming plans. These logics lack a structure analogous to the result function in situation calculus that describes the result of executing different actions in different contexts. In situation calculus, the context is given by the situation in which an action is to be initiated. At each situation, one can describe whether an action can be successfully executed and describe whether an action negates some property or does not affect it. This structure also provides a simple basis for constructing complex actions, i.e., sequences of actions.

Without extension, similar statements cannot be made in Allen's and McDermott's logics. For example, one cannot describe that an action does not affect some property or event, such as stating that raising one's arm does not affect whether it is raining out. One cannot represent that some action can be executed only under certain conditions, such as stating that the agent can edit a document during interval i only if the text editor is operational during i. Allen's and McDermott's logics do not provide a basis for relating the conditions under which a set of actions, concurrent or sequential, can be executed together to the conditions under which the actions making up the set can be executed individually. Whether two actions can be executed together depends on how they interact. For example, one may be able to execute two actions individually, but not concurrently, such as "moving one's hand up" and "moving one's hand down". It might be the case that two actions can be executed together only under certain conditions, such as two concurrent actions that share the same type of resource. Allen's and McDermott's logics can express "if actions a1 and a2 both occur during i, then there must be at least two resources available during i". These logics, however, cannot distinguish whether "there are at least two resources available during i" is a necessary condition that must hold in order to execute a1 and a2 together, or whether this condition is an effect produced by the joint execution of a1 and a2. A detailed discussion of these issues is given in [Pelavin, 1987].¹

To remedy these problems, we develop a semantic model that contains a structure analogous to the result function in situation calculus. In our models, world-histories and action instances take the place of situations and actions. A world-history refers to a complete world over time, rather than an instantaneous snapshot. An action instance refers to an action to be performed at a specified time. A world-history serves as the context in which the execution of an action instance is specified. This enables us to model the influence of conditions that may hold during the time that an action instance is to be executed, and, as we will see, provides a simple basis for modeling concurrent interactions and for defining the joint execution of a set of action instances.

To describe these models, we extend Allen's language, which is a first order language, with two modal operators. In this paper, we only describe the underlying semantic structure and do not discuss the syntax or interpretation of this modal language.
Moreover, we focus on the portion of the model that pertains to modeling actions, after briefly describing the other components in the model structure. The reader interested in the language, axiomatics, or other details omitted in this paper can refer to [Pelavin, 1987] and [Pelavin and Allen, 1986].

¹For example, in [Pelavin, 1987] we show why a branching time model cannot be used to interpret "action a1 can be executed during time i" if we want to treat actions, such as "sailing", that are influenced by conditions that hold during execution.

II. Overview of the model structure

In each model, a set of world-histories and a set of temporal intervals are identified. Each temporal interval picks out a common time across the set of world-histories. The intervals are arranged by the MEETS relation to form a global date line. The relation MEETS(i1,i2) is true if interval i1 is immediately prior to interval i2. In [Allen and Hayes, 1985], it is shown that all temporal interval relations, such as "overlaps to the right" and "starts", can be defined in terms of MEETS.

The model identifies the set of properties and events that hold (occur) at various times in the different world-histories. Formally, events and properties are sets of ordered pairs, each formed by an interval and a world-history. If <i,h> is in ev, then event ev occurs during interval i in world-history h. Similarly, if <i,h> is in pr, then property pr holds during interval i in world-history h. To capture the relation "if property pr holds over an interval i then pr holds over all intervals contained in i", we restrict the models so that if i1 is contained in i2 and <i2,h> is in pr, then <i1,h> is in pr.

World-histories are arranged into trees that branch into the future by the R accessibility relation, which takes an interval and two world-histories as arguments. Intuitively, R(i,h1,h2) means that h1 and h2 share a common past through the end of interval i and are possible with respect to each other at i. This structure is identical to one found in [Haas, 1985], with the exception that Haas uses a time point to relate world-histories, rather than the end of an interval. Constraints are placed on R to insure that i) it is an equivalence relation for a fixed interval, ii) if R(i1,h1,h2), then h1 and h2 agree on all events and properties that end before or at the same time as i1, and iii) if R(i1,h1,h2), then R(i2,h1,h2) for all intervals i2 that end at the same time as or before i1.

In situation calculus, the execution of an action is given with respect to a situation, and an action is modeled as a function from situation to situation. In our model, the execution of an action instance is given with respect to a world-history, and an action instance is modeled as a function from world-history to set of world-histories. The rest of the paper is devoted to describing this function and showing how a function associated with a set of action instances can be constructed from the functions associated with its members. In this paper, we will only discuss a type of action instance called a basic action instance. Basic actions [Goldman, 1970] refer to actions that are primitive in the sense that all non-basic actions are brought about by performing one or more basic actions under appropriate conditions. In [Pelavin, 1987], we describe how all other action instances (which we refer to as "plan instances") are defined in terms of basic action instances.
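The downward-closure restriction on properties can be demonstrated directly. In this sketch, intervals are (start, end) pairs and world-histories are plain string labels; both are illustrative assumptions, not the paper's formal objects.

```python
# Properties as sets of (interval, world-history) pairs, closed under
# interval containment as required above.
def contained_in(i, j):
    return j[0] <= i[0] and i[1] <= j[1]

def close_property(pr, intervals):
    # If pr holds over i2 in h, it holds over every i1 contained in i2.
    closed = set(pr)
    for (i2, h) in pr:
        closed |= {(i1, h) for i1 in intervals if contained_in(i1, i2)}
    return closed

intervals = [(0, 2), (0, 4), (2, 4)]
raining = {((0, 4), "h1")}
print(close_property(raining, intervals))
# {((0, 4), 'h1'), ((0, 2), 'h1'), ((2, 4), 'h1')}
```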
III. The F_cl function

The result of executing a basic action instance with respect to a world-history is given by the F_cl function. F_cl takes a basic action instance bai and a world-history h as arguments and yields a nonempty set of world-histories that "differ from h solely on the account of the occurrence of bai". Equivalently, we say that the world-histories belonging to F_cl(bai,h) are the "closest world-histories" to h where basic action instance bai occurs. The term "closest" is a vestige from Stalnaker's [Stalnaker, 1968] and Lewis' [Lewis, 1973] semantic theories of conditionals, from which our treatment derives. In the remainder of this section, we explain what we mean by "differing solely on the account of the occurrence of a basic action instance" and present the constraints that are imposed on F_cl in accordance with these intuitions. Very briefly, if h2 belongs to F_cl(bai,h), then h and h2 will coincide on all conditions that are not affected by the occurrence of bai. This includes conditions out of the agent's control, such as whether or not it is raining during some interval, and conditions that only refer to times that end before bai. One reason that F_cl yields a set of world-histories, rather than a single one, is to provide for non-deterministic basic actions. Another reason for treating F_cl as a set is explained later.

We use a term of the form "ba@i" to refer to a basic action instance whose time of occurrence is i. The treatment of F_cl(ba@i,h) is trivial when ba@i occurs in h. In this case, F_cl(ba@i,h) is equal to {h}, reflecting the principle that a world-history is closer to itself than any other world-history. This is captured by the following constraint, which is imposed on our models:

  BA1) For all basic action instances (ba@i) and world-histories (h),
       if h is in OC(ba@i), then F_cl(ba@i,h) = {h}

where OC(ba@i) is the set of world-histories in which ba@i occurs.

F_cl(ba@i,h) is also set to {h} when ba@i's standard conditions do not hold in h. The term "standard conditions" is taken from Goldman [Goldman, 1970], although we use it in a more general way. A basic action's standard conditions are conditions that must hold in order to execute the action. For example, the standard conditions for "the agent moves its right arm up during time i" include the condition that the arm is not broken during time i. We also use standard conditions to refer to the conditions under which a move is legal when modeling a board game.

If ba@i's standard conditions do not hold in h, then F_cl(ba@i,h), which equals {h}, contains a world-history in which ba@i does not occur. In effect, if ba@i's standard conditions do not hold in h, we are not defining "the closest world-history to h where ba@i occurs". We treat the lack of standard conditions this way because we want to restrict F_cl so that if h2 belongs to F_cl(ba@i,h) then h2 and h agree on all conditions that are not affected, directly or indirectly, by ba@i. This restriction would be violated if ba@i's standard conditions did not hold in h, but F_cl(ba@i,h) contained a world-history h2 where ba@i occurs. This stems from an assumption that a basic action cannot affect whether or not its own standard conditions hold.

F_cl(ba@i,h) yields a non-trivial result when ba@i's standard conditions hold in h, but ba@i does not occur in h. In this case, all the members belonging to F_cl(ba@i,h) differ from h, and ba@i occurs in all these world-histories. Consequently, we impose the following constraint:

  For all world-histories (h and h2) and basic action instances (ba@i),
  if F_cl(ba@i,h) is not {h} then F_cl(ba@i,h) is a subset of OC(ba@i)
Consequently, we impose the following constraint: For all world-histories (h and h2) and basic action instances (baa), if Fcl(ba@?i,h) # {h} then F,#&W) G OC(ba@i) Typically, when h& belongs to F,l(ba@i,h) and h.2 is distinct from h (which we will assume in the rest of this section), the two world-histories will differ on more than the status of “b&X occurs”. We assume that the set of world-histories adhere to a set of laws that govern the relations between events, properties, and other objects in the world-histories. A world- history formed by just modifying h to make “b&i occurs” true may violate some laws. Consequently, h and hi? will also differ on some conditions that are related, directly or indirectly, to “bu@i occurs” by some set of laws. As an example, suppose that property prZ does not hold during interval i,2 in h, but there is a law that entails that if b&i occurs then pr& holds during ~2. Consequently, h and hi? must differ on the status of “pr2 holds during 2” since bu@i occurs in h.2 World-histories h and h,?? may also differ on conditions that are indirectly affected by bu@?i. Suppose that there is a second law that entails that if pry? holds during i2 then pr3 holds during i3. If prS does not hold during iS in h, then h and hi? will also differ on this condition. As a second example, consider a law that entails that bu@i and bu@@i cannot occur together. Thus, if bui?@i occurs in h, any world-history hi? belonging to F,l(ba@i,h) will d’ff f 1 er rom h because bai?@i does not occur in h,??. This type of relation, as we will see, forms the basis for detecting interference between basic action instances and is used when composing basic action instances together. We assume that the difference between h and hi! are minimal in that changes are only made in going from h to hZ to satisfy laws that would be violated if these changes were not made. They agree on all other conditions. This includes conditions out of the agent’s control such as whether or not it is raining out. We also constrain our models so that h and hi? agree on all conditions that refer to times that are prior to b&J’s time of occurrence. This is captured by a constraint relating F,l to the R relation which is given as follows: BA-Rl) For all world-histories (hl and h2), basic action instances (ba@i), and intervals (iO), if h2 E F,l(ba@i,h) and MEETS(iO,i), then R(iO,h,hS) BA-Rl entails the relation that two world-histories differing on the occurrence of bu@i must coincide on all conditions that end before the beginning of interval i. This restriction presupposes that there are no laws specifying whether or not a basic action instance, whose standard conditions hold. occurs. One reason why F,,(ba@i,h) yields a set of world-histories, instead of a single one, is that there may be many ways to minimally modify h to account for bu@‘s occurrence. For example, suppose that only two of the three basic action instances, bul@, bui@i, and bu?l@i, can be executed together. Also assume that both bu.2@ and bu@i occur in h. In this case, F,l(bal@i,h) ‘11 wr contain (at least) two world-histories: one where both bu1@i and buZ@i occur, but not b&?@i, and another where bul@i and ba3@i occur, but not bu,Z’@i. It is important to emphasize that the F,l function is part of the semantic model and thus there is no need to precisely specify this function when reasoning in our logic. 
We describe the world using a set of sentences in our language (which is described in [Pelavin and Allen, 1986] and [Pelavin, 1987]). Typically, a set of sentences only partially describes a model; there may be many models that satisfy a set of sentences. The F_cl function provides a simple underlying structure to interpret sentences that describe what a basic action instance affects and does not affect with respect to a context that may include conditions that hold while the basic action instance is to be executed. As we will see, it also provides a simple basis for modeling basic action instance interactions and for treating the joint execution of a set of basic action instances.

IV. Composing action instances

The result of executing a set of basic action instances together is computed from the individual members in the set. In other words, F_cl applied to a set of basic action instances bai-set is defined in terms of F_cl applied individually to each member in bai-set. In this section, we will let F_cl take a set of basic action instances as an argument, rather than a single one; F_cl applied to the singleton set {bai} is to be treated as we described F_cl applied to bai in the last section. Any set of basic action instances can be composed together regardless of their temporal relation. Moreover, the definition of F_cl applied to bai-set does not need to be conditionalized on the temporal relations between the members of bai-set. So, for example, the composition of two concurrent basic action instances is defined in the same way as the composition of two basic action instances that do not overlap in time.

The following notation is introduced to succinctly present the definition of F_cl applied to a basic action instance set and to present two related constraints.

The constructor function "*" combines two functions from H to 2^H to form a function from H to 2^H, where H denotes a set of world-histories:

  fx*fy(h) =def the union of fy(hx) over all hx in fx(h)

The set of composition functions of a basic action instance set is recursively defined by:

i) A singleton basic action instance set {bai} has one composition function: the function mapping h to F_cl({bai},h).

ii) The composition functions of a basic action instance set bai-set with more than one element are: {bai*cmp | bai is in bai-set and cmp is a composition function of (bai-set - bai)}.

If cmp is a composition function of bai-set, then cmp(h) yields the set of world-histories that would be reached by successively modifying h by the basic action instances belonging to bai-set in some order.

ALL-OC relates world-histories and composition functions. For any composition function of bai-set (cmp):

  ALL-OC(h,cmp) =def cmp(h) is a subset of OC(bai-set)

If cmp is a composition function of bai-set, then ALL-OC(h,cmp) is true iff the result of modifying h successively by all the members in bai-set, in the order implicit in cmp, yields a set of world-histories where all the members in bai-set occur.
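The "*" constructor and ALL-OC admit a direct rendering, again under the toy worlds-as-fact-sets encoding introduced above; the sketch is illustrative only, and the single-fact "closest world" functions are invented.

```python
def compose(fx, fy):
    # The "*" constructor: (fx*fy)(h) = union of fy(hx) over hx in fx(h).
    return lambda h: frozenset().union(*(fy(hx) for hx in fx(h)))

def all_oc(h, cmp, occurs, bai_set):
    # ALL-OC(h, cmp): every world reached via cmp makes all of bai_set occur.
    return all(occurs(b, w) for w in cmp(h) for b in bai_set)

occurs = lambda bai, w: bai in w
f1 = lambda h: {frozenset(h | {"bai1"})}   # closest worlds where bai1 occurs
f2 = lambda h: {frozenset(h | {"bai2"})}   # closest worlds where bai2 occurs

cmp12 = compose(f1, f2)
print(cmp12(frozenset()))     # a set containing the single world {'bai1', 'bai2'}
print(all_oc(frozenset(), cmp12, occurs, {"bai1", "bai2"}))   # True
```

In this toy setting the two composition orders agree, which is exactly what constraint BA-CMP1 (given next) requires whenever both orders satisfy ALL-OC.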
Finally, the definition of F_cl and constraints BA-CMP1 and BA-CMP2 are given by:

  F_cl(bai-set,h) =def  cmp(h), if there exists a composition function of bai-set
                                (cmp) such that ALL-OC(h,cmp);
                        {h},    otherwise.

  BA-CMP1) For all world-histories (h) and basic action instance sets (bai-set),
           if there exist two composition functions of bai-set (cmp1 and cmp2)
           such that ALL-OC(h,cmp1) and ALL-OC(h,cmp2), then cmp1(h) = cmp2(h)

  BA-CMP2) For all world-histories (h) and composition functions (cmp1 and cmp2),
           if ALL-OC(h,cmp2) and ALL-OC(h,cmp1*cmp2) then ALL-OC(h,cmp2*cmp1)

In the following discussion, we examine BA-CMP1, BA-CMP2, and the definition of F_cl(bai-set) for the case where bai-set consists of two basic action instances, bai1 and bai2, that yield unique closest world-histories when F_cl is applied to either of them at any world-history. In [Pelavin, 1987], a detailed explanation is provided for the other cases, such as when bai-set contains three or more members.

The set {bai1,bai2} has two composition functions, which we will denote by bai1*bai2 and bai2*bai1. bai1*bai2(h) yields a singleton set containing the world-history obtained by modifying h, first by bai1, then by bai2. bai2*bai1(h) yields a singleton set containing the world-history obtained by modifying h, first by bai2, then by bai1. It is important to keep in mind that bai1 and bai2 have fixed times associated with them and consequently may have any temporal relation. Thus, bai1*bai2(h) does not necessarily describe the results of executing bai1 before bai2, since bai2 may be prior to or concurrent with bai1.

Let us first consider the case where bai1's and bai2's standard conditions hold at all world-histories. We say that bai1 and bai2 interfere at world-history h if they cannot be executed together in the context given by h. If they interfere, we set F_cl({bai1,bai2},h) to {h}, treating {bai1,bai2} as if its standard conditions do not hold at h. As an example, "move right hand up during i" and "move right hand down during i" are basic action instances that interfere at all world-histories (when modeling a typical world). Conversely, "move right hand up during i" and "move left hand down during i" do not interfere at any world-history. We may also model basic action instances that conditionally interfere, ones that interfere at some world-histories but not at others. For example, if two concurrent basic action instances share the same type of resource, they interfere only at world-histories where there is not enough of this resource available during their time of execution.

It is important to note that interference is defined relative to world-histories. Consequently, whether two or more basic actions interfere can depend on conditions that hold during execution. Some other treatments of interference in the AI literature, such as Georgeff [Georgeff, 1986], provide for conditional interference, but only in the case when interference depends on conditions that hold just prior to execution.

We can detect whether bai1 and bai2 interfere at a world-history h by examining F_cl applied to bai1 and bai2 individually. Since we are assuming that bai1's (and bai2's) standard conditions hold everywhere, F_cl({bai1},h) yields a world-history in which bai1 occurs. Call this world-history hx. If bai1 and bai2 interfere at h, and consequently at hx, F_cl({bai2},hx) yields a world-history where bai2 occurs (bai2's standard conditions hold at hx), but not bai1. If they do not interfere, both bai1 and bai2 occur in F_cl({bai2},hx), in which case we set F_cl({bai1,bai2},h) to F_cl({bai2},hx).² Since F_cl({bai2},hx) is the result of modifying h first by bai1, then by bai2, it is equivalent to bai1*bai2(h).

²World-history hx and the world-history in F_cl({bai2},hx) are not necessarily distinct from h. For example, if both bai1 and bai2 occur in h, then F_cl({bai1},h) = F_cl({bai2},h) = {h} by constraint BA1; consequently F_cl({bai1,bai2},h) equals {h}.

We can also detect if bai1 and bai2 interfere by modifying h first by bai2, then by bai1. bai2*bai1(h) yields this world-history. If bai1 and bai2 interfere at h, then bai1, but not bai2, occurs in bai2*bai1(h). If they do not interfere, both bai1 and bai2 occur in bai2*bai1(h). Moreover, if they do not interfere, we assume that modifying h, first by bai1, then by bai2, yields the same world-history obtained by modifying h, first by bai2, then by bai1.

The definition of F_cl and constraints BA-CMP1 and BA-CMP2 capture the treatment described above. If bai1 and bai2 interfere with each other at h, then bai1 and bai2 do not both occur together in either bai1*bai2(h) or bai2*bai1(h); consequently, F_cl({bai1,bai2},h) is {h}. If bai1 and bai2 do not interfere, then they occur together in both bai1*bai2(h) and bai2*bai1(h). In this case F_cl({bai1,bai2},h) is set to bai1*bai2(h), which equals bai2*bai1(h) by constraint BA-CMP1. For the case where bai1's and bai2's standard conditions hold everywhere, constraint BA-CMP2 insures that bai1*bai2(h) and bai2*bai1(h) are compatible; they would be incompatible if both bai1 and bai2 occurred together in one of them, signifying that bai1 and bai2 did not interfere at h, but did not occur together in the other, signifying that they did interfere at h.

The analysis described above also applies in less restrictive cases where bai1's and bai2's standard conditions may not hold at all world-histories. This analysis is applicable as long as bai1's standard conditions hold at both h and F_cl({bai2},h), and bai2's standard conditions hold at both h and F_cl({bai1},h).

Let us now consider the case where both bai1's and bai2's standard conditions hold at h, but the occurrence of one of the basic action instances, say bai1, ruins the other's standard conditions. This situation is treated as interference; F_cl({bai1,bai2},h) is set to {h}. If bai1 ruins bai2's standard conditions with respect to h, then bai2's standard conditions do not hold in F_cl({bai1},h). Consequently, bai1, but not bai2, occurs in bai1*bai2(h). By constraint BA-CMP2, both bai1 and bai2 will not occur together in bai2*bai1(h) either. Thus, by the definition of F_cl, we see that F_cl({bai1,bai2},h) is set to {h}.

The next case to consider is where bai1's, but not bai2's, standard conditions hold at h. In this situation, F_cl({bai1,bai2},h) is set to {h} unless the following two conditions hold: i) the occurrence of bai1 with respect to h brings about bai2's standard conditions, and ii) they do not interfere with each other at h. If both i) and ii) hold, then both bai1 and bai2 occur together in bai1*bai2(h). Consequently, by the definition of F_cl, F_cl({bai1,bai2},h) is set to bai1*bai2(h). This case differs from the situation where both bai1's and bai2's standard conditions hold
V. Conclusion

We have presented a model that provides for concurrent actions having temporal extent. We have integrated Allen's model [Allen, 1984], which can treat simultaneous events having temporal extent, with a structure analogous to the result function in situation calculus. This structure captures the result of executing an action at a specified time with respect to a context given by a world-history, i.e. a complete world over time. This enables us to model actions and action interactions that are affected by conditions that hold during execution. This structure also provides a simple framework for composing simple actions, both concurrent and sequential, to form complex ones.

Acknowledgments

The authors wish to thank Paul Benjamin for his comments on an earlier draft.

References

[Allen, 1984] Allen, J.F., Towards a General Theory of Action and Time, Artificial Intelligence 23,2 (1984), 123-154.
[Allen and Hayes, 1985] Allen, J.F. and Hayes, P.J., A Common-Sense Theory of Time, 9th International Joint Conference on Artificial Intelligence, Los Angeles, USA, August 1985.
[Georgeff, 1986] Georgeff, M., The Representation of Events in Multiagent Domains, Proceedings of the National Conference on Artificial Intelligence, Philadelphia, PA, August 1986, 70-75.
[Goldman, 1970] Goldman, A.I., A Theory of Human Action, Prentice Hall, Englewood Cliffs, NJ, 1970.
[Haas, 1985] Haas, A., Possible Events, Actual Events, and Robots, Computational Intelligence 1,2 (1985).
[Lewis, 1973] Lewis, D.K., Counterfactuals, Harvard University Press, Cambridge, MA, 1973.
[McCarthy and Hayes, 1969] McCarthy, J. and Hayes, P., Some Philosophical Problems from the Standpoint of Artificial Intelligence, in Machine Intelligence, vol. 4, Meltzer, B. and Michie, D. (eds.), 1969, 463-502.
[McDermott, 1982] McDermott, D., A Temporal Logic for Reasoning about Processes and Plans, Cognitive Science 6,2 (1982), 101-155.
[Pelavin and Allen, 1986] Pelavin, R.N. and Allen, J.F., A Formal Logic of Plans in Temporally Rich Domains, Proceedings of the IEEE 74,10 (October 1986), 1364-1382.
[Pelavin, 1987] Pelavin, R.N., A Formal Logic for Planning with Concurrent Actions and External Events, PhD Thesis, University of Rochester, 1987 (expected).
[Stalnaker, 1968] Stalnaker, R., A Theory of Conditionals, in Studies in Logical Theory, Rescher, N. (ed.), Basil Blackwell, Oxford, 1968, 98-112.
THE CONSISTENT LABELING PROBLEM IN TEMPORAL REASONING

Edward P K Tsang
Department of Computer Science
University of Essex
Colchester CO4 3SQ, U.K.

ABSTRACT

Temporal reasoning can be performed by maintaining a temporal relation network, a complete network in which the nodes are time intervals and each arc is the temporal relation between the two intervals which it connects. In this paper, we point out that the task of detecting inconsistency of the network and mapping the intervals onto a date line is a Consistent Labeling Problem (CLP). The problem is formalized and analyzed. The significance of identifying and analyzing the CLP in temporal reasoning is that CLPs have certain features which allow us to apply specialized techniques to our problem. We also point out that the CLP exists whenever we reason with disjunctive temporal relations. Therefore, the intractability of the constraint propagation mechanism in temporal reasoning is inherent in the problem, not caused by the representation that we choose for time, as [Vilain & Kautz 86] claims.

I Introduction

Temporal reasoning has recently been the subject of great attention in AI. Natural language understanding systems like [Bruce 72], [Kahn & Gorry 77], etc. and planning systems like DEVISER [Vere 83], TIMELOGIC [Allen & Koomen 83], ISIS [Fox & Smith 84], FORBIN [Dean 85][Miller et al. 85], TLP [Tsang 86b,87a], etc. all in some sense model and reason with time. [Allen 83] suggests modeling time in an interval-based temporal structure. He also presents a formalism for reasoning with disjunctive temporal relations among intervals. In this paper, we shall start by looking at Allen's formalism. Then we shall identify the consistent labeling problem (CLP) in it, and show that this problem exists in point-based approaches as well, whenever we reason with all disjunctive temporal relations at the same time. The significance of identifying the CLP will also be discussed.

II Temporal Reasoning by Maintenance of a Relation Network

In Allen's temporal frame, each assertion is associated with an interval in which it holds. Intervals and their temporal relations can be represented by a complete simple graph which is called a Relation Network: G = (N, R), where N is a finite set of intervals (which form the nodes of G) and R is a set of temporal relations (which form the arcs). Between any two nodes X and Y in N, there exists an arc in R which goes from X to Y and another arc which goes from Y to X (hence G is complete). For convenience, we use Rxy to represent the temporal relation between intervals X and Y throughout this paper. Ryx is just the inverse relation of Rxy (since Rxy and Ryx must coexist, G is a simple graph).

We follow [Allen 83] and use the following notation for the primitive temporal relations: {<, m, o, f, d, s, =, si, di, fi, oi, mi, >}. Disjunctive primitive relations are represented by a list. For example, X [< = >] Y means X is before, equal to or after Y. For all intervals i and j, if Rij is completely unconstrained, it can take any one of the 13 primitive relations as its value. Every arc Rxy in R must take one of the primitive relations as its value.

Temporal relations are subject to constraints. A temporal constraint on Rxy is a restriction on the values that Rxy can take. Therefore, a temporal constraint C can be seen as a set of primitive temporal relations: an enumeration of all the values that the subject temporal relation can take in order to satisfy C. For example, if the proposition P holds in interval X, and ¬P holds in interval Y,
then X and Y must not have any common subintervals. In other words, the constraint is: Rxy ∈ {< m mi >}.

Because of the linearity property of time in this logic [Tsang 86a], for any three intervals X, Y and Z, the temporal relation Rxz is restricted by Rxy and Ryz jointly. Such constraints are called transitivity rules. A constraint propagation algorithm based on these transitivity rules has been presented in [Allen 83].

[Tsang 86b] points out the need for checking consistency in relation networks. In planning, there is a need to map intervals onto date lines, simple structures where each time point has a place and the points are linearly ordered. One way to prove the consistency of a relation network and map the intervals in it onto a date line is to assign a primitive temporal relation to each relation. This is a consistent labeling problem, which will be discussed below.

III The Consistent Labeling Problem

A Consistent Labeling Problem (CLP) is defined as follows: We have a finite set of variables Z = {X1, X2, ..., Xn}. The cardinality of Z is n. Each variable Xi in Z has a finite domain of values. Constraints exist for subsets (of various sizes) of variables in Z. The task is to find a solution-tuple (which is an n-tuple), which means the assignment of one value to each of the variables in Z such that all the constraints are satisfied. This problem is called the Constraint Satisfaction Problem in some of the literature. In some applications, the task is defined as finding all solution-tuples.

We call the assignment of a value to a variable a label. For example, <Xi, Vi> is a label assigning Vi to Xi. A compound label is the combination of more than one label, e.g. (<X1, V1><X2, V2>...<Xk, Vk>). A k-constraint (denoted by Ck, where 1 ≤ k ≤ n) is a mapping of k labels to {true, false}. The compound label (<X1, V1>...<Xk, Vk>) is admissible if Ck(<X1, V1>...<Xk, Vk>) is mapped to true.

For example, the 8-queens problem can be formulated as a CLP: The problem is to place 8 queens on an (8 rows × 8 columns) chess board, subject to the constraint that no two queens appear on the same row, column or diagonal. The 8 rows can be seen as variables X1 to X8. Each of them can take an integer value between 1 and 8. Xi taking the value k indicates that the queen in row i is placed on column k. Between each two variables Xi and Xj, the following binary constraints apply (a solver sketch appears at the end of this section):

(1) Vi ≠ Vj
(2) Vi + (j − i) ≠ Vj
(3) Vi − (j − i) ≠ Vj

Most research in the CLP concerns binary constraints. [Freuder 78] introduces the concepts of k-satisfiability and k-consistency, which apply to general CLPs with constraints of arbitrary arity. A network is k-satisfiable if for any k variables in the network, there exists a compound label on them which satisfies all the constraints amongst them. A constraint network being k-consistent implies that [Freuder 82]: Choose any set of k−1 variables. If L is a compound label on these variables which satisfies all the constraints on them, then for any kth variable that we choose, there exists a value that this variable can take such that the label of the kth variable together with L satisfies all the constraints on the k variables. If a constraint network with n variables is n-consistent, a solution tuple exists. Other research on CLPs concerning constraints of arbitrary arity can be found in [Nudel 83][Nadel 85].
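As a concrete rendering of the 8-queens formulation above, here is a short Python backtracker; the solver itself is our own illustration (the paper prescribes no particular search procedure), but the constraints it enforces are exactly (1)-(3):

from copy import copy   # not strictly needed; dicts are copied inline below

def admissible(assignment, i, v):
    # Constraints (1)-(3): no shared column, no shared diagonal.
    return all(v != w and abs(v - w) != i - j
               for j, w in assignment.items())

def solve(n=8, assignment=None):
    assignment = assignment or {}
    i = len(assignment) + 1            # next variable Xi to label
    if i > n:
        return assignment              # a solution-tuple
    for v in range(1, n + 1):
        if admissible(assignment, i, v):
            result = solve(n, {**assignment, i: v})
            if result:
                return result
    return None                        # no admissible label: backtrack

print(solve())   # e.g. {1: 1, 2: 5, 3: 8, 4: 6, 5: 3, 6: 7, 7: 2, 8: 4}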
IV The CLP in Temporal Reasoning

The problem of assigning a primitive relation to each temporal relation is a CLP. In this problem, the constraint network (CN) is: CN = (R, T). The set of nodes of CN is R, the set of arcs in the relation network G mentioned in section II. T is the set of constraints on elements of R. Since constraints could have any arity, it is difficult to draw the constraint network graphically. The domain of each variable is the set of all primitive temporal relations, which we call PR: PR = {<, m, o, f, d, s, =, si, di, fi, oi, mi, >}.

For example, in some relation network G0, let the set of nodes N0 be {A, B, C}. The arcs in G0 would be R0 = {Rab, Rba, Rbc, Rcb, Rac, Rca}, which are the nodes of G0's corresponding constraint network. T in CN consists of constraints of various arities on the temporal relations in R. The example "X and Y must not have any common subintervals" mentioned above is a unary constraint on Rxy. An example of a binary constraint is: "if A meets B, then C meets D", which is equivalent to "Rab ∈ {m} --> Rcd ∈ {m}". An example of a 3-ary constraint is: "intervals P, Q, and R must not have any common subintervals", which means:

Rpq ∈ {< m mi >}  ∨  Rqr ∈ {< m mi >}  ∨  Rpr ∈ {< m mi >}

Each transitivity rule is in fact a set of constraints on labels which have the form: (<Rab,rab> <Rbc,rbc> <Rac,rac>). (Notice that the three labels concern the relations of exactly three intervals.) Here the value of Rac is restricted by the values of Rab and Rbc together. Examples of constraints implied by the transitivity rules are:

C(<Rab,m> <Rbc,=> <Rca,mi>)  --(mapped to)-->  true
C(<Rab,m> <Rbc,m> <Rca,m>)   --(mapped to)-->  false

For example, in the above relation network G0, the constraints might be "interval A precedes both intervals B and C, and B and C must start at the same time". In this case, the set of constraints on G0's corresponding constraint network is:

Rab ∈ {< m}    (which implies Rba ∈ {mi >})
Rbc ∈ {s = si} (which implies Rcb ∈ {s = si})
Rac ∈ {< m}    (which implies Rca ∈ {mi >})

plus the transitivity rules (see the sketch at the end of this section).

In planning, the temporal relations labeling problem exists only if we do not want to commit ourselves to any primitive relations until we need to do so (i.e. if we apply the least-commitment strategy). In such approaches, building up the relation network (identifying the intervals involved in the problem) and labeling the temporal relations are performed in two separate stages (see [Tsang 86b]). An alternative approach is to label all the temporal relations whenever new intervals are added to the relation network, and backtrack if overall inconsistency is detected. This approach is adopted by planners like NONLIN [Tate 77]. In NONLIN, only the temporal relations before and after are considered: if two actions A and B conflict with each other, a commitment is made to either A before B or A after B. This approach labels temporal relations before the whole CLP is formulated.

One constructive way to prove the satisfiability of a constraint network is to find a solution-tuple for it. However, this is an NP-complete problem, as the search space is exponential in the number of nodes in the constraint network. [Freuder 78] presents an algorithm for finding the set of all solution tuples without needing any searching and backtracking. However, this algorithm takes exponential time and space, and therefore, as Freuder admits [Freuder 82], is not useful for practical applications.
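The G0 example can be checked mechanically. The sketch below (ours) enumerates compound labels over the three constrained arcs and retains those admitted by a hand-coded fragment of the transitivity rules; the TT entries shown are our transcription of the relevant compositions, not the full table:

from itertools import product

# Transitivity fragment: composing any relation in Rab with any in Rbc.
TT = {("<", "s"): {"<"}, ("<", "="): {"<"}, ("<", "si"): {"<"},
      ("m", "s"): {"m"}, ("m", "="): {"m"}, ("m", "si"): {"m"}}

Rab, Rbc, Rac = ["<", "m"], ["s", "=", "si"], ["<", "m"]

solutions = [(ab, bc, ac)
             for ab, bc, ac in product(Rab, Rbc, Rac)
             if ac in TT[(ab, bc)]]      # transitivity rule on triangle ABC
print(len(solutions), solutions[0])      # 6 admissible labelings; e.g. ('<', 's', '<')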
V Specific Characteristics of CLPs in General

The CLP has specific characteristics in which it differs from general search problems. Some important characteristics of CLPs are:

1. The size of the search space is fixed and finite. Assume that there are n variables to be labeled. If we order these variables, the search space can be represented by a tree. Each node of this tree represents the choice point of assigning a value to a variable, and each branch represents the commitment of a label. The depth of this search tree is n and the branching factor at each level is |di|, where |di| is the cardinality of the domain of the variable Xi. The number of leaves of the search tree is the product |d1| × |d2| × ... × |dn|.

2. The subtrees under each branch are very similar. Assume that the variables are ordered, and Xi, Xj are variables. The same choices of labels for Xj would be available under each branch of Xi, where i < j. Constraint propagation may prune some future branches if we use lookahead search strategies. But basically the subtrees are very similar.

3. Choice of a value for a variable propagates through the constraints and might affect the choices of values for other variables.

Because of these characteristics, specific heuristics can be used in the search for solution tuples. Some of them, e.g. lookahead, are summarized in [Haralick & Elliott 80].

VI Search Strategies in Temporal Relations Labeling

In searching for solution tuples, at least three orderings have to be decided:

1. Which variable to label next? [Freuder 82] presents an algorithm for finding minimal order graphs. The basic idea is to order the nodes in the constraint network so that those which have more constraints linked to them are labeled first. By doing so, one can minimize unnecessary commitments. However, this algorithm applies to binary constraint problems only. In the temporal relations labeling problem, every temporal relation is constrained by the same number of transitivity rules. Hence, it is likely that most orderings form a minimal order graph. [Haralick & Elliott 80] introduces the Fail First principle. One of the applications of this principle is to label the nodes which have the fewest available labels first. Doing so would minimize the size of the search tree. This principle is applicable to the temporal relations labeling problem.

2. Which value to try next? Having decided which variable to label next, we have to choose which of the available values to try next. One heuristic is to try the least restrictive value first, in the hope that unnecessary backtracking can be avoided. Ordering of the values according to their restrictiveness is normally domain-dependent. In the temporal relations labeling problem, primitive temporal relations can be ordered by their restrictiveness. The order is shown below, with the less restrictive relations at the top:

1. [< >]
2. [o oi]
3. [d di]
4. [m mi]
5. [fi s si f]
6. [=]

A primitive temporal relation between two intervals is more restrictive if it requires more start/end-points of them to be equal. By trying the least restrictive available relation first, there is less chance of having to backtrack. However, we sometimes want to pack the intervals as tightly as possible. For example, in planning problems one may want to minimize the overall duration of the schedules generated. In this case, we might want to order the primitive temporal relations as follows:

1. [=]
2. [fi s si f]
3. [o oi]
4. [di d]
5. [m mi]
6. [< >]

It is likely that the more the relations at the top of this sequence are used in the labeling, the more efficient the resulting plan will be, though local optimality may not lead to global optimality. Finding optimal schedules (schedules which need the least amount of time to finish) is a hard problem. This heuristic can only increase our chance of finding efficient plans.
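Before turning to the third ordering, note that the first two heuristics are easy to state in code. The sketch below is ours: it selects the next relation to label by the Fail First principle and orders candidate values by the first restrictiveness ranking above.

# Lower rank = less restrictive (fewer endpoint equalities required).
RESTRICTIVENESS = {"<": 0, ">": 0, "o": 1, "oi": 1, "d": 2, "di": 2,
                   "m": 3, "mi": 3, "fi": 4, "s": 4, "si": 4, "f": 4, "=": 5}

def choose_variable(domains, labeled):
    unlabeled = [r for r in domains if r not in labeled]
    return min(unlabeled, key=lambda r: len(domains[r]))   # Fail First

def ordered_values(domain):
    return sorted(domain, key=RESTRICTIVENESS.get)          # least restrictive first

domains = {"Rab": {"<", "m"}, "Rbc": {"s", "=", "si"}, "Rac": {"<", "m"}}
var = choose_variable(domains, labeled=set())
print(var, ordered_values(domains[var]))    # e.g. Rab ['<', 'm']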
3. Which inference to do next? The Fail First principle suggests that those inferences which are most likely to fail should be performed first. However, there seem to be no general rules as to which inference is most likely to fail in this application domain. In any case, such rules will tend to vary from domain to domain. In order to detect inconsistency at an earlier stage during the search, we can use a lookahead strategy. Looking ahead prevents us from rediscovering inconsistency repeatedly [Mackworth & Freuder 85]. Allen's algorithm in [Allen 83] can be used to maintain 3-consistency in the constraint network during the search.

VII Discussion

VII.1 Interval-based versus point-based representation of time

Since points have a strict linear ordering [Turner 84][Tsang 86a], one might wonder whether the CLP still exists when we reason with points rather than intervals: in other words, in a point-based representation, could a constraint network which was locally consistent be unsatisfiable? If it could not, then why should we reason with intervals and get ourselves involved in the CLP?

Assume that we have a relation network of points: Gp = (Np, Rp). The nodes (Np) are points and each arc represents the temporal relation between its connecting points. If the network is totally unconstrained, the value that each arc can take is one of before, equal or after, which we denote by <, = and >. The constraint network associated with Gp is: CNp = (Rp, Tp), where Tp is the set of transitivity rules on the relations of points, e.g. x < y & y = z --> x < z, plus the problem-specific constraints on Rp. One can prove that if Tp consists solely of unary constraints plus the transitivity rules, then CNp is always consistent, provided that 3-consistency is maintained (unlike networks of intervals; see proof in [Tsang 87b]). However, we argue that: IF we reason with points, AND want to reason with disjunctive temporal relations, THEN we still have the CLP, which appears in a different form. This can be illustrated by an example.

Assume that we have the following interval-based relation network: Gi = (Ni, Ri), where Ni = {A, B} and Ri = {Rab} (A and B are intervals). (For simplicity, we treat Rba and Rab as the same element in Ri. This will not affect our discussion below.) We further assume that there exists a unary constraint on Rab:

(I) A [< >] B

Associated with Gi is the constraint network CNi = (Ri, Ti), where Ti is the set of transitivity rules on intervals, together with (I). Let us find the point-based relation network Gp = (Np, Rp) and constraint network CNp = (Rp, Tp) which correspond to Gi and CNi. Obviously, Np = {start(A), end(A), start(B), end(B)}. (I) in Ti means:

(I1) end(A) < start(B); OR
(I2) end(B) < start(A)

Among the 4 points, there are 6 binary temporal relations. (Again we treat Ryx as the same element as Rxy in Rp.) Therefore Rp is the set of those 6 binary relations. Let D(x,y) represent the domain of the relation between points x and y. (For all x, y, D(x,y) = [< = >] if it is totally unconstrained.)
Then by definition of an interval, we have the following unary constraints in Tp:

(D1) D(start(A),end(A)) = [<]
(D2) D(start(B),end(B)) = [<]

A little reflection should convince the reader that (I1) and (I2) imply the following unary constraints in Tp:

(D3) D(start(A),start(B)) = [< >]
(D4) D(start(A),end(B)) = [< >]
(D5) D(end(A),start(B)) = [< >]
(D6) D(end(A),end(B)) = [< >]

The constraint network CNp now has: Tp = {(D1) to (D6), plus the 9 transitivity rules}. As said before, a CNp of such form can always be labeled. However, one must note that this CNp is not equivalent to the above CNi. This CNp allows relations that CNi does not. For example:

start(A) < start(B) < end(B) < end(A)

is a consistent labeling in CNp, but is not allowed in CNi. The fact is, in order to represent CNi by a point-based representation, we need to add to Tp the following binary constraints (their effect is checked by brute force in the sketch at the end of this subsection):

(C1) IF D(start(A),start(B)) = [<] THEN D(end(A),start(B)) = [<]
(C2) IF D(start(B),start(A)) = [<] THEN D(end(B),start(A)) = [<]
(C3) IF D(end(A),end(B)) = [<] THEN D(end(A),start(B)) = [<]
(C4) IF D(end(B),end(A)) = [<] THEN D(end(B),start(A)) = [<]

So, to represent an interval-based constraint network, one has: a set of unary constraints Rxy = [...], and 169 transitivity rules, which are 3-ary constraints. In a point-based constraint network we need: a set of unary constraints D(x,y) = [...], and (3 × 3 =) 9 transitivity rules (on <, = and >), and additional binary constraints like (C1) to (C4) above. When binary constraints are added, the overall consistency of the constraint network is not guaranteed. One can translate any relation network from an interval-based representation to a point-based representation. But solving the CLP in one representation is as nontrivial as solving it in the other. In fact, the above CLP exists only when we consider disjunctive temporal relations. Most implementations of point-based temporal reasoning modules consider one conjunctive set of temporal relations (among points) at a time, and therefore do not have to face this problem.

[Vilain & Kautz 86] concludes that: 1. determining consistency of statements in Allen's interval algebra is NP-hard, and Allen's constraint propagation algorithm is incomplete; 2. constraint propagation in a "time point algebra" is complete, where "time point algebra" refers to a point-based representation and its constraint propagation mechanism. Vilain & Kautz suggest that "the tractability of the point algebra makes it an appealing candidate for representing time". We feel that Allen's algebra and Vilain & Kautz's time point algebra cannot be compared in such a straightforward way, because in Allen's formalism disjunctive temporal relations are handled at the same time. Allen's constraint propagation algorithm is incomplete in the sense that it can only maintain 3-consistency, not overall consistency of the constraint network. But disjunctive relations among points are not handled at the same time in the time point algebra: when point A has to be before or after point B, the problem has to be treated as two separate problems. By avoiding reasoning with disjunctive relations, the time point algebra achieves completeness in the constraint propagation mechanism.
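Returning to the example, the role of the binary constraints can be verified by brute force. The following sketch (ours) enumerates strict total orders of the four endpoints: under (D1)-(D6) alone, six orderings survive, while adding (C1) and (C2) (their end-point counterparts (C3) and (C4) are subsumed under a strict total order) leaves exactly the two orderings corresponding to A before B and A after B.

from itertools import permutations

def labelings(with_binary_constraints):
    out = []
    for perm in permutations(["As", "Ae", "Bs", "Be"]):   # strict total orders
        pos = {p: i for i, p in enumerate(perm)}
        if not (pos["As"] < pos["Ae"] and pos["Bs"] < pos["Be"]):
            continue                                      # (D1), (D2)
        if with_binary_constraints:
            # (C1)/(C2): whichever interval starts first must end before
            # the other starts.
            if pos["As"] < pos["Bs"] and not pos["Ae"] < pos["Bs"]:
                continue
            if pos["Bs"] < pos["As"] and not pos["Be"] < pos["As"]:
                continue
        out.append(perm)
    return out

print(len(labelings(False)), len(labelings(True)))   # 6 versus 2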
VII.2 Consideration of metric properties of time

In this paper, we have discussed temporal reasoning concerning relative temporal relations (e.g. before, meet, etc.). We must emphasize that a relation network in which a consistent labeling exists may not be consistent with regard to the metric properties of time: constraints such as durations of intervals, or absolute labels of starting or ending times. We believe that reasoning with metric properties is a nontrivial problem, and linear programming is a general tool for it. Discussion of this problem is beyond the scope of this paper, but see [Tsang 86b,87b].

VIII Summary

In this paper, we have identified and analyzed the CLP in temporal reasoning. We conclude that this CLP arises when we want to reason with disjunctive temporal relations, regardless of the choice between point-based or interval-based representations of time. Identifying and formalizing the CLP in temporal reasoning is significant because specific characteristics exist in CLPs which allow us to apply specialized techniques to temporal labeling.

Acknowledgements

The author is indebted to Jim Doran and Sam Steel for many invaluable discussions on this topic. Chris Trayner, Richard Bartle, Anthony Cheng and John Bell gave useful comments on this paper. This project is supported by a studentship of the Department of Computer Science, University of Essex.

REFERENCES

[Allen & Koomen, 83] J.F. Allen & J.A. Koomen, Planning using a temporal world model, IJCAI-83, 741-747
[Allen, 83] J.F. Allen, Maintaining knowledge about temporal intervals, CACM vol.26, no.11, November 1983, 832-843
[Bruce, 72] B.C. Bruce, A model for temporal references and its application in a question answering program, AI 3 (1972), 1-25
[Dean, 85] T. Dean, Temporal Imagery: An Approach to Reasoning with Time for Planning and Problem Solving, PhD Dissertation, Yale University, October 1985
[Fox & Smith, 84] M.S. Fox & S.F. Smith, ISIS: a knowledge-based system for factory scheduling, Expert Systems, Vol.1 No.1, 1984, 25-49
[Freuder, 78] E.C. Freuder, Synthesizing constraint expressions, CACM Vol.21 No.11, November 1978, 958-966
[Freuder, 82] E.C. Freuder, A sufficient condition for backtrack-free search, JACM Vol.29 No.1, January 1982, 24-32
[Haralick & Elliott, 80] R.M. Haralick & G.L. Elliott, Increasing tree search efficiency for constraint satisfaction problems, AI 14 (1980), 263-313
[Kahn & Gorry, 77] K.M. Kahn & G.A. Gorry, Mechanizing temporal knowledge, AI 9 (1977)
[Mackworth & Freuder, 85] A.K. Mackworth & E.C. Freuder, The complexity of some polynomial network consistency algorithms for constraint satisfaction problems, AI 25 (1985), 65-74
[Miller et al., 85] Miller, Firby & Dean, Deadlines, Travel Time, and Robot Problem Solving, manuscript, Yale University, 1985 (a shorter version appears in IJCAI-85, 1985, 1052-1054)
[Nadel, 85] B.A. Nadel, The Consistent Labeling Problem, Part I: Background and Problem Formulation, The University of Michigan, Technical Report CRL-TR-13-85, 1985
[Nudel, 83] B.A. Nudel, Consistent-labeling problems and their algorithms: expected-complexities and theory-based heuristics, AI 21, July 1983
[Tate, 77] A. Tate, Generating project networks, IJCAI-5, 888-893
[Tsang, 86a] E.P.K. Tsang, The Interval Structure of Allen's Logic, Technical Report CSCM-24, University of Essex, April 1986
[Tsang, 86b] E.P.K. Tsang, Plan generation using a temporal frame, ECAI-86, July 1986
[Tsang, 87a] E.P.K. Tsang, TLP: a temporal planner, Proceedings AISB-87, Edinburgh, April 1987
[Tsang, 87b] E.P.K. Tsang, Planning in a temporal frame: a partial world description approach, PhD dissertation, University of Essex, in preparation, 1987
[Turner, 84] R. Turner,
Logics for AI, Ellis Horwood series in Artificial Intelligence, 1984
[Vere, 83] S.A. Vere, Planning in time: windows and durations for activities and goals, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. PAMI-5, No.3, May 1983, 246-267
[Vilain & Kautz, 86] M. Vilain & H. Kautz, Constraint propagation algorithms for temporal reasoning, AAAI-86, 377-382
Raúl E. Valdés-Pérez
Computer Science Department
Carnegie-Mellon University
Pittsburgh PA 15213

Abstract

A popular representation of events and their relative alignment in time is James Allen's intervals and algebra. Networks of disjunctive interval constraints have served both to assimilate knowledge from ambiguous sentences, and to hold partial solutions in a planner. The satisfiability of these networks is of practical concern, and little has been achieved beyond proving that determining satisfiability is NP-hard. This paper scrutinizes the interval representation and its mechanisms. We make explicit the unstated assumptions of the mechanisms, introduce several useful theorems regarding interval networks, distinguish three types of inconsistency exhibited by these networks, and point out under what conditions these inconsistencies are detected. Finally the theorems, observations, and distinctions regarding inconsistency are exploited to design a practical algorithm to determine the satisfiability of an interval network. The extension of our results to two-dimensional spatial reasoning is under investigation.¹

1. Introduction

One way to represent events extending over time is by the use of the interval algebra, popularized by Allen [Allen 83], and incorporated into the planner in [Allen & Koomen 83]. Some problems accompanying its use have been cited [Vilain & Kautz 86], notably the lack of a suitable practical algorithm to determine the satisfiability of a set of assertions in the interval algebra. This paper mathematically characterizes Allen's interval algebra and makes explicit the assumptions that underlie it. We treat the issue of satisfiability in the light of two new theorems regarding networks of intervals. The insight provided by these theorems and other observations is exploited to state a practical algorithm to determine the satisfiability of a given interval network. Finally, the development here should suggest a way to analyze disjunctive constraint networks that use a different algebra.

2. Events as Intervals

Simple intervals are convenient to represent events that began and ended, and that occurred continuously between those two times. An example of such an event is a visit paid to a friend on a previous day. The use of intervals to depict the temporal extent of such events leads to a temporal ordering of these events by comparing the interval endpoints. By considering all alignments of the four endpoints, one arrives at Allen's thirteen possible orderings between two intervals, shown in the following Table.

Allen's 13 Interval Orderings

<   before      >   after
m   meets       mi  met-by
o   overlaps    oi  overlapped-by
s   starts      si  started-by
f   finishes    fi  finished-by
d   during      di  contains
=   equals

So, for example, the Carter presidency "meets" the Reagan presidency, because the end of the one event coincides with the beginning of the other. However, the meaning of certain linguistic assertions is not captured by any single ordering. A statement such as

She telephoned my friend during my visit yesterday at his home.

requires a disjunction of orderings, to express the ignorance of whether the telephone call ended before, after, or at the same time as the visit. We denote the disjunctive relation between two intervals by a set or list of orderings.

¹This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 4976, monitored by the Air Force Avionics Laboratory under contract F33615-M-K-1520.
These intervals are treated as unknowns, because if the positions of two events along the time dimension were known precisely, then the relation between them would of course be a single ordering.

An advantage of depicting events as intervals, versus an equivalent endpoint-based representation, is the concise way that a disjunctive relation between two events is expressed by a single relation. This conciseness has favorable computational consequences, as discussed below. In Figure 1, disjointness (< >) is shown as an interval relation on the left, and as an equivalent disjunction of endpoint relations on the right.²

Figure 1: Equivalent Representations

We remark that while the interval relation is along a directed edge, the relation in the reverse direction obtains by simply inverting each ordering, according to the lines in the Table above; the inversion is written out concretely in the sketch below.

²The subscripts 'l' and 'u' denote respectively the lower and upper endpoints of an interval.
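The dictionary below is our Python transcription of the Table's pairing; reverse(rel) gives the relation along the reversed edge.

INVERSE = {"<": ">", ">": "<", "m": "mi", "mi": "m", "o": "oi", "oi": "o",
           "s": "si", "si": "s", "f": "fi", "fi": "f", "d": "di", "di": "d",
           "=": "="}

def reverse(relation):
    """Invert a disjunctive relation, e.g. {'<', 'm'} -> {'>', 'mi'}."""
    return {INVERSE[r] for r in relation}

print(reverse({"<", ">"}))   # disjointness is its own inverse: {'>', '<'}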
Moreover, if the interval representation is part of a planner, as in [Allen & Koomen 831, then inconsistent hypothesized relations should be noticed, in order that the plan be realizable. The need to compute the consequences of a set of relations is met by casting intervals and relations as the nodes and edges of a (directed) graph, or network, in order to apply known graph algorithms. The transitive closure algorithm (TCA) explicates all the relations contained in paths through the network, reconciles the different paths between each pair of nodes, and places the result in the edge connecting the pair. The algorithm will often detect inconsistencies, as discussed below. As formulated in [Aho et al. 741, the TCA terminates in time cubic in the number of nodes, at which point we say the network is closed. [Allen 831 presents an incremental version of the algorithm to use when a few new relations are added to an already closed network. A closed network has an edge between each pair of nodes, i.e. is a complete graph. If one of the edges is the null relation, then the network is unsatisfiable. If the network has no null relations, then we ask: Does there exist a globally consistent assignment of a single ordering to each edge? If there is such a labelling, then the network is satisfiable, and we call the edge assignment a solution. [Aho et al. 74) states sufficient conditions on a network algebra that guarantee that any closed network within the algebra is minimal, meaning that there is no smaller graph having the same solutions. If any closed network is minimal, then a closed non-null network has a# solution: otherwise the null network would be smaller and have the same number of solutions, namely none. It is a desirable property that any closed network be minimal, because then the sa&fiabili~ty time TCA. of any is found bY the polynomial- Unfortunately, the interval algebra fails to meet the sufficient conditions, so- that a closed network in this algebra is not necessarilv minimal. [Montanari 741 also discusses c&ditions that guarantee- minimality, which do not hold here either; this is discussed fully in [Valdes-Perez 861. [Vilain & Kautz 86) finally proved that the question of satisfiability in these networks hard, so that the closure cannot in general be minimal.3 is NI;- To summarize, it is desirable to exulicate all the relations in a constraint network for two reasons: A First, to find the tightest relation between all node pairs, and second, to detect inconsistencies of the type discussed in the next section. A closed non-null network generally is not minimal, hence possibly urtsa tisfiable. Further comuutation is needed to determine satisfiability, and to construct a solution in the favorable case. This paper shall propose a solution to this problem. 5. Sources of Unsatisfiability We distinguish three types of network unsatisfiability. b Type 1: If the al ge raic domain of the nodes is insufficiently large, then there are too few values to satisfy the network relations. Figure 2: Insufficient Domain For example, the network relations in Figure 2 require that all intervals be distinct, but the intervals’ domain has onlv two members. This type of inconsistency relations and the algebraic domain. depends on both the n&work In the literature on the interval representation, the usual (unstated1 assumption is that the domain of possible values for intervals is large enough so that inconsistencies of type 1 do not arise. In practical applications it may be otherwise. 
To state the second type of inconsistency, we first Type 2: present a theorem proved in [Valdes-Perez 861: Theorem 1: If an interval network is closed and non- null, then ‘=’ is a member of the composition of the relations along any loop in the network. Therefore, if there is a loop for which the ordering ‘=’ is not a member, then the network is unsatisfiable, whether unclosed or already null. Figure 3 on the following page illustrates such a loop, which we call an absurditv. Traversing this loop yields the contradiction that an interval is less than itself. This type of unsatisfiability is exactly what is detected by the TCA; its nature is characterized by Theorem 1. 3oUr notion of closed differs from that in [vilain & Kautz 861, for whom ‘closed’ corresponds to oiFF&ing of ‘minimal.’ Our use is consistent with that of Nontanari 741 and Waldes-Perez 861. Valdb-P&e+ 257 obtained transitively. Therefore, if all of the n(n-l)(n-2)/3! triangles of a network are stable, in the sense that the mentioned replacement does not change the existing relation ik, then the network is already closed. Figure 3: Inconsistent Loop The third and final type of unsatisfiability is that which Tvpe 3: remains after a network is closed. We interpret this type as follows. When attempting a labelling of the network, an already labelled subnetwork may require a label Ll for an edge E to avoid a loop contradiction of type 2. Another subnetwork may require a different label L2 for E for the same reason. This situation makes the network unsatisfiable, but is disguised network by the relation (Ll L2 . ..) for E. in the closed disjunctive Figure 4: A Closed Unsatisfiable Network An unsatisfiable closed network from Figure 5 in [Allen 83) is shown on the left of Figure 4. The attempt at labelling in Figure 4 on the right stalls, because either available label for the edge BC causes an absurdity at ABCA or BCDB. 6. The CTosure of a Singleton is Minimal A singleton is an interval network having a single label for each edge. Since a singleton uses the same transitivity algebra as above, and similarly reconciles the relations obtained through different paths by set intersection, there is no reason to expect that its closure is minimal. However, a closed singleton is indeed minimal, a fact needed for our satisfiability algorithm below. Theorem 2: A closed non-null interval network having a single disjunct at each edge is satisfiable. In the solution, each edge is labelled with its single ordering.4 We note that the purpose of the theorem is not to suggest using the TCA to find the satisfiability of a singleton; this is done more efficiently by separating intervals into their endpoints and translating the interval orderings into precedences and coincidences between these endpoints. Solving the result is quadratic in the number of intervals. This fact is used by our algorithm below as an iteration invariant; at a certain step, the current labelled subnetwork is always minimal. 8. Current Approaches Given the exponential nature of the satisfiability problem, [Vilain & Kautz 861 lists several options. One option is to limit the problem to small (subjnetworks, which could be done hierarchically, as in [Allen 831. However, the resulting subnetworks still need to be solved efficiently. rVilain & Kautz 86) discusses other problems with hierarchization. A second option is to resign oneself to not knowing whether a given network has a solution. 
One can still compute new, possibly invalid, relations through transitivity, and be content with detecting inconsistencies of type 2.

A third way is to trade off some expressiveness for a gain in tractability, in the style of [Brachman & Levesque 84]. Consider that there are 2^13 − 1 possible non-null relations between two intervals, of which merely thirteen are nondisjunctive. However, [Vilain & Kautz 86] points out that some of the disjunctive relations are expressed without disjunction by simple precedences among endpoints, as shown in Figure 5. It is easy to enumerate systematically the disjunctive relations that are nondisjunctive in the corresponding endpoint network. We may then use these relations as our representation language, and formulate coherent linguistic interpretations of them. Evidently this language is considerably less expressive than the full disjunctive interval relations; the gain is a quadratic execution time, via the search for certain cycles, versus an exponential.

Figure 5: Interval Relation = Endpoint Relations

In the remainder of this paper we introduce an alternative algorithm that tests satisfiability by constructing a solution, does not sacrifice expressiveness, and is intended for practical use.

9. A Satisfiability Algorithm

The algorithm shown on the next page terminates and reports correctly either a consistent labelling of the network or unsatisfiability.⁵ The algorithm was conceived using the theorems presented earlier as insight; the theorems also justify several of the steps. The search framework is a variant of the dependency-directed backtracking (DDB) introduced in [Stallman & Sussman 77] and further developed in [Steele 80]. The asymptotic complexity remains, of course, exponential; the gain in practice arises from quick pruning and clever backtracking.

⁴This theorem is also proved in [Valdes-Perez 86].
⁵As is usual, type-1 unsatisfiability is disregarded, meaning that the algebraic domain of the intervals is assumed large enough so that the intervals of any consistent network can be assigned values that fulfill the network relations. One such domain is the positive real numbers.
Properties The algorithm is theoretically interesting because it is conducted entirely within the original interval network representation; it makes-no use, for example, of endpoint graphs. label E with its first candidate label. while 3 an absurd triangle (E,ei,e,) ; ; ; TEST or’ 3 a nogood NG that is a subset - of the current labelled network besin case absurd triangle : btl(E) cbtl(E)u(e,,e,) nogood NG : btl(E) tbtl(E)uNG-{E}. while there is no next candidate for E besin If btl(E) is empty then return(Failure) . Assert btl(E) as a nogood. ;;; ASSERT E,tmost recent edge in btl(E) . btl (E,) +btl (El -tE,l U btl(&). Vecedges: if e was labelled after E, do unlabel (e) . btl(e)c{). reset the next candidates for e to its original set. EtE,. end. label E with its next candidate. end. enZ7 return (Success) . Abstractly, the algorithm proceeds by repeatedly selecting an edge E and testing its edge labels; backtracking - to choices made before E - is done only when no label for E is consistent with the currently labelled subnetwork. A key aspect of the [Stallman & Sussman 771 approach to DDB is the use of nogoods. A nogood, depicted either as a list or as the negation of a conjunction (NAND), is a set of choices at choice- points that cannot be jointly present in any solution. The purpose of a nogood is therefore to enable abandonment of a search path as fruitless. Nogoods are normally discovered by analyzing inconsistent states to find those choices that were jointly responsible for an inconsistency (there may be several). Nogoods can also be derived by the resolution rule of inference of propositional logic [Nilsson 801, as explained in the Appendix. Our algorithm needs to save only nogoods created by resolution, for reasons discussed below. Each edge E has a backtracklist that makes available backtrack destinations whenever the candidate labels at E are exhausted. backtracklist collects those edge-labels less recent than E that were jointly contradictory with an edge-label for E.s Each time that an edge-label at E fails, before another label is tried for E, the case statement updates the backtracklist btlIE1. Each time that an edge-label at E fails, and there is no other label for E, the search backtracks to the most recent edge E, in btlIE1, and updates btl[E,l by adding to it btl[EI - minus E, itself. If E, has no ‘We assume in further discus&on that the first clause is evaluated first. The clean separation between type-2 and type-3 inconsistencies in the algorithm is remarkable. Clause 1 of the TEST in the 2nd while hindles typg 2, bY intercepting any potential triangular absurdity, two edges of which are then recorded in the backtracidist for the-edge. The second clause of TEST encounters those contradictions already catalogued at ASSERT, which we examine next. Figure 6: Type-3 Inconsistency When the candidates at an edge are exhausted in the body of the 2nd while, the contents of the backtracklist are asserted as a nogood at ASSERT, as justified in the Appendix. To illustrate that this nogood represents a type-3 inconsistency, we consider the case of two candidate edge-choices at E that contradict previous choices E,E, and EsEC as shown in Figure 6. Qur goal is to show that the loop E,E,E,E, contains the ‘=’ ordering. By assuming the contrary, and using that Ese E,oE, we deduce that there is the triangular absurdity E,E,E,.’ However, the first clause of step TEST intercepts all such triangles, and we arrive at a contradiction. 
Theorem 1 justifies clause 1 of the TEST: no satisfiable network can forbid an interval to equal itself. Theorem 2 and Observation 1 provided the insight that by designing an algorithm with an iteration invariant of a triangularly stable singleton network, the network is always minimal. Therefore there is no need to test the global consistency of the current labelled subnetwork.

We have used a variant of DDB in order to ensure completeness and termination. The standard DDB as described in [Stallman & Sussman 77] and [Steele 80] is apparently incomplete, because a backtrack destination is chosen arbitrarily, which does not ensure a systematic and finite traversal of the search space. In any case, our algorithm could instead use this DDB, by sacrificing completeness for the efficiency, during backtracking, of not resetting those choice-points more recent than the backtrack destination.

11. Extensions

We are currently examining an extension of the interval algebra and our algorithm to architectural layout [Baykan & Fox 87]. The objects to be laid out in this application are two-dimensional rectangles, so that binary constraints between objects are expressed as a pair of interval orderings. The same problem of satisfiability of a completed layout plan arises here. Some differences are, for example, the desire to incorporate ternary and higher constraints into the satisfiability tester. Ternary constraints are not expressible in a network, but they are easily integrated into our algorithm in clause 1 of TEST, which checks all triangles about to be completed. Another change is needed because architects prefer to generate all solutions, if feasible, in order to let the practicing architect choose from among them.
Therefore, while trying the choices ck at a choice-point, the union of the k nogoods obtained, minus the ck elements, is itself a legitimate nogood.1° References A.V. Aho, J.E. Hopcroft, and J.D. Ullman. The Design and Analysis of Computer Algorithms. Addison-Wesley, 1974. J.F. Allen. Maintaining knowledge about temporal intervals. Communications of the ACM, 1983,26(22), 832-843. J.F. Allen and J.A. Koomen. Planning using a temporal world model. Proceedings of I]CAI-8, pages 741-747,1983. C.A. Baykan and M.S. Fox. An investigation of opportunistic constraint satisfaction in space planning. To appear in IJCAI Proceedings, 1987. R.J. Brachman and H.J. Levesque. The tractability of subsumption in frame-based description languages. Proceedings of ZJCAZ-84, pages 34-37,1984. R.M. Haralick and G.L. Elliot. Increasing tree search efficiency for constraint satisfaction problems. Artificial ZnteZZigence, 1980, 14, 263313. A. Mackworth. Consistency Intelligence, 1977,8,99-118. in networks of relations. Artificial U. Montanari. Networks of constraints: fundamental properties and applications to picture processing. Information Sciences, 1974,7,95-132. N.J. Nilsson. Principles of ArtificiaZ InteZZigence. Tioga Publishing Company, 1980. R.M. Stallman and GJ. Sussman. Forward reasoning and dependency-directed backtracking in a system for computer- aided circuit analysis. Artificial InteZZigence, 1977,9,135-196. G.L. Steele Jr. The Definition and Implementation of a Computer Programming Language Based on Constraints. Ph.D. thesis, Massachusetts Institute of Technology, 1980. D.S. Touretzky. The Mathematics Kaufmann Publishers, 1986. of Inheritance systems. Morgan R.E. Valdes-Perez. Spatio-Temporal Reasoning and Linear Inequalities. Memo 875, MIT Artificial Intelligence Laboratory, 1986. R.E. Valdes-Perez. Knowledge-Based Schematics Drafting: Aesthetic Configuration as a Design Task. MIT Al Lab Working Paper 292,1987. M.B. Vilain. A system for reasoning AAAI, pages 197-201, 1982. about time. Proceedings of M. Vilain and H. Kautz. Constraint propagation algorithms for temporal reasoning. Proceedings of AAAI, pages 377-382,1986. lo Nlsson 801 describes resolution in detail. Resolution of nogoods is mentioned in [Steele 801; it and the search regimen in this paper was also used in valdes-Perez 871 and is more fully explained there. 260 Planning
Validating Generalized Plans in the Presence of Incomplete Information

Marianne Winslett*
Computer Science Dept., Stanford University
Stanford, CA 94305

Abstract. Let Robbie be an agent possessing a generalized plan for accomplishing a goal. Can Robbie use his plan to accomplish the goal without passing through any of a set of forbidden world states en route to the goal? This situation arises if, for example, Robbie must accomplish the goal with some additional constraints ("Can I get to the airport in time without speeding?"). There are two poles in the spectrum of methods Robbie can use to test his plan in the new world situation, each with its own advantages and disadvantages. At one extreme, Robbie can choose to express the new world constraints as additional preconditions on all the operators used for planning. At the other extreme, Robbie can attempt to prove that the new constraints are satisfied in every possible world that could arise during execution of the plan, from any initial world state that is consistent with his axioms. In this paper we examine the tradeoffs between these two opposing approaches, and show that the approaches are in fact very similar from a computational complexity point of view.

1. Introduction

Given a goal G that an agent will often need to achieve, it is natural to look for a means of reducing the time spent searching for a means to achieve G. If the search space for G is large, then it may well be more efficient for the agent to store a macro-operator [Fikes 72] or a skeletal or generalized plan [Friedland 79, Schank 77, Stefik 80] for achieving G, rather than searching through the problem space each time G and similar goals arise. We assume that this store-versus-compute controversy has been decided in favor of storage of a generalized plan for some of Robbie's goals, such as driving to the airport. Further, we assume that Robbie has a means of selecting a generalized plan relevant to the situation at hand and of binding the free variables in that plan to the appropriate entities for the current situation** [Alterman 86, Dean 85, Tenenberg 86]. The flow between operations in the resulting plan can be depicted graphically, as in the informal graph here of a simple plan for getting a drink of water.

* ATT Doctoral Scholar; additional support was provided by DARPA under contract N39-84-C-0211.
** Of course, this is a research problem in its own right.

A plan P is a sequence* of operators. A path of execution through P is a sequence of complete-information world states S0, S1, ..., Sn, where S0 is the initial state of the world, where Sj is obtained from Sj−1 by applying the jth operator in P to Sj−1, and where each Sj is consistent with the agent's knowledge base (KB) and with a first-order encoding of P (described below).

Suppose that Robbie now must test whether his plan P for accomplishing goal G is still valid when there are new constraints on the permissible state of the world at each step of the execution of the plan. We assume that the new constraints can be formulated as a first-order formula α, quantified over situations.** In Robbie's KB, it may well be the case that complete preconditions for successful execution of P have already been regressed [Waldinger 77] through all operators in P, to form one initial overall precondition C. In the airport example, C might dictate that Robbie have a driver's license and have easy access to a working automobile.

* This directly extends to plans with conditional application of operators and with operators with multiple possible outcomes.
** To simplify the presentation, we will make the restriction that the new constraint contain only one situation variable. For example, we will not consider constraints on transformations between situations.
Such a regression guarantees that the agent's goal (e.g., a timely airport arrival) can be attained from any initial world state that is consistent with the KB and with the instantiated form of C. Unfortunately, this guarantee of correctness does not persist when the new constraint α on speeding is added to Robbie's KB, because even if the initial state of the world satisfies α, some state Robbie goes through on a path to G might violate α. Robbie might have a general plan to get to the airport such that getting there in time would unfortunately require speeding.

To see how this problem manifests itself in a formalization of Robbie's plan, let us introduce some terminology that will be used throughout the remainder of this paper. We have already described P and C, forms of a generalized plan and its overall precondition. The new constraint is denoted by α. Situation variables and constants are written s, s', s0, s1, etc. The situation describing the initial state of the world is s0. Note that s0 need not completely specify the state of the world; S0 is a complete-information world state consistent with all knowledge about s0. The result of applying the first operator in P to s0 is situation s1 (e.g., s1 = result(startUpCar(s0))); and so on until situation sn, the final result of the plan, is defined.* The definitions of s0 through sn constitute an encoding of P. For any world state Si on a path of execution through P, the situation corresponding to Si is situation si. Finally, for any formula φ with a single quantified situation variable x, let φ[s] be the formula created by binding x to the constant s.

* This directly extends to plans with conditional application of operators and with operators with multiple possible outcomes.
** To simplify the presentation, we will make the restriction that the new constraint contain only one situation variable. For example, we will not consider constraints on transformations between situations.

We now give a definition of plan validity. Assume that α[s0] is true (e.g., ¬speeding(s0)). For P to be valid given the new constraint α, α must be satisfied at every world state along every path of execution through P. More formally, P is valid in the presence of the new constraint α if C[s0] is true, and for every world state Si along every path of execution through P, α[si] is true. Of course, one cannot prove validity by testing each possible Si separately, because the combinatorial possibilities are computationally overwhelming.

To check the validity of P more efficiently, it might seem to suffice to check whether G is true in situation sn; for example, to check whether Robbie arrived at the airport in time. Unfortunately, this is insufficient; as long as C[s0] is true, G[sn] will be also.

For example, suppose that Robbie has a plan to conquer his thirst during dinner by getting a drink of water, as shown above. This can be represented in a simplified portion of Robbie's KB by using four predicates to describe the state of the world: atBldg(bldg, sitn), inRoom(room, sitn), thirsty(sitn), has(item, sitn); three plan operators: goToKitchen, pour, drink; and frame axioms telling what aspects of world state are not affected by application of operators. The resulting KB fragment appears in Figure 1.**

* A similar coding using conditionals can be used for plans with non-sequential structure.
** This is certainly not intended as a definitive or prescriptive encoding of thirst-quenchery; rather, it is a simple encoding that is sufficient for our purposes. We have omitted some important rules, such as type information, have lazily represented rooms within buildings as constants rather than functions, and have simplified the "pour" and "drink" operators.

Initial situation:
thirsty(s0)
inRoom(diningRoom, s0)

Operators:
∀s inRoom(kitchen, result(goToKitchen, s))
∀s [inRoom(kitchen, s) → has(water, result(pour, s))]
∀s [has(water, s) → (¬thirsty(result(drink, s)) ∧ ¬has(water, result(drink, s)))]

Frame axioms:
∀s [thirsty(result(goToKitchen, s)) ↔ thirsty(s)]
∀s ∀x [has(x, result(goToKitchen, s)) ↔ has(x, s)]
∀s ∀x [atBldg(x, result(goToKitchen, s)) ↔ atBldg(x, s)]
∀s [thirsty(result(pour, s)) ↔ thirsty(s)]
∀s ∀x [x ≠ water → (has(x, result(pour, s)) ↔ has(x, s))]
∀s ∀x [x ≠ water → (has(x, result(drink, s)) ↔ has(x, s))]
(other frame axioms showing that atBldg and inRoom are unaffected by pouring and drinking)

Additional axioms:
∀s ∀x ∀y [(atBldg(x, s) ∧ atBldg(y, s)) → x = y]
∀s ∀x ∀y [(inRoom(x, s) ∧ inRoom(y, s)) → x = y]

Figure 1. Simplified portion of agent's KB.

Then Rosie asks Robbie if he knows how to get water during dinner with the additional constraint that in a restaurant, Robbie should never be in the kitchen. The new constraint and the resulting encoding of plan P are shown in Figure 2.

Encoding of plan P:
s1 = result(goToKitchen, s0)
s2 = result(pour, s1)
s3 = result(drink, s2)

New constraint α:
∀s [atBldg(restaurant, s) → ¬inRoom(kitchen, s)]

Instantiated goal G[sn]:
¬thirsty(s3)

Figure 2. Plan to quench thirst, and a new constraint.

Is P still valid no matter whether Robbie is at home or at a restaurant? Obviously not, because there is one path of execution through P in which Robbie is in a restaurant kitchen. Let KB+ be Robbie's KB plus α and the encoding of P in Figure 2. Then invalidity of P cannot be detected by a test for logical consistency of KB+, because KB+ is provably logically consistent.* Further, KB+ logically implies ¬thirsty(s3), so one cannot detect invalidity by a test for provability of G.** We conclude that, in general, simple checks for consistency are insufficient to show validity when new constraints are introduced.

In restricted cases, however, a simple check for consistency does suffice. If P is completely invalid, in the sense that every path of execution through P passes through a world state that is inconsistent with α, then KB+ will be inconsistent. For example, suppose the KB contains the additional formula atBldg(restaurant, s0). From KB+ one can prove, as before, ¬atBldg(restaurant, s0); hence KB+ is inconsistent. This implies that P will be valid if (1) the initial state of the world is completely determined by the KB, and C[s0] is true; (2) all operators in P are deterministic; and (3) KB+ is consistent. Unfortunately, as shown by Robbie's thirst-quenching plan, this approach does not extend to the common case where unknown, missing, or incomplete information is relevant to P.

2. Prove-Ahead and Prove-As-You-Go

Assuming that an agent wishes to validate a plan completely before beginning its execution, there are two main approaches to an efficient and general means of plan validation. The first is to regress constraint α through all KB operators to form additional preconditions on those operators.
In other words, add additional preconditions to each operator O so that O can never be applied if the resulting situation would violate α. In the restaurant example, this can be done by adding an additional condition on the goToKitchen operator, so that Robbie cannot go into the kitchen if he is in a restaurant. We call this the prove-ahead approach, because we find the possible effects of O on α ahead of time and act to prevent violations of α. After this regression phase is complete, we can test the plan for validity by either of two methods: either regress to a new overall plan precondition C' and check provability of C'[s0], or else step through the plan operations and test whether the new preconditions of those operators are satisfied at each stage of execution.

There is a philosophical motivation for the pure prove-ahead approach, in which all constraints are regressed through all KB operators: once a complete regression has been done, any plan where G[sn] is provable is a valid plan, no matter what the initial world state and no matter what branching occurs during plan execution. With pure prove-ahead, Robbie need not worry about detecting constraint violation at plan generation and validation time; he need only check preconditions.

* A more reliable sign of trouble is that KB+ logically implies that Robbie is not in a restaurant in state S0. This is because the path of execution in which Robbie is initially in a restaurant gets pruned from the tree of possible plan executions, because it is inconsistent with α.
** As mentioned earlier, if a KB logically entails C[s0], then KB+ must logically entail G[sn].

The alternative to the prove-ahead approach in this example is to prove that at each situation on a path of execution through P, Robbie is not in a restaurant kitchen. More formally, given that axiom α is satisfied in an initial situation s0, one must prove that α is also true in situation s1. If one can prove α[si] for each situation si in the encoding of P, then P is valid. We call this technique the prove-as-you-go approach, because we step through the operators of the plan in order, and for each operator O prove that α is true in the situation that results from applying O to the previous situation.

The remainder of this paper is a discussion of the advantages and disadvantages of the prove-ahead and prove-as-you-go approaches. We show that these two paradigms are at opposite ends of a spectrum of approaches, yet are computationally quite similar.

3. The Qualification Problem

Both the prove-ahead and prove-as-you-go approaches are methods of dealing with the qualification problem [Ginsberg 87, McCarthy 69, McCarthy 80]: what preconditions must be met in order for an action to succeed? In the real world, it is impossible to enumerate all the factors that might cause failure of a plan such as for a trip to the airport. This means that the philosophical motivation behind prove-ahead must ultimately be frustrated: except in simple systems, one cannot enumerate all the prerequisites for an operator. In attempting to do so, one will simply clutter up the KB with a sea of inconsequential preconditions for operations, and lower the intelligibility of the KB for outside reviewers. Even were an exhaustive list of preconditions available, one would not in general want to take the time needed to prove that all the preconditions were satisfied.
This problem also arises in the prove-as-you-go approach, however; one may not be able to afford the expense of proving that all constraints will be satisfied after an operation is performed. In section 6, we will discuss the relative adaptability of prove-ahead and prove-as-you-go to partial testing of preconditions and constraints.

4. A Comparison of Computational Cost

The computational complexity of prove-ahead and prove-as-you-go depends on Robbie's language and KB. Depending on the form of his KB, testing plan validity can be in any complexity class: from polynomial time worst case on up through undecidable. The goal of this section is not to differentiate between these classes, but rather to show that prove-ahead and prove-as-you-go are in the same complexity class for any given type of KB. We will do this by comparing the requirements that prove-ahead and prove-as-you-go impose on a theorem-prover.

In specific KB and plan instances, prove-as-you-go may be less costly than prove-ahead; this is particularly true if the agent is not interested in repairing invalid plans. Because additional information about the state of the world may be available at the time an operator in the plan is applied, prove-ahead does not detect invalidity as quickly as does prove-as-you-go.* This tardy detection of invalidity arises because pure prove-ahead requires two rounds of proofs. Robbie must first regress α into new preconditions, and then test whether the new preconditions are satisfied during execution of P. While prove-ahead may make the same set of calls to a theorem-prover as does prove-as-you-go, prove-ahead will not discover that P is invalid until the second phase of its computation, when the new preconditions are checked against world state information. A hybrid approach can be used to overcome this flaw in part, but prove-ahead will still require two rounds of proofs.

* A more accurate cost comparison must consider the amortized cost of prove-ahead (section 5).

Prove-ahead has another computational disadvantage when compared with prove-as-you-go, in that prove-as-you-go can take advantage of all the state information available when trying to prove satisfaction of α. Prove-ahead, on the other hand, first derives a most general condition under which α will be satisfied, and then checks to see whether that condition holds in the situation at hand. For example, consider the constraint that the car stay on the road at all times while driving. General preconditions for this condition may be very difficult to find. On the other hand, it may be trivial to show that the car is on the road right now; for example, Robbie may have a primitive robotic function available that tells him that the car is now in the middle lane.

To elucidate this point further, we describe a method of implementing prove-ahead and prove-as-you-go. Let s be a situation on a path of execution through P, and let s' be the situation resulting from applying the next operator O in P to situation s. (As usual, the state of the world in situations s and s' need not be fully determined by the KB.) The prove-as-you-go method requires that one prove α[s'] given α[s]. If the proof fails and P is not to be repaired, then the validation process terminates at this point.
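Procedurally, prove-as-you-go amounts to a single pass over the encoding of P. The following sketch makes this concrete; it is our illustration rather than an implementation from this paper, and the prover interface, the KB representation, and the construction of situation terms are all assumed placeholders.

    def prove_as_you_go(kb, plan, alpha, prove):
        """Validate plan against constraint alpha, step by step.
        kb    -- list of axioms (assumed representation)
        plan  -- operator names o1..on, in execution order
        alpha -- function mapping a situation term to a formula
        prove -- assumed theorem prover: prove(axioms, formula) -> bool
        """
        s = "s0"
        if not prove(kb, alpha(s)):          # alpha must hold initially
            return False
        for op in plan:
            s_next = "result({}, {})".format(op, s)
            # Prove alpha[s'] given alpha[s]; the encoding of P in kb
            # supplies any state information derived from earlier steps.
            if not prove(kb + [alpha(s)], alpha(s_next)):
                return False                 # first failure ends validation
            s = s_next
        return True

If every proof succeeds, P is valid with respect to α.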
If the proof fails and P is to be repaired, then two repair tactics are possible: depending on the method used to establish α[s'], the reason for the failure can be converted into additional preconditions on operators and/or additional conditions for C. For example, in many cases verification of a universal constraint α can be reduced to testing a number of ground instantiations of α [Winslett 87]. If any of these ground instantiations is not provable, then this pinpoints a case in which α is violated. After this violation is repaired, the process is repeated, searching for another violation of α in situation s'.

The prove-ahead method also requires that one prove α[s'] given α[s], with the additional proviso that s can be any legal situation, not just one on a path of execution through P. In other words, in attempting to prove α[s'], one cannot use any state information about s that could be deduced from P. Further, failure to find a proof does not mean that P is invalid; invalidity can only be ascertained by repairing O and then checking its new preconditions against initial world state information. In addition, prove-ahead requires detection of all violations of α[s'] before determining whether P is valid. For example, a prove-ahead approach to the restaurant constraint would generate the new precondition ¬inBuilding(restaurant, s) for the goToKitchen operator, even if the first step of Robbie's plan were to go home. Finally, once the regression is complete and any invalidities have been detected, repair of P is accomplished by adding additional constraints to C.

In the worst case the potential computational advantages of prove-as-you-go will not materialize. For example, if there is no helpful state information available for situation s, then prove-as-you-go will not have that advantage over prove-ahead. More precisely, suppose P contains situations s0 through sn, and let Oi be the ith operator of P. Suppose the agent finds a prove-as-you-go proof of α[si] such that that proof does not contain any sj, for 0 ≤ j ≤ n, other than si and si-1. Then prove-ahead is as easy as prove-as-you-go for operator Oi, as essentially the same proof may be used for prove-ahead.

5. Hybrid Approaches

In general, neither the pure prove-ahead nor the pure prove-as-you-go approach will dominate in efficiency; a hybrid approach will be much more satisfactory. For example, there is no need to regress α through an operator until it is actually used in a plan. Then if α is a temporary constraint (such as a prohibition on speeding while Rosie is in the car), a complete regression will not be done.

Regression through the operators in a particular plan is one point in the spectrum between pure prove-ahead and pure prove-as-you-go, a hybrid between the two approaches; such intermediate points abound. For example, Robbie might deliberately choose not to regress α through a particular operator O even though O appears in the plan at hand; this might be advisable if α was a temporary constraint and/or no plan repair was contemplated. He might choose prove-as-you-go for certain pairs of operators and constraints, and apply prove-ahead to the remainder. If Robbie can predict how often a particular prove-as-you-go proof would be repeated in the future, he can use measures of storage cost and other, less tangible factors (see section 6) to estimate the amortized cost of prove-ahead over all repetitions of that proof [Lenat 79].
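This tradeoff can be stated as a back-of-the-envelope calculation. The cost model below is our own illustration (the paper does not commit to one): prove-ahead pays a one-time regression cost plus a cheap precondition check per use, while prove-as-you-go repeats a full proof on every use.

    def prove_ahead_pays_off(regress_cost, check_cost, proof_cost, n_uses):
        """Illustrative amortized comparison, not from the paper.
        regress_cost -- one-time cost of regressing alpha through operators
        check_cost   -- per-use cost of testing the derived preconditions
        proof_cost   -- per-use cost of a prove-as-you-go proof
        n_uses       -- predicted number of repetitions of the proof
        """
        amortized_prove_ahead = regress_cost + n_uses * check_cost
        total_prove_as_you_go = n_uses * proof_cost
        return amortized_prove_ahead < total_prove_as_you_go

    # A constraint consulted many times with cheap derived checks favors
    # prove-ahead; a one-off temporary constraint favors prove-as-you-go.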
The amortized cost of prove-ahead can then be used as a basis for choice between prove-as-you-go and prove-ahead.

Robbie need not apply the two phases of prove-ahead sequentially; the gap between regression and new precondition testing in prove-ahead can be narrowed by partial merging of the generation of new preconditions and their testing. If Robbie begins by regressing α through the first operator O of his plan, then he can immediately check to see whether the new preconditions for O are true in state s0. If the preconditions are not true, then Robbie knows that P is not valid, and can proceed to repair or else search for another plan. For even more rapid detection of invalidity, Robbie can check each new precondition as it is generated. Prove-ahead will still require two rounds of proofs, however.

Robbie can even choose dynamically between prove-ahead and prove-as-you-go. For example, he can validate the first few steps of P using prove-ahead, and then decide, on the basis of the additional information available at that point, that prove-as-you-go is the best choice for the remainder of the validation sequence.

6. Comparative Extensibility of Prove-Ahead and Prove-As-You-Go

As described earlier, prove-as-you-go has a computational advantage over prove-ahead in its use of all state information available at a stage of plan execution. The counterpart of this prove-as-you-go advantage is the possibility in the prove-ahead world of choosing overly general preconditions. For example, suppose Robbie has a constraint on successful driving that the car must stay on the road at all times. The exact preconditions for maintaining this constraint at each moment are very complicated; they depend on how far Robbie is from the edge of the road, how fast he is going, etc. It would be easier to forgo exact preconditions and use a simpler subsuming condition, and run the risk of possibly rejecting a plan due to too-strict preconditions.

Another disadvantage of prove-ahead is that it will generate many operator preconditions, and the exact relation of those preconditions to the situational constraints will not necessarily be clear. This has repercussions for Robbie's ability to explain his decisions to an outside agent. Robbie will need some means of tagging preconditions and their associated constraints; this is a second-order concept. One may well argue that in a model of human car-starting, unlikely and inconsequential preconditions should not be stored with the operators that they impact. Rather, a human would call on its deductive facilities in the event that the car failed to start, to try and trace the origin of the failure to a combination of unchecked constraints.

This tagging consideration assumes greater importance if we abandon the assumption that Robbie never makes a move without consulting his theorem-prover. For example, Robbie may assign numerical measures of importance to preconditions and constraints, based on the likelihood of their being violated in the current situation and on the magnitude of the repercussions of their violation [Finger 86]. Then Robbie can choose which preconditions and constraints to check before performing an action, based on the computational resources available to him and the importance of the checks.
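A rough sketch of such resource-bounded checking follows; the importance and cost measures are assumed inputs in the spirit of [Finger 86], and the greedy selection strategy is our own simplification.

    def select_checks(conditions, importance, cost, budget):
        """Pick which preconditions/constraints to verify before acting.
        importance[c] -- likelihood of violation times magnitude of
                         repercussions (assumed to be supplied)
        cost[c]       -- expected proof effort for checking c
        budget        -- computational resources currently available
        """
        # Greedily prefer high importance per unit of proof effort.
        ranked = sorted(conditions,
                        key=lambda c: importance[c] / cost[c],
                        reverse=True)
        chosen, spent = [], 0
        for c in ranked:
            if spent + cost[c] <= budget:
                chosen.append(c)
                spent += cost[c]
        return chosen

Checks that do not make the cut are simply assumed to hold, at whatever risk the importance measure quantifies.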
However, in order to assign measures of importance to preconditions, again it is necessary to tag preconditions and their associated constraints.* Otherwise the consequences of failing to check a precondition will be unclear, for without an additional round of proofs, Robbie cannot easily tell which constraints may be violated if a particular precondition is ignored.

* Tags could be stored as first-order formulas in the KB. Efficient utilization of such tags would require the use of special control strategies.

For example, Robbie may decide that it is not worthwhile to look for a potato in the tail pipe before turning the ignition key, for violation of this condition is unlikely, and ignoring it will not have a disastrous effect on the state of the world anyway. In contrast, Robbie might check this constraint after a fierce Idaho rainstorm, or if there had been a recent rash of highly explosive tailpipe potatoes.

7. Conclusions

Let Robbie be an agent faced with the task of validating a plan P in the presence of a new constraint α. If the initial state of the world is fully determined by information on hand and P is deterministic, then P is valid iff Robbie's knowledge base (KB) is consistent with α and an encoding of P. However, another means of validation is needed when Robbie has insufficient information about the current state of the world.

We have identified two extreme approaches to this validation problem. In the pure prove-ahead approach, α is transformed into additional preconditions on Robbie's KB operators, and then the new operator preconditions for plan P are checked to see if they are true in the current situation, either by regressing those conditions or by direct checks. In the pure prove-as-you-go approach, to determine whether P is valid, Robbie must prove that α is true at each step of P, given that α holds initially.

The prove-ahead and prove-as-you-go methods are computationally quite similar: for a given type of KB, they are in the same computational complexity class. In practice, prove-as-you-go may be less costly than prove-ahead, as its search for violations of α has a narrower focus. For example, any violation of α detected by prove-as-you-go may actually arise during the execution of P; but prove-ahead may locate many potential violations of α that could not arise in P before finding those that do. This computational advantage arises because prove-as-you-go can utilize information about the state of the world that arises from prior operators in the plan. It may be much easier to prove that a constraint is satisfied at a particular point in the execution of a plan than to solve the prove-ahead problem of finding general preconditions for satisfaction of that constraint.

Pure prove-ahead and pure prove-as-you-go fall at two ends of a spectrum; in practice, we expect a hybrid approach to perform better than either extreme. In choosing whether to apply prove-ahead or prove-as-you-go to a particular operator and constraint, one must consider factors other than simple computational complexity. Intuitively, the prove-as-you-go approach is best for "obscure" constraints. Prove-ahead is best for constraints that will be checked often, as under prove-as-you-go the same proofs would be performed time after time.
An informed choice between prove-ahead and prove-as-you-go must consider the amortized cost of the two approaches, i.e., consider the number of times a proof would have to be repeated under prove-as-you-go, and also measure costs of storage, comprehensibility to outside agents, and extensibility to heuristic methods of planning: the store-versus-compute tradeoff once again.

Acknowledgments

This paper arose from discussions of the planning problem with J. Finger, H. Hirsh, L. Steinberg, D. Subramanian, C. Tong, and R. Waldinger, who gave lively and patient arguments for the merits of and underlying motivations for the prove-ahead and prove-as-you-go approaches, and suggested directions for extensions.

References

[Alterman 86] R. Alterman, "An adaptive planner", National Conference on Artificial Intelligence, 1986.
[Dean 85] T. Dean, "Temporal reasoning involving counterfactuals and disjunctions", Proceedings of the International Joint Conference on Artificial Intelligence, 1985.
[Fikes 72] R. E. Fikes, P. E. Hart, and N. J. Nilsson, "Learning and executing generalized robot plans", Artificial Intelligence 3:4, 1972.
[Finger 86] J. Finger, "Planning and execution with incomplete knowledge", unpublished manuscript, 1986.
[Friedland 79] P. E. Friedland, Knowledge-based experiment design in molecular genetics, PhD thesis, Stanford University, 1979.
[Georgeff 85] M. P. Georgeff, A. L. Lansky, and P. Bessiere, "A procedural logic", Proceedings of the International Joint Conference on Artificial Intelligence, 1985.
[Ginsberg 87] M. Ginsberg and D. E. Smith, "Reasoning about action I", submitted for publication.
[Lenat 79] D. B. Lenat, F. Hayes-Roth, and P. Klahr, "Cognitive economy in artificial intelligence systems", Proceedings of the International Joint Conference on AI, 1979.
[McCarthy 69] J. McCarthy and P. J. Hayes, "Some philosophical problems in artificial intelligence", in B. Meltzer and D. Michie, eds., Machine Intelligence 4, Edinburgh University Press, Edinburgh, 1969.
[McCarthy 80] J. McCarthy, "Circumscription - a form of non-monotonic reasoning", Artificial Intelligence 13, 1980.
[Sacerdoti 77] E. D. Sacerdoti, A structure for plans and behavior, Elsevier North Holland, New York, 1977.
[Schank 77] R. C. Schank and R. P. Abelson, Scripts, plans, goals, and understanding, Lawrence Erlbaum, Hillsdale NJ, 1977.
[Stefik 80] M. J. Stefik, Planning with constraints, PhD thesis, Stanford University, 1980.
[Tenenberg 86] J. Tenenberg, "Planning with abstraction", National Conference on Artificial Intelligence, 1986.
[Waldinger 77] R. J. Waldinger, "Achieving several goals simultaneously", in E. W. Elcock and D. Michie, eds., Machine Intelligence 8, Halstead/Wiley, New York, 1977.
[Winslett 87] M. Winslett, Updating databases with incomplete information, PhD thesis, Stanford University, 1987.
Rules for the Implicit Acquisition of Knowledge About the User¹

Robert Kass and Tim Finin
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104-6389

Abstract

A major problem with incorporating a user model into an application has been the difficulty of acquiring the information for the user model. To make the user model effective, past approaches have relied heavily upon the explicit encoding of a large amount of information about potential system users. This paper discusses techniques for acquiring knowledge about the user implicitly (as the interaction with the user proceeds) in interactions between users and cooperative advisory systems. These techniques were obtained by analyzing transcripts of a large number of interactions between advice-seekers and a human expert, and have been encoded as a set of user model acquisition rules. Furthermore, the rules are domain independent, supporting the feasibility of building a general user modelling module.

I. Introduction

With the development of knowledge-based systems, computers are now being used for tasks that previously required significant human intelligence. As computers assume these tasks, expectations about their behavior have evolved as well. Systems that exhibit human-like reasoning abilities are expected to interact in an intelligent manner. Thus, humans might expect a system to (among other things) understand natural language, be able to infer intentions that are not explicitly stated, and tailor system responses to the individual user.

One feature important to systems that support intelligent interaction is the ability to maintain information about their users; such systems are said to have models of their users. A user model can be loosely described as a collection of assumptions or beliefs the system holds about the user. In this sense, all computer programs have some implicit user model, since they make assumptions about how the user will interact with the program. Of more interest are systems that keep explicit information about each individual user, using this information to tailor their communication with the user. Information that a system might keep about the user includes: the user's goals and plans, the user's beliefs or knowledge about the domain of discourse, objective properties about the user such as age or name, and the user's beliefs about other agents (such as the system itself).

¹ This work was supported by grants from the Research Office and Digital Equipment Corporation.

User modelling systems built in recent years² have demonstrated two major problems. First, acquiring knowledge about the user is very difficult. Second, user models seem to be restricted to the specific system for which they were created. Thus, developing a new system requires the development of a new user model. A solution to these problems enhances the feasibility of building a general user model [Finin and Drager, 1986] that can be used for multiple systems.

The research described in this paper addresses both general user modelling problems. In fact, solving the first problem goes a long way towards solving the second as well. This paper presents a group of user model acquisition rules that can be used to build a model of the user during an interaction. These rules were developed after study of an extensive collection of transcripts of conversations between advice-seekers and a human expert.
The rules are domain independent, thus the user modelling portion of the system can handle different applications that have a similar form of interaction. Sections II. and III. briefly discuss the user model acquisition problem and general user modelling, while the following four sections present some of the model acquisition rules; section VIII. discusses future work planned in this area. A fuller treatment of the topics in this paper can be found in [Kass, 1987].

II. User Model Acquisition

In most existing user modelling systems, knowledge about the user is acquired explicitly, with information about the users directly asserted by the system designers. The most common method of asserting this information is to pre-encode the contents of the user model. Pre-encoding may take several forms: (1) a range of possible beliefs about the user may be listed in the model, (2) assumptions about all users may be collected into a generic model, or (3) assumptions may be collected into stereotypes [Rich, 1979] reflecting the beliefs of classes of users. When a new user interacts with the system, the user modelling process consists of identifying which pre-encoded information most accurately explains the observed behavior of the user.

Most user modelling systems rely on the user model during the course of the interaction, hence a robust user model must be developed quickly. Generic and stereotype modelling approaches are particularly attractive because they can rapidly develop a large set of beliefs about a particular user. A generic model provides an initial set of assumptions about the user, while a stereotype approach will also provide a large set of beliefs once a stereotype (or several stereotypes) is triggered.

² [Kass and Finin, 1987] presents a survey of user modelling for natural language systems, while [Kass, 1986] surveys user modelling in intelligent tutoring systems.

Unfortunately, the amount of information that must be explicitly pre-encoded can be prohibitive. In fact, for many systems, building the user model can be much more time consuming than building the domain knowledge base, making the implementation of a user modelling system very unattractive. Furthermore, specific user models must be built for each application.

The user model acquisition techniques discussed in this paper take a different approach to acquiring the user model. These techniques build the user model implicitly, as the system interacts with the user. Implicit user model acquisition minimizes (or even eliminates) the need for explicit coding of user model information. Thus, effective implicit user model acquisition can greatly reduce the development effort required to implement a user modelling system.

Implicit user model acquisition techniques have not been used extensively in the past because they have not performed well. It has been generally believed that the content of the communication between user and system is too limited to quickly build a robust model of a new user. The goal of this paper is to show that implicit acquisition techniques can quickly produce a robust model. In doing so, the acquisition rules rely on certain features of human behavior, using information obtained from user and system behavior (as well as the domain model of the underlying application and the current model of the user) as clues to infer more general information about the beliefs and knowledge of the user. In fact, the rules are capable of producing a model that can support a substantial portion of the behavior of the expert participant in the transcripts studied.
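As a rough illustration of what "implicit" acquisition means operationally, the loop below folds model updates into the interaction itself. It is a sketch of ours, not the authors' implementation; the rule interface and the belief store are assumed placeholders.

    def process_utterance(utterance, user_model, domain_model, rules):
        """Run each acquisition rule against a new user utterance.
        A rule inspects the utterance together with the domain model
        and the current user model, and yields belief updates such as
        ('believes', 'user', P)."""
        for rule in rules:
            for belief in rule(utterance, domain_model, user_model):
                user_model.add(belief)   # assumed belief-store method
        return user_model

No pre-encoded stereotypes are consulted; everything in the model is traceable to some observed behavior and the rule that interpreted it.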
III. General User Modelling

There are three senses of generality that apply to user modelling. User models may be general with respect to the range of users they can handle. Most user modelling systems have this form of generality. User models may also be general with respect to the form of interaction with the user. Such a user model could effectively deal with interactions that might include menus, graphics, or natural language. Finally, user models may be general with respect to the domain of the interaction. A domain-general user model could be used in systems covering a diverse range of applications.

Completely general user modelling allows the user modelling portion of the system to be an independent module that collects and maintains information about the users, and communicates with other modules in the system via a well-defined interface. Such a user modelling module would use four sources of information for making inferences about the user: the behavior of the user observed by the system, the behavior of the system observable by the user, the domain knowledge of the underlying application, and the current model of the user. The organization of a system incorporating a user modelling module is illustrated in figure 1.

Figure 1: User modelling module sources of information

This paper will focus on user modelling that is general with respect to the domain. (User generality is assumed to be a requirement of any user modelling system.) There are two reasons for this limitation. First, there is an existing trend towards building domain-independent systems, such as expert system shells. An expert system shell provides the reasoning and control structures for a system, and is capable of reasoning with knowledge bases from a variety of domains. A domain-independent user modelling module can thus be used in conjunction with other domain-independent modules to enhance the capabilities of such systems.

The second reason for focusing on domain generality is that building user models that are general with respect to the form of interaction is very difficult. Many of the implicit acquisition rules assume particular interaction characteristics. Shifting the form of interaction can affect not only these assumptions, but even the methods used to access the interaction between user and system. (Consider the difference between natural language interaction and interaction using graphics and a mouse.) Restricting the form of interaction thus constrains the model acquisition problem, enabling useful assumptions about the behavior of user and system.

In this work, the form of interaction is limited to cooperative advisory systems. A cooperative advisory system has a
substantial body of knowledge about a particular domain, using this knowledge to give advice to users. Since it is cooperative, the system will try to anticipate the user's needs and goals, tailoring its interaction to be as helpful as possible. Although not a requirement of cooperative advisory systems, this work also assumes that the user and system communicate with each other through a natural language interface. The knowledge of the advisory system consists of factual knowledge about concepts and things in the world, and the reasoning rules it uses to give advice. The information to be modelled is primarily long term, such as the user's knowledge about the domain that tends to persist over many interactions.

The rules presented in this paper have been developed by analyzing interactions between human experts and their clients. The data examined includes transcripts of approximately 100 interactions from a radio talk show entitled "Harry Gross: Speaking about Your Money."³ The examples used in this paper are taken from these transcripts. These conversations are appropriate for analysis since they represent a situation similar to what might occur in a cooperative advisory system: the form of interaction is quite limited (the participants communicate via telephone), the callers vary in their knowledge of the domain, and the expert has no pre-defined model of the caller.

³ The transcripts were made by Martha Pollack and Julia Hirschberg from shows originally broadcast on station WCAU in Philadelphia between February 1 and February 5, 1982.

The user model acquisition rules in the following sections should be considered to be reasonable rules; they are not absolute. Exceptions (which are sometimes quite easy to find) exist for each rule. This does not detract from the effectiveness of the rules, since the acquisition rules are intended to draw conclusions a human would reasonably make. Sometimes these conclusions will be over-ridden as new information arrives; sometimes the rules will draw conclusions that are not correct; humans have the same problem. It might be convenient to think of the acquisition rules as default rules [Reiter, 1980], but other approaches, such as evidential reasoning methods, could be used as well.

The rules can be loosely partitioned into three categories: communicative rules, model-based rules, and human behavior rules. Communicative rules focus on the communication between the system and the user. Model-based rules depend on certain relationships in the structure of information between the domain model and the current model of the user. Human behavior rules depend on features of human behavior that are typical or in some sense universal. The following sections look at these classes of rules, and discuss several rules in detail. A complete description of all of the rules can be found in [Kass, 1987].

IV. Communicative Rules

The communicative rules are triggered by statements made by the user or the system, deriving information about the user based on the conventions governing normal discourse between cooperative agents. The class of communicative rules can be further divided into direct inference rules and implicature rules.

Direct inference rules are concerned solely with the information contained in a statement.
When the user makes a statement, the user modelling module can assume the user believes that statement. This can result in a number of assertions to the user model concerning the user's beliefs about the concepts and attributes mentioned in the statement. For example, the statement "I have $40,000 in money market" could produce the following output from the parser:

∃x1 (investment(x1) ∧ ∃x2 (investor(x1, x2) ∧ x2 = user) ∧ ∃x3 (instrument(x1, x3) ∧ moneymarket(x3)) ∧ ∃x4 (amount(x1, x4) ∧ dollar(x4) ∧ x4 = 40,000))

This, in turn, can be decomposed to produce 15 simple assertions used to generate the concept and role definitions for a KL-ONE knowledge base depicted in figure 2.⁴ Furthermore, statements may have presuppositions, which the user must believe as well. Kaplan [Kaplan, 1982] and Kobsa [Kobsa, 1984] have presented methods for computing these presuppositions, which may be asserted to the user model.

Figure 2: KL-ONE representation of concepts derived from "I have $40,000 in money market"

⁴ These steps have been implemented in a Prolog program that takes a first-order logic representation of a statement and builds a NIKL (New Implementation of KL-ONE) [Moser, 1983] knowledge base.

The implicature rules are inspired by Grice's maxims for cooperative communication [Grice, 1975]. The assumption that the user is striving to be cooperative provides the system with certain expectations about user behavior. These expectations can be exploited to draw inferences about what the user does and does not know. The remainder of this section presents three rules inspired by the maxims of relation, quantity, and manner; illustrating them with examples from the transcripts.

Relevancy Rule

Grice's maxim of relation tells a speaker to make the contents of an utterance relevant. Assuming the user obeys this maxim, the user modelling module can assume what the user says is relevant. The rule can be stated as follows:

Rule 1 If the user says P, the user modelling module can assume that the user believes that P, in its entirety, is used in reasoning about the current goal or goals of the interaction.

In addition to claiming that the user believes what is said is relevant, the relevancy rule states that the user believes that everything in the statement is relevant.
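A minimal sketch of Rule 1 in the style of the acquisition loop shown earlier; the propositional decomposition of the statement and the goal representation are assumed inputs, and the belief-tuple format is illustrative only.

    def relevancy_rule(utterance, domain_model, user_model):
        """Rule 1 (sketch): the user believes that everything asserted
        bears on the current goal(s) of the interaction."""
        goals = tuple(user_model.current_goals())   # assumed accessor
        beliefs = []
        for p in utterance.propositions():          # assumed parser output
            beliefs.append(("believes", "user", p))
            beliefs.append(("believes", "user", ("relevant-to", p, goals)))
        return beliefs

Because the rule covers the statement "in its entirety", over-inclusive statements are themselves evidence: they reveal what the user, perhaps wrongly, takes to be relevant.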
She feels that all the information she has listed is important to the reasoning process, while in fact it is not. Thus the relevancy rule can be used to acquire information about incorrect reasoning a user may perform. Sufficiency Rule The sufficiency rule is inspired by the maxim of quantity. The system can reason as follows: if the user were complete’iy knowledgeable about the domain, he would provide informa- tion sufficient for the system to satisfy the user’s goal. Suppose what the user says turns out to be insufficient? In this case the user must lack some knowledge that the system has. A user may have three types of knowledge about entities in the domain knowledge base: knowledge of an entity, knowledge of the relevance of an entity, and knowledge of the value of an entity. When the user is cooperative, yet omits a piece of information that the system knows is relevant, it is due to a lack of knowledge of one of these three types. The sufficiency rule says: Wulle 2 If the user omits a relevant piece of information from a statement, then either the user does not know of that piece of information, does not know whether that information is relevant to his current goal or goals, or does not know the value for the piece of information. Once again an example will illustrate this rule. C. E. C. E. C. E. C. E, C. I’ve got $2250 to invest right now in an 18 month cer- tificate and I don’t know whether to go the variable rate or the fixed rate now or the fixed rate later. Have you any money invested now? Yes, I do. In what? I’ve got $5000 in a money market fund. Have you anything in certificates or anything else? I’ve got three stocks. Three separate stocks? Yes sir. In this conversation, the caller believes his initial statement of the problem is sufficient for the expert to make a decision. Instead, the expert realizes a lot more information about the caller’s investments are needed. The expert proceeds to ask questions to obtain this information. Even then, the caller provides minimal answers because he does not know what additional information is relevant until the expert specifically asks for it. In this case it seems obvious that the user knows the additional information, he just does not realize it is relevant. In using the sufficiency rule, the user modelling module must be able to “turn around” the reasoning rules in the domain model, in order to identify properties that are relevant. This collection of relevant properties creates an expectation of the information the user should provide. Information in the set of expectations that is not provided thus must be information the user lacks knowledge of. The sufficiency rule might be strengthened further. If the user is being fully cooperative he will try to be as helpful as possible. Suppose the user knows a piece of information, but does not know its value. Por example, the user might know that the due date of a money market certificate is relevant information, but not know the actual due date. A truly coop- erative user would tell the system that he does not know the due date. Thus the sufficiency rule might be limited to con- clude that either the user does not know of the information, or does not know that it is relevant. Furthermore, if the user does not know of the information, he certainly cannot believe it is relevant, so the sufficiency rule could make a definite conclusion in this case. Although the strengthened sufficiency rule seems attrac- tive, that level of cooperation by the user does not seem likely. 
The sufficiency rule might be strengthened further. If the user is being fully cooperative he will try to be as helpful as possible. Suppose the user knows a piece of information, but does not know its value. For example, the user might know that the due date of a money market certificate is relevant information, but not know the actual due date. A truly cooperative user would tell the system that he does not know the due date. Thus the sufficiency rule might be limited to conclude that either the user does not know of the information, or does not know that it is relevant. Furthermore, if the user does not know of the information, he certainly cannot believe it is relevant, so the sufficiency rule could make a definite conclusion in this case.

Although the strengthened sufficiency rule seems attractive, that level of cooperation by the user does not seem likely. People are reluctant to display their ignorance. Thus, when they don't know something, they avoid mentioning it, even when they believe it is relevant.
In this conversation the caller makes a statement that leads the expert to an immediate evaluation: she has too much money’in her savings account. ’ Thus, in modelling the caller, the expert can conclude that she does not recognize the reasoning that implies that a lot of money in a savings account will result in a relatively low return. VPIL @onclusion [Kass, 19861 R. Kass. The Role of User Modelling in Intelli- gent Tutoring Systems. Technical Report MS-CIS-86-58 (Lint Lab 41), Department of Computer and Information Science, University of Pennsylvania, 1986. [Kass, 19871 Id. Kass. Implicit Acquisition of User Models in Cooperative Advisory Systems. Technical Report MS- CIS-87-05, Department of Computer and Information Science, University of Pennsylvania, 1987. [Kass and Finin, 19871 R. Kass and T. Finin. Modelling the user in natural language systems. Computational Lin- guistics, Special Issue on User Modelling, 1987. The long term goal of this research is the development of a general user modelling system that can act as a repository of knowledge about individual users, and service the needs of a number of applications. The primary obstacle to this goal is the difficulty of explicitly acquiring and building the user models, motivating the development of a set of rules (currently 18,8 of which are discussed in this paper) that can be used to acquire knowledge about the user implicitly from his interaction with the system. 5These conversations were transcribed in February, 1982, when the U.S. inflation rate was near its peak. At that time the interest rate on a savings account was considerably less than what could be earned in a money market fund, so having a lot of money in a savings account was always a bad idea. [Kobsa, 19841 A. Kobsa. Three steps in constructing mutual belief models from user assertions. In Proceedings of the 6th European Conference on Artificial Intelligence, pages 423-427, 1984. maser, 19831 M. G. Moser. An Overview of NIKL, The New Implementation of KL-ONE. Technical Report 542 1, Bolt, Beranek and Newman, 1983. [Reiter, 19801 R. Reiter. A logic for default reasoning. Arti- jscial Intelligence, 13(1):81-132, 1980. @Xich, 19791 E. Rich. User modelling via stereotypes. Cog- nitive Science, 3~329-354, 1979. 300 Cognitive Modeling
Case-Based Problem Solving with a Knowledge Base of Learned Cases

Wendy G. Lehnert
Department of Computer and Information Science
University of Massachusetts
Amherst, MA 01003

Abstract

Recent experiments indicate that a case-based approach to the problem of word pronunciation is effective as the basis for a system that learns to pronounce English words. More generally, the approach taken here illustrates how a case-based reasoner can access a large knowledge base containing hundreds of potentially relevant cases and consolidate these multiple knowledge sources using numerical relaxation over a structured network. In response to a test item, a search space is first generated and structured as a lateral inhibition network. Then a spreading activation algorithm is applied to this search space using activation levels derived from the case base. In this paper we describe the general design of our model and report preliminary test results based on a training vocabulary of 750 words. Our approach combines traditional heuristic methods for memory organization with connectionist-inspired techniques for network manipulation in an effort to exploit the best of both information-processing methodologies.

I. Introduction

While many researchers have proposed various reasoning mechanisms for case-based systems [Bain, 1986, Kolodner, Simpson, and Sycara-Cyranski, 1985, Hammond, 1986, Rissland and Ashley, 1986, Lebowitz, 1986], we have very little experience with truly large memories. A realistic memory for any case-based reasoner must contain hundreds or thousands of cases. With this many available cases we can expect to see significant competition among a large number of potentially relevant cases at any given time.¹ It is therefore important to develop techniques for memory access and conflict resolution which will enable us to effectively arbitrate large numbers of contributing cases.

¹ This research was supported by NSF Presidential Young Investigators Award NSF IST-8351863 and DARPA grant N00014-85-K-0017.

It is all too easy to design ad hoc heuristics that apply to a limited set of examples operating in conjunction with a hand-coded memory. We have developed a more general strategy for handling case-based memories with a special concern for the difficulties of scaling up.

The problem of word pronunciation is an ideal task for experiments in case-based learning and reasoning. Symbolic pronunciations available in a standard dictionary grant us easy access to a large corpus of data which is completely free of the representational problems encountered in tasks like legal reasoning or medical diagnosis. By circumventing these otherwise important issues, we have been able to concentrate our efforts on the automatic construction of a large case-based memory, indexing techniques for that memory, and strategies for resolving competition among multiple cases.

To illustrate the effectiveness of our ideas we have implemented PRO, a system that learns to pronounce words using a knowledge base organized around cases. PRO creates its knowledge structures in response to supervised training items where each item contains a word and a sequence of phonemes representing that word's correct pronunciation. At any time we can interrupt PRO's learning mode to test PRO on arbitrary vocabulary items. We currently have an on-line data base of 850 word/pronunciation pairs which can be used as training items, test items, or both.
II. Construction of the Knowledge Base

When PRO examines its training items it does not create one case per training item. Rather, PRO segments an input word into a partition of substrings which maps onto the targeted phoneme string in a credible fashion. We will refer to each mapping from a substring to a phoneme as a "hypothesis." A case is then a fixed-length subsequence of hypotheses contained in the mapping of a segmentation to a phoneme sequence.

To illustrate, suppose we have associated the segmentation (SH OW T I ME) with the phoneme sequence (sh ō t ī m). This produces five hypotheses (SH/sh, OW/ō, T/t, I/ī, and ME/m). In principle, we could design our cases to record sequences of either arbitrary or fixed lengths (although sequences of length 1 would fail to encode any contextual information). The selection of a case length is a design decision that benefits from some experimentation: we have found that 3 is an effective length for the word pronunciation task. So the item "showtime" is then associated with the following seven cases:

1. (START**/0 START*/0 SH/sh)
2. (START*/0 SH/sh OW/ō)
3. (SH/sh OW/ō T/t)
4. (OW/ō T/t I/ī)
5. (T/t I/ī ME/m)
6. (I/ī ME/m END*/0)
7. (ME/m END*/0 END**/0)

The null hypotheses serve only to mark places at the beginning and end of each word. These place markers are necessary if we want to maintain sequences of uniform length. The cases PRO identifies correspond to a moving "window" on the lexical item, although we cannot say that the window has a fixed length in terms of the letters spanned. Since each hypothesis may contain 1, 2, 3, or even 4 letters (as in OUGH/ō) from the input word, the size of this window can vary from 1 letter (at the beginning or end of the word) to as many as 12 letters (at least in principle) depending on the hypotheses involved.

Each case identified during training is indexed under the substring of its leading hypothesis and stored within a tree structure.² The indexing substring points to a separate tree for each hypothesis associated with that substring. For example, the substring "ow" could have two hypotheses associated with it: OW/ō (as in "show") and OW/ou (as in "how"). To store the sequence (OW/ō T/t I/ī) we would add this case to the tree headed by OW/ō. If T/t has never followed OW/ō before, we must create a new branch for the tree. Otherwise, if T/t had been encountered after OW/ō during training, we would traverse the branch already constructed and next check to see if I/ī is present in the tree at the next level. Since we are operating PRO with a fixed sequence length of 3, each of our trees is limited to a depth of three hypotheses.

PRO updates its knowledge base by expanding tree structures as needed, and updating frequency data for each case encountered during training. Each node of a case tree is associated with a positive integer which indicates how many times this particular node has been visited during training. If T/t has followed OW/ō 13 times before, we will now update that count to 14. If I/ī has never followed (OW/ō T/t) before, we create a new node for the tree and initialize its frequency count at 1. It follows that frequency counts can never increase as we traverse branches out from a root node: frequencies typically diminish as we move downward through a tree.

²In fact, we also index each case under the last hypothesis as well as the first. Indices using trailing hypotheses access trees that traverse case sequences backwards whereas cases indexed by leading hypotheses go forwards. This dual encoding becomes important when we use frequency data during test mode.
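The case-extraction and indexing scheme described above is straightforward to prototype. The following Python sketch is illustrative only; the function and variable names are our own inventions, not PRO's, and it assumes hypotheses are given as (substring, phoneme) pairs, with the tree realized as nested dictionaries carrying frequency counts. The long-vowel phonemes ō and ī are written "o-" and "i-" only because the macrons are awkward in ASCII.

```python
# Illustrative sketch of PRO-style case extraction and indexing.
# All names here are hypothetical; PRO's actual data structures differ.

CASE_LEN = 3  # the paper reports 3 as an effective case length

def extract_cases(hypotheses):
    """Pad a hypothesis sequence with null markers and slide a
    fixed-length window across it, yielding one case per position."""
    padded = ([("START**", "0"), ("START*", "0")] + list(hypotheses)
              + [("END*", "0"), ("END**", "0")])
    return [tuple(padded[i:i + CASE_LEN])
            for i in range(len(padded) - CASE_LEN + 1)]

def store_case(trees, case):
    """Index a case under the substring of its leading hypothesis and
    walk/extend a depth-3 tree, bumping a visit count at each node."""
    substring = case[0][0]                     # e.g. "OW" for (OW/o- ...)
    node = trees.setdefault(substring, {}).setdefault(case[0], {"n": 0})
    node["n"] += 1
    for hyp in case[1:]:
        node = node.setdefault(hyp, {"n": 0})  # new branch if unseen
        node["n"] += 1

# Example: the "showtime" segmentation from the text yields 7 cases.
trees = {}
showtime = [("SH", "sh"), ("OW", "o-"), ("T", "t"), ("I", "i-"), ("ME", "m")]
for case in extract_cases(showtime):
    store_case(trees, case)
```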
In general, there is more than one way to segment a character string and match those segments against the phoneme sequence encoding the string's pronunciation. PRO must therefore access its knowledge base during training in order to identify preferred segmentations with high degrees of credibility. Since segmentation errors during training encourage additional segmentation errors during subsequent training, as well as impaired performance in test mode, PRO is very conservative about segmentation judgments. If PRO cannot identify a preferred segmentation, PRO will ignore the training item and make no attempt to modify its knowledge base in response to that item. This strategy of "timid acquisition" is the only way to guarantee effective learning in the absence of negative examples [Berwick, 1986].

We have found that a very effective heuristic for filtering multiple segmentations can be devised by maximizing (1) known hypotheses, (2) new hypotheses which partially match some known hypothesis, and (3) known hypotheses with high frequency counts. However, these filters can still fail if PRO is subjected to an "unreasonable" training session at the start. To get PRO off on the right foot, we must begin with an initial training session that makes it easy for PRO to identify valid hypotheses.

At the beginning, when PRO's knowledge base is sparse, it is important to train PRO with training pairs that do not result in multiple segmentations. In general, three-letter words satisfy this constraint because most three-letter words map to a sequence of three phonemes. Once PRO has built a knowledge base in response to some such initial training session, we can move on to four- and five-letter training words with confidence. Apart from this general restriction, PRO is not overly sensitive to the design of its training sequences. At worst, PRO will not learn anything from a poorly placed training item if it has not "worked itself up" to that item adequately.

III. Using the Knowledge Base

When PRO is in test mode it receives a lexical item and attempts to produce a unique phoneme sequence in response to that item. Because PRO's knowledge base does not remember training items in their entirety, there is no guarantee that PRO will produce a correct pronunciation for a word it previously encountered during training. However, PRO does tend to have a somewhat higher hit rate for items seen in training compared to novel test words. We will discuss PRO's performance in test mode at the end of this paper.

PRO begins its analysis of a test word by producing a search space of all possible hypothesis sequences it can associate with that word. Note that this search space will not, in general, contain all possible segmentations of the input word since most segmentations will not be associated with hypotheses recognized by the knowledge base. For example, any segmentation of "showtime" containing the substring "wti" will be rejected since PRO will not have any hypotheses in memory using the string "wti." The complexity of our search space is therefore limited by the knowledge base available to PRO. PRO also limits its search space by eliminating any segmentations which place hypothesis boundaries between letters that have never been divided between two hypotheses during training (PRO creates a small data base of this information during training in addition to the case-based memory described above). A sketch of this search-space generation step appears below.
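To make the search-space construction concrete, here is a hedged Python sketch. It recursively enumerates segmentations of a test word, keeping only those whose every substring is indexed in the hypothesis base; the names and the simple dictionary representation are assumptions of ours, not PRO's internals, and the secondary boundary filter is omitted for brevity.

```python
# Hypothetical sketch of test-mode search-space generation.
# hyp_base maps substrings to the phonemes seen for them in training,
# e.g. {"SH": ["sh"], "OW": ["o-", "ou"], ...}

MAX_SUB = 4  # hypotheses span at most 4 letters (e.g. OUGH)

def search_space(word, hyp_base):
    """Return every hypothesis sequence PRO could assign to `word`,
    rejecting segmentations that use substrings unknown to memory."""
    if not word:
        return [[]]  # exactly one way to analyze the empty suffix
    results = []
    for k in range(1, min(MAX_SUB, len(word)) + 1):
        head, tail = word[:k], word[k:]
        if head not in hyp_base:
            continue  # e.g. "wti" never occurred as a substring
        for phoneme in hyp_base[head]:
            for rest in search_space(tail, hyp_base):
                results.append([(head, phoneme)] + rest)
    return results
```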
For the task of word pronunciation the greatest sources of ambiguity result from vowels and vowel combinations, so those are places where the search space tends to "fan out."

The search space of possible word pronunciations which PRO generates must now be resolved to a single preferred pronunciation. This selection process is made on the basis of information available in PRO's case base. To access the case base, we transform our search space into a structured network utilizing lateral inhibition and spreading activation.

To begin, we create inhibitory links between any pair of hypotheses that share overlapping substrings from the input word. All such hypotheses are in competition with one another and must resolve to a preferred winner. These inhibitory links provide negative activation throughout the network, which is essential to the process of identifying a preferred path through the net. Figure 1 shows a sample search space for the word "showtime". (In actuality, this search space would be much larger if PRO had any substantial training behind it.)

[Figure 1: A search space network]

All positive activation for the network comes from the case base by adding additional "context-nodes" to the network. A context node is added to the network wherever three consecutive hypotheses correspond to a complete branch of some tree in PRO's knowledge base. The context node is then connected to the three hypotheses that spawned it, and initialized at a positive level of activation. This level of activation is computed on the basis of frequency data available in the case trees.³ Once all possible context nodes have been generated, connected up, and initialized, we are ready to relax the network.

A standard relaxation algorithm (see Feldman and Ballard 1982) is then applied to our network representation until all activation levels have stabilized or the number of iterations reaches 30. In general, the network stabilizes before 20 iterations, and we then evaluate the activation levels associated with each candidate pronunciation in the network. A path in the network receives as its activation level the lowest activation level found over all the hypothesis nodes contained in that path. Happily, most paths zero out, leaving only a few with positive levels of activation. Of those with any positive activation there is usually one with a maximal activation level. In the case of a unique maximal path, we have a strong preference for a single word pronunciation. In the case of multiple maximal paths, PRO picks one arbitrarily and returns that as the preferred pronunciation. Once a test item has been resolved, PRO discards the network representation constructed for that item and moves on to the next test item with a clean slate. No modifications to PRO's knowledge base are made during test mode.

³The precise value is $f(x,y) = \mathrm{rnd}\left(10^{\,1-(1-x)(1-y)}\right)$, where $x$ and $y$ represent the frequency of this particular hypothesis sequence relative to the first hypothesis in the sequence and the last hypothesis in the sequence. In other words, $x$ tells us how often this sequence follows an instance of the first hypothesis and $y$ tells us how often this sequence precedes an instance of the third hypothesis. $x, y \in (0,1]$, $f(x,y) \in (1,10]$, and $f(x,y) \to 10$ as either $x \to 1$ or $y \to 1$.
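The footnote's activation formula and the min-over-path scoring rule are easy to state in code. The sketch below is our own loose rendering; the node structure, the particular update rule, and the step constant are assumptions rather than PRO's implementation.

```python
def context_activation(x, y):
    """Footnote 3: f(x,y) = rnd(10^(1-(1-x)(1-y))), ranging in (1, 10]."""
    return round(10 ** (1 - (1 - x) * (1 - y)))

def relax(acts, excite, inhibit, iters=30):
    """Simple synchronous relaxation: each node sums positive input from
    its context nodes and negative input from competitors, clamped at
    zero. The exact update details here are assumed, not PRO's."""
    for _ in range(iters):
        new = {}
        for node, a in acts.items():
            delta = (sum(acts[c] for c in excite.get(node, []))
                     - sum(acts[c] for c in inhibit.get(node, [])))
            new[node] = max(0.0, a + 0.1 * delta)
        if all(abs(new[n] - acts[n]) < 1e-3 for n in acts):
            acts = new
            break  # stabilized (typically well before 30 iterations)
        acts = new
    return acts

def best_path(paths, acts):
    """A path scores the *lowest* activation among its hypothesis
    nodes; most paths zero out, and the maximal survivor wins."""
    return max(paths, key=lambda p: min(acts[n] for n in p))
```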
At this time we have assembled a corpus of 850 training items consisting of words chosen at random from a dictionary and ranging in length from 3 to 8 letters. We have collected data for PRO's performance on a test set consisting of the first 200 words from a 750-word training session. This test set of "familiar" words contains 50 three-letter words, 100 four-letter words, and 50 five-letter words. We have also tested 100 novel words not found in the 750-word training session (50 four-letter words and 50 five-letter words). We ran PRO on both test sets at three different times during training: (1) after 250 words, (2) after 450 words, and (3) after all 750 words.

It is important to note that the complexity of a given test item changes as PRO processes additional training items and increases its knowledge base. There are three factors that contribute to the complexity of PRO's task during test mode: (1) the number of hypotheses in memory, (2) the number of cases (hypothesis sequences) in memory, and (3) the frequency data associated with each case in memory. We will refer to these three factors as the "Hypothesis Base," the "Case Base," and the "Statistical Base."

As the Hypothesis Base grows, we will see the search space for a given test item increase since additional segmentations may be possible and more hypotheses may be associated with each plausible segmentation. As the Case Base grows, we see more context nodes generated in response to a given test item since there are more hypothesis sequences available to reinforce the search space. As the Statistical Base grows, we do not see additional complexity in the structures we generate, but we may see some effects on the time required to stabilize the network during the relaxation process. On the basis of only 750 training items, it is not possible to say whether effects from a growing Statistical Base will alter the number of iterations required during network relaxation.

We can easily plot growth curves for the Hypothesis Base as a function of training items processed. Table 1 shows how the number of hypotheses increases during training. Note that a sharp growth rate during the first 200 items (630 phonemes) drops off to a much slower growth rate during the remaining 550 items. During the initial growth spurt we average about 1 new hypothesis for every 2 training items. After the initial growth spurt, we pick up roughly 1 new hypothesis for every 10 training items. By the end of this training session PRO has identified 180 hypotheses. While the growth rate during the last 550 items appears to be linear, we must assume that it will become increasingly harder to identify new hypotheses as more training items are processed. Since we have not trained enough to see our curve level off, it is difficult to say where the ceiling on hypotheses might be. However, it is safe to say that this growth process will eventually reach an asymptote, at which point the Hypothesis Base cannot increase the complexity of our search spaces any further.

[Table 1. Hypothesis Base Growth Curve: number of hypotheses vs. target phonemes processed during training]

A similar growth curve for the Case Base tells a different story. Table 2 shows how the number of cases in PRO's memory increases as PRO processes the same 750 training items.
Now we see a largely linear function throughout: PRO generates roughly 2.1 new cases for each training item it processes. By the end of this training session PRO has generated a knowledge base containing 1591 cases.

[Table 2. Case Base Growth Curve: number of cases vs. target phonemes processed during training]

As with the Hypothesis Base, we must assume that the Case Base will eventually saturate and cease to acquire new cases. Unfortunately, our limited training experience does not allow us to speculate about how many training items might be required before saturation sets in. At the very least, we can say that saturation in the Hypothesis Base necessarily precedes saturation in the Case Base, and it will probably take the Case Base a while to settle down after the Hypothesis Base has stabilized.

Unlike the Hypothesis Base and the Case Base, the Statistical Base will continue to change as long as training continues. If training items are repeated, we would not see any operational differences in the Statistical Base, as long as all training items are repeated with the same frequency. However, interesting effects would be derived from the Statistical Base if some segment of the training corpus were repeated heavily in the absence of compensating repetitions throughout the entire training corpus. Words encountered often would then influence the relaxation process more heavily than words seen less frequently. This is an important feature of our model as far as psychological validity is concerned. Since the frequency distribution for words in everyday use is not uniform, one could argue that not all words are equal in the mind of a lexicon-processing human. Any psychological account of phonetic interpretation should therefore be responsive to this question of frequency distributions and predict behaviors that vary in response to manipulations of word distributions.

Given the increased complexity of PRO's pronunciation task as its knowledge base grows, it is not surprising to see some degradation in PRO's test mode performance as we move through the training corpus. After training on 250 words, PRO returns a hit rate of 94.7% on the number of phonemes it can correctly identify in the test corpus of familiar words. By the time PRO has processed 750 training words, this hit rate has dropped to 89.9%; the error rate has doubled. At the same time, the Hypothesis Base has grown by a factor of 1.4 and the Case Base has expanded by a factor of 2.5. It is interesting to note that PRO's performance is not significantly correlated with word lengths: shorter words do not necessarily fare better than longer words.

While PRO's performance drops slightly over time for familiar words, we see an increase in performance levels for novel words. After 250 words PRO correctly identifies 66% of the phonemes for our test group of 100 words not present in the training corpus. By the time PRO has processed 750 training words, this success rate has risen to 75%. It is not surprising to see PRO behave differently for these two groups of test items in its early stages of training. Further experimentation is needed to determine whether these two performance curves eventually converge and stabilize.
[Table 3. Comparative Hit Rates in Test Mode: percentage of correct phonemes for trained words, novel words, and a random baseline vs. target phonemes processed during training]

Table 3 shows the two performance curves for familiar and novel words along with a baseline curve designed to factor out the effects of the Case Base. The percentages contained in the baseline result from test runs that substitute random guesses in place of PRO's relaxation algorithm. For these baseline hit rates we generate search spaces derived from the Hypothesis Base just as PRO does. But instead of adding context nodes and relaxing the network, we simply pick a path at random from the search space. After 250 words, the random algorithm exhibits a hit rate of 60%. After 750 words, this hit rate has dropped to 55% due to the larger search spaces that result from a growing Hypothesis Base.

The network representation generated by PRO during test mode provides a simple formalism for defining a search space of possible responses to a given test item. This search space is further influenced by the addition of context nodes derived from PRO's Case Base. The relaxation algorithm applied to this network representation provides us with a powerful strategy for integrating the relative strengths of relevant cases. A large number of competing cases and mutually-supportive cases influence the contributions of one another and eventually stabilize in a global consensus.

When evaluating PRO, we must remember that the techniques used here will be most effective in a domain characterized by a tendency toward regularities. In spite of its idiosyncrasies, the phonetic patterns of English do satisfy this requirement to a large extent. Even so, exceptions abound and PRO must maintain a delicate balance between its ability to recognize special "one shot" cases and its responsiveness to general patterns. For example, "move", "love" and "cove" each require a different pronunciation for "o". However arbitrary these may be, PRO can learn to favor the correct pronunciation based on the contextual influence of an "m", "l", or "c" before the segment "ov." But now consider what happens when we add an "r" to the end of each word. "Mover" and "lover" retain the vowel sounds present in "move" and "love", but "cover" is not consistent with the sound from "cove." On the other hand, "over" is consistent with "cove."

Using PRO's limited case length of 3 hypotheses, it is possible for PRO's Case Base to miss essential discriminating features when arbitrary conventions of this sort arise. Failings of this kind do not imply that PRO is seriously flawed. We could easily increase PRO's case length to 4 and handle the above instances without difficulty. If we increased the case length we would necessarily increase the size of the Case Base, but we would also decrease the number of context nodes we generate since it is harder to match a sequence of 4 hypotheses than a sequence of 3. A design modification of this sort could only result in improved performance, but at the cost of greater memory requirements. If anything, we should be surprised that a case length of 3 is as effective as it is.

If we did enough training to see the Hypothesis Base and Case Base approach saturation, we would see the effects of the Statistical Base come into play. It is conceivable that continued enhancements in the Statistical Base might reverse any negative effects that appear during growth periods for the Hypothesis Base and the Case Base. We would speculate that the chances of this happening are greater the sooner the Case Base settles down.
If the Case Base continued to grow at a significant rate after we had exhausted half the words of the language, and performance continued to degrade as the Case Base grew, then it is unlikely that statistical effects would have enough impact to do any good that far along. For this reason, it might be desirable to minimize the size of the Case Base. At the current time too little is known about PRO's long term performance to say much about these tradeoffs.

Although we have characterized PRO as a case-based reasoning system, it may be more narrowly described as a memory-based reasoning (MBR) system [Stanfill and Waltz, 1986]. Within the MBR paradigm, PRO is very similar to MBRtalk [op. cit.] in its overall goals despite major differences between the two approaches. In terms of performance, MBRtalk attains a hit rate of 86%, but this is after training on 4438 words (PRO attained 76% after 750 words). It is also the case that MBRtalk requires a massively parallel architecture (the Connection Machine) while PRO runs reasonably in a Common Lisp environment with 4M of memory.

Apart from further experimentation with PRO by expanding its training corpus, we see two general directions for future research. On the one hand, we would like to identify other problem areas where numerical relaxation is an effective strategy for accessing large knowledge bases organized around cases. Any domain where frequency data tends to correspond with generalizations is a good candidate for these investigations. On the other hand, we also want to investigate methods for generalizing numerical relaxation to symbolic processes of constraint propagation. Symbolic constraint propagation is a richer and more powerful technique than numerical relaxation. In domains where numerical data is inappropriate or unobtainable, we would still like to pursue the notion of network stabilization as an effective means for mediating competing cases in a large knowledge base of available cases.

We therefore view PRO as a single application illustrating a general framework for case-based reasoning systems. The general utility of these methods can be determined only by extensive experimentation. For example, the ideas behind PRO are now being applied to the task of conceptual sentence analysis [Lehnert, 1986, Lehnert, 1987]. As we gain more experience with PRO's approach to case-based reasoning and memory organization, we will be in a better position to characterize the tasks and knowledge domains best suited to these techniques.

References

[Bain, 1986] Bain, W. A Case-based Reasoning System for Subjective Assessment, Proceedings of the Fifth National Conference on Artificial Intelligence, pp. 523-527, 1986.

[Berwick, 1986] Berwick, R.C. Learning from positive-only examples: the subset principle and three case studies, in Machine Learning, vol. 2 (eds. Michalski, Carbonell and Mitchell), pp. 625-645, Morgan Kaufmann, 1986.

[Feldman and Ballard, 1982] Feldman, J.A., and Ballard, D.H. Connectionist models and their properties, Cognitive Science, vol. 6, no. 3, pp. 205-254, 1982.

[Hammond, 1986] Hammond, K. CHEF: A Model of Case-Based Planning, Proceedings of the Fifth National Conference on Artificial Intelligence, pp. 267-271, 1986.

[Kolodner, Simpson, and Sycara-Cyranski, 1985] Kolodner, J., Simpson, R., and Sycara-Cyranski, K. A Process Model of Case-Based Reasoning in Problem Solving, in Proceedings of the Ninth International Joint Conference on Artificial Intelligence, pp. 284-290, 1985.

[Lebowitz, 1986] Lebowitz, M.
Not the Path to Perdition: The Utility of Similarity-Based Learning, Proceedings of the Fifth National Conference on Artificial Intelligence, pp. 533-537, 1986.

[Lehnert, 1986] Lehnert, W.G. Utilizing episodic memory for the integration of syntax and semantics, CPTM #15, Department of Computer and Information Science, University of Massachusetts, Amherst, MA, 1986.

[Lehnert, 1987] Lehnert, W.G. (in press) Learning to Integrate Syntax and Semantics, Machine Learning Workshop, 1987.

[Rissland and Ashley, 1986] Rissland, E. and Ashley, K. Hypotheticals as Heuristic Device, Proceedings of the Fifth National Conference on Artificial Intelligence, pp. 289-297, 1986.

[Stanfill and Waltz, 1986] Stanfill, C., and Waltz, D. Toward memory-based reasoning, Communications of the ACM, vol. 29, no. 12, pp. 1213-1228, 1986.
Forward-Chaining Logic Programming with the ATMS¹

Nicholas S. Flann, Thomas G. Dietterich, and Corpron
Department of Computer Science
Oregon State University
Corvallis, Oregon 97331

Abstract

Two powerful reasoning tools have recently appeared, logic programming and assumption-based truth maintenance systems (ATMS). An ATMS offers significant advantages to a problem solver: assumptions are easily managed and the search for solutions can be carried out in the most general context first and in any order. Logic programming allows us to program a problem solver declaratively: describe what the problem is, rather than describe how to solve the problem. However, we are currently limited when using an ATMS with our problem solvers, because we are forced to describe the problem in terms of a simple language of forward implications. In this paper we present a logic programming language, called FORLOG, that raises the level of programming the ATMS to that of a powerful logic programming language. FORLOG supports the use of "logical variables" and both forward and backward reasoning. FORLOG programs are compiled into a data-flow language (similar to the RETE network) that efficiently implements deKleer's consumer architecture. FORLOG has been implemented in Interlisp-D.

Truth maintenance systems provide several benefits to problem solving systems [deKleer, 1986a,b]. The ATMS allows the problem solver to store multiple, contradictory states during search. Each state corresponds to a context, a different set of assumptions made during the search. To provide maximal sharing between different contexts, a global database is employed with each fact assigned a unique ATMS-node that holds a label describing the set of contexts to which it belongs. The problem solver informs the ATMS of each inference. The ATMS caches the new derived fact and its justification in a global dependency structure. By caching all inferences performed, the ATMS can ensure that no inference is performed more than once. By maintaining updated labels on each fact, the ATMS ensures that facts are maximally shared among different contexts. By maintaining the dependency structure, dependency-directed backtracking can be supported.

One problem solver architecture that exploits these ATMS benefits is called the consumer architecture [deKleer, 1986c]. To characterize the kinds of computation the consumer architecture can perform, we can view it as directly executing the following logical language. In this language (referred to as the CA language), a problem is described as a set of well-formed formulas each of which must have one of the following forms:

$\forall x\; P_1(x) \supset P_2(x)$
$\forall x\; P_1(x) \equiv P_2(x)$
$L$

where $L$ is a ground atomic formula and each $P_i$ is a disjunction of the form $E_1 \vee E_2 \vee \cdots \vee E_n$. Each $E_j$ is a formula of the form $P_1 \wedge P_2 \wedge \cdots \wedge P_m$.

The basic formula is a universally quantified bi-conditional or implication. The left- and right-hand sides are both disjunctions of conjunctions (with nesting permitted to any depth). The third form, the ground atomic formula, is employed to state basic facts to the system. The main limitations of this language are that only universal quantification is permitted and values are restricted to constants (zero order terms).

To understand the consumer architecture, consider the example below:

[Figure 1: Consumer Architecture Example, panels (a)-(c)]

¹This work was supported in part by the National Science Foundation under grants DMC-8514949 and IST-8519926 and by a contract from Tektronix, Inc.
$\forall x\; A(x) \supset B(x)$   (1)
$\forall x\; B(x) \supset C(x)$   (2)
$A(1)$   (3)

The problem solver performs forward chaining. Whenever the antecedent pattern (e.g., $A(x)$) of any implication is satisfied by a conjunction of facts in the database (e.g., $A(1)$), the implication "fires." A job, called a consumer, is put on the problem solver's global agenda. Each consumer is assigned an ATMS-node that is justified by the ATMS-nodes of the facts that satisfied the antecedent pattern. Hence, consumers are already linked into the dependency structure even though the resulting assertion, $B(1)$, has not been entered into the database (see Figure 1(b)). The consumer contains the "detached" consequent of the implication ($B(1)$), formed by instantiating the right hand side of (1) with the bindings generated from the antecedents.

Problem solving proceeds by picking consumers off the agenda and running them. When a consumer is run, the new detachment is asserted into the database, possibly satisfying other implications and generating new consumers. When $B(1)$ is asserted, a new consumer is generated (through satisfaction of (2)) and put on the agenda, as illustrated in Figure 1(c).

The consumer architecture has many advantages. Inferences can be made in any order and in any context. No inferences are missed, and no inferences are done twice. This is possible because the consumer architecture solves the un-outing problem. This problem arises when a line of reasoning is interrupted, so that reasoning can proceed in a different context. The un-outing problem is the problem of resuming the reasoning in the previous context. DeKleer solves this problem by imposing three conventions on the problem solver:

• The problem solver must keep the ATMS fully informed. All information that the problem solver uses in making an inference (i.e., all the antecedent facts) must be reported to the ATMS as justifications for the resulting consumer.
• A clear distinction is drawn between the problem solver and the ATMS. The problem solver must generate all consumers irrespective of the status of their labels. Even if a consumer's support is currently believed to be contradictory, the consumer must be produced and put on the agenda. This ensures that the inference will be made if ever the supporting context is un-outed.
• The problem solver's state is explicit. A consumer, although it is part of the problem solver's state, is like a "virtual" fact. It has an ATMS-node and is linked into the dependency network, but has not been asserted, and hence has no consequences. By linking it into the dependency structure, the ATMS ensures that any changes to the consumer's underlying support will be reflected in the consumer's label.

The un-outing problem is elegantly solved. To move between contexts, we choose consumers from the agenda (by inspecting their labels), rather than choosing implications to fire.
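As a concrete illustration of this control regime, here is a small Python sketch of a consumer-style forward chainer. It is our own simplification, not deKleer's implementation: ATMS labels and contexts are omitted, implications fire by creating consumers that carry justifications, and the agenda can be drained in any order.

```python
from collections import deque

# Hypothetical miniature of the consumer architecture. Facts are strings;
# an implication maps one antecedent fact to one consequent fact.
implications = {"A(1)": "B(1)", "B(1)": "C(1)"}

database = set()
justifications = {}      # derived fact -> antecedent facts supporting it
agenda = deque()         # pending consumers: (assertion, justification)

def assert_fact(fact):
    """Enter a fact and create consumers for every implication it fires."""
    database.add(fact)
    if fact in implications:
        detached = implications[fact]
        # The consumer is justified by its antecedent *before* it runs.
        agenda.append((detached, (fact,)))

assert_fact("A(1)")
while agenda:                     # consumers may run in any order
    assertion, support = agenda.popleft()
    justifications[assertion] = support
    if assertion not in database:
        assert_fact(assertion)    # may spawn further consumers

print(database)              # {'A(1)', 'B(1)', 'C(1)'}
print(justifications)        # {'B(1)': ('A(1)',), 'C(1)': ('B(1)',)}
```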
The advantages afforded by the consumer architecture and the CA language are apparent. Unfortunately, there are significant limitations that currently restrict the usefulness of the ATMS. We identify the following related problems:

• Lack of expressiveness. The CA language limits variables to be universally quantified; we are unable to exploit the power of the existentially quantified variable. Values are limited to constants. The representational power of terms for capturing structured objects, such as trees and lists, is not available.
• Lack of reversibility. Both the CA language and consumer architecture offer no support for logical variables. This is because we are restricted to ground facts. To illustrate the usefulness of logical variables, consider the time-honored append program in Prolog [Clocksin and Mellish, 1984], [Sterling and Shapiro, 1986]. Without logical variables, we can only ask questions of the form append([1],[2,3],[1,2,3]). Logical variables stand for computed answers and allow us to "run" the program in many ways, as a generator of solutions rather than as just a test of solutions. For example, in Prolog we can solve append([1],[2,3],A) and get back that A = [1,2,3]. In fact, if we give Prolog append(X,Y,[1,2,3]) we get all four possible answers back as values for X and Y. Hence, by employing logical variables, we can exploit the denotational aspect of logic programs through reversible execution.
• Only forward reasoning is possible. The consumer architecture and ATMS are limited to forward reasoning only. This is because the ATMS relies on each derived fact having a justification, a set of antecedent facts. Hence, the direction of inference must be the same as the direction of logical support. In forward chaining, we reason from antecedents to consequences, but in backward chaining we go the other way, from consequences to antecedents. Hence, we have no justifications (until the problem solving is complete).

FORLOG raises the level of programming the consumer architecture to that of a powerful programming language, while retaining all the advantages afforded by the ATMS. This is achieved by significantly extending the CA language and the underlying consumer architecture to overcome the problems identified above. The CA language is generalized to include most of first order logic, including existentially-quantified variables and arbitrary term structures. The consumer architecture is correspondingly extended to support these new features. Logical variables are implemented through their logical equivalent, Skolem constants, while unification is replaced by equality reasoning. Finally, FORLOG supports backward reasoning by reformulating it into a form of forward reasoning that yields identical behavior (but different logical semantics).

A. The FORLOG Language

A FORLOG program is a set of well-formed formulas similar to the CA language, except that each $P_i$ is a disjunction of the form $F_1 \vee F_2 \vee \cdots \vee F_n$, where each $F_j$ is an existentially quantified formula of the form $\exists \bar{y}\; P_1 \wedge P_2 \wedge \cdots \wedge P_m$ or $G$. The basic formula allows both the left- and right-hand sides to be disjunctions of existentially-quantified conjunctions (with nesting permitted to any depth). The ground literal is replaced by $G$, a predicate that may contain constants, Skolem constants, and arbitrary term structures.

B. An Example

To clarify our description of FORLOG, we first give an example of FORLOG running the ubiquitous append program. Consider the following FORLOG program, which is derived from the Prolog append program:

$\forall x,y,z\;\; \mathit{append}(x,y,z) \supset (x = [\,] \wedge y = z) \vee (\exists x_l, x_r, z_r\;\; x = [x_l|x_r] \wedge z = [x_l|z_r] \wedge \mathit{append}(x_r,y,z_r))$   (4)

$\mathit{append}([1],[2,3],sk_1)$   (5)

The symbol $sk_1$ is a Skolem constant. From this program, FORLOG generates the following inferences.
By satisfying (4) with (5), FORLOG obtains

$([1] = [\,] \wedge [2,3] = sk_1) \vee (\exists x_l,x_r,z_r\;\; [1] = [x_l|x_r] \wedge sk_1 = [x_l|z_r] \wedge \mathit{append}(x_r,[2,3],z_r))$   (6)

The first branch of this disjunction is obviously false, because the list [1] is not nil. Hence, FORLOG can infer that the second branch is correct. Because the cons function is a constructor, we can infer from $[1] = [x_l|x_r]$ that $x_l = 1$ and $x_r = [\,]$, and we can substitute these values for all occurrences of the existential variables $x_l$ and $x_r$. Unfortunately, we cannot infer the value of $z_r$, but we can Skolemize it to be $sk_2$. This reasoning produces

$sk_1 = [1|sk_2]$   (7)
$\mathit{append}([\,],[2,3],sk_2)$   (8)

Formula (8) is, in a way, a recursive subgoal, and it can be applied to axiom (4) in the same way as the original assertion (5). When this is done, FORLOG obtains

$([\,] = [\,] \wedge [2,3] = sk_2) \vee (\exists x_l,x_r,z_r\;\; [\,] = [x_l|x_r] \wedge sk_2 = [x_l|z_r] \wedge \mathit{append}(x_r,[2,3],z_r))$   (9)

This time, the second branch of the disjunction can be ruled out, for it is impossible for $[x_l|x_r]$ to ever be equal to $[\,]$. In the first branch of the disjunction, $[\,] = [\,]$ tells us nothing, but the remaining fact

$[2,3] = sk_2$   (10)

completely determines the value of $sk_1$. Indeed, from assertions (7) and (10), FORLOG infers that

$sk_1 = [1,2,3]$   (11)

This example demonstrates that FORLOG can execute append in much the same way that Prolog does. However, where Prolog would use "logical variables," FORLOG uses Skolem constants, and where Prolog would use unification, FORLOG employs equality reasoning. Thus, we can do "backward chaining" in FORLOG but in such a way that the direction of inference is the same as the direction of logical support. Hence, we have satisfied one of the constraints of the consumer architecture.

We obtain the FORLOG append program directly from the Prolog program by first taking the completion [Clark, 1978]. The basic idea of the completion operation is to find all the known ways of proving a predicate P (i.e., all the clauses with append as head) and assert that these are the only ways to prove P. Once we have done this we can, without affecting the semantic content, change the implication (from clauses to head) into an equivalence ($\equiv$). The FORLOG program is simply the other half of this equivalence (compared with Prolog). Hence, through this method, we can reverse the implication and align it with the direction in which we wish to reason. For more details, see [Corpron et al., 1987] and [Corpron, 1987].

III. Implementing FORLOG

Before we describe how FORLOG is implemented, we review the requirements imposed on a problem solver by the consumer architecture:

1. If the antecedent pattern of any implication is satisfied by a set of facts in the global database, then the consequent must be detached.
2. Detaching a consequent must generate all the relevant consumers, each having a form suitable for satisfying other antecedent patterns and each justified by all the antecedent facts.

The following section describes how the FORLOG problem solver satisfies these requirements.

A. FORLOG and the Consumer Architecture

Running a FORLOG program entails applying a restricted theorem prover to the set of FORLOG expressions. The theorem prover basically applies modus ponens, detaching consequences when the antecedents are satisfied. In other words, we implement this theorem prover in the consumer architecture. In our description of FORLOG below, we emphasize both logical and implementation issues.
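To make the Skolem-constant treatment of the derivation above concrete, here is a hedged Python sketch. It hand-codes the append axiom's two branches over a tiny term representation; the class names and the branch-pruning test are our inventions, and real FORLOG performs this with consumers and ATMS justifications rather than direct recursion.

```python
import itertools

# Tiny term language: Python lists play the role of ground list terms,
# and Skolem objects stand for unknown subterms (FORLOG's sk1, sk2, ...).
_counter = itertools.count(1)

class Skolem:
    def __init__(self):
        self.n = next(_counter)
        self.value = None            # filled in by equality reasoning
    def __repr__(self):
        return f"sk{self.n}" if self.value is None else repr(self.value)

def solve_append(x, y, z):
    """Evaluate append(x, y, z) as in formulas (6)-(11): rule out the
    branch whose constructor equation cannot hold, then equate terms."""
    if x == []:                      # first branch: x = [] and y = z
        z.value = y                  # e.g. (10): [2,3] = sk2
    else:                            # second branch: x = [xl|xr]
        xl, xr = x[0], x[1:]
        zr = Skolem()                # Skolemize the unknown tail
        z.value = [xl, zr]           # e.g. (7): sk1 = [1|sk2]
        solve_append(xr, y, zr)      # e.g. (8): the recursive subgoal

def chase(t):
    """Read a term back out, following Skolem bindings (gives (11))."""
    if isinstance(t, Skolem):
        return chase(t.value)
    if isinstance(t, list) and len(t) == 2 and isinstance(t[1], (Skolem, list)):
        return [t[0]] + chase(t[1])  # a [head|tail] cons pair
    return t

sk1 = Skolem()
solve_append([1], [2, 3], sk1)
print(chase(sk1))                    # [1, 2, 3]
```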
First, consider point 1 above, the satisfaction of antecedents. In both FORLOG and the CA language, an antecedent pattern consists of a disjunction of conjunctions of predicates, nested to any depth. We need only consider a DNF form, since any deeply nested expression can always be "multiplied out" to DNF form. Each disjunct can easily be implemented as a separate conjunctive pattern in a RETE network [Forgy, 1982]. When we are dealing only with ground facts, the RETE implementation provides an efficient method of determining when patterns are satisfied. However, because FORLOG supports Skolem constants, this basic mechanism is insufficient. Consider a simple example:

$\forall x\; P(x) \wedge Q(x) \supset Z(x)$   (12)

If we know the facts $P(2)$ and $Q(2)$, we know that (12) is satisfied. But if we only know $P(3)$ and $Q(sk_1)$, we cannot directly determine if (12) is satisfied. If at a later time the problem solver determines that $sk_1 = 3$, then (12) is satisfied with support $sk_1 = 3$, $P(3)$, and $Q(sk_1)$. Hence, we must supplement the simple RETE implementation with an equality system whose responsibility is to determine whether antecedent patterns are satisfied through equality information such as $sk_1 = 3$.

In both FORLOG and the CA language, a detached consequent is always converted into a collection of atomic formulas before being asserted into the database. This policy serves to simplify the subsequent task of determining which implications are triggered by these assertions. In the CA language, this process is straightforward, since we only have universal quantification. Again we need only consider the DNF form. Each disjunct of the detached consequent will be a conjunction of ground atomic formulas since the literals will be completely instantiated with constants from the antecedent satisfaction. Each of these ground facts produces a single consumer. There is a complication when we have more than one disjunct, since we may not know which of the disjuncts are true. Consider the following example:

$\forall w\; \mathit{adult}(w) \supset \mathit{woman}(w) \vee \mathit{man}(w)$   (13)

If we know $\mathit{adult}(\mathit{Chris})$, we cannot determine which disjunct is true, so we must assume both are true. In other words, we apply the following axiom:

$a \supset b \vee c \;\;\vdash\;\; \{\, a \supset \mathrm{Choose}\{B,C\},\;\; a \wedge B \supset b,\;\; a \wedge C \supset c \,\}$   (14)

where upper case letters denote new assumptions and $\mathrm{Choose}\{B,C\}$ informs the ATMS that $B \vee C$ is true. Hence, we create a new assumption for each disjunct and, since the antecedent is satisfied, put two consumers on the agenda:

1) woman(Chris)   Justification: (A B)
2) man(Chris)   Justification: (A C)   (15)

where A is the ATMS-node of adult(Chris), B stands for "assume that Chris is a woman," and C stands for "assume that Chris is a man."

In FORLOG, detaching consequents is significantly more complex because of the use of existential variables. There is insufficient space to describe how all consequents in FORLOG are detached; we will simply follow the append example introduced in Section B (see [Dietterich, Corpron and Flann, 1987]).

When we detach the consequent of the append example (4), we first try to determine which of the disjuncts is true. To do this we apply the following axiom:

$a \vee b \;\vdash\; (\neg a \supset b) \wedge (\neg b \supset a)$   (16)

If we know one branch of a disjunction is false, the other must be true. By applying this axiom, FORLOG can avoid the unnecessary generate and test involved in applying (14).
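The following Python fragment sketches axiom (14) in the same miniature style as the earlier consumer loop: when neither disjunct can be ruled out, each one is asserted under a fresh assumption, and the ATMS is told (via a recorded Choose set) that at least one assumption holds. The names and structures here are ours, not deKleer's or FORLOG's.

```python
# Hypothetical rendering of axiom (14). Assumptions are fresh tokens;
# `choose_sets` records that at least one member of each set is true.

assumptions = []
choose_sets = []
agenda = []            # consumers: (assertion, justification)

def detach_disjunction(antecedent_node, disjuncts, refutes):
    """Detach b v c given antecedent support. `refutes` says when a
    disjunct is known false (axiom (16)); otherwise split with (14)."""
    live = [d for d in disjuncts if not refutes(d)]
    if len(live) == 1:
        # Axiom (16): the surviving branch needs no new assumption.
        agenda.append((live[0], (antecedent_node,)))
        return
    tokens = [f"assume:{d}" for d in live]   # one assumption per disjunct
    assumptions.extend(tokens)
    choose_sets.append(set(tokens))          # Choose{B, C, ...}
    for d, t in zip(live, tokens):
        agenda.append((d, (antecedent_node, t)))

# Example (13): adult(Chris) detaches woman(Chris) v man(Chris).
detach_disjunction("adult(Chris)",
                   ["woman(Chris)", "man(Chris)"],
                   refutes=lambda d: False)
print(agenda)   # two consumers, each justified by the antecedent
                # plus its own assumption, as in (15)
```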
In step (9) of the example, we determined that the first branch of the disjunction was true and formed the consumer given in step (10). In step (6) the second disjunct was ruled in. Here we first performed two unifications, one of which produced the consumer (7), and then instantiated the append form to obtain the consumer (8).

To perform these unifications and determine the disjunct, FORLOG applies the following axioms. If $f$ and $g$ are distinct constructors, then the following identity theory must hold:

$f(x_1,\ldots,x_n) \neq g(y_1,\ldots,y_m)$
$x_1 \neq y_1 \vee \cdots \vee x_n \neq y_n \;\supset\; f(x_1,\ldots,x_n) \neq f(y_1,\ldots,y_n)$
$f(x_1,\ldots,x_n) = f(y_1,\ldots,y_n) \;\supset\; x_1 = y_1 \wedge \cdots \wedge x_n = y_n$
$f(x) \neq x$, where $f(x)$ is any term in which $x$ is free   (17)

In other words, the entities denoted by two constructors are equal if and only if all of their components are equal (unifiable). A sketch of these axioms in code follows below.

In general, detaching a consequent in FORLOG involves a complex sequence of actions. In the append example, we first determine the branch. Then we may perform some unifications and, depending upon the unifications, create Skolem constants. Finally we produce consumers. Rather than determine which steps to perform each time we detach this consequent, we "compile" the append implication into a form that will automatically perform these actions each time we detach. This form is described in the next section.

B. The FORLOG Implementation

FORLOG implications are compiled into a data-flow-like language that has much in common with the RETE network [Forgy, 1982] and the Warren Abstract Machine (WAM) [Warren, 1983]. In general, an implication is compiled into a network. The roots are input patterns that are satisfied by facts in the data base, while the leaves specify consumers to be created. This is illustrated by the network constructed for append given in Figure 2.² Execution consists of passing jobs down the "wires" from top to bottom. Jobs get created at the top when input patterns are satisfied by the global database, and get "consumed" at the leaves when they are used to create consumers. The consumers created are then run by the consumer architecture and satisfy further input patterns, thus generating new jobs. Hence, FORLOG is implemented as a data flow machine whose network is "sliced." Each slice represents a global communication, via the database, between output and input nodes in the network.

[Figure 2: Data-flow implementation of FORLOG append]

²Universal variables are denoted by $ prefixes, and existential variables are denoted by # prefixes. The basic constructor in FORLOG is the function CONS. Finally, # by itself denotes anything.
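Here is a hedged Python sketch of the identity theory (17), operating over toy terms compatible with the earlier append sketch: distinct constructors never unify, equal constructors force component-wise equations, and an occurs check enforces f(x) ≠ x. The representation and function names are our assumptions.

```python
# Toy terms: ("cons", head, tail) tuples, "nil", ints, and Skolem names
# written as strings like "sk1". Equations are returned rather than
# asserted to an ATMS.

def is_skolem(t):
    return isinstance(t, str) and t.startswith("sk")

def occurs(s, t):
    """Occurs check supporting f(x) != x: does Skolem s appear in t?"""
    return t == s or (isinstance(t, tuple) and any(occurs(s, u) for u in t[1:]))

def solve_eq(a, b):
    """Return the component equations implied by a = b, or None if the
    equation is refutable under the identity theory (17)."""
    if a == b:
        return []                                   # trivially satisfied
    if is_skolem(a):
        return None if occurs(a, b) else [(a, b)]   # f(x) != x
    if is_skolem(b):
        return None if occurs(b, a) else [(b, a)]
    if isinstance(a, tuple) and isinstance(b, tuple):
        if a[0] != b[0] or len(a) != len(b):
            return None                             # distinct constructors
        eqs = []                                    # same constructor:
        for x, y in zip(a[1:], b[1:]):              # equate components
            sub = solve_eq(x, y)
            if sub is None:
                return None
            eqs.extend(sub)
        return eqs
    return None                                     # e.g. 1 = nil fails

# [1] = [xl|xr] with xl, xr Skolemized, as in formula (6):
print(solve_eq(("cons", 1, "nil"), ("cons", "sk_xl", "sk_xr")))
# -> [('sk_xl', 1), ('sk_xr', 'nil')]
```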
When the job reaches the leaves, we create a consumer that asserts the equality between the binding on SY and the binding on SZ, corresponding to step (10) above. To demonstrate how unifications are compiled and run, we follow the first append goal assertion (5) : (APPEND (CONS 1 NIL) (CONS 2 (CONS 3 NIL)) SKl). In this case, the left-most Oneof node is satisfied through (OR (NOT (= (CONS 1 NIL) NIL)) (NOT (= (CONS 2 (CONS 3 NIL)) SKl))), (20) and the data-flow job passes down the network to the left. Recall from the definition of append (4) that the second disjunct involves unification of both $X and $Z. Each uni- fication can be used in two ways: in read mode or in write mode [Warren 19831. Read mode corresponds to unifying the pattern with existing structure, for example, in the first unify node SX. (= (CONS 1 NIL) (CONS #Xl-l #XR-2)). Write mode corresponds to constructing new structure, for example, in the second unify node $Z, (= SK1 (CONS 1 #ZR-3). The compiler enumerates the possible combinations and constructs the decision tree illustrated in Figure 2. In this case, the job passes down the decision tree to the Skolemize #XR-2 node, where the new Skolem constant is generated and added to the bindings of the data-flow job. Finally the job is propagated to both the leaves, leading to the consumers given in (7) and (8). Note how the network is shared between the different detachments. To ensure detachments are separated from each other, the New Detachment node generates a unique name that is associated with each data-flow job that passes through. This example does not demonstrate how FORLOG handles the case when we are unable to determine the dis- junct. For example, if we assert (APPEND SK3 SK4 (CONS 1 NIL)) (21) and evaluate the instantiated expressions of the Oneof nodes, we are unable to determine directly which disjunct is true. There is not enough information available to com- pletely evaluate these equalities. FORLOG therefore par- tially evaluates them and sets up “listeners” that w.ill trig- ger when the evaluation can proceed further. In other words, we are implementing the equality axiom of substi- tuting equals for equals by rewriting the patterns rather than rewriting the facts. For example, consider the first disjunct of the right-most oneof node following (21): (NOT (= SK3 (CONS # #))). The partial evaluator translates this (through the use of the equality axiom) into: (AND (= SK3 $1) (NOT (= $1 (CONS # #))I). The first clause acts as a “1isteneP for any object that is asserted equal to SK3. while the second clause tests whether the object is not a constructor. This process is implemented by simply extending the data-flow network at run time. The listener becomes a new input pattern that connects to the right-most oneof node. If we later determine a value for SK3, (= SK3 NIL) (22) The listener is satisfied, and a job created and passed down 28 Al Architectures the network, eventually producing the following consumer: Assertion (= SK4 (CONS I NIL)) Justification (CD), (23) where C is the ATMS-node of (21) and D is the ATMS- node of (22). This value of SK3 ((22) above) could come directly from an assertion or from equality reasoning. For exam- ple, the system may first determine that another Skolem constant, say SKlO. is equal to NIL. then determine that SK3 equals SKIO. Hence, to ensure that point 1 in Section III. is satisfied, the supporting equality system needs to apply the reflexive and transitive laws of equality. 
The problem may be so unconstrained that values for SK3 and SK4 are not available. In this case, FORLOG enumerates both branches by applying the method pre- sented in (14). H ence, FORLOG can exploit the ATMS to explore alternative solutions in any order. . FORLOG is a forward chaining logic programming system that combines the flexible search and assumption handling of the ATMS with the clean denotational semantics, ex- pressive power, and reversibility of logic programming. This was achieved by extending deKleer’s simple CA language to include much of full, first order logic and by augmenting the problem solving architecture so that Skolem constants can be supported. FORLOG is completely implemented and running in We&p-D. Problems solved include the bulk of the Pro- log examples given in the early chapters of [Sterling and Shapiro, 19861 and some simple constraint propagation problems such as those in [Steele, 19801. In addition, a small expert system for designing cloning expriments has been constructed in FORLOG. In this paper we have used a rather pathological, but well known example-append. FORLOG is currently be- ing applied to solving more complex problems including mechanical design problems. In this domain, FORLOG’s ability to both manipulate partial designs (through the use of Skolem constants) and to reason simultaneously with multiple contradictory designs, gives it a significant ad- vantage over other logic programming systems [Dietterich and Ullman, 19871. nowledgme This research was supported in part by the National Sci- ence Foundation under grants DMC-8514949 and IST- 8519926 and by a contract from Tektronix, Inc. We thank both Cohn Gerety, who implemented the ATMS, and the students of the spring 1986 Non-monotonic Reasoning class who used FORLOG for their projects. We also thank Car- oline Koff and Ritchey Ruff for their valuable comments on earlier drafts of this paper. eferenees [Clark 19781 N g t’ e a ion as Failure. In Logic and Databases, H. Gallaire and J. Minker Eds. Plenum Press, New York, pp. 293-322. [Clocksin, and Mellish, 19841 Programming in Prolog. Berlin: Springer-Verlag. [ Corpron, 19871 Disjunctions in Forward-Chaining Logi Programming. Rep. No. 87-30-l. Computer Science Department, Oregon State University. [Corpron, Dietterich, and Flann, 19871 Forthcoming. View- ing Forward Chaining as Backward Chaining. [deKleer, 1986a] An Assumption-based TMS. Artificial Intelligence, 28 (2) pp. 127-162. [deKleer, 1986b] Extending the ATMS. Artificial Intel& gence, 28 (2) pp. 163-196. [deKleer, 1986c] Problem-solving with the ATMS. Arti- ficial Intelligence, 28 (2) pp. 197-224. [Dietterich and Ullman, 19871 FORLOG: A Logic-based Architecture for Design. Rep. No. 86-30-8. Computer Science Department, Oregon State University. [Dietterich, Corpron and Flann, 19871 Forthcoming. For- ward chaining Logic Programming in FORLOG. [Forgy, 19821 RETE: A Fast Algorithm for the Many Pat- tern/Many Object Pattern Match Problem. Artificial Intelligence, 19. [Steele, 19801 The definition and implementation of a computer programming language based on con- straints. Doctoral dissertation. Massachusetts Insti- tute of Technology. [Sterling, and Shapiro, 19861 The Art of Prolog, MIT Press, Cambridge, Mass. [Warren, 19831 An Abstract Prolog Instruction Set. Technical Note 309, SRI International, Stanford, Oc- tober 1983. Flann, Dietterich, and Corpron 29
MATERIAL HANDLING: A CONSERVATIVE DOMAIN FOR NEURAL CONNECTIVITY AND PROPAGATION

H. Van Dyke Parunak, James Kindrick, Bruce Irish
Industrial Technology Institute
P.O. Box 1485
Ann Arbor, MI 48106
(313) 769-4800

Abstract: Two important components of connectionist models are the connectivity between units and the propagation rule for mapping outputs of units to inputs of units. The biological domains where these models are usually applied are nonconservative, in that a single output signal produced by one unit can become the input to zero, one, or many subsequent units. The connectivity matrices and propagation rules common in these domains reflect this nonconservatism in both learning and performance. CASCADE is a connectionist system for performing material handling in a discrete parts manufacturing environment. We have described elsewhere the architecture and implementation of CASCADE [PARU86a] and its formal correspondence [PARU86c], [PARU87a] with the PDP model [RUME86]. The signals that CASCADE passes between units correspond to discrete physical objects, and thus must obey certain conservation laws not observed by conventional neural architectures. This paper briefly reviews the problem domain and the connectionist structure of CASCADE, describes CASCADE's scheme for maintaining connectivity information and propagating signals, and reports some experiments with the system.

1. The Domain of Material Handling

Primitive factory operators fall into two classes, Material Handling and Processing. CASCADE formalizes and extends previous approaches to Material Handling in the context of a connectionist architecture.

1.1. Manufacturing = Processing + Material Handling

Algebraically, a factory making discrete goods applies a series of state-changing operators to inventory. Some state components, such as shape, hardness, and color, reflect the part's specification, and make up its functional state. Other components of state, such as a part's location in the plant or the length of time it has been there, are irrelevant to its function, and make up its non-functional state. For each state component, there is a primitive operator that changes only that component. Most processing machines implement complex operators that correspond to the composition of several primitive operators. For example, a painting robot changes a part's color, and also its size, its weight, and (because of the time consumed by the operation) its age.

"Processing" is the set of all primitive operators that change a part's functional state. "Material Handling" is the set of all primitive operators that change a part's non-functional state. Material handling thus includes the traditional functions of moving material between workstations (changing its location), and storing it in a warehouse (changing its age). It also includes transportation and aging that occur as components of a complex operator, such as drying paint by moving a part through a heated tunnel. Thus material handling occurs in almost every machine in a plant, as parts and requests for parts move back and forth.

This model of manufacturing represents only one view of a complex enterprise ([PARU87b], [FOX83]), but an important view economically. If material does not reach machines fast enough to keep them busy, productivity suffers, but if excessive work-in-process inventory (WIP) accumulates, carrying costs, response time to customer orders, and scrap due to engineering changes all increase. Monitoring and controlling WIP is not managed well by raw human intelligence, and represents a major locus of interest in the manufacturing community.

1.2. Previous Work

CASCADE is inspired by the Japanese KANBAN system of inventory flow control [HALL81], [SCHO82], [SUGI77]. In KANBAN, a set of modular workstations request parts from one another by passing tickets back and forth. The factory relaxes into a steady state of production determined by the performance of the workstations and the number of tickets in circulation. KANBAN performs well for a factory with a stable production schedule, where the flow of parts remains in a steady state for a long period of time. Its behavior rapidly deteriorates, though, when the loading and product mix of the shop change frequently.

KANBAN resembles a neural net model in which machines correspond to neurons, transport links correspond to connections, and parts and requests correspond to neural impulses. CASCADE formalizes the connectionism implicit in KANBAN, and extends it to support the changeability of a flexible manufacturing environment.

2. A Model for Material Handling

We generally follow the Parallel Distributed Processing (PDP) model [RUME86]. [PARU86c] and [PARU87a] relate CASCADE to PDP formally.

2.1. Objects in CASCADE

CASCADE manipulates three basic kinds of objects. A container represents a parcel of material that can be moved and stored as an entity. Containers are strongly typed on the basis of the functional state of their contents, and two containers of the same type are interchangeable as far as the manufacturing process is concerned.
Monitoring and controlling WIP is not managed well by raw human intelligence, and represents a major locus of interest in the manufacturing community. 1.2. Previous Work CASCADE is inspired by the Japanese KANBAN system of inventory flow control [HALL81], [SCH082], [SUGI77]. In KANBAN, a set of modular workstations request parts from one another by passing tickets back and forth. The factory relaxes into a steady state of production determined by the performance of the workstations and the number of tickets in circulation. WAN performs well for a factory with a stable production schedule, where the flow of parts remains in a steady state for a long period of time. Its behavior rapidly deteriorates, though, when the loading and product mix of the shop change frequently. KANBAN resembles a neural net model in which machines correspond to neurons, transport links correspond to connections, and parts and requests correspond to neural impulses. CASCADE formalizes the connectionism implicit in WAN, and extends it to support the changeability of a flexible manufacturing environment. 2. A Model for Material Handling We generally follow the Parallel Distributed F’rocessing (PDP) model [RUME86]. [PARU86c], [PARU87a] relate CASCADE to PDP formally. 2.1. Objects in CASCADE CASCADE manipulates three basic kinds of objects. A container represents a parcel of material that can be moved and stored as an entity. Containers are strongly typed on the basis of the functional state of their contents, and two containers of the same type are interchangeable in the manufacturing process is concerned. Parunak, Kindrick, and Irish 307 From: AAAI-87 Proceedings. Copyright ©1987, AAAI (www.aaai.org). All rights reserved. A unit is a collection of containers that are geographically and functionally related. There are two types of units, corresponding to two types of relations between containers. 1. If all the containers in a unit have the same type, the unit is a TOMP. No functional processing can take place in such a unit, otherwise incoming containers would differ in type from outgoing ones. 2. If the containers in a unit are participants in the same functional operation, the unit is a process. A process consumes containers of zero or more types and produces containers of zero or more types. It differs from a TOMP in two ways. Its behavior is stochastic rather than deterministic (at least from the perspective of the material handling system), and its output can be of a different type than its input. 3. Connectivity and Propagation in CASCADE One motive for developing the parallel between material handling and connectionist models is the adaptive nature of such models. These models learn by modifying the weights in their connectivity matrices. Weights can be modified locally, on the basis of the experience of an individual unit, without invoking a global “learning module” that knows the state of the entire system. Such a local learning scheme offers great promise in coping with the complexity of a large material handling network in a flexible manufacturing environment, where part types, quantities, and distribution are continually changing. This section develops the notion of conservation in propagation rules, describes CASCADE’s implementation of such rules, and reports some simple experiments with the system. Every container belongs to exactly one unit at a time, and moves from unit to unit as the system operates. An aggregate is a collection of units that are in the same geographical area. 
There are two types of aggregates, corresponding to the two types of units. 1. A mover is a collection of TOMP's, and models a single geographically limited material handling module, such as a conveyor loop, a zone of an AGV (automatic guided vehicle) system, or an ASRS (automatic storage and retrieval system). 2. A workstation is a collection of processes that run on a single geographically local and functionally integrated set of machines (typically, a machine tool with associated transfer and inspection mechanisms). Every unit belongs to exactly one aggregate, and retains this association unless the system is reconfigured.

2.2. Messages Among Units

Each TOMP has a maximum and a minimum capacity for containers of its type, and is connected to one or more other units on adjacent aggregates. If its population (the number of containers it contains) exceeds its maximum capacity, it seeks to spill the excess to a neighboring unit. If its population falls below its minimum capacity, it seeks to fill up to the minimum from its neighbors. Thus the units form a network through which containers and requests for containers propagate. This behavior results in a mechanism that is a superset of KANBAN. We can set capacities to make TOMP's behave like links in a traditional KANBAN system [PARU86a]. However, because we can push as well as pull material through a CASCADE net, we can distribute material ahead of time to anticipate changes in production requirements, and thus avoid KANBAN's problems when production is not steady-state. The requests and acquisitions that result from fills and spills are local messages, allowing one TOMP to propagate its constraints to its nearest neighbors. The TOMP's correspond to neurons in a neural net, while requests and containers correspond to inter-neural impulses and capacities correspond to neural thresholds. Experiments with CASCADE [PARU86c] show that it does control WIP levels and reduces waiting time for parts at machines.

3. Connectivity and Propagation in CASCADE

One motive for developing the parallel between material handling and connectionist models is the adaptive nature of such models. These models learn by modifying the weights in their connectivity matrices. Weights can be modified locally, on the basis of the experience of an individual unit, without invoking a global "learning module" that knows the state of the entire system. Such a local learning scheme offers great promise in coping with the complexity of a large material handling network in a flexible manufacturing environment, where part types, quantities, and distribution are continually changing. This section develops the notion of conservation in propagation rules, describes CASCADE's implementation of such rules, and reports some simple experiments with the system.

3.1. Conservation in Propagation Rules

In the PDP model, signals travel from unit to unit through connections. A propagation rule determines whether the output from one unit reaches the input of another. In the simplest propagation rules, the vector that represents the inputs to each unit is computed as the product of the output vector and a connectivity matrix, and a single output can contribute to many inputs, or to none. That is, neural propagation does not conserve impulses. It can effectively multiply a single impulse by routing it to many inputs, or destroy an impulse by not passing it anywhere. A system can differ from this standard model in two ways. 1. It might require quantitative conservation of signals, so that the total output strength at one time step equals the total input strength at the next. This constraint can be implemented by normalizing each column in the connectivity matrix so that total output equals total input. 2. It might require qualitative conservation, in which signals are discrete packets that must be propagated intact. This constraint can be implemented by interpreting the weights, not as shares into which signals are divided, but as the probability that a packet from one unit arrives at another. CASCADE exhibits both quantitative and qualitative conservation. If a container leaves one unit, the same container must arrive at precisely one unit.
Thus the weights in each column of the connectivity matrix for containers in CASCADE sum to one, and each is interpreted as the probability that a container from the source will go to the target. Requests are not subject to the same physical constraints that containers are. In our system, though, each request results in the delivery of a container. If propagation multiplies requests, the system as a whole will send many more containers than needed toward the node that initiated the request, and the rest of the network will starve for that type of container. Thus, we require propagation to conserve requests as well as containers.

3.2. Implementing Connectivity and Propagation

One can model connectivity in CASCADE as weight matrices interpreted probabilistically, as outlined above. Containers and requests have distinct matrices, reflecting different connectivities. In our implementation, each unit stores its column of each of the matrices, $\bar{c}$ for container connectivity and $\bar{r}$ for request connectivity. $c_i$ is the probability that the next container spilled from this unit will be sent to unit $i$, and $r_i$ is the probability that the next request for a container will be sent to unit $i$. The total number of units is $n$, and the probability interpretation requires $\sum_{i=0}^{n-1} c_i = \sum_{i=0}^{n-1} r_i = 1$. Entries in $\bar{c}$ and $\bar{r}$ can be nonzero only if physical connections exist between the associated aggregates, and in most installations these connections do not vary dynamically, so in practice these vectors are sparse and only their nonzero elements need be manipulated. These vectors are the locus both of propagation decisions and of monitoring changes in the environment.

3.2.1. Propagation Decisions

When a unit is ready to spill or fill, it generates a random number $0 < r \le 1$, and uses it to select an element from the appropriate vector. This element then becomes the target of the request or container being output. For instance, a spilling unit sends its excess container to unit $j$ such that $\sum_{i=0}^{j-1} c_i < r \le \sum_{i=0}^{j} c_i$.

3.2.2. Modifying the Vectors

Some principles for managing the connectivity vectors in a unit are the same for both container and request connectivities, and apply in general to any connectionist architecture that requires a conservative propagation rule. o Bayesians will set the initial values in the connectivity vectors for a unit on the basis of their a prioris. Others will probably set them to $1/n$, dividing the probability equally among them. o As the system learns, certain weights change, and the remaining values in the vector must be adjusted in the opposite direction to keep the total at 1. One computationally simple strategy is to adjust the desired entry by $a$, then divide every element in the vector by $1+a$. o As individual weights approach zero, the associated units are for all practical purposes disconnected from the sending unit. Once "forgotten" in this way, they will never be selected. It is easy to show that if the desired containers exist in the system, allowing connection weights to attain zero value insures that requests will be satisfied in bounded time. In applications where this irrevocable forgetting is undesirable, we set maximum and minimum limits beyond which a unit's weight is not adjusted. The actual adjustments made to individual weights are determined differently for containers and requests. The protocols outlined here reflect the semantics of our application, and may be different for a different application. A sketch of the selection and renormalization machinery appears below.
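To make these conventions concrete, here is a minimal Python sketch of a unit that holds its own probability vectors, samples a propagation target from them, and renormalizes after an adjustment. It is our reconstruction for illustration only: the class and method names are invented and do not come from CASCADE, and renormalizing by the new column total reduces to the divide-by-(1 + a) shortcut above whenever no entry is clipped at zero.

```python
import random

class Unit:
    """Hypothetical sketch of a TOMP's connectivity bookkeeping.

    Each unit stores its own column of the container matrix (c) and the
    request matrix (r) as dicts mapping neighbor -> probability, so that
    the entries of each vector sum to one.
    """

    def __init__(self, neighbors):
        p = 1.0 / len(neighbors)            # non-Bayesian default: 1/n each
        self.c = {u: p for u in neighbors}  # container (spill) weights
        self.r = {u: p for u in neighbors}  # request (fill) weights

    def choose_target(self, weights):
        """Select exactly one neighbor, preserving qualitative conservation:
        weights are probabilities of receiving the packet, not shares of it."""
        x = random.random()
        acc = 0.0
        for unit, w in weights.items():
            acc += w
            if x < acc:
                return unit
        return unit  # guard against floating-point rounding at the boundary

    def adjust(self, weights, target, delta):
        """Shift probability toward (or away from) one neighbor, then
        renormalize so the vector still sums to one."""
        weights[target] = max(0.0, weights[target] + delta)
        total = sum(weights.values())       # assumed > 0 in this sketch
        for u in weights:
            weights[u] /= total
```

For example, t = unit.choose_target(unit.c) picks the recipient of the next spilled container, and unit.adjust(unit.c, t, -0.05) then shifts probability away from the freshly supplied neighbor while keeping the column sum at one.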
The container vector $\bar{c}$ for a unit records the probability of selecting each of that unit's neighbors as the recipient of a spilled container. The weights reflect our judgment that a given neighbor desires containers. Two events affect this judgment. 1. When a neighbor requests a container, we know that it has some interest in receiving containers. So when a unit receives a request, it augments the container weight for the requesting neighbor (say, by R). 2. When one unit spills a container to another, the receiving neighbor is less likely to need one than it was before the spill. So its weight in the sending unit should be decremented (say, by S). The sizes of adjustments for receiving a request or delivering a spill are tuning parameters. Their ratio R/S reflects the impact of a single request. If this ratio is greater than one, the spilling unit interprets a single request as expressing a relatively long-term interest in receiving containers. If it is less than one, the spilling unit attaches much less significance to the long-term implications of a single request. The average value of R and S reflects how quickly the spilling unit shifts its attention to a requesting unit.

The request vector $\bar{r}$ for a unit records the probability of selecting each of that unit's neighbors as the recipient of a request issued by the unit. The weights reflect our judgment that a given neighbor has a container in stock, or has access to a container from one of its neighbors. We modify these weights on the basis of the neighbors' cost of effort (COE) in filling past requests. In the present implementation, the COE is the number of units that were searched to find a container. Each time a neighbor successfully satisfies a request, we augment its weight (in the current implementation, by U/COE, where U is a constant). Each time a neighbor fails to satisfy a request, we decrement its weight (in our implementation, by a constant V). Again, the average value of U and V determines the stability and rate of learning of the system, while their ratio controls whether success or failure has more impact on the learned behavior.

This back-propagation error-correction algorithm is similar to that used in the perceptron convergence theorem [ROSE62]. It differs from the classical procedure in two main ways. 1. Traditional systems distinguish the learning and performance phases. In the learning phase, the system converges to a stable set of connection weights that produce a desired output pattern by modifying those weights using back-propagation. During the performance phase, weights are not modified and the network is no longer adaptive. Our problem domain requires continual adaptive behavior, merging the two phases by modifying connection weights during performance. This strategy is necessary since the desired output pattern is not fixed, but continually varies as containers move through the system. 2. The classical approach modifies connection weights in proportion to the magnitude of error, the difference between output produced and desired output. In our system the desired output pattern is variable, so the traditional notion of error is not well defined. The same activation level may be an error during one trial and correct during another. Therefore we apply a constant negative reinforcement on error. Our system propagates back the cost of achieving a success rather than the degree of failure. The magnitude of connection weight adjustment is proportional to this COE of success. The two update protocols are sketched below.
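Continuing the sketch above, the two update protocols might be wired to unit events as follows. The parameter values are arbitrary illustrations, and the function names are again ours rather than CASCADE's.

```python
# Tuning parameters; these particular values are invented for illustration.
R, S = 0.10, 0.05   # container weights: bump on request, drop after a spill
U, V = 0.10, 0.01   # request weights: success reward scale, failure penalty

def on_request_received(unit, requester):
    # A request signals the requester's interest in receiving containers.
    unit.adjust(unit.c, requester, +R)

def on_container_spilled(unit, receiver):
    # A neighbor that just received a container is less likely to need one.
    unit.adjust(unit.c, receiver, -S)

def on_request_outcome(unit, neighbor, success, coe=1):
    # Success is rewarded in proportion to U/COE; failure costs a constant V.
    unit.adjust(unit.r, neighbor, U / coe if success else -V)
```

With these values R/S = 2, so a single request is treated as a relatively long-term interest in receiving containers, matching the interpretation of the ratio given above.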
The depth to which search proceeds before reporting failure, and the number of trials that a unit makes before reporting failure, are parameters of the model. In the implementation described here, the search proceeds depth-first, and each unit tries only one neighbor before reporting success or failure.

3.3. Experimental Results

Figure 1 shows the connectivity of some of the units (TOMP's and processes) in ITI's Advanced Manufacturing Center, a working CIM cell in which CASCADE is implemented. These particular units manipulate empty boxes into which processes p07 and p08 pack finished parts. To demonstrate the system, we load TOMP t26 (an ASRS) with 15 empty boxes, and begin retrieving from p07 and p08. When the 15 boxes from t26 are gone, we load 15 more at p04 (a process that loads inventory at a manual workstation). For each retrieval we record the cost of effort (COE), the number of units that CASCADE searched to find the requested part. The minimum COE possible is 6 from t26 and 5 from p04.

Figure 1: "Empty Box" Units

As a control run, we assign probability 1/n to each connection in each TOMP, and do not vary these weights as the system operates. Thus later retrievals do not learn from the success or failure of earlier ones. Figure 2 shows the COE as a function of retrieval. The variability results solely from the stochastic search process. The median COE to retrieve a box is 22.5, the inter-quartile spread is 38.5, and the standard deviation is 28.9.

Figure 2: Retrievals Without Learning

Figure 3 shows COE as a function of retrieval when the weights are allowed to vary in response to success and failure. For this trial, we augment the weight of a successful neighbor by 0.1/COE, and decrement the weight of an unsuccessful one by 0.01. The COE drops as the system learns to go to t26 for boxes. Limited exploration of other units continues at trials 11, 13, and 14, even after focusing on t26. The COE rises when the boxes run out at t26, then drops as the system discovers the new source at p04. Over the entire run, the median COE is 11, the inter-quartile spread is 13, and the standard deviation is 16.

Figure 3: Retrievals With Learning

Figures 4 and 5 show how the probability assigned to each neighbor of TOMP's t12 and t05, respectively, changes during the 30 retrievals described in Figure 3. These TOMP's are the major decision points in the system, and the weights of the connections between them and their neighbors are the main locus of learning in this experiment. For each retrieval, the vertical space between the x axis and the line at y = 1.00 represents unit probability, and is divided into as many bands as the TOMP under consideration has neighbors. The relative width of each band shows the weight of the connection to the associated neighbor.

Figure 4: Connection Weights from TOMP t12

For instance, Figure 4 shows three horizontal bands, one each for t54, t05, and t61. In our experiment, requests all enter from t54, so it never succeeds or fails, and its probability remains constant. The weight of the connection to t05 increases through normalization when t61 fails, and increases through augmentation when it succeeds, so it increases monotonically. Similarly, the weight of the connection to t61 falls monotonically.
Since t05 is on the route to both sources of empty boxes, it grows both before and after the switch from t26 to p04.

Figure 5: Connection Weights from TOMP t05

Figure 5 tells a similar story. Now the weight of t12 remains constant, since it does not participate in the competition. As long as t26 has boxes, t19 (which leads to it) gains weight. When we switch to p04, t19 rapidly loses weight and t40 (now on the correct route) becomes prominent.

4. Summary

CASCADE uses a connectionist model to manage the distribution and movement of inventory in a discrete manufacturing environment. It differs from traditional neural models in conserving its signals as they propagate between units. We have described a scheme that adjusts the connectivity of such a network dynamically and propagates signals with the required conservation. Ongoing research is probing several directions. o This system modifies weights as it uses them, and so merges learning and performance. Because of the conservation characteristics of the application, propagation of requests and containers is strongly serial, and a failed request is an expensive way to learn. Under some circumstances, it may be advantageous to add a separate parallel search of the network to set weights from time to time. In the multiprocessor environments for which CASCADE is intended, such a phase reduces the cost of futile requests. o A flexible manufacturing environment favors integration of learning and performance for material handling because the distribution of supply and demand for inventory varies continuously. Probably, though, certain distribution states recur periodically, due to repeated runs of the same parts and customer order cycles. We can store the weights periodically as a function of an estimator of system state, and then retrieve them to shorten the adaptation time. o Units in CASCADE can learn by modifying their thresholds as well as their connectivity.

Some of the original ideas in CASCADE originated in discussions with Bob Judd. The work described in this report was financed by a grant from the Kellogg Foundation.

References

[FOX83] Fox, M., 1983. "Constraint-Directed Search: A Case Study of Job-Shop Scheduling." Carnegie-Mellon University: Robotics Institute CMU-RI-TR-83-22; Computer Science Department CMU-CS-83-161.
[FOX85] Fox, B.R.; and K.G. Kempf, 1985. "Complexity, Uncertainty, and Opportunistic Scheduling." Second IEEE Conference on Artificial Intelligence Applications, Miami, FL, 487-492.
[HALL81] Hall, R.W., 1981. Driving the Productivity Machine. American Production and Inventory Control Society.
[PARU85a] Parunak, H.V.D.; B.W. Irish; J. Kindrick; and P.W. Lozo, 1985. "Fractal Actors for Distributed Manufacturing Control." Proceedings of the Second IEEE Conference on AI Applications.
[PARU86a] Parunak, H.V.D.; P.W. Lozo; R. Judd; B.W. Irish; J. Kindrick, 1986. "A Distributed Heuristic Strategy for Material Transportation." Proceedings of the 1986 Conference on Intelligent Systems and Machines, Oakland University, Rochester, MI.
[PARU86b] Parunak, H.V.D.; J.F. White; P.W. Lozo; R. Judd; B.W. Irish; J. Kindrick, 1986. "An Architecture for Heuristic Factory Control." Proceedings of the 1986 American Control Conference.
[PARU86c] Parunak, H.V.D., and James Kindrick, "A Connectionist Model for Material Handling." Presented at the Seventh DAI Workshop, Gloucester, MA, Oct. 1986.
[PARU87a] Parunak, H.V.D., James Kindrick, and Bruce W. Irish, "A Manufacturing Application of a Neural Model." Submitted to IJCAI-87.
[PARU87b] Parunak, H.V.D., and John F. White, "A Framework for Comparing CIM Reference Models." Spring 1987 Meeting, International Purdue Workshop on Industrial Computer Systems, April, 1987.
[REES86] Rees, L.P.; and P.R. Philipoom, 1986. "Dynamically Adjusting the Number of Kanbans in a Just-In-Time Production System." 1986 S.E. AIDS (prepublication draft).
[ROSE62] Rosenblatt, F., 1962. Principles of Neurodynamics. New York: Spartan.
[RUME86] Rumelhart, D.E.; G.E. Hinton; and J.L. McClelland, 1986. "A General Framework for Parallel Distributed Processing." In Rumelhart and McClelland, eds., Parallel Distributed Processing, Cambridge: MIT Press, Vol. 1, 45-76.
[SCHO82] Schonberger, R.J., 1982. Japanese Manufacturing Techniques. New York: The Free Press.
[SUGI77] Sugimori, Y.; K. Kusunoki; F. Cho; and S. Uchikawa, 1977. "Toyota production system and Kanban system: Materialization of just-in-time and respect-for-human system." International Journal of Production Research 15:6, 553-564.
1987
50
644
AQUA: Asking Questions and Understanding Answers
Ashwin Ram
Yale University Department of Computer Science
New Haven, CT 06520-2158

Abstract

Story understanding programs are often designed to answer questions to demonstrate that they have adequately understood a story (e.g., [Leh78]). In contrast, we claim that asking questions is central to understanding. Reading a story involves the generation of questions, which in turn focus the understander on the relevant aspects of the story as it reads further. We are interested in the kinds of questions that people ask as they read. In this paper, we talk about the origin of these questions in the basic cycle of understanding, and their effect on processing. We present an understanding algorithm based on our theory of questions, which we have implemented in a computer program called AQUA (Asking Questions and Understanding Answers).

This research was supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under contract N00014-85-K-0108.

I. Question-driven understanding

"The students seemed to understand the lecture - at least they were asking the right questions." - A teacher.

When we read a story, we are constantly trying to relate the events in the story to what we already know. We build motivational and causal explanations for the events in the story in order to understand why the characters acted as they did, or why certain events occurred or did not occur. The central claim of this paper is that an understander asks questions in order to understand the story, build explanations for it, and integrate it into memory. The depth of understanding that the understander achieves depends on the questions that it asks. Consider, for example, the following excerpt from a rather unusual story which appeared on the front page of the New York Times a couple of years ago:

Boy Says Lebanese Recruited Him as Car Bomber. New York Times, Sunday, April 14, 1985. JERUSALEM, April 13 - A 16-year-old Lebanese was captured by Israeli troops hours before he was supposed to get into an explosive-laden car and go on a suicide bombing mission to blow up the Israeli Army headquarters in Lebanon. . . . What seems most striking about [Mohammed] Burro's account is that although he is a Shiite Moslem, he comes from a secular family background. He spent his free time not in prayer, he said, but riding his motorcycle and playing pinball. According to his account, he was not a fanatic who wanted to kill himself in the cause of Islam or anti-Zionism, but was recruited for the suicide mission through another means: blackmail.

The premise is that reading involves the generation and transformation of questions. This story was read out loud to a class of graduate students. As they heard the story, the students voiced the questions that occurred to them. Here are a few of the questions that they asked:

1. Why would someone commit suicide if he was not depressed?
2. Did the kid think he was going to die?
3. Are car bombers motivated like the Kamikaze?
4. Does pinball lead to terrorism?
5. Who blackmailed him?
6. What fate worse than death did they threaten him with?
7. Why are kids chosen for these missions?
8. Why do we hear about Lebanese car bombers and not about Israeli car bombers?
9. Why are they all named Mohammed?
10. How did the Israeli know where to make the raids?
11. How do Lebanese teenagers compare with U.S. teenagers?
Some of the questions seem pretty reasonable (e.g., Did the kid think he was going to die?), but some are rather silly in retrospect (e.g., Does pinball lead to terrorism?). Some, though perfectly reasonable questions, aren't central to the story itself, but instead relate to other things that the person concerned was reminded of, things that he was wondering about or interested in (e.g., Why do we hear about Lebanese car bombers and not about Israeli car bombers?).

A. The nature of questions

Questions such as the above arise naturally during the course of understanding. Let us summarize our central claims about the nature of such questions. o Since the ultimate goal of understanding is the integration of new input with what the system already knows, questions that arise during the integration process represent difficulties in processing. The understander needs to ask these questions in order to perform explanation tasks effectively. Asking the right questions is central to achieving a greater depth of understanding. For example, thinking about the Kamikaze question (3) is likely to lead to a better understanding of the boy's motivations than is thinking about the pinball question (4). o Since questions arise from unusual input for which the understander does not have the appropriate processing structures in memory, or from explicit contradictions between the predictions supplied by memory structures and the actual input, questions reflect that part of the input that needs extra attention, i.e., that part of the input that the understander ought to focus on. In other words, questions represent what the understander is interested in finding out with respect to its goal of understanding the story. They should be used to drive the understanding process. o The process is dynamic in that new input generates new questions or transforms old ones, which in turn affects further processing of the story and of future stories.

B. Research issues

We are approaching the problem of story understanding as a process involving the generation and transformation of questions. We are designing a computer program that asks creative questions while it reads a story in order to raise the level of understanding that it can achieve. To do this, we have developed a theory of questions and their role in understanding and explanation. Our approach raises several issues: o Where do questions come from? What are the points in the understanding process where questions arise? o What role do questions play in understanding and explanation? How do they affect understanding? o How are questions indexed in memory such that they can get triggered when relevant input comes in? o How do questions get transformed into new questions as new information comes in?

This paper is primarily about the first two questions, though we are addressing all four in our research. To contrast our approach with previous approaches to story understanding, let us consider a program such as FRUMP [DeJ79] as a question generation program. FRUMP had a database of scripts (also called frames or schemas) for different situations, such as terrorism and earthquakes. Each script contained a set of slots to be filled in when understanding a story about that kind of situation. For example, the earthquake script wanted to know the Richter scale reading, the number of people killed, and so on.
The slots, therefore, represented the questions that the system asked every time it read about an earthquake. They also represented the limit of what the system could understand about earthquake stories. FRUMP would miss the point of a story about an earthquake in Pisa in which the Leaning Tower was destroyed, because it simply didn't have a slot for "famous monuments destroyed" in its earthquake script. In other words, it would never think of asking the question. There are two ways out of this. We could, of course, add the missing slot to the other slots in the script, along with all the other slots we might need. Clearly it would be impossible to stuff all the required knowledge for all possible situations into a machine. We might compromise and stuff in a "lot" of knowledge as a start. But a machine that relied only on previously built-in knowledge would be able to understand just the situations that it was designed for. In order to be considered intelligent, we would want it to be able to deal with novel situations that it didn't already have the knowledge to deal with. In addition, all slots in all scripts are not equally interesting, so we would still have the problem of deciding which slots are interesting in a given situation. Most story understanders avoid this issue and pursue all of them with equal enthusiasm.

In our research, we have taken a different route. We have designed our system as a question generation program. The system asks questions as it processes a story, and then uses these questions to drive the understanding process. As a consequence, the system is interested in those facts that are relevant to the questions that it currently has. Thus the Richter scale reading of an earthquake would be interesting only if it was actually relevant to something it wanted to find out, and not simply because it was a slot in the earthquake script.

In order to understand where questions come from, as well as how they affect processing, we categorize questions into various types. The categories are defined in terms of the origin and functional role of questions in understanding. The taxonomy is based on informal data collected from several subjects. We will first present our taxonomy, and in the next section we will relate it to the explanation cycle that underlies the process of understanding. Questions can be divided into five major categories: o Explanation questions o Elaboration questions o Hypothesis verification and discrimination questions o Reminding questions o General interest questions

A. Explanation questions

Since constructing explanations is an important part of understanding, we would expect many questions to be concerned with explanation. Asking the right question is central to constructing the best explanation. An important class of questions, therefore, are explanation questions (or EQs). EQs focus our attention on a particular aspect of the situation, or allow us to view a situation in a particular way, with the intention of finding an explanation that might underlie it. There are two major types of explanation questions. Since explanations are constructed to resolve contradictions or anomalies in the situation, EQs are often concerned with anomalies. For example, Did the boy want the results of his actions? is an anomaly detection question, since thinking about this allows us to notice the anomaly in the first place.
Given a characterization of an anomaly, we ask anomaly resolution questions to search for explanations of a particular type so that the situation isn't anomalous any more. For example, Did the boy know he was going to die? is an anomaly resolution question, since if he didn't, this particular anomaly goes away. The other kind of explanation questions seek stereotypical explanation patterns that might apply to the current situation. Explanation patterns, or XPs, are stock explanations that we have for various situations [Sch86]. For example, "Shiite religious fanatic does terrorism" is a standard XP many people have about the Middle East terrorism problem. We might think of them as the "scripts" of the explanation domain. When we see a situation for which we have a canned XP, we try to apply the XP to avoid detailed analysis of the situation from scratch. Explanation patterns are retrieved via explanation questions. For example, the question "Why would a Lebanese person perform a terrorist act?" has the religious fanatic explanation (and possibly others) indexed under it. The purpose of this question is to allow us to find these XPs.

B. Elaboration questions

Once we have retrieved a set of candidate explanation patterns, we try to apply them to resolve the anomaly. Often an XP cannot be applied directly, or is too general. In such situations, we might elaborate appropriate pieces of the explanation, or perhaps collect more information about the input. (There is also the possibility of tweaking the explanation, as in the SWALE program [Kas86], which we will not deal with here.) For example, consider the blackmail incident that the car bombing story above tells us about. This provides an explanation for the boy's actions, but the explanation is incomplete. Some of the questions in our data were concerned with elaborating the explanation, such as What could he want more than his own life? and Why do they choose kids for these missions?. To answer these questions, we can either search memory for old episodes that might contain relevant information, or wait for further input. In the latter case, we call the question a data collection question, because it seeks to collect additional data pertaining to a given hypothesis.

C. Hypothesis verification and discrimination questions

Even after we construct (or are given) a detailed explanation, we may not know for certain that it is the right one. In fact, we typically have more than one competing hypothesis about what the best explanation is. The validity of a hypothesis depends on the assumptions that we made while constructing it. For example, although it is pretty easy to apply the "Shiite religious fanatic" XP in the car bombing example (before we are explicitly told that he is not a fanatic), the explanation rests on the assumptions that he is a Shiite Moslem and he is very zealous about his religion. These assumptions then become hypothesis verification questions (or HVQs) for the religious fanatic hypothesis: Was he a Shiite Moslem? and Was he very zealous about his religion?. The role of HVQs is to verify or refute the hypothesis that they were generated from when answers to them are found. In case we have two or more competing hypotheses, they also help us to discriminate between the alternatives. Thus they represent what the understander is interested in finding out at any time for the purpose of understanding the story. However, unlike most story understanding programs, this notion of interestingness is dynamic. The boy's religion, for example, is interesting in this story because it is of relevance to the explanations being constructed, and not because the "boy" frame has a "religion" slot that must always be filled. The sketch below illustrates this bookkeeping.
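One way to picture how HVQs can verify or refute hypotheses when answers arrive is the following minimal Python sketch, which indexes each HVQ under the concept it asks about and propagates the answer back to its hypothesis. It is our illustration, not AQUA's code; every name in it is invented, and a fuller version would confirm a hypothesis only after all of its HVQs are answered.

```python
class Hypothesis:
    """An instantiated explanation pattern awaiting verification."""
    def __init__(self, name):
        self.name = name
        self.status = "pending"

class HVQ:
    """A hypothesis verification question, indexed by the concept it asks about."""
    def __init__(self, concept, expected, hypothesis):
        self.concept = concept          # e.g. "religion(boy)"
        self.expected = expected        # e.g. "shiite-moslem"
        self.hypothesis = hypothesis

index = {}                              # concept -> pending HVQs

def pose(hvq):
    index.setdefault(hvq.concept, []).append(hvq)

def new_fact(concept, value):
    # When later input supplies a fact, retrieve the HVQs indexed under that
    # concept and propagate the answer back to their hypotheses.
    # (Simplification: one matching answer confirms; a full version would
    # wait until every HVQ of the hypothesis is answered.)
    for q in index.pop(concept, []):
        q.hypothesis.status = "confirmed" if value == q.expected else "refuted"

fanatic = Hypothesis("shiite-religious-fanatic")
pose(HVQ("religion(boy)", "shiite-moslem", fanatic))
pose(HVQ("zealous(boy)", True, fanatic))
new_fact("zealous(boy)", False)         # the story says he was not a fanatic
print(fanatic.status)                   # -> "refuted"
```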
D. Reminding questions

The fourth type of questions are reminding questions. The role of reminding in understanding is discussed in [Sch82] and [Sch86], and we will not pursue it here. Many questions are generated as a result of remindings based either on superficial features (e.g., Why are they all named Mohammed?), or on deeper explanatory similarities (e.g., Are car bombers motivated like the Kamikaze?). Reminding questions may suggest possible explanations stored with old episodes as candidate hypotheses for the current situation; they may also help us verify or refute hypotheses by providing supporting or opposing evidence from episodic memory. They also help us learn new categories by generalization over similar instances [Leb80] or over similar explanations.

E. General interest questions

Finally, we have questions already extant in memory before we begin to read the story. These questions are left over from our previous experiences. As we read, we remember these questions and think about them again in a new light. Certainly after reading the car bombing story, we expect to have several questions representing issues we were wondering about which weren't resolved by the story. For example, in this story it turns out that the boy was blackmailed into going on the bombing mission by threatening his parents. This makes us think about the question What are family relations like in Lebanon?, which remains in memory after we have finished reading the story. To the extent that we are interested in this question, we will read stories about the social life in Lebanon, and we will relate other stories to this one. To cite another example, one of the students we read the story to repeatedly related the story to the IRA because he was interested in similar issues about Ireland.

Thus understanding is a process of question generation, and is in turn driven by these questions themselves. The traditional view of understanding is one of a process that takes a story as input and builds a representation of what it has understood. In contrast to this, we view understanding as a process that starts with questions in memory and, as a result of reading a story, answers some of them and generates a new set of questions to think about. Thus questions represent the dynamic "knowledge goals" of the understander.

We have implemented a computer program called AQUA which embodies our theory of questions and understanding. AQUA reads newspaper stories about terrorism and attempts to understand them by constructing causal and motivational explanations for the events in the stories. The explanations it constructs may be divided into four major levels. Each level corresponds to a set of explanation questions (EQs) that organize explanations at that level. Action level: Explanations involving direct relationships between actions. For example, the question Was the mission instrumental to another action that the boy wanted to perform? is an EQ at this level. Outcome level: Explanations involving direct benefits of actions for participants. For example, the question Did the boy want the results of his actions? is an EQ at this level. Stereotype level: Explanations constructed from stereotypical explanation patterns (XPs).
EQs at this level are Why do teenagers commit suicide? and Why do Lebanese people perform terrorist acts?. Decision level: Ab initio reasoning about planning decisions. For example, if an action has a negative outcome for an agent who chose to perform the action (as opposed to being forced into it), we might ask the following questions: o Did the agent know the outcome the action would have for him? o Did the agent want that outcome (i.e., were we mistaken in assuming that the outcome was negative)? o Was there another result of the action that the agent wanted, and did he want that result more than he wanted to avoid the negative result?

AQUA is part of an on-going project and is in the process of being developed. At present it reads the car bombing story mentioned above, but we are in the process of extending it to read other stories.

The processing cycle in AQUA consists of three interactive steps: read, explain and generalize (the generalize step has not yet been implemented). AQUA starts with reading some text and retrieving relevant memory structures to integrate new input into. This is guided by the questions that are currently in memory. These questions are generated during the explain step and indexed in memory to enable the read step to find them.

A. The explain step

The explain step may be summarized as follows. Assume that AQUA has just read a piece of text. o Formulate EQs of appropriate type. o Retrieve XPs using the EQs and general interest in certain types of explanations. For example, we might look for a social explanation for why a 16 year old Lebanese boy might want to commit suicide. o Apply each XP to the input. If in applying the XP we detect an anomaly: characterize the anomaly; elaborate, using the anomaly characterization to focus the elaboration; and explain the anomaly recursively, using the above characterization to guide the formulation of new EQs. If the XP is applicable to the input: construct a hypothesis by instantiating the explanation pattern; construct HVQs to help verify or refute the new hypothesis; and index the HVQs in memory to allow us to find them in the read step below. If we can't apply the XP, try another one. If there are no more XPs, try a different EQ. A sketch of this loop is given below.
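The following sketch shows one way the explain loop above might be organized. It is a hedged reconstruction, not AQUA's implementation: the dictionary-based memory, the recursion bound, and all function and key names are our assumptions.

```python
def explain(event, memory, depth=0):
    """Try stock explanation patterns (XPs) retrieved via explanation
    questions (EQs) against an event. On success, instantiate a hypothesis
    and index its HVQs so the read step can find them later."""
    if depth > 3:                                  # bound recursive elaboration
        return None
    for eq in formulate_eqs(event):
        for xp in memory["xps"].get(eq, []):       # XPs indexed under EQs
            if xp["anomalous"](event):             # XP contradicts the input
                anomaly = xp["elaborate"](event)   # focus on the mismatch
                explain(anomaly, memory, depth + 1)
            elif xp["applies"](event):             # XP fits: build a hypothesis
                hyp = {"xp": xp["name"], "event": event, "status": "pending"}
                for hvq in xp["hvqs"](event):
                    memory["questions"].setdefault(hvq, []).append(hyp)
                return hyp
    return None                                    # no XP applied; keep reading

def formulate_eqs(event):
    # Stand-in for AQUA's EQ formulation; here, one EQ per actor/action pair.
    return [("why", event.get("actor"), event.get("action"))]
```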
A program such as AQUA should be able to: Generalize the novel explanations encountered using questions to focus generalization. in story, Index the generalizations back in memory, such that the original question which failed would now find it. We have not yet implemented the generalize step in AQUA, but we are interested in this issue. Explanation- based learning algorithms [DeJ83, SCH86, SCSZ] involve the generalization of causal structures of explanations to form new generalized explanations while dropping the ir- relevant details. However, programs such as GENESIS [MD851 that embody this idea perform undirected learn- ing. We want to use questions as a mecllanism to focus the learning process on the interesting or relevant aspects of the story. v. Conclusions We view story understanding as a process involving the generation of questions, which in turn drive further pro- cessing of the story. In this paper, we presented a tax- onomy of the questions people ask while they read. We talked about the origin of these questions in the explana- tion cycle, and their role in understanding. We are building a computer program to read and u11- derstand newspaper stories according to our theory. In contrast to the traditional view of understanding as a “story in, representations out” process, our program may be viewed as a “questions + story in, questions out” pro- cess. The paper presented our understanding algorithm as a three step integrated process: the read step, in which the program reads the text, focussed by the questions that are extant in its memory, the explain step, in which the program asks questions in order to construct explanations, and the generalize step, in which the program will gen- eralize novel answers to its questions. cknowledgements I would like to thank Roger Schank and Chris Riesbeck for their excellent guidance during this research, and Robert Farrell, Eric Jones, John Hodgkinson and Mike Factor for the synergistic exchange of ideas and, yes, code. I also thank Alex Kass and Larry Birnbaum for their comments on an earlier draft of this paper. eferences [DeJ79] G. DeJong. Skimming Stories in Real Time: An Experiment in Integrated Understanding. Ph.D. thesis, Yale University, May 1979. [DeJ83] G. DeJong. An Approach to Learning from Ob- servation. In Proceedings of the International Machine Learning Workshop, 171-176, University of Illinois, Monticello, IL, June 1983. [I&86] A. Kass. Modifying Explanations to Understand Stories. Proceedings of the Eighth Annual Conference of the Cognitive Science Society, 691-696, Amllerst, MA, August 1986. [KL086] A. Kass, D. Leake and C. Owens. SWALE: A Program That ExpZuins. In [Scl186]. [Leb80] M. Lebowitz. Generalization and Memory in an Integrated Understanding System. Ph.D. thesis, Yale University, October 1980. [Leh78] W.G. Lehnert. The Process of Question Answer- ing. Lawrence Erlbaum Associates, Hillsdale, New Jer- sey, 1978. [MD851 R. Mooney and G. DeJong. Learning scllemata for natural language processing. In Proceedings of the 9th IJCAI, 681-687, Los Angeles, CA, August 1985. [Scl182] R.C. Schank. Dynamic memory: A theory of learning in computers and people. Cambridge Univer- sity Press, 1982. [Sch86] R.C. Schank. ExpZunution Patterns: Understand- ing Mechanically and Creatively. Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1986. [SC821 Schank, R.C. and Collins, G.C. Looking at Learn- ing In Proceedings of the ECAI, 10-16, Paris, France, 1982. [SCH86] R.C. Schank, G. Collins, and L. Hunter. 
Transcending inductive category formation in learning. Behavioral and Brain Sciences, 9(4), 1986.
1987
51
645
Information Retrieval From Never-Ending Stories
Lisa F. Rau
Artificial Intelligence Branch
GE Company, Corporate R&D
Schenectady, NY 12301 USA

Abstract

The System for Conceptual Information Summarization, Organization, and Retrieval (SCISOR) is a research system that consists of a set of programs to parse short newspaper texts in the domain of corporate takeovers and finance. The conceptual information extracted from these stories may then be accessed through a natural language interface. Events in the world of corporate takeovers unfold slowly over time. As a result of this, the input to SCISOR consists of multiple short articles, most of which add a new piece of information to an ongoing story. This motivates a natural language, knowledge-based approach to information retrieval, as traditional methods of document retrieval are inappropriate for retrieving multiple short articles describing events that take place over time. A natural language, knowledge-based approach facilitates obtaining both concise answers to straightforward questions and summaries or updates of the events that take place. The predictable events that take place in the domain make expectation-driven, partial parsing feasible.

Introduction

As more and more information is stored electronically, the need for intelligent methods of accessing this information becomes increasingly critical. In the distant future, we can imagine the computer not as a static repository of information, but as a librarian who not only knows where all the information in the library is kept, but also has read and understood everything in that library. This librarian could instantly absorb all the latest texts, and keep in mind the special interests and needs of a vast number of users. This work rests on the belief that a truly helpful information system must, in some sense, understand what the user is looking for. In order to understand this, the system must minimally understand what it knows about. Such a system becomes no longer a storage and retrieval device, but an intelligent and knowledgeable assistant, available to answer questions and provide digested summaries of topics. The work described here is one small step on the way to making that assistant exist.

The System for Conceptual Information Summarization, Organization, and Retrieval (SCISOR) is a research system being developed at GE's Corporate Research and Development Center that operates in the domain of corporate takeovers and finance. SCISOR is designed to read short articles from newspapers, and answers questions about the domain. This paper describes the preliminary system implementation and objectives of SCISOR, but focuses on some of the features of the domain that motivate the computational approach taken, as well as the system objectives themselves. SCISOR is designed to increase the usefulness of an automated system to extract information from naturally occurring texts. Below is an interaction a user might eventually have with the SCISOR system:

1. User: Did ACE hardware take over the ACME food company? System: Yes it did. The ACE hardware company completed the takeover last Friday.
2. User: What were the events in the ACE-ACME takeover deal? System: Rumors that ACME was to be taken over started May 13, 1985. The stock rose $5 a share to $65. On May 16, the ACE company announced that it had made an offer to ACME valued at $66 a share. The offer was a cash offer to purchase all 2 million outstanding shares of ACME.
After a month of deliberation involving some potential conflict of interest matters, the ACE-ACME deal was completed last Friday.
3. User: What happened yesterday in the HAL Corporation bid for BIG? System: HAL Corporation increased its bid for the BIG company yesterday, up $4 to $70 a share. BIG has not commented on the new offer.

These three exchanges illustrate three capabilities difficult to obtain from traditional information or document retrieval technology [Salton and McGill, 1983]. The first illustrates the ease of obtaining answers to straightforward questions by asking the question in natural language. The second illustrates a summary of an event sequence whose description spanned multiple articles occurring over time. In the third exchange, a simple update of an ongoing story, presumably being followed by the user, is obtained.

In this paper, some problems with the paradigm of full-text retrieval are isolated to motivate the natural language, knowledge-based approach to information retrieval taken in SCISOR. These problems manifest themselves especially strongly when certain domain characteristics are present. The domain of corporate takeovers and how it exhibits these characteristics are described. This is followed by a discussion of the implementation of the SCISOR system and its current system status.

A. Problems with Full Text Retrieval

The most widely used methods of storing and retrieving information are by storing the full document, and retrieving via either automatically or manually constructed keywords, or full-text search [Salton and McGill, 1983]. Current full-text retrieval systems have three problems. The first problem is with the accuracy of retrieval, which can stand to be improved. Recent studies have shown that one-fifth of relevant articles are retrieved, and only three-fourths of those retrieved are judged by users to be relevant [Salton, 1986]. This problem may disappear with the advent of massively parallel machines to perform document retrieval, where some promising results have already been obtained [Stanfill and Kahle, 1986].

The second problem is that full-text retrieval systems are designed to have as output a document, when a user might really desire certain kinds of information from the document. For example, full-text retrieval is a poor method of extracting simple, factual information from text. This is because users must isolate a document or set of documents that contains an answer to a question through the construction of a potentially complex combination of keywords. Then the relevant passage or passages must be read before the sought-after information is obtained. A much more natural and time-efficient technique is to give the user the option of posing questions to the system in natural language, and having the system respond, not with the original text, but with an answer to the question posed, as illustrated in the first exchange above.

Full-text retrieval systems are also incapable of relating articles to one another. Thus, it is impossible to ask a system for a summary of a situation that unfolds over a period of time, potentially involving multiple documents, as shown in the second exchange given previously. The user must retrieve the entire series of articles and read each one to obtain an understanding of all that has gone on.
Given that most articles consist of background information potentially known to the reader, simply restricting the response to new information would be very helpful. The best scenario, however, is to give the user the ability to retrieve a preprocessed summary of events in any given situation.

Three features of the articles that appear in the corporate takeover domain in particular make it an appropriate medium for replacing traditional IR techniques with a natural language, knowledge-based approach. Note that these points apply regardless of the method of document retrieval used. That is, the problems still exist whether document retrieval is performed extremely quickly with a highly parallel machine as described in Stanfill and Kahle [Stanfill and Kahle, 1986], with automatic or manually constructed topic indices, or with keyword or free-text search for lexical items. The problems are the following: 1. Most of the content of input articles is NOT new information, but a rehashing of old information. Thus although articles that are relevant may be retrieved with IR technology, the user will be interested in only a small part of that retrieved information. 2. Related to the above point, the frequent rehashing of past events that occur in news stories makes retrieval of multiple articles dealing with the same events likely. This redundancy in retrieval can increase the time a user spends finding the information desired. 3. Events that take place in the domain are not self-contained in singular articles as in the earthquake or terrorist domain used in IPP [Lebowitz, 1983], or the banking telex domain in TESS [Young and Hayes, 1985], for example. Rather, stories continue over long periods of time. Even assuming an IR system that was totally accurate in retrieving the entire series of small articles updating and modifying the events that occurred, the user would still have to read all the articles in the correct order to obtain an understanding of what had transpired.

B. AI information retrieval

Recently, there have been some limited successes in the development of AI systems to parse partially and to understand short texts in constrained domains [DeJong, 1979, Lebowitz, 1983, Kolodner, 1984, Young and Hayes, 1985]. However, work in effectively accessing these knowledge bases has not been as successful. For example in Young and Hayes [Young and Hayes, 1985], the information cannot be accessed after it has been understood except through a traditional database front-end, or by direct examination of the conceptual, frame-like representation. In CYRUS, a natural language question-answering component allows queries of the knowledge base. However, it is not always guaranteed to answer correctly due to the reconstructive nature of its retrieval. Although SCISOR has not been tested with large numbers of documents and questions, it is hoped that it will demonstrate some of the uses that can be made of automatically extracted conceptual information from text when that conceptual information is combined with a powerful and robust method of spontaneous retrieval.

Summary: Full-text retrieval: Full-text systems are inappropriate for certain tasks in certain domains such as the corporate takeover domain. Answers to simple questions and summaries of a series of events are two examples of such tasks. AI approaches: AI approaches have successfully demonstrated the feasibility of partially understanding free text in constrained domains. No system yet, however,
No system yet, however, 318 Cognitive Modeling has demonstrated both reliability of information re- trieved and accessibility of stored information. SCISOR operates in the world of corporate takeovers and takeover attempts: at present, relevant articles are taken directly from news sources and manually “fed” to the sys- tem. Typically articles in the domain that appear in the business section of newspapers or the Wall Street Journal are between one and three paragraphs long. Much of the information in the articles is frequently a rehash of the events that may have taken place previously in the current, takeover deal. The events in the domain are generally quite predictable, but typically take place over long periods of time. For example, after a company has made an offer to take over another company, it may be months before the situation is resolved. During this time, a number of pre- dictable developments may arise, such as legal complica- tions, other suitors entering the bidding, or an increase or withdrawal of the initial offer. The predictable nature of the events in the domain, along with the long intervals between initiation and resolution of events, makes the cor- porate takeover world a rich, but constrained, domain to experiment in. The following is a typical input, to the SCISOR sys- tem: Group Offers to Sweeten Warnaco Bid April 8 - An investor group said yesterday that it is prepared to raise its cash bid for Warnaco, Inc. from $40 a share to at least $42.50, or $433.5 million, if it can reach a merger agreement with the apparel maker. The California-based group, called W Acqui- sition Corp., already had sweetened its hostile tender offer to $40 a share from the $36 a share offered when the group launched its bid in mid-March. Notice how only the first half of this story gives new infor- mation to the ongoing sequence of events. Both the initial bid and the first increase of the offer in the second half are references to previous events. In order to deal with these references correctly, an information system must re- trieve the recorded events and recognize that the recorded events are the same as the references to them. Then a user may obtain an update of the story containing only the new information. 0 Figure 1 illustrates the architecture of the SCISOR sys- tem. Each of the boxes in Figure 1 will be briefly dis- cussed. First, newspaper stories, or questions about the Figure 1: SCISOR System Architecture stories that deal with corporate takeovers, such as the above, are interpreted using the TRUMP (TRansportable Understanding Mechanism Package) parser and semant, ic interpreter [Jacobs, 19861. Questions asked by users are parsed with the same understanding mechanism as is used for the input stories. They are stored along with the sto- ries, for future user modeling, and to enable the system to answer a user’s question when the answer comes along, if it was not known to the system at, the time it was posed. After answers to input questions have been retrieved, they are-passed to the KING (K nowledge INtensive Generator) [Jacobs, 19851 natural language generator for expression. These stories are represented in the KODIAK [Wilen- sky, 19861 knowledge representation language. KODIAK has been augmented with some scriptal knowledge [Schank and Abelson, 19771 of typical sequences of events in the domain. TRUMP and KING were designed to access the same linguistic knowledge base. 
The story event integration mechanism "fills in" new information in stories with the story summary obtained thus far. It also unifies references of past events in new stories with the previously stored representation of those events, if necessary. The retrieval mechanism retrieves answers to users' questions or the history of a story being continued in a new input story. For example, upon input of the first sentence of the example story given previously, the history of the initial W Acquisition bid and increased bid for Warnaco are retrieved.

The retrieval mechanism operates by using a form of constrained marker-passing [Charniak, 1983]. In the following discussion, "episodes" will refer to events in stories or questions users may have asked, both of which are stored in the system. Retrieval occurs as a by-product of the understanding process. As new concepts are instantiated in the system, other instantiated concepts related to one another through the semantic category information in the knowledge base are marked. For example, consider the representation of the question "How much has a company offered to take over an apparel company?" given in Figure 2.

Figure 2: How much are offers for apparel companies in takeovers?

Previously stored in the system is part of a story episode that should be retrieved to give an answer to the question, shown in Figure 3.

Figure 3: Part of Story Episode

The instantiation of the concepts in the question (OFFER1, COMPANY1, COMPANY2, QUESTION1, TAKEOVER1, APPAREL1) causes markers to be passed to related concepts, such as CASH-OFFER2, CLOTHING3 and TAKEOVER3. Note how the instantiation of an OFFER causes a more specific kind of offer, a CASH-OFFER, to get marked. Similarly, CLOTHING is marked even though the input question only specified an APPAREL company. This marker-passing to related concepts allows SCISOR to find answers to questions even when the questions ask for information at a different level of conceptual generality than is present in the story.

Intersections of marked concepts occur when a subset of concepts in an episode are marked, and the episodes with the most marks are put into a short-term memory buffer. A filtering process, represented in Figure 1 as the "graph filter mechanism," is then run on these candidates to determine the nature of the match between input question and the answer in a story, for example. All concepts are unmarked periodically. This two-step retrieval process is very efficient, in that only likely candidates are examined closely. Also, it is very tolerant of erroneous, incomplete or partial input information. This is important in the corporate takeover domain to ensure retrieval of previous events even when a new state of affairs may contradict the previous events. For example, SCISOR must find that Warnaco was trying to take over W Acquisition if today W Acquisition announced that it was trying to take over Warnaco. A more complete description of the retrieval mechanism may be found in Rau [Rau, 1987]. The generalization mechanism is being designed to notice new trends in the domain, through automatic detection of multiple cases of similar situations not previously seen.
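The mark-and-intersect idea can be sketched in a few lines of Python. This is a minimal paraphrase of the two-step retrieval, not SCISOR's implementation: the concept names, the RELATED table and the scoring are all illustrative assumptions.

    # Toy semantic network: instantiating a concept also marks these related
    # concepts (specializations, generalizations, and so on).
    RELATED = {
        "OFFER":   ["CASH-OFFER"],
        "APPAREL": ["CLOTHING"],
    }

    # Stored episodes, each represented here by the set of concepts it contains.
    EPISODES = {
        "warnaco-story": {"CASH-OFFER", "CLOTHING", "TAKEOVER", "COMPANY"},
        "oil-merger":    {"CASH-OFFER", "PETROLEUM", "MERGER", "COMPANY"},
    }

    def retrieve(question_concepts):
        """Step 1: pass markers from the question's concepts to related ones.
        Step 2: score each episode by its intersection with the marked set;
        the best-scoring episodes go into a short-term buffer for the costlier
        graph-filter stage."""
        marked = set(question_concepts)
        for concept in question_concepts:
            marked.update(RELATED.get(concept, []))
        scores = {name: len(concepts & marked)
                  for name, concepts in EPISODES.items()}
        best = max(scores.values())
        return [name for name, score in scores.items()
                if score == best and score > 0]

    print(retrieve({"OFFER", "COMPANY", "TAKEOVER", "APPAREL"}))
    # -> ['warnaco-story']

Because scoring is a simple intersection count, an episode can still win even when some question concepts find no counterpart, which is one way to read the text's claim of tolerance to erroneous or partial input.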
A. Question Answering

In SCISOR, the processes that find the approximate location of an answer to a user's question and the processes that determine what the answer should be are separate. A great deal of work has been done on the second problem, most notably by Lehnert [Lehnert, 1978]. Determining an appropriate answer to a user's question, given that the context in which the user's question was posed is already known, is a separate process from the initial retrieval of a context in which to search for an answer. This initial retrieval of a context is the event retrieval this paper briefly describes.

SCISOR also is limited in the kinds of questions the system can answer. Currently the SCISOR system is capable of answering only questions about information explicitly stored in the knowledge base. Any information that potentially could be reconstructed or inferred (in the sense of Kolodner [Kolodner, 1984]) from information stored in the knowledge base is not available. The line between what is explicit in a story and what can be deduced from that story is not sharp, because some amount of "figuring" must go on to obtain any reasonable understanding of the story. To obtain this understanding, SCISOR computes something similar to a maximally complete inference set [Cullingford, 1986] as the set of information present explicitly in articles and inferred from the context and other world knowledge. Anything in that understanding can be directly retrieved. For example, SCISOR is able to answer the question "What company was sold for $3 billion?" without pre-indexing a story containing that information by AMOUNT-OF-SALE. However the system cannot answer the question "Which companies have been taken over more times than they have taken over other companies?", because an answer would require counting all the times a company has been taken over and has taken over other companies, comparing these two numbers, and repeating the process for every other company in the knowledge base. Such processing capabilities may be added in the future.

B. System Status

SCISOR is implemented in Common Lisp, and it is used on VAX computers and Symbolics and SUN workstations. The TRUMP parser and semantic interpreter has not yet been tested with a large grammar or vocabulary, but in these early stages it has been relatively easy to customize. On the SUN-3 it processes input at the rate of a few seconds per sentence, including the selection of candidate parse and semantic interpretation. The KING natural language generator was implemented in Franz Lisp, and at this writing has not yet been converted to Common Lisp to run with TRUMP. The system has a dozen or so stories stored in the knowledge base. Hundreds of semantic concepts and domain vocabulary are also present. About a dozen questions are answered by the system. It has not yet been tested on a large number of documents. The tests that have been performed so far, however, are quite promising. Before any definitive claims can be made about the ultimate usefulness of this type of system, the system must be tested with a very large sample of documents in real information retrieval tasks. The next stage of the project will include such tests.
Summary and Conclusions

In certain domains and for certain tasks, the traditional output of document or full-text retrieval systems (i.e., documents) is inappropriate for the task. Obtaining a piece of information, a summary, or an update are examples of such tasks. In these cases, providing the information desired may be more helpful than providing the original full-text source. This is especially true when the full-text sources that contain the information desired span multiple documents. The information desired may span multiple documents when the events in the domain take place over time, as they do in the world of corporate takeovers. To obtain a summary of a typical takeover using document retrieval techniques, one would have to be able to retrieve all the articles dealing with each event that took place, put them in the correct order and read them all. Also, when a domain deals with newspaper articles in particular, writers frequently include in new articles descriptions of events that have been described in previous articles. Thus, a search for information about any given event in the domain will find all articles that refer to that event. A user would have to peruse all these articles before being satisfied that nothing relevant had been missed.

SCISOR is an experiment in the utility of understanding short inputs to increase the usefulness and accuracy of an information retrieval system. The corporate takeover domain has proven to be well constrained. The event sequences that occur are highly predictable, making understanding of the stories in context feasible. Moreover, the unfolding of the stories over time makes the natural language, knowledge-based approach particularly well motivated.

References

[Charniak, 1983] E. Charniak. Passing markers: a theory of contextual influence in language comprehension. Cognitive Science, 7(3):171-190, 1983.

[Cullingford, 1986] R. E. Cullingford. Natural Language Processing: A Knowledge-Engineering Approach. Rowman and Littlefield, Totowa, NJ, 1986.

[DeJong, 1979] G. DeJong. Skimming Stories in Real Time: An Experiment in Integrated Understanding. Research Report 158, Department of Computer Science, Yale University, 1979.

[Jacobs, 1985] P. Jacobs. A knowledge-based approach to language production. PhD thesis, University of California, Berkeley, 1985. Computer Science Division Report UCB/CSD86/254.

[Jacobs, 1986] P. Jacobs. Language analysis in not-so-limited domains. In Proceedings of the Fall Joint Computer Conference, pages 247-252, IEEE Computer Society Press, Washington, DC, November 1986.

[Jacobs and Rau, 1985] P. Jacobs and L. Rau. Ace: associating language with meaning. In T. O'Shea, editor, Advances in Artificial Intelligence, pages 295-304, North Holland, Amsterdam, 1985.

[Kolodner, 1984] J. Kolodner. Retrieval and Organizational Strategies in Conceptual Memory: A Computer Model. Lawrence Erlbaum Associates, Hillsdale, NJ, 1984.

[Lebowitz, 1983] M. Lebowitz. Generalization from natural language text. Cognitive Science, 7(1):1-40, 1983.

[Lehnert, 1978] W. G. Lehnert. The Process of Question Answering: Computer Simulation of Cognition. Lawrence Erlbaum Associates, Hillsdale, NJ, 1978.

[Rau, 1987] L. F. Rau. Knowledge organization and access in a conceptual information system. Information Processing and Management, Special Issue on Artificial Intelligence for Information Retrieval, forthcoming (Summer), 1987.

[Salton, 1986] G. Salton. Another look at automatic text-retrieval systems. Communications of the Association for Computing Machinery, 29(7):648-656, 1986.

[Salton and McGill, 1983] G. Salton and M. McGill. An Introduction to Modern Information Retrieval. McGraw-Hill, New York, 1983.

[Schank and Abelson, 1977] R. C. Schank and R. P. Abelson.
Scripts, Plans, Goals, and Understanding. Lawrence Erlbaum Associates, Halsted, NJ, 1977.

[Stanfill and Kahle, 1986] C. Stanfill and B. Kahle. Parallel free-text search on the connection machine system. Communications of the Association for Computing Machinery, 29(12):1229-1239, 1986.

[Wilensky, 1986] R. Wilensky. Knowledge Representation - A Critique and a Proposal. In J. Kolodner and C. Riesbeck, editors, Experience, Memory, and Reasoning, pages 15-28, Lawrence Erlbaum Associates, Hillsdale, NJ, 1986.

[Young and Hayes, 1985] S. Young and P. Hayes. Automatic classification and summarization of banking telexes. In The Second Conference on Artificial Intelligence Applications, pages 402-408, IEEE Press, 1985.
Analogical Processing: A Simulation and Empirical Corroboration*

Janice Skorstad and Brian Falkenhainer, Qualitative Reasoning Group, Department of Computer Science
Dedre Gentner, Department of Psychology
University of Illinois at Urbana-Champaign, 1304 W. Springfield Avenue, Urbana, Illinois 61801

Abstract

This paper compares the performance of the Structure-Mapping Engine (SME), a cognitive simulation of analogy, with two aspects of human performance. Gentner's Structure-Mapping theory predicts that soundness is highest for relational matches, while accessibility is highest for surface matches. These predictions have been borne out in psychological studies, and here we demonstrate that SME replicates these results. In particular, we ran SME on the same stories used in the psychological studies with two different kinds of match rules. In analogy mode, SME closely captures the human soundness ordering. In mere-appearance mode, SME captures the accessibility ordering. We briefly review the psychological studies, describe our computational experiments, and discuss the utility of SME as a cognitive modeling tool.

1 Introduction

Analogy is a complex process. Given a current context, it consists of being reminded of a "similar" experience or concept, establishing the proper correspondences between this knowledge and the current situation, judging the match for soundness and appropriateness, and then using these correspondences. As with any complex process, it is essential to form the right decomposition and strive to understand the subtleties of each component of the process.

This work examines the variables that determine the accessibility of a similarity match and its inferential power or soundness. To test our hypotheses, we start with a theoretical model, Gentner's Structure-Mapping theory of analogy, and use a computational simulation to show how the predictions of the model compare with independent, empirical data. By embedding a theory in a computational model which is used for prediction, we can see whether the predictions follow logically from the implemented form of the theory (see Anderson, 1983; Van Lehn, 1983). This constrains the interpretation of observations.

Cognitive simulation studies can offer important insights for understanding the human mind. They serve to verify psychological theories and force one to pin down aspects which might otherwise be left unspecified. They also offer unique opportunities to construct idealized subjects, whose prior knowledge and set of available processes is completely known to the experimenter. Unfortunately, cognitive simulation programs tend to be special-purpose and/or computationally expensive. In this paper we discuss our use of the Structure-Mapping Engine (SME) as an aid in research on a general theory of analogy. SME is a computer simulation of analogical processing based upon Gentner's Structure-Mapping theory. It avoids the difficulties typically found in cognitive simulation programs by being both flexible and efficient. SME provides a "tool-kit" for constructing matchers consistent with Gentner's theory.

*This research was supported by the Office of Naval Research, Contract No. N00014-85-K-0559. The first two authors were supported in this work by Univ. of Illinois Cognitive Science / Artificial Intelligence Fellowships.
This enables us to generate and explore a space of plausible algorithms for analogical processing and compare these against subjects' performance. In this paper, we aim to show the utility of SME's tool-kit approach, its viability as a cognitive model, and demonstrate the validity of its theoretical foundation, the structure-mapping theory.

2 The Structure-Mapping Theory

The theoretical framework for our studies is Gentner's Structure-Mapping theory of analogy (Gentner, 1980, 1983, 1987), which outlines the implicit rules by which people interpret and reason with analogy and similarity. The underlying hypothesis of the Structure-Mapping theory is that an analogy is a device for importing the relational structure of one domain (the base, source of knowledge) to another, less familiar domain (the target). It provides rules for analogical mapping and demonstrates how mapping may be used to make inferences about the new domain. These rules state that information is mapped from the base to the target in the following manner:

1. Discard object descriptions not involved in higher-order relations.
2. Attempt to preserve relations between objects.
3. Use systematicity to determine which higher-order relations are mapped. This rule is important for deciding what inferences to make and how strongly these inferences should be believed.

The systematicity principle is central to analogy. It maintains that analogy conveys a system of connected knowledge, rather than a mere assortment of independent facts. The systematicity principle is a structural expression of our tacit preference for coherence and deductive power in analogy (Gentner, 1987).

An important feature of Gentner's theory is that it is structural. The rules depend only on the structural properties of the knowledge representation and are independent of specific domain content.

In addition to articulating the rules for analogical mapping, the structure-mapping theory functions as a core theory for a broader treatment of the processes of analogy and similarity. Identification of these processes, as defined by Gentner (1987), enables us to decompose analogical processing into three distinct, yet interdependent, stages. First, a suitable base domain must be accessed from memory. Once base and target representations appear in working memory, the mapping stage establishes the proper analogical correspondences between the two domains. Finally, the mapping is examined to determine soundness, and when appropriate, applicability and consistency with the task at hand. This work examines the variables that affect the accessibility of a similarity match and its inferential power or soundness.

Table 1: Similarity Classes (Gentner, 1987).

Type                # shared attributes   # shared relations   Example
Literal Similarity  Many                  Many                 Milk is like water.
Analogy             Few                   Many                 Heat is like water.
Abstraction         Few                   Many                 Heat flow is a through-variable.
Anomaly             Few                   Few                  Coffee is like the solar system.
Mere Appearance     Many                  Few                  The glass tabletop was blue as water.

2.1 Similarity Types

Gentner's theory is unique in that it breaks down the often vague terms of "analogy" and "similarity" into a continuum of similarity categories. These categories are characterized according to the distribution of relational and attributional predicates that are mapped during the analogical process (see Table 1).
Analogy and literal similarity lie on a continuum of degree-of-attribute-overlap. Likewise, there is a continuum between analogies and abstractions. Both involve overlap in relational structure, but vary in the degree of concreteness of the base domain. We use these classifications below to analyze the factors influencing accessibility and soundness.

3 The Structure-Mapping Engine

The Structure-Mapping Engine (SME) (Falkenhainer, Forbus, & Gentner, 1986) is a computational tool for studying analogical processing which simulates the structure-mapping process. Given representations of a base and target, SME constructs all consistent interpretations of the given analogy, providing a numerical evaluation score for each. SME has significant advantages over programs based on traditional matching algorithms. Like the Structure-Mapping theory, SME is domain independent. Because it produces all consistent interpretations of an analogy, one can easily see structurally consistent alternatives to the best match. At the same time, SME's algorithm is very efficient - it does not backtrack. Most importantly, SME provides a flexible "tool-kit" for constructing matchers consistent with the different kinds of comparisons sanctioned by Gentner's theory. This enables us to quickly test, refine, and compare a large space of different conjectures about analogical processing.

The construction of interpretations is guided by match rules that specify which facts and entities might match and estimate the believability of each possible component of a match. To build a new matcher one simply loads a new set of match rules. These rules are the key to SME's flexibility. Match constructor rules guide what individual predicates and entities are allowed to map between the two domains. As with the match constructor rules, match evaluation is programmable: the quality of each match is found by running match evidence rules and combining their results. Using one set of rules, SME may be configured to perform analogical matches. Using other rule sets, SME can be made to perform mere-appearance matches or literal similarity matches. In our experiments using SME, we currently use three types of rule sets, depending on the phenomenon being investigated. One set of rules focuses on object descriptions and is called the "mere-appearance" rules. In contrast, the "true analogy" rule set prefers relations, while the "literal similarity" rules match both relations and object descriptions. In this study, we used the mere-appearance (MA) and true analogy (TA) rules.

The mere-appearance rules serve to measure the degree of superficial, descriptive similarity between the two domains. The match constructor rules for the MA set only allow matches between lower-order predicates - object attributes and first-order relations - not between higher-order relations. The evidence rules for the MA set give a weight of 0.5 for each match between descriptive attributes and a weight of 0.4 for matches between first-order relations.

The true analogy rules measure the degree of relational overlap between two domains. The TA match constructor rules allow matches between relational predicates having the same name and discriminate against attributional matches, only allowing them if the attributes play a role in some higher-order relation. The evidence rules provide evidence for a match if the predicates matched have the same name, if they are of similar order,² and if their arguments may potentially match,³ all in the 0.2 to 0.5 range. One important evidence rule is the systematicity rule, which causes the weight of a match between two items to increase in proportion to the amount of higher-order structure matching above them.

²We define the order of an item in a representation as follows: objects and constants are order 0; the order of a predicate is one plus the maximum of the orders of its arguments.
³The arguments of two predicates may potentially match if corresponding arguments are syntactically compatible (e.g., both are entities).

Proper understanding of the evaluation scores is important for correct interpretation of these studies. The scores are unnormalized. A score of 20 may be high for one analogy, while low for a different analogy. The evidence weights provide an ordering between alternate interpretations within a single mapping task. They only measure the relative merits of different targets and the merits of different interpretations for a single target. One of our current research goals is the construction of a structural evaluator that would produce scores corresponding to a single, fixed scale. With the evaluator, SME would then be able to rate two completely different similarity matches as being equally good, regardless of how different their domain descriptions were in size.

To date, over 40 analogies have been run on SME. These examples serve to provide evidence for the Structure-Mapping approach to the theory of analogical processing. SME rapidly produces intuitively plausible results. For details, see (Falkenhainer, et al., 1986).
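To make the rule-set idea concrete, the sketch below renders MA and TA rules in Python. Only the 0.5 and 0.4 MA weights come from the text above; the tuple encoding of expressions, the TA weights, and every function name are illustrative assumptions for this sketch, not SME's actual Lisp rules.

    # Expressions are nested tuples, e.g. ("CAUSE", ("ATTACK", "hunter"), ...);
    # entities are plain strings.

    def order(expr):
        """Objects and constants are order 0; a predicate is one plus the
        maximum of the orders of its arguments (footnote 2)."""
        if not isinstance(expr, tuple):
            return 0
        return 1 + max((order(arg) for arg in expr[1:]), default=0)

    def is_attribute(expr):
        # One-place predicates over entities play the role of object attributes.
        return isinstance(expr, tuple) and len(expr) == 2 and order(expr) == 1

    # --- Mere-appearance (MA) rule set ------------------------------------
    def ma_construct(base, target):
        """Only lower-order predicates (attributes, first-order relations)."""
        return base[0] == target[0] and order(base) == 1 and order(target) == 1

    def ma_evidence(base, target):
        return 0.5 if is_attribute(base) else 0.4   # weights quoted in the text

    # --- True-analogy (TA) rule set ----------------------------------------
    def ta_construct(base, target, in_higher_order_structure):
        """Same-name relational matches; attributes only if they play a role
        in some higher-order relation."""
        if base[0] != target[0]:
            return False
        return not is_attribute(base) or in_higher_order_structure

    def ta_evidence(base, target, matched_structure_above):
        score = 0.3 if base[0] == target[0] else 0.0   # same name (illustrative)
        if abs(order(base) - order(target)) <= 1:
            score += 0.2                               # similar order (illustrative)
        return score + 0.1 * matched_structure_above   # crude systematicity bump

Loading a different construct/evidence pair is what the paper means by "loading a new set of match rules": the matcher's control structure stays fixed while the rules define the similarity type being measured.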
4 Comparing Simulation with Human Performance

The psychological results we are modeling concern the natural processes of spontaneous reasoning by analogy and similarity: that is, the process whereby a person who is thinking about some current situation is reminded of some prior similar situation which he may decide to use in reasoning about the current situation. In this research we asked (1) what governs spontaneous access to analogy and similarity, and (2) once an analogy has been processed, how do we judge its inferential soundness (i.e., whether it is rigorous enough to have predictive utility)?

To analyze the factors affecting accessibility and soundness in analogical processing, we start with recent empirical findings and discuss how these fit within the structure-mapping framework. We then use the implementation to simulate the same process in the hope that a rigorous simulation of the theory correctly parallels the empirical results.

4.1 Empirical Findings

Recent studies by Gentner & Landers (1985) and Rattermann & Gentner (1987) ask what governs spontaneous access to analogy and similarity and what governs the subjective soundness of analogy and similarity. According to Structure-Mapping theory, systematicity should play a key role in the determination of soundness; no predictions are made for accessibility. The method was designed to resemble a natural memory situation. Subjects were given 32 short stories to read and remember. Of these 32, 18 were key stories; the remaining 14 were fillers, designed to add more difficulty to the task. After a week, the subjects returned and performed two tasks: (1) a reminding task, and (2) a soundness rating task.
The reminding task consisted of reading a new set of 18 stories which matched the original 18 in various ways: mere-appearance matches, which match in object descriptions and first-order relations, true analogies, which match in first-order and higher-order relations, and false analogies or anomalies, which matched only in first-order relations. Each subject received only one matching target story for each of the 18 original key stories. Subjects were told that if the new story reminded them of any of the original stories, they should write out that original story in as much detail as possible.

After completing the reminding task, subjects went on to the soundness task. Subjects were given pairs of stories and asked to judge each of the story pairs for the inferential soundness of the match. The explanation for soundness was given as: "...when two situations match well enough to make a strong argument" (Gentner, 1987). They were told to rate each pair on a scale of 1 to 5, with 5 being "sound" and 1 being "spurious".

The results of the study are presented in Figure 1.

Figure 1: Results of the Rattermann and Gentner Study. (a) Proportion of base stories recalled given different kinds of matches. (b) Rating of soundness.

As predicted by structure-mapping theory, a strong preference for true analogies was found in the soundness-rating task. These results suggest that relational structure is important in determining the subjective "goodness" of an analogy. However, as evidenced by the higher score for TA's than FA's, it is not just shared relations but shared higher-order relations that are important in determining inferential power.

The study provided surprising results for access. Although subjects rated true analogies as being most sound, they tended to not retrieve true analogies during the reminding task. Instead they were most likely to access superficial, mere-appearance matches.

Base Story
Karla, an old Hawk, lived at the top of a tall oak tree. One afternoon, she saw a hunter on the ground with a bow and some crude arrows that had no feathers. The hunter took aim and shot at the hawk but missed. Karla knew that the hunter wanted her feathers so she glided down to the hunter and offered to give him a few. The hunter was so grateful that he pledged never to shoot at a hawk again. He went off and shot deer instead.

Target Story - True Analogy
Once there was a small country called Zerdia that learned to make the world's smartest computer. One day Zerdia was attacked by its warlike neighbor, Gagrach. But the missiles were badly aimed and the attack failed. The Zerdian government realized that Gagrach wanted Zerdian computers so it offered to sell some of its computers to the country. The government of Gagrach was very pleased. It promised never to attack Zerdia again.

Target Story - Mere-Appearance
Once there was an eagle named Zerdia who donated a few of her tailfeathers to a sportsman so he would promise never to attack eagles. One day Zerdia was nesting high on a rocky cliff when she saw the sportsman coming with a crossbow. Zerdia flew down to meet the man, but he attacked and felled her with a single bolt. As she fluttered to the ground Zerdia realized that the bolt had her own tailfeathers on it.

Figure 2: Story Set Number 5.

These results suggest that superficial similarities, including object descriptions, play an important role in access. However, higher-order relational similarities do promote some access, as indicated by the fact that true analogies were retrieved more often than false analogies. Our conclusion is that access and inference are governed by very different sets of rules.
4.2 Computational Simulation

Human performance in the Rattermann & Gentner study was compared to SME's performance on similar tasks. For five of the story sets that were used in their study, the base stories, true analogy targets and mere-appearance targets⁴ were encoded and presented to SME (a total of 15 stories, making 10 matches). The encoder had no knowledge of the results of human performance when writing the representations. Different rule sets were used, corresponding to the similarity types of mere-appearance and true analogy. One of these stories will be discussed in detail, showing how SME was used to simulate a test subject.

⁴The false analogies were not simulated, since they provided little insight beyond that given by the TA and MA results.

Story set number 5, shown in Figure 2, revolves around a story about a hawk named Karla. Two similar target stories were used as potential analogies for the Karla narration. One was designed to be truly analogous and describes a small country named Zerdia (TA5). The other was designed to be only superficially similar and describes an eagle named Zerdia (MA5).

To test the relative accessibility of the base story for the two target stories, we ran SME using the mere-appearance match rules. This measured their degree of superficial overlap and thus, according to our prediction, the relative likelihood of their accessibility. The output of SME for the MA task is given in Figure 3, which shows that the eagle story (evaluation = 7.7) has a higher mere-appearance match rating than the country story (evaluation = 6.4). Thus, if the surface-accessibility hypothesis is correct, the MA target "Zerdia the eagle" should have demonstrated a higher accessibility rating for the human subjects than the TA target "Zerdia the country".

Analogical Match from Karla to Zerdia the country (TA5).
Rule File: appearance-match.rules
Number of Match Hypotheses: 12
Number of GMaps: 1
Gmap #1: (HAPPINESS-HUNTER HAPPINESS-GAGRACH) (ATTACK-HUNTER ATTACK-GAGRACH) (WARLIKE-HUNTER WARLIKE-GAGRACH) (DESIRE-FEATHERS DESIRE-SUPERCOMPUTER) (HAS-FEATHERS USE-SUPERCOMPUTER) (OFFER-FEATHERS OFFER-SUPERCOMPUTER) (TAKE-FEATHERS BUY-SUPERCOMPUTER) (WEAPON-BOW WEAPON-BOW)
Emaps: (KARLA1 ZERDIA12) (FEATHERS3 SUPERCOMPUTER14) (CROSS-BOW4 MISSILES15) (HUNTER2 GAGRACH13)
Weight: 6.411672

Analogical Match from Karla to Zerdia the eagle (MA5).
Rule File: appearance-match.rules
Number of Match Hypotheses: 14
Number of GMaps: 1
Gmap #1: (OFFER-FEATHERS OFFER-FEATHERS) (BIRD-KARLA BIRD-ZERDIA) (ATTACK-HUNTER ATTACK-SPORTSMAN) (SEE-KARLA SEE-ZERDIA) (HAS-FEATHERS HAS-FEATHERS) (TAKE-FEATHERS TAKE-FEATHERS) (DESIRE-FEATHERS DESIRE-FEATHERS) (WEAPON-BOW WEAPON-BOW) (WARLIKE-HUNTER WARLIKE-SPORTSMAN) (PERSON-HUNTER PERSON-SPORTSMAN)
Emaps: (FEATHERS3 FEATHERS9) (CROSS-BOW4 CROSS-BOW10) (HUNTER2 SPORTSMAN8) (KARLA1 ZERDIA7)
Weight: 7.703668

Figure 3: SME's Analysis of Story Set 5, Using the MA Rules.

To obtain soundness ratings for story set 5, we again ran SME on the above stories, this time using the true-analogy (TA) match rules. The output of SME for the TA task is given in Figure 4. Notice that "Zerdia the country" (evaluation = 22.4) was found to be a better analogical match to the original Karla story than "Zerdia the eagle" (evaluation = 16.8). Thus, according to Gentner's systematicity principle, it should be judged more sound by human subjects.

Analogical Match from Karla to Zerdia the country (TA5).
Rule File: true-analogy.rules
Number of Match Hypotheses: 64
Number of GMaps: 1
Gmap #1: (CAUSE-PROMISE CAUSE-PROMISE) (SUCCESS-ATTACK SUCCESS-ATTACK) (HAPPINESS-HUNTER HAPPINESS-GAGRACH) (HAPPY-HUNTER HAPPY-GAGRACH) (REALIZE-DESIRE REALIZE-DESIRE) (DESIRE-FEATHERS DESIRE-SUPERCOMPUTER) (CAUSE-TAKE CAUSE-BUY) (OFFER-FEATHERS OFFER-SUPERCOMPUTER) (NOT-ATTACK NOT-ATTACK) (FAILED-ATTACK FAILED-ATTACK) (ATTACK-HUNTER ATTACK-GAGRACH) (TAKE-FEATHERS BUY-SUPERCOMPUTER) (PROMISE-HUNTER PROMISE) (CAUSE-OFFER CAUSE-OFFER) (FOLLOW-REALIZE FOLLOW-REALIZE) (NOT-HAS-FEATHERS NOT-USE-SUPERCOMPUTER) (CAUSE-HAPPY CAUSE-HAPPY) (HAS-FEATHERS USE-SUPERCOMPUTER) (CAUSE-FAILED-ATTACK CAUSE-FAILED-ATTACK)
Emaps: (HIGH23 HIGH17) (FEATHERS20 SUPERCOMPUTER14) (CROSS-BOW21 MISSILES15) (HUNTER19 GAGRACH13) (KARLA1 ZERDIA12) (FAILED22 FAILED16)
Weight: 22.362718

Analogical Match from Karla to Zerdia the eagle (MA5).
Rule File: true-analogy.rules
Number of Match Hypotheses: 47
Number of GMaps: 1
Gmap #1: (PROMISE-HUNTER PROMISE) (DESIRE-FEATHERS DESIRE-FEATHERS) (TAKE-FEATHERS TAKE-FEATHERS) (CAUSE-OFFER CAUSE-OFFER) (OFFER-FEATHERS OFFER-FEATHERS) (HAS-FEATHERS HAS-FEATHERS) (REALIZE-DESIRE REALIZE-DESIRE) (SUCCESS-ATTACK SUCCESS-ATTACK) (NOT-ATTACK NOT-ATTACK) (FOLLOW-SEE-ATTACK FOLLOW-SEE) (SEE-KARLA SEE-ZERDIA) (FAILED-ATTACK SUCCESSFUL-ATTACK) (CAUSE-TAKE CAUSE-TAKE) (ATTACK-HUNTER ATTACK-SPORTSMAN)
Emaps: (FAILED22 TRUE11) (KARLA1 ZERDIA7) (HUNTER19 SPORTSMAN8) (FEATHERS20 FEATHERS9) (CROSS-BOW21 CROSS-BOW10)
Weight: 16.816630

Figure 4: SME's Analysis of Story Set 5, Using the TA Rules.
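Since the evaluation scores are unnormalized, the only safe reading of these four numbers is the direction of the difference within each rule set. The short Python snippet below makes that reading explicit; it simply restates the figures' weights and is not part of SME.

    # Evaluation scores from Figures 3 and 4 (same base story, Karla).
    scores = {
        ("MA-rules", "TA5"): 6.411672,
        ("MA-rules", "MA5"): 7.703668,
        ("TA-rules", "TA5"): 22.362718,
        ("TA-rules", "MA5"): 16.816630,
    }

    for rules in ("MA-rules", "TA-rules"):
        ta, ma = scores[(rules, "TA5")], scores[(rules, "MA5")]
        winner = "TA5" if ta > ma else "MA5"
        # Only the ordering within one rule set is meaningful.
        print(f"{rules}: {winner} preferred ({ta:.1f} vs. {ma:.1f})")
    # MA-rules: MA5 preferred (6.4 vs. 7.7)   -> the accessibility prediction
    # TA-rules: TA5 preferred (22.4 vs. 16.8) -> the soundness prediction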
4.3 Observation versus Prediction

Tables 2 and 3 show the empirical as well as computational results for the five stories used in our simulation. Table 2 provides soundness ratings along with SME's evaluation scores when TA match rules were used. The +'s in the columns labeled "TA > MA?" indicate a higher evaluation score for the true analogy than for the mere-appearance match, by our subjects (column 4) or by SME (column 7). Here, as in Table 3, the results from SME should be read only to establish the direction of the difference: whether TA or MA receives a higher evaluation score within the same story set.⁵ For example, we cannot say that story 9 is rated as being a more sound analogy than story 18 simply because SME gives story 9 a higher score. Comparing columns 4 and 7, we see that, using analogy rules, SME was able to qualitatively match human soundness preferences quite well.

⁵Recall that the evidence score used here can only be used to compare matches that have the same base domain. Therefore it is meaningful to compare scores across the rows, but not down the columns.

Table 3 shows the results from the human subjects' recall task, along with SME's evaluation scores using MA match rules. Again, SME was able to duplicate human performance as indicated by the +'s in the "MA > TA?" columns.

Note that in Table 2 SME gives its highest evaluation to the true analogy in every case but one: in story 21 the MA match wins over the TA match. SME's performance on story 21 under true analogy rules concerned us, since we had expected the TA match to win over the MA match in every case. However, when we looked more closely at the human data, we discovered that the human subjects also broke their usual pattern: when rating this story set they failed to show their usual preference for analogy over mere appearance in soundness.
As Table 2 shows, the difference between TA and MA is significant for the other four story sets, but nonsignificant for story set 21. Examination of the stories revealed that we had erred in constructing this set: the "true analogy" target required a many-to-one object mapping with the base. Both our human subjects and our simulation had reacted to this inconsistency by giving the TA match a much lower than average score.

Table 2: SME Run as a True Analogy Matcher. A "+" indicates the difference is significant and "?" indicates the difference is non-significant, as determined by a t-test. Recall that the differences between SME's evaluation scores are only useful as <0 or >0; they cannot be compared across rows. [The table lists the five story sets - including 5 (Karla, hawk), 17 (pioneers) and 21 (Acme, IRS) - with the human subjects' soundness ratings (TA, MA, TA > MA?) and SME's evaluation scores as a true analogy matcher (TA, MA, TA > MA?); the individual cell values are not recoverable here.]

Table 3: SME Run as a Mere Appearance Matcher. [The table lists the same story sets with the human subjects' proportion of base stories recalled and SME's evaluation scores as a mere-appearance matcher; recoverable fragments include recall proportions such as .44 (MA) versus .11 (TA), and SME scores such as 11.00 versus 7.59 (+) and 7.75 versus 6.71 (+).]

5 Discussion

The results of comparing SME with human performance are promising. First, psychological evidence indicates that people use systematicity and consistency to rate the soundness of a match. SME replicates this pattern in analogy mode. Second, access in people is governed by surface properties. As predicted, SME replicates human access patterns when in mere-appearance mode. In fact, we have tried SME on over 40 different analogies (including those cited here), and it rapidly produced humanly plausible results on all of them. The results of SME qua "ideal subject" on these analogical tasks provide strong convergent evidence for the Structure-Mapping theory.

SME is extremely representation-sensitive. We believe that this is psychologically plausible, in that human analogical processing is limited by the reasoner's representations. Unfortunately, it raises the spectre of tailoring the representations to get desirable results. We have tried to reduce tailorability by several routes. First, we have tested SME with representations produced by AI reasoning programs which were not designed for analogical reasoning (Falkenhainer, et al., 1986). Second, when hand-coding is necessary (as in these studies), we used several cross-checks. First, representation conventions were defined in advance. Second, we sometimes used several independent encoders and compared results. Third, we told the encoders nothing about the human results. The results of story set 21 suggest we were somewhat successful. At first it appeared that SME's low evaluation of the TA match was a bug. Only later, when examining the human data, did we discover that the same pattern held there.

Although several AI analogy programs exist (e.g., Winston, Carbonell, Kedar-Cabelli), few are intended as cognitive simulations (exceptions include Burstein 1983, Pirolli & Anderson, 1985). To our knowledge, no simulation has been successfully compared as extensively with human performance as SME. Moreover, we know of no other general-purpose matcher which successfully simulates two distinct kinds of human similarity. Our results suggest that the principles of Structure-Mapping can provide a detailed account of human analogical processing. Using these principles, it appears that SME's architecture provides considerable leverage for cognitive modeling.
Acknowledgements

The authors wish to thank Mary Jo Rattermann for providing the results of her psychological studies. Ken Forbus and Gordon Skorstad provided valuable help on prior drafts of this paper.

References

Anderson, J., The Architecture of Cognition, Harvard University Press, Cambridge, Mass, 1983.

Burstein, M. H., "Concept formation by incremental analogical reasoning and debugging", Proceedings of the 1983 International Machine Learning Workshop, Monticello, IL, June 1983.

Carbonell, J. G., "Learning by Analogy: Formulating and Generalizing Plans From Past Experience," in Machine Learning: An Artificial Intelligence Approach, R. S. Michalski, J. G. Carbonell, and T. M. Mitchell (Eds.), Morgan Kaufman, 1983a.

Falkenhainer, B., K. D. Forbus, D. Gentner, "The Structure-Mapping Engine," Proceedings AAAI, August, 1986.

Gentner, D., "The Structure of Analogical Models in Science", BBN Tech. Report No. 4451, Cambridge, MA., Bolt Beranek and Newman Inc., 1980.

Gentner, D., "Structure-Mapping: A Theoretical Framework for Analogy", Cognitive Science 7(2), 1983.

Gentner, D., "Analogical Inference and Analogical Access", to appear in A. Prieditis (Ed.), Analogica: Proceedings of the First Workshop on Analogical Reasoning, Pitman Publishing Co., 1987 (in press).

Gentner, D., & R. Landers, "Analogical Reminding: A Good Match is Hard to Find". In Proceedings of the International Conference on Systems, Man and Cybernetics. Tucson, Arizona, 1985.

Kedar-Cabelli, S., "Purpose-directed analogy", Proceedings of the Seventh Annual Conference of the Cognitive Science Society, 1985.

Pirolli, P. L., & Anderson, J. R., "The role of learning from examples in the acquisition of recursive programming skills", Canadian Journal of Psychology 39, 240-272, 1985.

Rattermann, M. J. & D. Gentner, "Analogical Reminding", submitted to the Ninth Annual Conference of the Cognitive Science Society, July, 1987.

Van Lehn, K., "Felicity Conditions for Human Skill Acquisition: Validating an AI-based Theory", Xerox Palo Alto Research Center Technical Report CIS-21, 1983.

Winston, P. H., "Learning and reasoning by analogy", CACM 23(12), 1980.

Winston, P. H., "Learning new principles from precedents and exercises", Artificial Intelligence 19, 321-350, 1982.
Ingrid Zukerman
Computer Science Department, Monash University, Clayton, Victoria 3168, AUSTRALIA

Abstract

In a tutorial setting, we often hear expressions such as "The method we are about to discuss will help you solve ..." or "Let us consider a subject which demands some more practice," which are issued by a tutor to motivate a student to attend to forthcoming discourse. In this paper we model the meaning of these expressions in terms of their anticipated influence on the status of a listener's goals, and use these predictions to produce motivational expressions and embed them in computer generated discourse. In particular, we have recognized relations which are instrumental in determining a listener's motivational requirements in a hierarchical problem-solving domain. These ideas have been incorporated into a system called FIGMENT which generates commentaries on the solution of algebraic equations.

In a tutorial setting a student is constantly exposed to Technical Utterances in the form of explanations, definitions, descriptions and problems to be solved. Intelligent Tutoring Systems (Clancey 1979, Genesereth 1978, Sleeman and Brown 1982) and text generation systems (McKeown 1985, Appelt 1982, Mann and Moore 1982) have addressed the problem of determining the type of technical utterance to be presented and the information it should contain. However, in discourse produced by a human tutor we also notice the presence of expressions such as "however," "this technique demands some more practice," "as I said before" and "next," which are not part of the subject matter (Farnes 1973, Halliday and Hasan 1976, Longacre 1976, Hoey 1979, Winter 1968). These expressions, denoted Meta-Technical Utterances (MTUs), carry important information which assists the listener in assimilating the knowledge being transferred (Zukerman 1986, Zukerman and Pearl 1986).

In this paper we focus on one type of MTU, namely Motivational MTUs, which are used to motivate a student to perform prescribed tasks such as attending to forthcoming discourse and solving given problems. We attempt to gain insight into the mechanisms used by people to generate these MTUs by building and implementing a generative model of their meaning.* This model has been incorporated into an Intelligent Tutoring System called FIGMENT, enabling it to produce a variety of Motivational MTUs. For instance:

1. "We shall now consider a topic, namely quadratic equations, which we have not seen for a while" - This motivation is issued to prompt a student to practice a topic which he may be forgetting.

2. "This alternative serves to introduce the very important and interesting method of factoring out common factors" - A tutor uses this motivation to awaken interest in a new item of knowledge.

3. "This type of equation has been practiced a lot, but it still demands some more practice" - This motivation is generated to encourage a, probably tired, student to continue practicing a subject in which he lacks proficiency.

*Part of this work was performed at the Cognitive Systems Laboratory, Computer Science Department, University of California, Los Angeles, and supported by the National Science Foundation Grants IST 81-19045 and DCR 83-13875.

In the following section we present a goal-based taxonomy of Motivational MTUs. Then we examine the mechanism used by FIGMENT to generate them.

During the learning process a student is expected to exhibit the goal of mastering the subject matter.
In addition, a typical student usually has a host of other goals, such as: achievement goals (passing a test, getting a good job), social goals (earning the respect of his peers, gaining the approval of the teacher), enjoyment goals (remaining interested and amused during lectures, being able to rest), etc. (see Schank and Riesbeck 1981).

At any point in time, a goal is either non-existent or it can exhibit varying degrees of activity. A goal is non-existent if the entities involved are not represented in the listener's memory. For example, a person who has never heard of university cannot have a goal of studying there. A goal may become active due to the occurrence of an external event, e.g., if we find out there is a new movie featuring our favourite actor, our goal of seeing it becomes active; or by gradual build-up over a period of time, e.g., if a student has been studying for quite a while, his enjoyment goal of being able to rest is strengthened. The level of activity of a goal may decrease over a period of time if it was not reactivated and competing goals became active. In addition, a goal may become inactive if the listener believes that it has been accomplished.

The level of activity of a goal also depends on the status of goals enabled by it, where the goal-enablement relationship is defined as follows: goal G1 enables goal G2 if the listener believes that the fulfillment of G1 increases the probability of attaining G2. Thus, if G1 enables G2 and G2 is active, then G1 is active as well. For example, if the listener has an active goal of getting a good job, and he believes that knowing how to program will help him get such a job, then the goal of knowing how to program is active too.

In a learning environment, a student is required to perform different tasks such as paying attention, studying and solving given problems. We recognize the following relationships between tasks and goals:

• Task T enables goal G if the listener believes that by performing T he will increase the probability of achieving G. The proverb "practice makes perfect" illustrates this relationship; and

• Goal G immediately follows task T, if T enables G and the completion of T ensures the immediate attainment of G. For instance, the goal room is clean immediately follows the task cleaning room.

We define a student to be motivated to perform a task T, if there exists a goal G such that G is active and T enables G. If a student does not exhibit a goal for which both conditions are satisfied, a tutor will try to remedy this situation by producing a Motivational MTU. A Task-based Motivational MTU establishes an enablement relationship between a task and an already active goal, whereas a Goal-based Motivational MTU activates an inactive or non-existent goal.

A. Task-Based Motivation

When a given task entails a considerable amount of time and effort, the enablement relationship between this task and a student's active goal may weaken, causing the student to become discouraged. In this case, a task-based Motivational MTU is usually generated. For instance: "If you ..."
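Before turning to goal-based motivation, note that the enablement definitions above amount to a small graph computation: activity propagates down enablement links, and a task motivates whenever it enables an active goal. The following Python sketch is an illustrative rendering only; all names are invented and this is not FIGMENT's implementation.

    def active_goals(directly_active, enables):
        """A goal is active if it was activated directly, or if it enables
        an already active goal. enables: {g1: [g2, ...]} means g1 enables g2."""
        active = set(directly_active)
        changed = True
        while changed:
            changed = False
            for g1, targets in enables.items():
                if g1 not in active and any(g2 in active for g2 in targets):
                    active.add(g1)        # G1 enables an active G2 => G1 active
                    changed = True
        return active

    def motivated(task, task_enables, active):
        """Motivated iff some goal the task enables is active; otherwise a
        Motivational MTU is called for."""
        return any(g in active for g in task_enables.get(task, []))

    active = active_goals({"get-good-job"},
                          {"know-programming": ["get-good-job"]})
    print(motivated("study-programming",
                    {"study-programming": ["know-programming"]}, active))
    # -> True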
B. Goal-Based Motivation

Quite often, a listener is aware of the enablement relationship between a task and a goal. In this case, the speaker assumes that a listener's lack of motivation stems from the absence of an active goal, and generates a goal-based Motivational MTU. In the following subsections we shall consider three types of goal-based motivations: Direct, Indirect and Enablement.

1. Direct Goal Activation

A presently inactive goal may be directly activated by stating the degree of attainment of this goal or lack thereof. The following types of Motivational MTUs are commonly used to directly activate a student's goal of mastering the subject matter:

Knowledge Rectification - Occasionally, a student believes that he masters a particular information item even though his knowledge is imperfect. In this case, his goal of mastering the subject matter can be directly activated by stating or implying the error of his ways. For instance, "There is a better way to do this" or "I don't think I made myself clear";

Knowledge Preservation - If a certain item of information has not been encountered for some time, a student's skill may have deteriorated. In this case, he could be motivated to pay attention to this item by means of a knowledge preservation Motivational MTU such as "This equation enables us to practice a technique, which we have not encountered for a while"; and

Knowledge Incrementation - If a particular topic has been practiced recently, we can safely assume that the expertise of a student will only increase with additional practice. In this case, if the performance of the student leaves something to be desired, a tutor can use a knowledge incrementation motivation like the following: "Let us continue with the following type of equation, which demands some more practice."

2. Indirect Goal Activation

A non-existent or inactive goal of mastering a given item of knowledge may be indirectly activated by highlighting its positive attributes, and thereby arousing the listener's curiosity. The following Motivational MTU illustrates this type of goal activation: "Let us examine a very important and interesting technique."

3. Enablement Goal Activation

An inactive or non-existent goal may be activated by communicating to the listener that this goal enables another, already active, goal. For instance, if a listener exhibits the goal of going to the movies, the goal of completing the homework can be activated in the following manner: "If your homework isn't finished you can't go to the movies." The same effect can be obtained by directly referring to a task which is immediately followed by the goal to be activated. In our example, the task in question would be to do the homework.

We divide the Motivational MTUs which activate the goal of mastering the subject matter by enablement into two subclasses, based on the expected usage of the mastered knowledge:

Application - The acquired knowledge is directly applied in order to attain an enabled goal. Examples are: "You can use this technique to beat your friends at tic-tac-toe" and "A more efficient way to solve this equation is by ..." [enabled goal: solve problem in area of interest]. This type of motivation is often combined with a knowledge rectification motivation which directly activates the enabled goal; and

Precondition - The acquired knowledge is not directly used, but is considered an obstacle that needs to be overcome prior to the attainment of an enabled goal. The precondition may either be fictitious or factual. A fictitious precondition exists only in the mind of the speaker and the listener.
It is expressed by Motivational MTUs such as "You can't watch TV unless you do your homework" [enabled goal: enjoyment] and "I would like you to solve the following equation" [enabled goal: social - gain the teacher's approval]. This type of motivation is usually generated as a last resort, and its effectiveness depends on the speaker's authority. A factual precondition represents a situation which takes place in real life. It is expressed by means of Motivational MTUs such as "If you don't know quadratic equations you won't do well in your final" [enabled goal: achievement (pass exams)] and "Firefighters also need to know how to read" [enabled goal: achievement (get a desired job)].

In an interactive environment characterized by a tutor's active participation, a student is usually presented with various rather short tasks such as solving a few exercises or listening to an explanation. Thus, in general, he is aware of the enablement relationship between a task and the goal of mastering a given item of knowledge, and an anticipated lack of motivation can be attributed to the absence or inactivity of this goal. This situation calls for the generation of a goal-based Motivational MTU.

An effective human tutor ascertains the need for a Motivational MTU by using some models of cognitive processes triggered in a student upon encountering a technical utterance. These models represent a teacher's perception of the learning habits of a typical student. FIGMENT uses a similar strategy to determine whether a Motivational MTU is required. It predicts whether a student is motivated to perform a task by consulting a simplified model of the effect of the state of the discourse on the status of the student's goal of mastering the subject matter (see Figure 1). If, according to this model, the goal in question is either non-existent or inactive, the system concludes that the student is unmotivated, and records a requirement for a Motivational MTU.

Figure 1: Process for Predicting the Status of the Goal of Mastering the Subject Matter. [The figure shows a decision tree: if the technical utterance was not previously encountered, the goal is non-existent; if it was not practiced lately, the goal is inactive (remote); if the student believes it mastered, the goal is inactive (achieved); otherwise the goal is active if the student has recently been motivated to perform the task, and inactive (deteriorated) if not.]

According to this model, a tutor considers a student's goal of mastering an unknown item of knowledge to be non-existent. The goal of mastering an item of knowledge which a student believes has already been mastered is presumed to be inactive. The goal of mastering a heavily practiced item for which a student has not been recently motivated deteriorates, eventually becoming inactive, i.e., it is superseded by enjoyment goals. Finally, the goal of mastering an item of knowledge which hasn't been seen for some time is inactive due to its remoteness, i.e., the goal of mastering a different item of knowledge has probably taken precedence. The questions in the decision nodes of the procedure depicted in Figure 1 are answered by applying procedures which take into consideration the state of the discourse and a student's talent, diligence and knowledge status.
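A compact way to read Figure 1, together with the selection directives that follow, is as a decision procedure plus a lookup table. The Python below is an illustrative paraphrase; the set arguments (encountered, practiced_lately, and so on) are invented stand-ins for FIGMENT's tests over the discourse state and the student model.

    def goal_status(item, encountered, practiced_lately,
                    believed_mastered, recently_motivated):
        """The decision tree of Figure 1, read top to bottom."""
        if item not in encountered:
            return "non-existent"
        if item not in practiced_lately:
            return "inactive (remote)"           # another topic took precedence
        if item in believed_mastered:
            return "inactive (achieved)"
        if item in recently_motivated:
            return "active"
        return "inactive (deteriorated)"         # superseded by enjoyment goals

    # Candidate MTU types per goal status, per the directives in the next
    # paragraphs (the labels are shorthand, not FIGMENT's vocabulary).
    MTU_DIRECTIVES = {
        "non-existent":            ["indirect", "application-enablement"],
        "inactive (remote)":       ["knowledge-preservation"],
        "inactive (deteriorated)": ["knowledge-incrementation",
                                    "precondition-enablement"],
        "inactive (achieved)":     ["knowledge-rectification",
                                    "fictitious-enablement"],
    }

    status = goal_status("quadratic-equations",
                         {"quadratic-equations"}, set(), set(), set())
    print(status, "->", MTU_DIRECTIVES[status])
    # inactive (remote) -> ['knowledge-preservation']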
After the status of the goal of mastering the subject matter has been ascertained, FIGMENT selects a Motivational MTU by applying the following directives, and taking into account the student's attributes as well as rhetorical considerations:

• A non-existent goal may be activated by means of an indirect goal activation and/or an application-enablement goal activation. For example, the Motivational MTU "Let us now consider a rather important technique, which enables us to solve some equations of higher degree" combines both types of activations.

• A goal which is inactive due to its remoteness may be activated by means of a direct knowledge preservation motivation, e.g., "Let us go over a technique which we haven't seen for a while." This type of Motivational MTU may be accompanied by an indirect goal activation or an application-enablement goal activation, if the goal's activity level is extremely low, i.e., the student no longer recalls the reason for studying the subject under consideration. In this case, FIGMENT may produce a Motivational MTU like the following: "Let us consider an extremely important and useful technique, which we haven't practiced for quite some time."

• A goal whose inactivity stems from its deterioration may be activated either by means of a direct knowledge incrementation motivation or by a precondition enablement motivation. In general, the latter is used only if the former is inapplicable. Like the knowledge preservation motivation, a direct knowledge incrementation motivation may be accompanied by an indirect or application-enablement goal activation, yielding a Motivational MTU such as "The following method, which, as you probably recall, enables us to solve many problems in mechanics, demands some more practice."

• A factual precondition-enablement goal activation is preferred to a fictitious one, which is generated as a last resort. Notice, however, that due to the nature of the interaction between an automated Tutoring System and a student, the only applicable fictitious precondition motivation is one which enables a social goal. For instance, "I would like you to solve one more equation."

• Finally, a goal which the student believes has been attained can either be reactivated by negating this belief, or, should this be unsuitable, by means of a fictitious enablement motivation. The following direct knowledge rectification MTU illustrates the former: "Can you think of another way to solve this equation?"

1. Motivation Relations in a Hierarchical Problem-Solving Domain

In a hierarchical problem-solving domain the subject matter is typically composed of a sequence of problems interleaved with declarative knowledge. Each problem belongs to a particular topic, and is usually accompanied by one or more solution alternatives. Each alternative contains a sequence of rules. In this case, a student's motivation to attend to a particular piece of information not only depends on the level of activity of his goal of mastering this information, but also on the status of the goal of mastering other items of knowledge in the hierarchy. This dependency is expressed by means of the following relations (see Figure 2):

Inheritance - The goal of mastering a topic or equation is transmitted to the solution alternatives. For example, if a student has acquired the goal of solving a given problem, this goal shall remain active until the problem is solved, motivating the student to attend to various solution attempts.
Motivation Relations in a Hierarchical Problem-Solving Domain

In a hierarchical problem-solving domain the subject matter is typically composed of a sequence of problems interleaved with declarative knowledge. Each problem belongs to a particular topic, and is usually accompanied by one or more solution alternatives. Each alternative contains a sequence of rules. In this case, a student's motivation to attend to a particular piece of information not only depends on the level of activity of his goal of mastering this information, but also on the status of the goal of mastering other items of knowledge in the hierarchy. This dependency is expressed by means of the following relations (see figure 2):

Inheritance - The goal of mastering a topic or equation is transmitted to the solution alternatives. For example, if a student has acquired the goal of solving a given problem, this goal shall remain active until the problem is solved, motivating the student to attend to various solution attempts. Similarly, lack of interest in a given equation is propagated to its solution alternatives.

Upwards propagation - The goal of mastering a rule or an equation can be used to motivate a listener to attend to higher levels in the hierarchy. Unlike the previous relationship, this type of propagation applies only to active goals. For instance, FIGMENT activates the goal of mastering the substitution method, and propagates it upwards to motivate the student to attend to a given equation, by means of the following indirect Motivational MTU: "The following equation enables us to introduce the very important method of substitution." This motivation, in turn, may be inherited by other alternatives, by affixing the following text to this MTU: "But first, let us consider other ways to solve this equation, for comparison purposes." A knowledge preservation or incrementation motivation may be propagated upwards if it is shared by all the solution alternatives, yielding a sentence such as: "This equation enables us to practice a couple of methods, which we have not seen for a while."

[Figure 2: Motivation Relations in a Problem-Solving Hierarchy. A tree with a topic at the root, an equation beneath it, and solution alternatives containing rule sequences at the leaves; inheritance is transmitted downwards, while upwards propagation carries active goals from solved items toward the root.]

FIGMENT determines whether a student needs to be motivated to attend to a commentary on an algebraic equation by applying the goal-status determination process presented in figure 1 first to the root of the problem-solving hierarchy, namely the topic, and then to the equation. Next, it uses the inheritance relation to ascertain whether the student is motivated to attend to each solution alternative, and applies the goal-status determination process to a typical sequence of rules in each of the alternatives for which the student is unmotivated. Finally, if the goal of mastering the given equation or the typical sequence of rules in all the alternatives is active, the upwards propagation relation is used to cancel the motivational requirements recorded for their ancestors.

After ascertaining the status of the student's goal of mastering each item in the hierarchy, FIGMENT applies the directives presented above to the items which the student is unmotivated to study, in order to determine an adequate type of Motivational MTU for each of these items. These actions yield a structure containing suggestions for Motivational MTUs, such as the one presented in table 1 for the linearized hierarchy in figure 3.

Table 1: Types of Suggested Motivational MTUs for Sample Input
  TOPIC - direct (knowledge preservation)
  EQUATION - fictitious precondition enablement (social)
  ALTERNATIVE 1 - {rule 1: REMOVE PARENTHESES - fictitious enablement (social)}, {rule 2: COLLECT TERMS - direct (knowledge incrementation)}
  ALTERNATIVE 2 - {rule 1: SUBSTITUTE - indirect (highlight attributes)}

According to this structure, a knowledge preservation motivation may be used to motivate a student to attend to the topic of quadratic equations, and the only motivation applicable to the equation is a fictitious social motivation. The typical sequence in the first alternative consists of two rules, which require a social and a knowledge incrementation MTU, respectively, and the typical sequence in the last alternative contains the substitution rule, for which an indirect goal activation is advised.
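The traversal just described can be sketched as a recursive walk over the hierarchy, reusing predict_goal_status and candidate_mtus from the sketches above. The dict-based node structure and the exact cancellation rule are simplifying assumptions.

```python
# Toy traversal: inheritance spares the children of active goals, and
# upwards propagation cancels an ancestor's requirement when the goals
# below it are all active. The node structure is an invented assumption.

def suggest_mtus(node, discourse, student):
    suggestions = {}
    def visit(n):
        status = predict_goal_status(n["item"], discourse, student)
        if status == "active":
            return True                    # children inherit the active goal
        suggestions[n["item"]] = candidate_mtus(status)
        child_states = [visit(c) for c in n.get("children", [])]
        if child_states and all(child_states):
            suggestions.pop(n["item"])     # upwards propagation cancels it
            return True
        return False
    visit(node)
    return suggestions
```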
TOPIC: quadratic
EQUATION: (x-3)^2 - 4(x-3) - 12 = 0
(ALTERNATIVE 1)
RULE: remove-parentheses
RULE: collect-terms
RESULT: x^2 - 10x + 9 = 0
CONTINUE
(ALTERNATIVE 2)
RULE: substitute (object y) (for x-3)
RESULT: y^2 - 4y - 12 = 0
RULE: solve-for (object y)
RESULT: y=6 or y=-2
RULE: substitute-back (object x-3) (for y)
RESULT: x-3 = 6 or x-3 = -2
RULE: transfer-term
RESULT: x=9 or x=1
FINISH

Figure 3: Sample Input to FIGMENT's Motivation Generation Component

FIGMENT completes the motivation generation process by selecting a subset of the suggested Motivational MTUs. The selection process takes into consideration inheritance and upwards propagation relations, and is guided by the principles of implying the least possible ignorance in the student and, at the same time, exhibiting knowledge about the situation at hand. Thus, FIGMENT will generally favour an application-enablement or indirect motivation over a direct knowledge-status related motivation, and prefer the latter to a precondition-enablement motivation. In addition, among direct motivations, preference is given to one that addresses the lowest possible information item in the hierarchy.

For the above presented structure, in most cases the system will highlight the attributes of the substitution method and propagate it upwards, producing a sentence such as "The following equation serves to introduce a very interesting technique, namely substitution." The first alternative then inherits this motivation, causing the following text to be appended: "but first, let us consider another alternative for comparison purposes." In the rest of the cases, the system will generate a knowledge preservation Motivational MTU for the topic, yielding the following sentence: "Let us now consider the topic of quadratic equations, which we have not seen for some time." Notice that this Motivational MTU accounts only for the first solution alternative, and a separate motivation is produced for the second one, e.g., "Let us now consider another alternative. This approach enables us to introduce the technique of substitution, which is very interesting."

In a learning environment a speaker produces motivational expressions based on his perception of the state of the discourse and the listener's attributes. This paper presents a motivation-generating mechanism which follows this paradigm and can be readily incorporated into an Intelligent Tutoring System. Specifically, the paper demonstrates the generation of motivational expressions by consulting a simplified model of the effect of the state of the discourse on the listener's goals. Motivational expressions produced in this manner not only encourage the listener to attend to forthcoming discourse, but enhance the credibility of an Intelligent Tutoring System.
Pengi: An Implementation of a Theory of Activity

Philip E. Agre and David Chapman
MIT Artificial Intelligence Laboratory
545 Technology Square
Cambridge, MA 02139

Abstract

AI has generally interpreted the organized nature of everyday activity in terms of plan-following. Nobody could doubt that people often make and follow plans. But the complexity, uncertainty, and immediacy of the real world require a central role for moment-to-moment improvisation: before and beneath any planning ahead, one continually decides what to do now. Investigation of the dynamics of everyday routine activity reveals important regularities in the interaction of very simple machinery with its environment. We have used our dynamic theories to design a program, called Pengi, that engages in complex, apparently planful activity without requiring explicit models of the world.

I. Pengo

Let us distinguish two different uses of the word "planning".¹ AI has traditionally interpreted the organized nature of everyday activity in terms of capital-P Planning, according to which a smart Planning phase constructs a Plan which is carried out in a mechanical fashion by a dumb Executive phase. People often engage in lower-case-p planning. Though a plan might in some sense be mental, better prototypes are provided by recipes, directions, and instruction manuals. Use of plans regularly involves rearrangement, interpolation, disambiguation, and substitution. Before and beneath any activity of plan-following, life is a continual improvisation, a matter of deciding what to do now based on how the world is now. Our empirical and theoretical studies of activity have led us to question the supposition that action derives from the Execution of Plans and the corresponding framework of problem solving and reasoning with representations. We observe that real situations are characteristically complex, uncertain, and immediate. We have shown in [Chapman, 1985] that Planning is inherently combinatorially explosive, and so is unlikely to scale up to realistic situations which take thousands of propositions to represent. Most real situations cannot be completely represented; there isn't time to collect all the necessary information. Real situations are rife with uncertainty; the actions of other agents and processes cannot be predicted. At best, this exponentially increases the size of a Planner's search space; often, it may lose the Planner completely. Life is fired at you point blank: when the rock you step on pivots unexpectedly, you have only milliseconds to react. Proving theorems is out of the question.

¹Agre has been supported by a fellowship from the Fannie and John Hertz Foundation. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research has been provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505, in part by National Science Foundation grant MCS-8117633, and in part by the IBM Corporation. The views and conclusions contained in this document are those of the authors, and should not be interpreted as representing the policies, either expressed or implied, of the Department of Defense, of the National Science Foundation, or of the IBM Corporation.
Rather than relying on reasoning to intervene between perception and action, we believe activity mostly derives from very simple sorts of machinery interacting with the immediate situation. This machinery exploits regularities in its interaction with the world to engage in complex, apparently planful activity without requiring explicit models of the world. This paper reports on an implementation in progress of parts of our more general theory of activity [Agre, 1985a; Agre, 1985b; Agre, in preparation; Chapman and Agre, 1987; Chapman, 1985].

We are writing a program, Pengi, that plays a commercial arcade video game called Pengo. Pengo is played on a 2-d maze made of unit-sized ice blocks. The player navigates a penguin around in this field with a joystick. Bees chase the penguin and kill him if they get close enough. The penguin and bees can modify the maze by kicking ice blocks to make them slide. If a block slides into a bee or penguin, it dies. A snapshot of a Pengo game appears in Figure 1. In the lower left-hand corner, the penguin faces a bee across a block. Whoever kicks the block first will kill the other.

[Figure 1: A Pengo game in progress.]

Although Pengo is much simpler than the real world, it is nonetheless not amenable to current or projected Planning techniques because it exhibits the three properties of complexity, uncertainty, and real-time involvement. With several hundred objects of various sorts on the screen, some moving, representing any situation would require well over a thousand propositions, too many for any current planner. The behavior of the bees has a random component and so is not fully predictable. Real-time response is required to avoid being killed by bees. Still, Pengo is only a toy. There are no real vision or manipulation problems; it's a simulation inside the computer. Nothing drastically novel ever happens. This makes it a tractable first domain for demonstrating our ideas.

In a typical Pengo game, the penguin will run to escape bees, hunt down bees when it has the advantage, build traps and escape routes, maneuver bees into corners, and collect "magic blocks" (shown as concentric squares in Figure 1) for completing the "magic square" structure that wins the game. Naturally we ascribe the player's seeming purposefulness to its models of its environment, its reasoning about the world, and its planful efforts to carry out its tasks. But as with Simon's ant, the complexity of the player's activity may be the result of the interaction of simple opportunistic strategies with a complex world. Instead of sticking to a rigid plan, Pengi lives in the present, continually acting on its immediate circumstances. It happens upon each situation with only a set of goals and a stock of skills. It can take advantage of unexpected opportunities and deal with unexpected contingencies, not because they've been written into a script, but because they are apparent in the situation.

II. Interactive Routines

Routines are patterns of interaction between an agent and its world. A routine is not a plan or procedure; typically it is not represented by the agent. An agent can reliably enter into a particular routine without representing it because of regularities in the world. For example, imagine the penguin running from a bee. The penguin will run as far as it can, until it runs into a wall made of blocks. Then it will have to kick its way through the wall.
Then it will run some more. Then it will hit another wall. This process could be described by a procedure with two nested loops: running until it hits something, kicking the obstacle, and repeating. But this same pattern of activity could equally well arise from a pair of rules: (R1) when you are being chased, run away; (R2) if you run into a wall, kick through it. These rules don't represent the iteration; the loop emerges as a result of the interaction of the rules with the situation. Causality flows into the system from the world, drives the rules which choose what to do, resulting in action which changes the world, and back again into the system, which responds to the changes.

An agent executing a plan is inflexible: it has a series of actions to carry out, and it performs them one after another. But it sometimes happens that while a bee is pursuing the penguin, the bee is accidentally crushed by a block kicked by a different bee. A penguin controlled by an iterative procedure would then either continue running needlessly or have to notice that it had gone wrong and switch to executing a different procedure. An agent engaging in a routine is not driven by a preconceived notion of what will happen. When circumstances change, other responses become applicable; there's no need for the agent even to register the unexpected event. (R1) depends on being chased; if there is no bee chasing, it is no longer applicable, and other rules, relevant perhaps to collecting magic blocks, will apply instead. Thus, routines are opportunistic, and therefore robust under uncertainty. Responses can be individually very simple, requiring almost no computation; this allows real-time activity.
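A toy rendering of rules (R1) and (R2) makes the emergent loop concrete. The World class and its dynamics are invented stand-ins for the game and the perceptual system; no loop over actions is coded in the rules themselves.

```python
# Toy illustration of (R1) and (R2): the run/kick/run cycle is never
# represented; it emerges from matching two rules against the world.

class World:
    """Invented stand-in for the game and the perceptual system."""
    def __init__(self):
        self.chased, self.wall_ahead = True, False
    def apply(self, action):
        if action == "run-away":           # running eventually meets a wall
            self.wall_ahead = True
        elif action == "kick-through-wall":
            self.wall_ahead = False

def act(w):
    if w.chased and w.wall_ahead:
        return "kick-through-wall"         # (R2)
    if w.chased:
        return "run-away"                  # (R1)
    return "hang-out"

w, trace = World(), []
for _ in range(6):
    a = act(w)
    trace.append(a)
    w.apply(a)
# trace alternates run-away / kick-through-wall: the iteration lives in
# the rule-world interaction, not in the rules.
```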
III. Indexical-Functional Aspects

Pengi's activity is guided by relevant properties of the immediate situation we call indexical-functional aspects, or "aspects" for short. Registering and acting on aspects is an alternative to representing and reasoning about complex domains, and avoids combinatorial explosions. A traditional problem solver for the Pengo domain would represent each situation with hundreds or thousands of such representations as (AT BLOCK-213 427 991), (IS-A BLOCK-213 BLOCK), and (NEXT-TO BLOCK-213 BEE-23). These representations do not make reference to the penguin's situation or goals. Instead of naming each individual with its own gensym, Pengi employs indexical-functional entities, such as the following, which are useful to find at times when playing Pengo:

- the-block-I'm-pushing
- the-corridor-I'm-running-along
- the-bee-on-the-other-side-of-this-block-next-to-me
- the-block-that-the-block-I-just-kicked-will-collide-with
- the-bee-that-is-heading-along-the-wall-that-I'm-on-the-other-side-of

As we will see later, the machinery itself does not directly manipulate names for these entities. They are only invoked in particular aspects. If an entity looks like a hyphenated noun phrase, an aspect looks like a hyphenated sentence. For example:

- the-block-I'm-going-to-kick-at-the-bee-is-behind-me (so I have to backtrack)
- there-is-no-block-suited-for-kicking-at-the-bee (so just hang out until the situation improves)
- I've-run-into-the-edge-of-the-screen (better turn and run along it)
- the-bee-I-intend-to-clobber-is-closer-to-the-projectile-than-I-am (dangerous!)
- ...-but-it's-heading-away-from-it (which is OK)
- I'm-adjacent-to-my-chosen-projectile (so kick it)

These aspects depend on the Pengo player's circumstances; this is the indexicality of aspects. At any given time, Pengi can ignore most of the screen because effects propagate relatively slowly. It's important to keep track of what's happening around the penguin and sometimes in one or two other localized regions. When Pengi needs to know where something is, it doesn't look in a database, it looks at the screen. This eliminates most of the overhead of reasoning and representation. (The next section will describe how Pengi can find an entity in, or register some aspect of, a situation.)

Entities and aspects are relative to the player's purposes; they are functional. Each aspect is used for a specific purpose: it's important to register various aspects of the-bee-on-the-other-side-of-this-block-next-to-me, because it is both vulnerable (if the penguin kicks the block) and dangerous (because it can kick the block at the penguin). Which aspects even make sense depends on the sort of activity Pengi is engaged in. For example, when running away, it's important to find the-bee-that-is-chasing-me and the-obstacle-to-my-flight and the-edge-I'll-run-into-if-I-keep-going-this-way; when pursuing, you should find the-bee-that-I'm-chasing and the-block-I-will-kick-at-the-bee and the-bee's-escape-route. Aspects are not defined in terms of specific individuals such as BEE-69. The-bee-that-is-chasing-me at one minute may be the same bee or a different one from the-bee-that-is-chasing-me a minute later. Pengi cannot tell the difference, but it doesn't matter because the same action is right in either case: run away or hit it with a block. Moreover, the same object might be two entities at different times, or even at the same time. Depending on whether you are attacking or running away, the same block might be a projectile to kick at the bee or an obstacle to your flight.

Avoiding the representation of individuals bypasses the overhead of instantiation: binding constants to variables. In all existing knowledge representation systems, from logic to frames, to decide to kick a block at a bee requires reasoning from some general statement that in every situation satisfying certain requirements there will be some bee (say SOME-BEE) and some block (SOME-BLOCK) that should be kicked at it. To make this statement concretely useful, you must instantiate it: consider various candidate bees and blocks and bind a representation of one of these bees (perhaps BEE-29) to SOME-BEE and a representation of one of the blocks (BLOCK-237) to SOME-BLOCK. With n candidate bees and m blocks, this may involve n x m work. Clever indexing schemes and control heuristics can help, but the scheme for registering aspects we present in the next section will always be faster. Entities are not logical categories because they are indexical: their extension depends on the circumstances. In this way, indexical-functional entities are intermediate between logical individuals and categories. Aspects make many cases of generalization free. If the player discovers in a particular situation that the-bee-on-the-other-side-of-this-block-next-to-me-is-dangerous because it can easily kick the block into the penguin, this discovery will apply automatically to other specific bees later on that can be described as the-bee-on-the-other-side-of-this-block-next-to-me.
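The contrast between instantiation and aspect registration can be sketched in a few lines. Everything here is invented for illustration: the helper predicates are passed in, and the aspect is a single boolean computed by perception relative to the agent.

```python
# Instantiation: bind names to candidate individuals -- with n bees and
# m blocks, n x m work before any action can be chosen.
def find_kick_instantiated(bees, blocks, me, adjacent, opposite_sides):
    for bee in bees:                       # BEE-29, BEE-30, ...
        for block in blocks:               # BLOCK-237, ...
            if adjacent(block, me) and opposite_sides(bee, me, block):
                return ("kick", block)

# Aspect registration: one indexical-functional boolean, injected by
# the perceptual system; no individuals are ever named.
def find_kick_aspectual(aspects):
    if aspects["bee-on-other-side-of-block-next-to-me"]:
        return "kick-the-block-next-to-me"
```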
IV. Machinery

We believe that a simple architecture interacting with the world can participate in most forms of activity. This architecture is made up of a central system and peripheral systems. The central system is responsible, loosely, for cognition: registering and acting on relevant aspects of the situation. The peripheral systems are responsible for perception and for effector control. Because routines and aspects avoid representation and reasoning, the central system can be made from very simple machinery.

We believe that combinational networks can form an adequate central system for most activity. The inputs to the combinational network come from perceptual systems; the outputs go to motor control systems. The network decides on actions that are appropriate given the situation it is presented with. Many nodes of the network register particular aspects. As the world changes, the outputs of the perceptual system change; these changes are propagated through the network to result in different actions. Thus interaction can result without Pengi maintaining any state in the central system.

V. Visual Routines

Aspects, like routines, are not datastructures. They do not involve variables bound to symbols that represent objects. Aspects are registered by routines in which the network interacts with the perceptual systems and with the world. The actions in these routines get the world and the peripheral systems into states in which the aspects will become manifest.

Shimon Ullman [Ullman, 1983] has developed a theory of vision based on visual routines, which are patterns of interaction between the central system and a visual routines processor (VRP). The VRP maintains several modified internal copies of the 2-D sketch produced by early vision. It can perform operations on these images such as coloring in regions, tracing curves, keeping track of locations using visual markers (pointers into the image), indexing interesting features, and detecting and tracking moving objects. The VRP is guided in what operations it applies to what images by outputs of the central network, and outputs of the VRP are inputs to the network. A visual routine, then, is a process whereby the VRP, guided by the network, finds entities and registers aspects of the situation, and finally injects them into the inputs of the network.

The first phase of the network registers aspects using boolean combinations of inputs from the VRP. Some visual routines are run constantly to keep certain vital aspects up to date; it is always important to know if there is a bee-that-is-chasing-me. Other routines are entered into only in certain circumstances. For example, when you kick the-block-that-is-in-my-way-as-I'm-running-away-from-some-bee, it is useful to find the-block-that-the-block-I-just-kicked-will-collide-with. This can be done by directing the VRP to trace a ray forward from the kicked block over free space until it runs into something solid, dropping a visual marker there, and checking that the thing under the marker is in fact a block. This is illustrated in Figure 2.

[Figure 2: Finding the-block-that-the-block-I-just-kicked-will-collide-with using ray tracing and dropping a marker. The two circle-crosses are distinct visual markers, the one on the left marking the-block-that-I-just-kicked and the one on the right marking the-block-that-the-block-I-just-kicked-will-collide-with.]
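The ray-tracing routine just described is simple enough to sketch directly. The grid encoding and cell labels below are assumptions; the point is the operation's shape: march over free space, drop a marker on the first solid cell, and verify it is a block.

```python
# Hedged sketch of the Figure 2 visual routine on a toy grid.
def find_collision_block(grid, start, direction):
    """Trace a ray from the kicked block; return the marker position of
    the-block-that-the-block-I-just-kicked-will-collide-with, or None."""
    (x, y), (dx, dy) = start, direction
    while True:
        x, y = x + dx, y + dy
        if not (0 <= y < len(grid) and 0 <= x < len(grid[0])):
            return None                    # ray left the screen
        if grid[y][x] != ".":              # first solid thing along the ray
            marker = (x, y)                # drop a visual marker here
            return marker if grid[y][x] == "#" else None   # "#" = block

grid = ["......",
        "..#...",          # a block two cells right of the kicked one
        "......"]
print(find_collision_block(grid, start=(0, 1), direction=(1, 0)))  # (2, 1)
```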
As another example, if the penguin is lurking behind a continuous wall of blocks (a good strategy) and a bee appears in front of the wall heading toward it, the-block-to-kick-at-the-bee can be found by extending a ray along the path of the bee indefinitely, drawing a line along the wall, and dropping a marker at their intersection. This is shown in Figure 3.

[Figure 3: Finding the-block-to-kick-at-the-bee when lurking behind a wall.]

VI. Action Arbitration

Actions are suggested only on the basis of local plausibility. Two actions may conflict. For example, if a bee is closing in on the penguin, the penguin should run away. On the other hand, if there is a block close to the penguin and a bee is on the other side, the penguin should run over to the block and kick it at the bee. These two aspects may be present simultaneously, in which case both running away and kicking the block at the bee will be suggested. In such cases one of the conflicting actions must be selected. In some cases, one of the actions should always take precedence over the other. More commonly, which action to take will depend on other aspects of the situation. In this case, the deciding factor is whether the penguin or the bee is closer to the block between them: whichever gets to it first will get to kick it at the other. Therefore, if the penguin is further from the block it should run away; otherwise it should run toward the block. This is not always true, though: for example, if the penguin is trapped in a narrow passage, running is a bad strategy; the ice block cannot be evaded. In this case, it is better to run toward the block in the hope that the bee will be distracted (as often happens); a severe risk, but better than facing certain death. On the other hand, if the block is far enough away, there may be time to kick a hole in the side of the passage to escape into. We see here levels of arbitration: an action is suggested; it may be overruled; the overruling can be overruled, or a counter-proposal be put forth; and so forth.

Action arbitration has many of the benefits of Planning, but is much more efficient, because it does not require representation and search of future worlds. In particular, a designer who understands the game's common patterns of interaction (its "dynamics") can use action arbitration to produce action sequencing, nonlinear lookahead to resolve goal interactions, and hierarchical action selection.
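A hedged sketch of this levelled arbitration, with the situation given as a dict of registered aspects (all names invented):

```python
# Sketch of levelled action arbitration: suggestions on local
# plausibility, a first-level decision, and a second level that can
# overrule it. The aspect names are illustrative assumptions.

def arbitrate(w):
    suggested = set()
    if w["bee-closing-in"]:
        suggested.add("run-away")
    if w["block-next-to-me-with-bee-behind-it"]:
        suggested.add("kick-block-at-bee")
    if {"run-away", "kick-block-at-bee"} <= suggested:
        # Level 1: whoever reaches the block first gets to kick it.
        choice = ("kick-block-at-bee" if w["closer-to-block-than-bee"]
                  else "run-away")
        # Level 2: in a narrow passage running is hopeless, so the
        # level-1 decision can itself be overruled.
        if choice == "run-away" and w["trapped-in-narrow-passage"]:
            choice = ("kick-hole-in-passage-side" if w["block-far-away"]
                      else "kick-block-at-bee")
        return choice
    return suggested.pop() if suggested else "hang-out"
```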
Rosenschein’s is the most similar project. His situated automata use compiled logic gates to drive a robot based on a theory of the robot’s interactions with its world. But these use the ontology of first-order logic, not that of aspects. The robot does not use visual routines and its networks contain latches. We chose Pengo as a domain because it is utterly un- like those AI has historically taken as typical. It is one in which events move so quickly that little or no plan- ning is possible, and yet in which human experts can do very well. Many everyday domains are like this: driving to work, talking to a friend, or dancing. Yet undeniably other situations do require planning. In [Agre, in prepara- tion] we will outline a theory of planning that builds on the theory of activity that Pengi partly implements. Planning, on this view, is the internalization of social communication about activity. We are wiring Pengi’s central system by hand. Evo- lution, similarly, wired the central system of insects. But humans and intelligent programs must be able to extend their own networks based on experience with new sorts of situations. This will be a focus of our next phase of research. VIII. Acknowledgments [Agre, 1985a] Philip E. Agre. The Structures of Everyday Life. MIT Working Paper 267, February 1985. [Agre, 1985b] Philip E. Agre. Routines. MIT AI Memo 828, May 1985. [Agre, in preparation] Philip E. Agre. The Dynamic Structure of Everyday Life. PhD Thesis, MIT Depart- ment of Electrical Engineering and Computer Science, forthcoming. [Chapman, 19851 David Chapman. Planning for Conjunc- tive Goals. MIT AI Technical Report 802, November, 1985. Revised version to appear in Artificial Intelli- gence. [Chapman and Agre, 19871 David Chapman and Philip E. Agre. Abstract Reasoning as Emergent from Con- crete Activity. In M.P. Georgeff and A. L. Lansky (editors), Reasoning about Actions and Plans. Pro- ceedings of the 1986 Workshop at Timberline, Oregon, pages 411-424. Morgan Kaufman Publishers, Los Al- tos, California (1987). [Rosenschein and Kaelbling, 19861 Stanley J. Rosenschein and Leslie Pack Kaelbling. The Synthesis of Digi- tal Machines with Provable Epistemic Properties. In Joseph Y. Halpern, editor, Theoretical Aspects of Rea- soning about Knowlege. Proceedings of the 1986 Con- ference, pages 83-98. Morgan Kauffman Publishers (1986). [Ullman, 19831 Shimon Ullman. Visual Routines. MIT AI Memo 723, June, 1983. This work couldn’t have been done without the support and supervision of Mike Brady, Rod Brooks, Pat Hayes, Chuck Rich, Stan Rosenschein, and Marty Tenenbaum. We thank Gary Drescher, Leslie Kaelbling, and Beth Preston for helpful comments, David Kirsh for reading about seventeen drafts, and all our friends for helping us develop the theory. 272 Cognitive Modeling
Compare and Contrast, A Test of Expertise

Kevin D. Ashley¹ and Edwina L. Rissland²
Department of Computer and Information Science
University of Massachusetts
Amherst, MA 01003

Abstract

In this paper we present three key elements of case-based reasoning ("CBR") and describe how these are realized in our HYPO program, which performs legal reasoning in the domain of trade secret law by comparing and contrasting cases. More specifically, the key elements involve how prior cases are used for: (1) Credit assignment of factual features; (2) Justification; and (3) Argument in domains that do not necessarily have strong causal theories or well-understood empirical regularities. We show how HYPO uses "dimensions", "case-analysis-record" and "claim lattice" mechanisms to perform indexing and relevancy assessment of past cases dynamically, and how it compares and contrasts cases to come up with the best cases pro and con a decision.

I. Introduction

It is one thing for an expert to analyze a problem situation and another to compare it to similar situations and explain why they are the same or different. If a human expert could perform only the former task, we might well doubt his level of expertise. Critically comparing a situation to other cases - showing why they are the same or pointing out the crucial differences - is an important component of explaining, arguing and planning. One could not reason analogically without it. Only by focussing on important differences, as well as similarities, can one choose the best cases, avoid the worst cases or extrapolate from cases not so on point. Despite the importance of this crucial intellectual skill, most expert systems do not represent cases or have the control structure to facilitate comparing cases. Research in Case-Based Reasoning ("CBR") focusses on that deficit and how to correct it.

¹This work was supported (in part) by: the Advanced Research Projects Agency of the Department of Defense, monitored by the Office of Naval Research under contract no. N00014-84-K-0017, and an IBM Graduate Student Fellowship.
²Copyright © 1987 Kevin D. Ashley & Edwina L. Rissland. All rights reserved.

II. CBR Involves Critically Comparing Cases

A case-based approach to reasoning has three basic elements:

1. Credit Assignment: A decision-maker decides a case because of some factual features and in spite of others. In other words, the decider assigns credit or blame to some of the case's factual features. In effect, the decision of a case: (a) Selects certain features that are important enough for purposes of credit assignment (Not all facts make a difference to the outcome.); (b) Clusters the selected features; and (c) "Weights" them. Features in the cluster that favor the decision are ranked higher than those against it. In this way a prior case represents "experience".

2. Precedential Justifications: That a prior case (i.e., a precedent) had a certain cluster of features, and that its decision was made because of some of those features and in spite of others, is treated as a basis for a justification for coming to the same conclusion in a future case with a similar combination of features. By assumption, a precedential justification is a reason for coming to a decision in a subsequent case (and in fact prior cases will be cited in support of an argument that the new case should be decided, or that conflicting features should be resolved, in the same way as in the prior case.)
Since the experience represented by prior cases matters for future decision-making, those cases need to be accessible for analyzing future cases.

3. Arguments: CBR is inherently adversarial; there seldom is one right answer. Instead there are arguments based on prior cases. CBR generates arguments presenting the possibly inconsistent alternative justifications. Although there are criteria for preferring some justifications over others, for telling good arguments from bad, and for making decisions accordingly, CBR's recommendations always must be viewed as presenting alternatives.

Given its elements, it is essential that a CBR system facilitate comparing new cases against old. Searching for justifications for deciding a new case is like searching through a space of prior cases for relevant precedents, where the criterion for assessing relevance must take into account how useful a prior case will be in an argument about the new case. To make the search feasible, a CBR system must represent and record cases and organize them for efficient selection and comparison. In a word, this means indexing. The cases in the CKB should be indexed by the same features that are involved in credit assignment.

With its emphasis on comparing a new situation to prior cases and comparing prior cases among themselves to find those that make the best justifications, CBR yields some important advantages. First, it is useful in domains that do not have a strong model. In domains like law, strategic planning, philosophical inquiry and historical political analysis, experts make reasoned decisions in spite of the facts that the rules are incomplete, use predicates whose meanings are not well defined (the open textured problem) or lead to inconsistent results. In these domains the expertise is simply organized differently, along case-based lines. To the outsider, legal decision-making may seem arbitrary and chaotic, but, with its doctrine of case precedent, the law is an organized chaos. See [Levi, 1949]. Second, even in domains with strong models, case-based approaches are better-suited for a number of reasoning tasks involving explanation, persuasion and planning. We expect experts to be able to: 1. Explain their analysis of a situation by giving examples and posing hypotheticals to demonstrate the critical features, which if different, would have led to a different conclusion; 2. Persuade us to believe the conclusion by: comparing the current situation approvingly to previous cases; extrapolating from less-similar cases (e.g., by pointing out differentiating features of the current situation that warrant the desired conclusion even more strongly); and posing hypotheticals to illustrate the dire consequences if the proposed conclusion is not adopted; 3. Plan for contingencies by posing hypothetical scenarios (worst, best, most recent, most likely cases, etc.) that illustrate the consequences of and alternatives to a given course of action.

Of course, a CBR approach has costs: 1. Constructing and maintaining the index; 2. dealing with the combinatorics of large numbers of cases and the depth of inferencing necessary to invoke the index; and 3. coming up with evaluation criteria for assessing justifications and arguments.
For examples of recent research on these issues, see [Kolodner, 1983; Kolodner, Simpson and Sycara-Cyranski, 1985; Hammond, 1986a; Hammond, 1986b; Carbonell, 1983a; Carbonell, 1983b].

III. The HYPO Program and its Domain

HYPO is a case-based reasoning program which operates in the domain of trade secret law [Rissland, Valcarce and Ashley, 1984; Rissland and Ashley, 1986; Rissland and Ashley, 1987]. HYPO accepts a fact situation from its user, analyzes it, retrieves other relevant cases from its Case-Knowledge-Base ("CKB"), considers various assignments of importance to facts, "positions" the retrieved cases with respect to the current case, selects important most-on-point and most-dangerous cases, suggests interesting or critical hypotheticals, proposes the skeleton of an argument, and justifies this argument with case citations in the form demanded in legal scholarship [Ashley, 1986; Ashley and Rissland, 1987].

In HYPO, the main sources of legal knowledge are contained in HYPO's CKB and its library of dimensions. Dimensions represent the legal relationship between various clusters of operative facts and the legal conclusions they support or undermine. Dimensions provide not only indices into lines of cases and their attendant analyses and arguments but also a mechanism by which to judge the strength, or weakness, of a fact situation with respect to that line of reasoning. For instance, one line of trade secret cases focusses on the degree to which the "cat" (i.e., secret) "has been let out of the bag", even by the complaining plaintiff himself: that is, how many disclosures of the putative secret were there and of what kind? This way of looking at a trade secret case (captured by the Disclose-Secrets dimension) provides one approach to resolving a misappropriation dispute and was used in the Data General and Midland-Ross cases discussed below. Another approach might emphasize the competitive advantage gained by the defendant at the plaintiff's expense or the switching of a key employee from the plaintiff to the defendant [Rissland and Ashley, 1986]. Each dimension has: prerequisites, expressed in terms of factual predicates, that tell whether a dimension applies to a case or not; focal slots that single out the particular facts making a case stronger or weaker along the dimension; and range information that tells how a change in the focal slot affects that strength (e.g., for Disclose-Secrets, the focal slot is the number of disclosees; increasing that number weakens the plaintiff's position). See generally [Ashley, 1986].

IV. HYPO's Reasoning Process

Here is how HYPO reasons about a new fact situation (call it the current fact situation or cfs, for short). First, in analyzing a new cfs, HYPO runs through the library of dimensions and produces a case-analysis-record that contains: (1) applicable factual predicates; (2) applicable dimensions; (3) near-miss dimensions; (4) potential claims; and (5) relevant cases from the CKB. Near-miss dimensions are those for which some, but not all, of the prerequisites are satisfied. The combined list of applicable and near-miss dimensions is called the D-list. Figure 1 describes a cfs based, for purposes of illustration, on Crown Industries, Inc. v. Kawneer Co., 335 F.Supp. 749 (N.D.Ill., 1971). Figure 2 shows the case-analysis-record for the cfs.
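A minimal sketch of how a HYPO-style dimension might be represented, with prerequisites, a focal slot, and a simple near-miss test. The field values and predicate names are illustrative assumptions; HYPO itself was not written in Python.

```python
from dataclasses import dataclass

@dataclass
class Dimension:
    name: str
    prerequisites: list     # predicates over a case (here, a dict of facts)
    focal_slot: str         # the fact that sets strength along the dimension
    favors: str             # which side a strong value favors

    def applies_to(self, case):
        return all(p(case) for p in self.prerequisites)

    def near_miss(self, case):
        hits = sum(bool(p(case)) for p in self.prerequisites)
        return 0 < hits < len(self.prerequisites)

disclose_secrets = Dimension(
    name="Disclose-Secrets",
    prerequisites=[lambda c: c.get("exists-confidential-info"),
                   lambda c: c.get("exists-disclosures")],
    focal_slot="number-of-disclosees",   # more disclosees weakens plaintiff
    favors="defendant")

def d_list(cfs, library):
    """The D-list: applicable plus near-miss dimensions for the cfs."""
    return [d for d in library if d.applies_to(cfs) or d.near_miss(cfs)]
```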
Second, HYPO uses the case-analysis-record to construct the claim lattice, which is a lattice such that: (1) the root is the cfs together with its D-list; and (2) successor nodes contain pointers to cases that share a subset, usually proper, of the dimensions in the cfs's D-list. Figure 3 (a) shows the claim lattice actually generated by the HYPO program for analyzing the cfs of Figure 1 from the viewpoint of a trade secrets misappropriation claim. (There is a separate claim lattice for each possible claim.)

[Figure 1: Current Fact Situation (cfs) based on Crown Industries, Inc. v. Kawneer Co. From 1962 to 1964, Crown Industries, Inc., the plaintiff (π), developed a hydraulic power pack, PX-121, for automatic door openers. Crown complained that defendant (δ) Kawneer Co. developed a competing product, PX-125, by misappropriating π's trade secrets. Crown's power packs had been sold to and installed in five public retail establishments. Crown made disclosures about the power pack to a third party, and in 1963 and 1965 a Crown employee made disclosures concerning the pack to Kawneer. PX-121 did not have any unique features not generally known to the prior art. It took Kawneer six years to develop PX-125, from 1962 to 1968.]

The ordering scheme enables claim lattices to capture a sense of closeness to the cfs of cases in the CKB. Those sharing more dimensions are nearer to the cfs. Those nodes closest to the root whose subsets of the cfs's D-list do not contain near-miss dimensions can be considered most-on-point cases ("mopc's") to the cfs; leaf nodes are the least on point. All of the cases displayed are relevant to the cfs because they all share some legally important strengths or weaknesses with the fact situation, as represented by the dimensions shared with the cfs.

Third, HYPO uses the claim lattice to identify the competing parties' mopc's. There are two pro-defendant (δ) mopc's in Figure 3 (a): Midland-Ross and Yokana. Since mopc's share the most legally important strengths and weaknesses with the cfs (i.e., mopc's are the closest analogies to the cfs), Midland-Ross and Yokana are the most persuasive cases HYPO could cite for the defendant. (Crown Industries is also a mopc, but that is the very case on which the cfs is based. Even though it would be silly to cite a case in an argument about itself, it makes sense that HYPO regards a case as most on point to itself.)

[Figure 2: Case-Analysis-Record for CFS. Applicable Factual Predicates: exists-corporate-claimant, exists-confidential-info, exists-disclosures, ... Applicable Dimensions: Disclose-Secrets. Near-Miss Dimensions: Restricted-Disclose, Competitive-Advantage, Vertical-Knowledge. Potential Claims: Trade Secrets Misappropriation. Relevant CKB cites: see claim lattice, Figure 3 (a).]

There are no pro-plaintiff (π) mopc's in Figure 3 (a). Data General, for example, is not a mopc because, although it is very close to the root, the Restricted-Disclose dimension, which applies to Data General, and which would help π if it applied to the cfs, is only a near-miss for the cfs. (Restricted-Disclose is a near-miss because the cfs does not have the prerequisite factual predicate that some disclosees agreed to keep π's secrets confidential. Note that Restricted-Disclose is starred in Figure 3 (a).) Although not a mopc, the Data General case is potentially a mopc for π.
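A hedged sketch of claim-lattice grouping and mopc selection by shared dimension subsets. Case records are dicts with "dimensions" and "claims" keys; this toy representation and the helper names are assumptions.

```python
def build_claim_lattice(d_list_names, ckb, claim):
    """Group relevant CKB cases by the subset of the cfs's D-list they share."""
    lattice = {}
    for case in ckb:
        if claim not in case["claims"]:
            continue
        shared = frozenset(d_list_names) & frozenset(case["dimensions"])
        if shared:                          # on point: shares some dimension
            lattice.setdefault(shared, []).append(case["name"])
    return lattice

def mopcs(lattice, near_miss_names):
    """Mopc's: maximal shared subsets containing no near-miss dimensions."""
    ok = [s for s in lattice if not (s & set(near_miss_names))]
    maximal = [s for s in ok if not any(s < t for t in ok)]
    return [name for s in maximal for name in lattice[s]]
```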
A potential mopc is very similar to the cfs, except that some dimensions that apply to it are near-misses with respect to the cfs; potential mopc's reside in nodes closest to the root. As shown below, if it were true that the disclosees had agreed to keep π's confidential information secret, Data General would become a very important case to the plaintiff.

Fourth, HYPO uses the cases in the claim lattice to make and respond to precedent-citing arguments about the cfs. Different major branches of the lattice indicate different ways to argue the case, effectively one way for each group of mopc's. HYPO generates a three-ply argument starting with a point for side 1, a response for side 2 and, possibly, a rebuttal for side 1 again. HYPO, for instance, can argue the case for side 1, the defendant (δ) in the cfs, by citing a pro-defendant mopc, as in Figure 4 [a]. Recall that in Figure 3 (a) there are two such mopc's, Midland-Ross and Yokana. HYPO justifies the point expressly by drawing the analogy between the cfs and the cited cases by reciting the facts associated with the dimension they have in common, Disclose-Secrets, namely that in both cases plaintiff disclosed its secrets to some outsiders.

HYPO responds to points like that of Figure 4 [a] by distinguishing the cited case using three basic methods: (1) Comparing the strengths of the cfs and cited case along the dimensions they share in common; (2) Finding strengths or weaknesses, represented by dimensions, that the cfs and cited case do not share; (3) Finding other cases that are more on point than the cited case. Figure 4 [b] is an example of the first method. HYPO distinguishes Midland-Ross on behalf of side 2, the plaintiff, by comparing values in the cfs and cited case of the focal slots of the shared dimension. HYPO knows from the claim lattice and the range information about the Disclose-Secrets dimension that Midland-Ross presents a stronger case for δ because π disclosed the confidential information to 100 outsiders; in the cfs, Crown disclosed to only five. HYPO supports the response by citing the Data General case, where the plaintiff won despite having made many more disclosures than in Midland-Ross.

For a rebuttal, HYPO distinguishes any case cited in the response, as in Figure 4 [c]. Using the second method of distinguishing, HYPO points out the pro-π strength whose absence from the cfs makes Data General only a potential mopc, namely that the disclosures were subject to restrictions to maintain confidentiality (a feature captured by the Restricted-Disclose dimension that applies to Data General but is only a near-miss for the cfs).

[Figure 3: Two Claim Lattices. The root node of claim lattice (a) represents the cfs in Figure 1 and its D-list; dimensions that are near-misses for the cfs are starred. Successor nodes contain pro-plaintiff (π) or pro-defendant (δ) trade secrets cases that are on point to the cfs. Nodes closest to the root that do not have near-miss dimensions contain mopc's; otherwise they may contain potential mopc's. Leaf nodes are least on point. Each major branch of the lattice that contains mopc's represents one way of arguing the cfs. Mopc's distinguish cases in successor nodes.
Potential mope’s suggest fruitful hypothetical variants of cfs like that in (b). (b) is lattice for same cfs as (a) plus fact that disclosees agreed to treat K’S secrete as confidential. Argument for x is stronger in (b) than (a) because Data General: (1) has been promoted to being pro-n mope (Restricted-Disclose dimension is no longer near miss in (b)); (2) is more on point than 6’s mopc’s. Figure 3: Two Claim Lattices. General but is only a near-miss for the cfs.) The fifth step in HYPO’s reasoning process is to gen- erate hypotheticals that are useful for testing the strengths and weaknesses of a party’s position. HYPO uses its knowledge of how a case may be distinguished to suggest hypothetical modifications of the cfs that would strengthen or weaken the plaintiff’s position [Rissland and Ashley, 1986]. For example, HYPO uses the relative positions of IT’S potential mope Data General and 6’s mope Midland- ROSS in the claim lattice of Figure 3 (a) to suggest a hy- pathetical variant of the cfs in which n’s disclosures were made on a restricted basis. Then Data General can be used to distinguish Midland-Ross using the third method of distinguishing, significantly improving ?r’s position. Fig- ure 3 (b) shows the claim lattice that would result for the modified cfs. The basic differences between the two claim lattices are that the Restricted-Disclose dimension, a near- miss in Figure 3 (a) is an applicable dimension in Fig- 276 Cognitive Modeling of For Side 1: (A’s point) Cite: Midland-Ross, Yokana (A should win because As in cited cases won where IIs disclosed secrets to outsiders.) [b] * For Side 2: (II’s response to [a]) Distinguish: Midland-Ross (In Midland-Ross, II disclosed to 100 out- siders. lI in cfs disclosed to only 7 outsiders.) Cite: Data General (II in Data Genera6 won eventhough lI disclosed to 6000 outsiders, more than in Midland-Ross.) [c] L) For Side 1: (A’s rebuttal to [b]) Distinguish: Data General (In Data General disclosees secrets but not so in cfs.) agreed to keep Figure 4: Citing & Distinguishing Precedents: 3-Ply Ar- guments ure 3 (b) and that Data General has become T’S real mope and one that is more on point (i.e., closer to the root) than 6’s mopc’s. HYPO illustrates the new strength in the plaintiff’s position by replaying the three-ply argument. Given the facts of the modified hypothetical in Figure 3 (b), HYPO can now generate a stronger response to the point in Fig- ure 4 [a]: [d] e For Side 2: (II’s response to [a]) Cite: Data General (IT should win because in Data General, II won where II disclosed secrets and disclosees agreed to keep disclosures secret.) Distinguish: Midland-Ross, Yokana (Data General is more on point than these cases where disclosees did not agree to keep disclosures secret.) Using information contained in the case-analysis-records and claim lattice, HYPO expressly compares and contrasts cases at three levels: (1) Facts; (2) Justifications; and (3) Arguments. At the level of facts, HYPO compares the cfs to rel- evant cases from the claim lattice by focussing on the im- portant facts they share as indicated by the dimensions they have in common. As we have seen, in making points, HYPO draws the analogy between the cfs and various cases by reciting these facts. HYPO contrasts cases when it re- sponds to points by distinguishing the cited cases. 
Using information contained in the case-analysis-records and claim lattice, HYPO expressly compares and contrasts cases at three levels: (1) Facts; (2) Justifications; and (3) Arguments.

At the level of facts, HYPO compares the cfs to relevant cases from the claim lattice by focussing on the important facts they share, as indicated by the dimensions they have in common. As we have seen, in making points, HYPO draws the analogy between the cfs and various cases by reciting these facts. HYPO contrasts cases when it responds to points by distinguishing the cited cases. Using the first two methods of distinguishing (i.e., focussing on differing strengths along shared and unshared dimensions), HYPO is able to point out factual differences that justify not treating the cfs like a cited case.

At the level of justifications, HYPO compares relevant cases to each other using the claim lattice to see which make better precedents for deciding the cfs. Cases are compared in terms of: how on point they are relative to the cfs (mopc's vs. less on point cases); how useful they are in a legal argument about the cfs (e.g., using the third method of distinguishing to contrast a cited case with a more on point opposing case); and how potentially useful they would be in a legal argument about the cfs (e.g., finding pro-opponent cases that can be used to distinguish mopc's).

HYPO makes comparisons at the arguments level by comparing the claim lattices. In moving from the cfs, Figure 3 (a), to the variant in (b), there has been a big shift in the balance of the argument in favor of the plaintiff, a comparative legal conclusion that HYPO can infer from a simple comparison of the claim lattices. One of HYPO's evaluation functions for comparing claim lattices involves simply comparing mopc's. In Figure 3 (a) there are pro-δ mopc's but no pro-π mopc, indicating a strong argument for the defendant. In Figure 3 (b), beside the same pro-δ mopc's, there is a new pro-π mopc, Data General, which is more on point (i.e., closer to the root) than Midland-Ross or Yokana, indicating a strong argument for plaintiff. In other words, claim lattices can be used to evaluate the arguments in favor of a proposition essentially by comparing the relationships of the pro and con mopc's.

In its selection of Midland-Ross as defendant's best case, HYPO agreed with what the court actually did in its opinion in the case on which the cfs is based, Crown Industries, Inc. v. Kawneer Co. The court said:

"Even though the Plaintiff's power packs, exemplified by PX-121, might have had to be rendered inoperative and examined by an engineer in order to discover the alleged trade secrets contained therein, the sale of the power packs nevertheless constituted a public disclosure which defeats a claim founded upon alleged misappropriation of the trade secrets allegedly contained in the power packs. Midland-Ross Corp. v. Sunbeam Equipment Co., 316 F. Supp. 171, 177 (W.D.Pa. 1970), affirmed, 435 F.2d 159 (3d Cir. 1970)."

HYPO's analysis of a cfs by comparing and contrasting it with mopc's is similar to that actually performed by courts. Consider the opinion of the court in another case with similar issues to our cfs, National Rejectors, Inc. v. Trieman, 409 S.W.2d 1, 40-42 (Sup. Ct. Mo., 1966):
The court found an absence of precautions on the part of plaintiff to keep secret information regarding its machines. Although the following cases do not parallel the present cause ins closely ils Yokana our con- clusion here is consistent with that reached in: [citing and describing other cases.] Not only are the facts of Midland-Ross Cor- poration u. Yokana comparable to those in this situation, but we find the relief afforded in that case also appropriate in this.. . . VII. Conclusion In this paper, we have presented three key elements of case-based reasoning (CBR): 1. That prior cases select and assign credit to factual features and weight conflicting features; 2. That prior cases are justifications for deciding a new fact situation (cfs) with similar combinations of fea- tures; and 3. That CBR yields arguments how to decide the cfs based on these potentially conflicting justifications. We have reviewed our indexing scheme based on “dimen- sions” that organizes cases in the Case-Knowledge-Base (CKB). HYPO performs indexing and relevancy assess- ment of past cases dynamically by (1) analyzing how prior cases can be viewed from the point of view of the cfs and (2) determing what aspects of these prior cases apply, and how strongly, to the cfs. This sort of analysis - accomplished through HYPO’s dimensions, “case-analysis-recOrd” and “claim lattice” mechanisms - allows HYPO to promote some prior cases over others as precedents for interpreting and arguing the cfs. HYPO compares and contrasts the cfs and prior cases at the levels of facts, justifications and arguments to come up with the best cases pro and con a decision and to pose instructive hypothetical variants of the cfs. References [Ashley, 19861 K evin D. Ashley. Modelling Legal Argu- ment: Reasoning with Cases and Hypotheticals - A Thesis Proposal. Project Memo 10, The COUNSELOR 27% Cognitive Modeling Project, Department of Computer and Information Sci- ence, University of Massachusetts, 1986. [Ashley and Rissland, 19871 Kevin D. Ashley and Ed- wina L. Rissland. But, See, Accord: Generating “Blue Book” Citations in HYPO. In Proceedings: First Inter- national Conference on Artificial Intelligence and Law, Northeastern University, 1987. [Carbonell, 1983a] J. G. Carbonell. Derivational Analogy and its Role in Problem Solving. In Proceedings of the Third National Conference on Artificial Intelligence, American Association for Artificial Intelligence, Wash- ington, D.C., 1983a. [Carbonell, 1983b] J. G. Carbonell. Learning by Analogy: Formulating and Generalizing Plans from Past Experi- ence. In Michalski, J.G. Carbonell, and T. Mitchell, ed- itors, Machine Learning: An Artificial Intelligence Ap- proach, Tioga Publishing, CA, 1983b. [Hammond, 1986a) Kristian J. Hammond. CHEF: A Model of Case-based Planning. In Proceedings of the Fifth National Conference on Artificial Intelli- gence, American Association for Artificial Intelligence, Philadelphia, PA, 1986a. [Hammond, 1986b] Kristian J. Hammond. Learning to Anticipate and Avoid Planning Problems through the Explanation of Failures. In Proceedings of the Fifth Na- tional Conference on Artificial Intelligence, American Association for Artificial Intelligence. Philadelphia, PA, 1986b. [Kolodner, 19831 Janet L. Kolodner. Maintaining Organi- zation in a Dynamic Long-Term Memory. Cognitive Sci- ence, 7(4):243-280, 1983. [Kolodner, Simpson and Sycara-Cyranski, 19851 Janet L. Kolodner, Robert L. Simpson, and Katia Sycara- Cyranski. A Process Model of Case-Based ‘Reasoning in Problem Solving. 
Dana H. Ballard
Department of Computer Science
University of Rochester, Rochester, New York 14627

Abstract

In the development of large-scale knowledge networks, much recent progress has been inspired by connections to neurobiology. An important component of any "neural" network is an accompanying learning algorithm. Such an algorithm, to be biologically plausible, must work for very large numbers of units. Studies of large-scale systems have so far been restricted to systems without internal units (units with no direct connections to the input or output). Internal units are crucial to such systems, as they are the means by which a system can encode high-order regularities (or invariants) that are implicit in its inputs and outputs. Computer simulations of learning using internal units have been restricted to small-scale systems. This paper describes a way of coupling autoassociative learning modules into hierarchies that should greatly improve the performance of learning algorithms in large-scale systems. The idea has been tested experimentally with positive results.

1. Introduction

An important component of any artificial intelligence system ultimately will be its ability to learn. Very recently there has been great progress in the development of learning algorithms [Rumelhart et al., 1986 (1); Rumelhart and Zipser, 1985 (2); Ackley et al., 1985 (3); Pearlmutter and Hinton, 1986 (4); Lapedes and Farber, 1986 (5)]. All of the above algorithms use internal representations to represent regularities in the environment. The internal representations capture efficient encodings of the environment that presumably facilitate the behavioral needs of the system. These individual algorithms have their own advantages and disadvantages, but a common question related to all of them is whether or not they scale with the size of the problem. In other words, even on an appropriate parallel architecture, the computational complexity in the average case may not remain constant or at worst scale with the problem size. The result is that it is likely that additional insights will be needed to implement learning algorithms in massively parallel systems.

2. Hierarchies

The tremendous advantage of hierarchies as a compact encoding of input-output pairs is the principal motivation for developing a learning algorithm that is geared to developing hierarchical encodings. One possibility is to use the Backpropagation algorithm with several internal levels. Our computer experiments in Section 4, however, show that this formulation does not seem to have good scaling properties. An example that took 256 iterations to converge with one internal layer took over 4096 iterations to converge with three internal layers. Thus we were motivated to develop a modular reformulation of Backpropagation learning with better convergence properties.

Another idea that we will use is that of autoassociation. Consider first a simple modification of the Backpropagation algorithm that is shown in Figure 1. The figure shows the standard three-layer architecture used in most experiments. We will refer to the layers as the input, internal, and output layers, as shown on the figure. The number of units at each layer we term the width of the layer.

Figure 1. (A) Standard configuration; (B) autoassociative configuration. In the autoassociative configuration the output is constrained to be identical to the input.

To simplify what follows, we neglect the width of the layers and just use a representative unit for all the units in a layer.
Let us use two-way connections so that now the internal units are connected to the input units. Note that now the same learning algorithm can be adapted to this special problem of predicting the input. Activation from the input is propagated to the internal units and then back to the output, where it can be interpreted as a virtual copy. That is, it is the input as reconstructed from the internal representation. Since the input is known, this can be used to generate an error signal, just as in the feedforward case, that is then sent backwards around the network to the input weights. This architecture was proposed by Hinton and Ballard [Rumelhart et al., 1986] and has been recently studied by Zipser [1986].

3. Learning with Modular Hierarchies

The main result of this paper is to show how a purely autoassociative system can be modularized in a way that is resistant to changes in problem scale. Consider Figure 2, which describes the general idea. Consider that an autoassociative module is used to learn a visual representation. Now imagine that a similar process takes place at the output (motor) level, where in this case the system is codifying efficient internal representations of quickly varying movement commands. Both the motor and visual internal representations, being codes for the more peripheral representations, will vary less. This means that if one views the situation recursively, at the next level inward the problem of encoding peripheral representations is repeated, but now it is cast in terms of more abstract, more invariant representations of the peripheral signals. It also means that the same principles can be applied recursively to generate a set of learning equations that are geared to the new levels. Thus one can imagine that the abstract visual and motor levels are coupled through another autoassociative network that has a similar architecture to the lower levels but works on the abstractions created by them rather than the raw input. The next autoassociative module, termed ABSTRACT in Figure 2, starts with copies of the internal representations of the SENSORY and MOTOR modules and learns a more abstract representation by autoassociation.

Figure 2. The main idea: peripheral modules can work in an almost decoupled fashion to build more abstract representations. These are tightly coupled by more abstract modules that build still more abstract representations. The depth of two in the figure is merely schematic: the principle extends to arbitrary depth. [Diagram labels: SENSORY, MOTOR, ABSTRACT; coupling to upper layer; error-modulated coupling.]

Perhaps the biggest advantage that has occurred with this reformulation is that the equations at each level can be thought of as being relatively decoupled. This means that they can be run in parallel so that the error propagation distances are short. In practice one would not want them to be completely decoupled, as then higher-level representations could not affect lower-level representations. At the same time, problems may occur if the different levels are coupled too tightly before the system has learned its patterns, since in this case errorful states are propagated through the network. Here again the hierarchical reformulation has a ready answer: since there is now a measure of error for each layer, the activation of the upper, output levels can be coupled to the lower levels by a term that tends to zero when the error is large and one when the error is small.
To develop the solution method in more detail, consider the error propagation equations from [Rumelhart et al., 1986]. They minimize an error measure $E = \sum_p E_p$, where

$E_p = \frac{1}{2} \sum_i (s_i^d - s_i)^2$

where the subscript $i$ ranges over the output units and the superscript $d$ denotes the desired output; $s$ without the superscript denotes the actual output. In what follows, the subscript $p$, which ranges over individual patterns, will be dropped for convenience. For a two-layered system such as is characterized in Figure 1A, the equations that determine the activation are given by:

$s_j = \sigma\left(\sum_i w_{ji} s_i + \theta_j\right)$, where $\sigma(x) = 1/(1 + e^{-\beta x})$.

The output of the $j$th unit, $s_j$, ranges between zero and one. The synaptic weights $w_{ji}$ and thresholds $\theta_j$ are positive and negative real numbers. The equations that change the weights to minimize this error criterion are:

$\Delta w_{ji} = \eta\, \delta_j s_i$

where, for output units, $\delta_j = (s_j^d - s_j)\, s_j (1 - s_j)$, and for hidden units, $\delta_j = s_j (1 - s_j) \sum_k \delta_k w_{kj}$. These equations are derived in [Rumelhart et al., 1986].

Now let us consider the architecture of Figure 1B. In this architecture, the connections from the hidden units feed back to the input units, so that now the prime notation has a special meaning: it is the activation level of the input units that is predicted by the hidden units. This is subtracted from the actual input level, which may be regarded as clamped, in order to determine the error component used in correcting the weights. Thus essentially the same equations can be used in an autoassociative mode.

The elegance of this formulation is that it can be extended to arbitrary modules. Where the subscript $m$ denotes the different modules, the equations that determine the activation are now given by:

$s_{j,m} = \sigma\left(\sum_i w_{ji,m}\, s_{i,m} + \theta_{j,m}\right)$

The equations that change the weights to minimize this error criterion are:

$\Delta w_{ji,m} = \eta_m\, \delta_{j,m}\, s_{i,m}$

where, for output units, $\delta_{j,m} = (s_{j,m}^d - s_{j,m})\, s_{j,m} (1 - s_{j,m})$, and for hidden units, $\delta_{j,m} = s_{j,m} (1 - s_{j,m}) \sum_k \delta_{k,m}\, w_{kj,m}$.

Now for the coupling between modules. A module $m_2$ is said to be hierarchically coupled to a module $m_1$ if the activation of the input layer of $m_2$ is influenced by the internal layer of $m_1$, and also the activation of the output layer of $m_2$ influences the internal layer of $m_1$. In this case $m_2$ is said to be the more abstract of the two modules and $m_1$ the less abstract. The modules are directly input coupled if the activation of a subset of the units in the input layer of $m_2$ is a direct copy of the activation of units in the internal layer of $m_1$, and output coupled if the activation of the units in the internal level of $m_1$ uses the activation of the output units of $m_2$ in its sigmoid function.

The hierarchical algorithm works as follows. Consider first the "sensory" module in Figure 2. This can be thought of as a standard autoassociative Backpropagation network. The "motor" module can be thought of in the same way. Each of these modules builds an abstract representation of its input in its own internal layer. Next the activation of these internal layers is copied into the input layer of the "abstract" module. In the architecture we tested, the abstract module has double the width of the sensory module, and the widths of the sensory and motor modules are equal. The abstract module learns to reproduce this input by autoassociation in its output layer. This module does two things: first, it builds an even more abstract representation of the combined visual and motor inputs; and second, it couples these two inputs so that ultimately the visual inputs will produce the correct motor patterns.
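As a concrete illustration of these equations, here is a minimal sketch of a single autoassociative module trained on the encoder patterns of Section 4. The widths and the values η = 0.75 and β = 2.0 follow the experimental settings reported below; the iteration count and variable names are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    eta, beta, n = 0.75, 2.0, 4
    W1 = rng.uniform(-0.5, 0.5, (n, n))   # input -> internal weights
    W2 = rng.uniform(-0.5, 0.5, (n, n))   # internal -> output weights
    th1 = rng.uniform(-0.5, 0.5, n)       # thresholds (theta)
    th2 = rng.uniform(-0.5, 0.5, n)

    def sigma(x):
        return 1.0 / (1.0 + np.exp(-beta * x))

    patterns = np.eye(n)                  # encoder problem: one bit on

    for _ in range(1000):
        for s in patterns:                # the input is clamped
            h = sigma(W1 @ s + th1)       # internal layer
            v = sigma(W2 @ h + th2)       # virtual copy of the input
            d_out = (s - v) * v * (1 - v)          # output-unit deltas
            d_hid = h * (1 - h) * (W2.T @ d_out)   # hidden-unit deltas
            W2 += eta * np.outer(d_out, h)
            th2 += eta * d_out
            W1 += eta * np.outer(d_hid, s)
            th1 += eta * d_hid

The error signal (s - v) is exactly the autoassociative reading of the feedforward equations: the desired output is the clamped input itself.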
While the coupling in the upward direction is straightforward, the coupling in the downward direction is more subtle, so we will develop the rationale for it in detail. Remember that the equation for updating the weights has the following simple form:

$\Delta w_{ji} = \eta\, \delta_j s_i$

In this equation $\eta$ is a parameter that must be chosen by experiment. Normally, $\eta$ is constant throughout, or at least for each layer, but is this the right thing to do? Recently Baum et al. [1986] have shown that sparse encodings, where only a small fraction of the internal units are on simultaneously, have special virtues in terms of retrieval properties that are noise resistant, and Scalettar and Zee [1986] have demonstrated that sparse encodings emerge under certain experimental conditions where noise is added to the input. In addition, a straightforward argument shows that, to the extent that the internal representation can be made sparse, the learning process will be speeded up. The reason for this is that the weight change for a given pattern may not be in the same direction as that of the other patterns, so that the different weight changes may interfere with each other. Sparse encodings tend not to have this problem, since the activation of the unit whose weight is to be changed is likely to be non-zero for only a few patterns.

One way to make the encodings sparse is to incorporate additional procedures into the basic learning algorithm that favor sparse representations. Scalettar and Zee [1986] used a selective weight decay where the largest weights (in absolute value) decayed the slowest. For reasons that will become clear in a moment, we changed the weight modification formula to:

$\Delta w_{ji} = \eta\, \delta_j s_i\, (s_j / s_j^{max})$

where $s_j^{max}$ is the activation of the most active unit in the layer. Under this "winner gets more" (WGM) heuristic, the unit in a layer with the most activation has its weights changed the most. In other words, the weight change is scaled by the relative activation of the unit. This heuristic had a marked positive effect on convergence. Figure 3 shows a comparison between the two formulae in the simple case of learning identity patterns using a 4-4-4 network.

Figure 3. Normal vs. heterogeneous scaling.

This result is important because, in the limit, the downward coupling between modules will have the same effect. The argument is as follows: in downward coupling the activation from the output layer of the more abstract module is added in to that of the internal layer of the less abstract module to which it is coupled. Since the abstract module is autoassociating, its pattern should, in the limit, be identical to that of the lower module's internal layer. Thus adding this activation in is equivalent to scaling the weight change formula relative to the rest. Thus this procedure should improve convergence, since it is a type of WGM strategy.

The coupling between modules is handled as follows. Suppose that the bits of the internal representation of the "sensory" module $m_1$ map onto the first $N$ bits of the "abstract" module, $m_3$. Further, suppose the $N$ bits of the "motor" module $m_2$ map onto the second $N$ bits of the "abstract" module. Then the upward coupling is determined by:

$s^{input}_{i,m_3} = s^{internal}_{i,m_1}$, $i = 1, \ldots, N$
$s^{input}_{i+N,m_3} = s^{internal}_{i,m_2}$

The downward coupling is more subtle. Consider the computation of the activation of the internal units of modules $m_1$ and $m_2$. The decoupled contribution is given by

$\sum_i w_{ji,m}\, s_{i,m} + \theta_{j,m}$, $m = m_1, m_2$

To this is added a term weighted by $\gamma$, where $\gamma$ is a function of the error in the abstract layer, $E_A$.
Specifically, $\gamma$ is defined by

$\gamma = \frac{1}{1 + \alpha E_A}$

for some $\alpha > 0$, so that $\gamma = 1$ when $E_A = 0$ and is small when $E_A$ is large.

4. Experimental Results

The simulations all used patterns with one unit (bit) set to one (on). Thus for a four-bit input layer the patterns were: (1 0 0 0), (0 1 0 0), (0 0 1 0), (0 0 0 1), and these were paired with corresponding output patterns. This problem has been termed the encoder problem by Ackley et al. [1985]. The main difference in our architecture is that there are sufficient internal units so that no elaborate encoding of the pattern is forced.

As a baseline, the encoder problem was tested on a 4-4-4 feedforward architecture: four input units connected to four internal units connected to four output units. The Backpropagation algorithm was used. The results of this simulation are shown below. Figure 4 shows how the squared error varies with the number of iterations. Following the simulation used by Scalettar and Zee [1986], an $\eta$ of 0.75 and a $\beta$ of 2.0 were used. The weights and thresholds were initialized to random numbers chosen from the interval (-0.5, 0.5), and there was no "momentum" term [Rumelhart et al., 1986].

Figure 4. Three-level vs. five-level Backpropagation: squared error vs. iteration.

Next the encoder problem was tested on a 4-4-4-4-4 architecture under the same conditions. In all the simulations tried, the algorithm found the desired solution, but took very much longer. Figure 4 compares the rate of convergence of the three-layer and five-layer networks. Although it eventually converges to the correct solution, it takes 20 times longer.

Now consider the hierarchical modular architecture. The particular hierarchical architecture that we tested can be thought of as three distinct modules. There is a 4-4-4 system that learns representations of an input pattern (the encoder problem), a 4-4-4 system that learns to encode the output pattern (in this case the encoder pattern again), and an 8-8-8 system that encodes the internal representations produced at the internal layers of both the input and output modules. The eight-bit wide system of units serves to couple the input to the output. The input and output can be thought of as at the same level in terms of the abstract hierarchy, whereas the eight-bit system is above them. In the simulation, the state of the internal units for the input and output modules is copied into the "input" units of the more abstract module. This pattern is then learned by autoassociation using the Backpropagation algorithm. At each step the activation of the internal units of the upper module is added to that of the internal units of the lower modules, but weighted by a coupling factor. The coupling factor depends on the error of the upper module. We used a factor $\gamma$, where $E_A$ was the average absolute error over all the patterns.

Figure 5 shows the initial and final states of the system. Figure 6 shows the error behavior, comparing the modular system to the original three-level 4-4-4 configuration.

Figure 6. Three-level Backpropagation vs. the modular system: error vs. iteration.

These results are very positive, as they show that the convergence of the modular system is comparable to that of the original 4-4-4 system. Experiments are now underway on larger systems to try and confirm this initial result.
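Read together, the coupling rules above reduce to a few lines. In this sketch the module interface and the value of α are assumptions; the text specifies only that the coupling factor depends on the abstract module's average error E_A.

    import numpy as np

    def upward_copy(internal_m1, internal_m2):
        # Input to the abstract module m3 is a direct copy of the
        # internal representations of the sensory and motor modules.
        return np.concatenate([internal_m1, internal_m2])

    def gamma(E_A, alpha=1.0):
        # Error-modulated coupling: 1 when E_A = 0, small when E_A is
        # large, so errorful abstract states propagate only weakly.
        return 1.0 / (1.0 + alpha * E_A)

    def coupled_net_input(W, s_in, theta, abstract_activation, E_A):
        # Decoupled contribution to a lower module's internal units,
        # plus the downward term weighted by the coupling factor.
        return W @ s_in + theta + gamma(E_A) * abstract_activation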
6. Discussion and Conclusion

The theme of this paper is that a variety of technical problems make the credit assignment problem difficult for extensively connected, large-scale systems. Realistic models will eventually have to face these problems. The solutions offered here can be summarized as attempts to break the symmetry of a fully connected layered architecture. The module concept breaks symmetry by having selective rules for error propagation. The weights for different units have different modification rules.

Figure 5. (A) Initial states, and (B) final states of the modular system. The four columns of units denote the response to individual patterns. The individual rows are in groups of three denoting (from top to bottom) internal, input and output units. The sensory and motor modules are 4-4-4; the abstract module is 8-8-8.

The main technical result of this paper is to show that multiple-layer hierarchical systems can be built without necessarily paying a penalty in terms of the convergence time. To do this it was shown that the Backpropagation algorithm, when cast in terms of an autoassociative, three-layer network, could be made into a modular system where modules at different levels of abstraction could be coupled. Almost all of the results are based on empirical tests, although there are some analytical arguments to suggest that the experimental results should have been expected. The experimental results are very encouraging. The conventional Backpropagation architecture, when extended to three internal levels instead of one, did not converge, whereas the comparable modular architecture (with twice as many connections, however) converged in a time comparable to that of the much smaller system with only one internal layer.

The main disadvantage of this scheme arises in the use of the network. In the case where a sensory input is provided and a motor response is desired, only one-half of the inputs to the abstract layer will be present. This places a much greater demand on the abstract layer to determine the correct pattern instead of an alternate. A more realistic assumption might be to assume that both sensory parts and motor parts of the pattern are present, and the pattern completion problem is no worse than in the feedforward system.

Having shown that modules of three-layer autoassociative systems can be built, we now ask whether error propagation at the internal level is really necessary. Consider that making the weight adjustment formula anisotropic actually improved convergence. Thus it seems plausible that a mixed learning system would be possible where the output weights are adjusted based on error correction but the internal weights are adjusted based on some other criterion. We conjecture the substitute criterion may only need to have certain smoothness properties and produce different internal states for the different input patterns. Thus there is likely to be a large family of learning algorithms that will work in the modular architecture.

Acknowledgements

This work was done at the program on Spin Glasses, Computation, and Neural Networks held during September to December 1986 at the Institute for Theoretical Physics, University of California at Santa Barbara, and organized by John Hopfield and Peter Young. This research was supported in part by NSF Grant No. PHY82-17853, supplemented by NASA.
Thanks go to all the participants in the program who helped refine the ideas in this paper over numerous discussions, especially Jack Cowan, Christoph Koch, Alan Lapedes, Hanoch Gutfreund, Christoph von der Malsburg, Klaus Schulten, Sara Solla, and Haim Sompolinsky. In addition, at Rochester, Mark Fanty and Kenton Lynne provided helpful critiques. The importance of scaling problems in learning emerged during discussions at the Connectionist Summer School at Carnegie Mellon University in June 1986, particularly with Geoff Hinton. Thanks also go to Beth Mason and Peggy Meeker, who typed the many drafts of this manuscript.

References

Ackley, D.H., G.E. Hinton, and T.J. Sejnowski, "A learning algorithm for Boltzmann machines," Cognitive Science 9, 1, 147-169, January-March 1985.

Barlow, H.B., "Single units and sensation: A neuron doctrine for perceptual psychology?" Perception 1, 371-394, 1972.

Baum, E., J. Moody, and F. Wilczek, "Internal representations for content addressable memory," Technical Report, Inst. for Theoretical Physics, U. California, Santa Barbara, December 1986.

Feldman, J.A., "Dynamic connections in neural networks," Biological Cybernetics 46, 27-39, 1982.

Lapedes, A. and R. Farber, "Programming a massively parallel, computation universal system: Static behavior," Proc., Snowbird Conf. on Neural Nets and Computation, April 1986.

Pearlmutter, B.A. and G.E. Hinton, "G-Maximization: An unsupervised learning procedure for discovering regularities," Technical Report, Carnegie Mellon U., May 1986.

Rumelhart, D.E. and D. Zipser, "Feature discovery by competitive learning," Cognitive Science 9, 1, 75-112, January-March 1985.

Rumelhart, D.E., G.E. Hinton, and R.J. Williams, "Learning internal representations by error propagation," in D.E. Rumelhart and J.L. McClelland (Eds.), Parallel Distributed Processing, MIT Press, pp. 318-364, 1986.

Scalettar, R. and A. Zee, "A feedforward memory with decay," Technical Report NSF-ITP-86-118, Inst. for Theoretical Physics, U. California, Santa Barbara, 1986.

Zipser, D., "Programming networks to compute spatial functions," ICS Report 8608, Inst. for Cognitive Science, U. California, San Diego, La Jolla, June 1986.
Reducing Indeterminism in Consultation: A Cognitive Model of User/Librarian Interactions

Hsinchun Chen and Vasant Dhar
Department of Information Systems
New York University

Abstract

In information facilities such as libraries, finding documents that are relevant to a user query is difficult because of the indeterminism involved in the process by which documents are indexed, and the latitude users have in choosing terms to express a query on a particular topic. Reference librarians play an important support role in coping with this indeterminism, focusing user queries through an interactive dialog. Based on thirty detailed observations of user/librarian interactions obtained through a field experiment, we have developed a computational model designed to simulate the reference librarian. The consultation includes two phases. The first is handle search, where the user's rough problem statement and a user stereotyping imposed by the librarian are used in determining the appropriate tools (handles). The second phase is document search, involving the search for documents within a chosen handle. We are collaborating with the university library for putting our model to use as an intelligent assistant for an online retrieval system.

1. Introduction

While archival information sources such as libraries are relying increasingly on the electronic storage medium for organizing large volumes of information, access to such information is often difficult, thereby limiting the usefulness of computer-based retrieval systems. For the inexperienced user, the problem of finding documents that are relevant to a query can be difficult for three reasons:

1. it requires knowing what information sources (we refer to these as handles) are available in a library, and which of these might be useful,
2. it requires knowledge about the classification scheme (such as the Dewey Decimal classification or other indexing schemes) pertinent to the handles, and
3. the query itself is not well defined because the user is not clear about the topic for which answers are being sought.

Several directions have been proposed for improving subject access. The National Library of Medicine's CITE public access online catalog offers natural language query input, automatic medical subject headings display, closest match search strategy, ranked document output, and the use of dynamic end user feedback for search refinement [Doszkocs83]. The system also supports conventional known-item search options. Other directions include improved classification schemes for documents such as the Dewey Decimal Classification [Cochrane85], providing more extensive linkages between fields in different records that allow users to browse and navigate through a database [Noerr85], and the application of the "hypertext" concept to catalogs, that is, breaking the linearity of the traditional file structure and providing links in a variety of different directions in the catalog.

Searching Uncertainty: An even higher degree of indeterminism has been observed in the terms users employ in describing concepts. One study revealed that on average, the probability of any two people using the same term to describe an object ranged from 7 to 18 percent [Furnas82]. In summary, evidence suggests that there is considerable latitude involved in i) the classification of a document into a particular category, and ii) the term a searcher might use to describe a subject area.
Matching: The uncertainty in indexing and searching reduces the likelihood of an exact match between the user's term and that of the indexer. Bates [Bates86] argues that for a successful match, the searcher must somehow generate as much "variety" (in the cybernetic sense, as defined by [Ashby73]) in the search as is produced by the indexers in their indexing. The variety produced by an indexer can also be viewed as redundancy, in the sense that it consists of partially overlapping meanings applied to a document. To increase the chances of a successful match, there should be a number of labels for each document. This requires preserving the redundancy (generated by the indexer) associated with each document. In practice, however, catalog systems discourage redundancy [Bates86], leading to a reduced likelihood of a successful match.

In this research, our goal has been to understand the consequences of the indeterminism inherent in indexing and searching for documents. Specifically, our objective is to understand the strategies used by reference librarians in coping with the indeterminism associated with helping users find documents relevant to their queries. In the following section, we present a cognitive model of the reference librarian involved in this activity.

3. Process Model of Consultation

The consultation process in the user/librarian interaction consists of two phases. The first is what we call handle search. In this phase, a librarian categorizes the clues in the user's initial problem statement into a template that can be matched against characteristics of the various handles. The librarian also often stereotypes the user into one of several categories (described shortly), and determines what types of handles are likely to be most relevant to the user. During this phase, the librarian does not focus on the details of the query, but functions more like a "traffic controller", guiding the user to the right handle. For example, a freshman looking for materials for a term paper (a common occurrence) is likely to be directed to general textbooks instead of journals containing the latest research articles on the topic, which might be more appropriate for a graduate student working on a Ph.D dissertation.

It can be the case, particularly with sophisticated users, that a user is not satisfied with the adequacy or relevance of the sources suggested by the librarian. In such cases, where the librarian might not have understood the user's problem, the query is restated, typically in different terms, in order to rectify the misconception. On the other hand, if the user is not uncomfortable with the handles suggested by the librarian, the consultation moves into the document search phase. For users unfamiliar with the handle, the librarian goes a step further, helping with the document search. If the user is not satisfied with the documents retrieved after this phase, the consultation resumes with a different handle. The overall process model is schematized in Exhibit 1. In the remainder of this section we describe each of the components of Exhibit 1, along with a representation that models the knowledge used in the parts of the consultation. The numbers associated with each component of Exhibit 1 correspond to section numbers where they are described.

Exhibit 1. Process Model of Consultation. [Diagram: handle search and document search components, including stereotypical user modeling, numbered to match the subsections below.]
3.1. Handle Search

A library can be viewed as a large hierarchy of indexes, each index pointing to other indexes or to documents. In general, reference librarians have extensive knowledge about the library indexing scheme. For the librarian, the information sources are distinguished by their area of applicability and the types of documents they point to. In the initial stages of the consultation, the librarian performs a "goal-directed" questioning process aimed at extracting sufficient information to classify the problem statement and the user into a certain type. This process of categorization significantly reduces the type and number of potentially relevant handles and documents.

3.1.1. Classification Scheme for the Handle

Librarians appear to classify handles according to a few attributes, namely, the types of documents they point to (books, articles, etc.), the fields (psychology, engineering, etc.) and the geographical area (Central America, Asia, etc.) covered by them, and the time frame of documents to which they refer. Knowing about these features provides the librarian with a good general perception of the applicability of each handle. We represent a handle in terms of a frame-like structured object where the values of the above-mentioned attributes distinguish it from other handles. Exhibit 2 lists the attributes of the data structure. Different combinations of slot values reflect the purpose or functionality of the handle. For example, the Business Periodicals Index (a handle) provides pointers to articles (type of information) in business (field of applicability) written in the last 30 years (currency) pertaining to any part of the world (area covered). Similarly, the Central America Monitor is in the form of a newsletter (type of information), and provides information about recent (currency) economic and political events (field of applicability) in Central America (area covered).

Exhibit 2. Data structure of the handle

(object handle
  area covered:           <global, continent, country, state, ...>
  currency:               <range of time>
  type of information:    <journal article, textbook, videotape, government document, statistics, newsletter, ...>
  field of applicability: <psychology, business, engineering, politics, law, medicine, ...>)

3.1.2. Rough Problem Statement

In the first phase of the consultation, the terms in the user's query are translated by the librarian into values that fill the slots of the handle structure. For example, when a user states "I am looking for GDP information in El Salvador" (dialog one in Exhibit 3), the term "GDP" implies that the user is looking for statistics (type of information) in the business (field of applicability) area of El Salvador. The librarian can then ask questions that will result in values for those attributes where no information was supplied by the user. In this example, the librarian asks the user about the specific time frame of interest. During this initial interaction, the librarian attempts to solicit only those items of information that can suggest an appropriate handle, without worrying about the details of the query. Exhibit 3 shows several sample dialogs illustrating the slot-filling process that characterizes handle searching.
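A hypothetical rendering of the Exhibit 2 frame and of this slot-filling step is sketched below. The clue table mapping query terms such as "GDP" to slot values is an assumption introduced for illustration; it is not part of the published model.

    from dataclasses import dataclass

    @dataclass
    class Handle:
        name: str
        area_covered: set   # e.g. {"Central America"}
        currency: tuple     # (earliest year, latest year)
        info_types: set     # e.g. {"statistics"}
        fields: set         # e.g. {"business"}

    # Assumed clue table: query terms suggest values for template slots.
    TERM_CLUES = {
        "GDP":         {"info_types": {"statistics"},
                        "fields": {"business"}},
        "El Salvador": {"area_covered": {"Central America"}},
    }

    def rough_statement(query_terms):
        # Fill the handle-template slots from the user's query; slots
        # left empty prompt follow-up questions ("When?" for currency).
        slots = {"area_covered": set(), "info_types": set(),
                 "fields": set(), "currency": None}
        for term in query_terms:
            for slot, values in TERM_CLUES.get(term, {}).items():
                slots[slot] |= values
        return slots

For the first dialog of Exhibit 3, rough_statement(["GDP", "El Salvador"]) fills every slot except currency, which is exactly the attribute the librarian asks about.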
3.1.3. Stereotypical User Modeling

The user's problem statement may only partially constrain the scope of handles that might be appropriate. In such cases, "stereotypical" information about the user provides further constraints on what handles might be most appropriate. During the consultation the librarian develops an understanding of the type of user being dealt with on the basis of verbal and non-verbal clues. Usually, the type of question brought up, the age of the user, appearance, and the way the question is phrased all play a role in the formation of the stereotype. Some of these clues may be "confirmatory" (i.e., a freshman may be expected to dress in a certain way). We have found that the level of education of the user and the scope of the inquiry are the two major factors involved in the formation of stereotypes. A higher level of education is associated with greater subject familiarity. Users with a higher level of subject familiarity (i.e., Ph.Ds) are likely to require more academically oriented information. In contrast, users with lower levels of subject familiarity are likely to require less scholarly treatment. It is also the case that users with higher levels of education tend to work on research projects, while users with lower levels of education tend to work on limited-scope class projects or papers. The possible stereotypes can be visualized as cells in Exhibit 4, each corresponding to a unique pair of education level and scope of the query. Because of the correlation between level of education and scope of the query, the more commonly encountered stereotypes can be expected to fall along the diagonal of the table.

The stereotypes can be useful in constraining or confirming what information sources might be appropriate. For example, journal articles tend to have a more academic treatment than magazines or newsletters. Knowing the level of education of the user and the scope or purpose of the query can provide important clues about the relative usefulness of these sources to the user. We represent the gradation of information in the sources in Exhibit 5.

Exhibit 3. Segments of protocols indicating problem statement categorization (slot values inferred by the librarian are shown in brackets)

Dialog 1:
U: GDP El Salvador  [area: Central America; type: statistics; field: business]
L: When?
U: 1977 to 1980s  [currency: 77-87]
L: Index to International Statistics or Central Banks Publications

Dialog 2:
U: article brain drain
L: What?
U: foreign engineers brought to US  [type: article]
L: When?
U: last year  [currency: 85-87]
L: Which field?
U: business  [field: business]
L: Business Periodical Index

Dialog 3:
U: economic development Costa Rica  [area: Central America; field: business]
L: General information?
U: Yes  [type: article]
L: Central America Monitor or Latin America Regional Reports

Dialog 4:
U: compare short term therapy to long term therapy, psych disorder  [field: psychology]
L: Article?
U: Could be  [type: article]
L: Recent article?
U: Yes  [currency: 70-86]
L: Psychological Abstracts

(U: user, L: librarian)

Exhibit 4. Stereotypes: a matrix of cells S1,1 through S4,4, with level of education (freshman, sophomore, junior, senior, masters, Ph.D) on one axis and scope of the query on the other, from high (thesis, research paper) to low (class project, class paper).

Exhibit 5. Gradation of different types of information (level of academically oriented information, from high to low):
  journal article, thesis
  government document
  magazine article, textbook
  newspaper article, newsletter
A stereotype (a cell in Exhibit 4) is represented as an ordered list of information sources, where the ordering is a heuristic reflecting decreasing usefulness of sources to that stereotype. For users that are ranked higher along the diagonal of Exhibit 4 (e.g., a Ph.D working on a thesis), where the librarian generally suggests sources such as journal articles and government documents, the ordering is a "top down" version of Exhibit 5. For users that fall toward the lower left hand corner of Exhibit 4 (e.g., a freshman working on a class paper), the reverse ordering applies. Other stereotypes have different orderings. The general process of matching users to sources is described more precisely in the following subsection.

3.1.4. Handle Matching

After the initial problem statement and the stereotyping of the user, the librarian knows the type of information the user is looking for, the field, the geographical areas, and the time frame pertinent to the query. Since each handle has specific values corresponding to each of the attributes, the problem of selecting a handle is one of matching the two sets of attribute values. In other words, the librarian attempts to find handles that cover the user's information requirements based on the four attributes. Two heuristics have been observed in this handle matching process.

A. Minimum Superset Heuristic

In some cases there may be more than one handle that is appropriate for the user's query. In this situation, the librarian generally recommends the handle that provides "just-enough" information, since it saves the user the trouble of eliminating information from the handle that is irrelevant to the query. For instance, a user looking for information in psychology is likely to be pointed to the Psychological Abstracts instead of the Social Sciences Index, even though both might qualify as candidate handles based on their attribute values for a query. We refer to this heuristic as the minimum superset heuristic and define it as "the ratio of the extent of information in the handle to the extent of information needed by the user," as measured by the attribute values of the query and the handle. The lowest score a superset handle (a handle that completely covers the requirement of the query) can have is one, which implies an exact match. Handles that are over-qualified have a score higher than one. All qualifying handles are arranged as an ordered list according to decreasing scores.

B. Partial Match Heuristic

In some cases, there might not be any handle that meets the user's requirements completely. For example, a user looking for in-depth information on political trends and economic development in Asia may discover that the Business Periodical Index covers articles in business whereas the Social Sciences Index provides information on the politics of the region. In such cases, the librarian builds a list of partially-matching handles where the ordering reflects the relevance of the handles to the query. The ordering is based on "the ratio of the extent of information supplied by the handle that is required by the user to the total extent of information needed by the user" along the four attributes defined earlier. For example, if a user wants information in two distinct fields whereas a handle provides information in only one of these fields, the handle is assigned a score of 0.5 on that attribute. The same scoring scheme applies to the two other set-valued attributes: the area of applicability and the type of information (listed in Exhibit 2). For the currency of information attribute, if a user wants documents dated from time x to time y and a handle provides documents from time s to time t, where x is less than s and t is greater than y, the handle is assigned the score (y-x)/(t-s). If t is less than y, the score is (t-x)/(t-s); if t is less than x, the score is zero. The ordering of the handles is based on the overall scores of the matching. If there are both over-qualified handles and partially-qualifying handles which cover the user's query, the over-qualified handles are ranked higher than the partially-qualifying handles.
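Both heuristics can be read as scoring functions over the four attributes. In the sketch below, the piecewise currency score transcribes the (y-x)/(t-s) rule verbatim; the coverage ratio on the set-valued attributes and the equal-weight aggregation are assumptions about details the text leaves open.

    def attribute_score(handle_values, wanted_values):
        # A handle covering one of two requested fields scores 0.5.
        if not wanted_values:
            return 1.0
        return len(handle_values & wanted_values) / len(wanted_values)

    def currency_score(handle_range, wanted_range):
        s, t = handle_range   # handle covers [s, t]; assumes t > s
        x, y = wanted_range   # user wants documents from x to y
        if t < x:             # handle ends before the wanted period
            return 0.0
        if t < y:             # partial temporal coverage
            return (t - x) / (t - s)
        return (y - x) / (t - s)   # handle spans the whole period

    def overall_score(handle, query):
        # Assumed aggregation: unweighted mean of the four attributes.
        parts = [attribute_score(handle.fields, query["fields"]),
                 attribute_score(handle.area_covered,
                                 query["area_covered"]),
                 attribute_score(handle.info_types,
                                 query["info_types"]),
                 currency_score(handle.currency, query["currency"])]
        return sum(parts) / len(parts)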
When a suggested handle is not deemed an appropriate one by the user, it is generally reflective of a misconception of the problem by the librarian. If none of the suggested handles are appropriate, the query is restated by the user, and the handle search starts over. Except for very sophisticated users, it is not generally the case that a user can determine the relevance of a handle solely by its label. Rather, assessing the relevance of a handle generally requires exploring what documents it actually points to. This latter search process, what we term document search, is the second phase of the consultation model depicted in Exhibit 1.

3.2. Document Search

The way in which a handle is explored depends on the specific access methods provided by it. For example, strategies for finding information in an online database differ from those used for Central Banks Annual Reports, which are stored on microfiche. In this study, we limited ourselves to online access tools. These tools include the library's online catalog system and several other commercial online databases.

3.2.1. Detailed Problem Statement

In order to be able to retrieve documents that will address the specific needs of the user, the librarian elicits specific terms from the user. This leads to a somewhat more detailed problem statement than what was expressed initially. This more detailed statement must then be sharpened and translated into a form where "official terms" (used in the indexing scheme) are included in it. Further, in order to capture the "semantic content" of the problem, the ordering of such terms and the operators (these could be Boolean operators such as AND, OR and NOT) used must be chosen appropriately. If the user can provide as many detailed terms as possible, it creates more potential access points to the official terms, which in turn increases the chance of matching.

3.2.2. Generation of Official Terms

The chances of terms in the user's query matching official terms are generally low. The librarian therefore initiates a "terms translation" process which includes consulting the Thesaurus and a brainstorming process aimed at eliciting official terms that might be similar to terms in the detailed problem statement. The Thesaurus can be viewed as a large semantic network of terms (concepts) where links are of two types: relations between unofficial and official terms, and set-superset relations (like IS-A links). The users can converge on the official terms by traversing the network. In this stage of the consultation, both the user's and the librarian's familiarity with the subject area play an important role in determining the appropriate requirements. If the user or the librarian is familiar with the subject area, more terms might be proposed, increasing the chance of matching terms in the Thesaurus. The librarian might suggest terms directly, or urge the user to provide them. The goal is to end up with a query which includes only official terms.

3.2.3. Combination of Terms

After the official terms have been generated, they must be arranged in a way that expresses the "semantic content" of the user's problem. The combination of terms is generally limited by the facilities available on the system.
For example, many online databases provide boolean operators for combining terms. Some of these allow for the generation of temporary sets for further processing. The ordering of terms and operators is generally suggested by the user, with the librarian sometimes providing predictions on how large the resulting sets are likely to be.

Combining the terms results in a listing of documents that match the structured query. If the resulting set contains too many documents, the query must be tightened; this can be done by substituting ANDs for ORs in the query and/or rearranging the terms. Similarly, if the resulting set is too small, the query must be loosened by substituting ORs for ANDs or, as before, rearranging the terms. If the iterative process of query refinement results in documents that are not relevant, the document search phase begins over again with a different handle. The consultation terminates when a reasonable number of documents have been found that the user feels are relevant to the query.
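One plausible rendering of this refinement loop, with assumed thresholds and with a search(op, terms) function standing in for the online system:

    def refine(terms, search, too_many=200, too_few=5):
        # terms: official terms; search(op, terms) is assumed to return
        # the documents matching the terms combined with AND or OR.
        op, tried = "AND", set()
        while op not in tried:
            tried.add(op)
            docs = search(op, terms)
            if len(docs) > too_many:
                op = "AND"   # tighten: substitute ANDs for ORs
            elif len(docs) < too_few:
                op = "OR"    # loosen: substitute ORs for ANDs
            else:
                break
        return docs

If the loop ends with an unsatisfactory set, the model's next move is the one described above: return to handle search and try a different handle.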
In such situations, typically in- volving sophisticated users with unusual queries, the librarian has lit- tle choice but to engage in a detailed communication process and 5 L IS this ridely acceptid concept? 6 U Yes Other warlaDleS eel~esteem 7 and vocottonal rnterest (The 1ibrarle.n "688 the thesaurus of ERIC descriptors ) 8 L Have you used ERIC before? Terms translation 9 ” NO. only “60 soc1*1 sci*nce 1nclex an* 10 psych abstract “learn” about the details of the user’s problem in order to render reasonable assistance. Such situations are clearly out of the realm of computer based assistance. However, for the large majority of user queries, our model should prove to be a useful practical online assis- 11 L ERIC US86 ei~glneWS and englheerlng 12 technlclans also engineering technology 13 " they are "hat I rant 14 L look at related terms mechablcal design 15 engineers 16 U No. Chat is different tant. 20 l*tBre*t 21 u try YOCatlonal development or career 22 &?".3lOp*Qt 23 L they "se career de"elopmer,t under tnac 24 there is vocatlcaal msturlty References 30 L yes re can 'and* thee t"o then *or* 31 these do you think it Vrll cover 32 your problem? 33 u Yes (Use the ERIC 0411ne database ) 35 L how does these articles look7 Check the relevance and 36 u one or t.Ycl fit amount Of lnformatlon 37 L how bo you feel abOUt 107 hlts7 38 " 1967 1s a little bit far back but I "ant 39 them all _____--_--_--_____-_------------------------------------------------------- u user. L Llbi-al-Ian 1. Ashby, W. Ross. An Introduction to cybernetics. Methuen, London, 1973. 2. Bates, Marcia J. “Subject access in online catalog: a design model’. Journal of the American Soeiety of Information Science , ( 1986), . (in press). 2. Cochrane, Pauline A.; Markey, Karen. “Preparing for the use of clsasification in online cataloging systems and in online catalogs’. Information Technology and Librarice 4, 2 (June 1985), 91-111. 4. Doszkocs, Tamss E. ~CITE NLM: natural-language searching in an online catalog@. In formation Technology and L&a&s 8, 4 (December 1983), 384-380. 6. Furnas, George W. et al. Statistical semantics: how can a computer use what people name things to guess what things people mean when they name things. Proceedings of the Human Factors in Computer Systems Conference, Gaithersburg, MD. New York: Association for Computing Machinery, March, 1982. 4, 0. Harris, L. R. .User oriented data base query with the ROBOT natural language query system’. International Journal of Man Machine Studies 9, ( 1977), 697-713. extent to which such systems must model their users. The central in- An important consideration in building intelligent systems is the ference problem in user modeling can be stated as follows: given 7. Hendrix, G. G. et al. @Developing a natural language interface to complex data’. ACM Transctions on Database Systems 8, ( 1978), 105-147. 8. Hjerppe, R. Project HYPERCATafog: visions and preliminary conceptions of an extended and enhanced catalog. Proceedings of IRFLS, Bth, Frascati, Italy, Septcm- some observed behavior of the user, infer the state of the user model that accounts for the behavior [Konolige85]. The various views in the literature on the importance of being able to infer the user’s evolving “mental states” in the course of a dialog appear to have been driven by specific features of different problem domains. 
An important consideration in building intelligent systems is the extent to which such systems must model their users. The central inference problem in user modeling can be stated as follows: given some observed behavior of the user, infer the state of the user model that accounts for the behavior [Konolige85]. The various views in the literature on the importance of being able to infer the user's evolving "mental states" in the course of a dialog appear to have been driven by specific features of different problem domains. While extensive user modeling is usually considered necessary in tutoring situations where correcting users' misconceptions may be important [Sleeman85], it is less clear whether the same type of modeling is necessary for explanation and text generation systems [Swartout85]. Given the idiosyncratic needs of the diversity of users in a library, one might expect that a detailed user model would be necessary in order to render useful advice to a user. However, since this is clearly a practical impossibility, the librarian must adopt cruder but more generic strategies that have a good chance of being successful and applicable to the cross section of users. The strategies that they use, as described in this paper, reflect an effective compromise between support tailored to individuals and support suitable for a large population of users.

From a practical standpoint, our model should prove useful in two ways. Firstly, it should remove some of the burden from reference librarians, particularly for routine types of queries. Secondly, given the increasing importance of providing remote access to library facilities, an intelligent online assistant should prove to be effective in increasing the accessibility of these facilities. As a closing caveat, we should point out that we do not expect to replace the reference librarian, nor do we think it practically possible to do so. In the course of this investigation, we have observed some unusual cases involving extensive dialogs between users and librarians directed at clarifying requirements, with some of these taking the better part of an hour. In such situations, typically involving sophisticated users with unusual queries, the librarian has little choice but to engage in a detailed communication process and "learn" about the details of the user's problem in order to render reasonable assistance. Such situations are clearly out of the realm of computer-based assistance. However, for the large majority of user queries, our model should prove to be a useful practical online assistant.

References

1. Ashby, W. Ross. An Introduction to Cybernetics. Methuen, London, 1973.
2. Bates, Marcia J. "Subject access in online catalogs: a design model". Journal of the American Society for Information Science (1986), in press.
3. Cochrane, Pauline A.; Markey, Karen. "Preparing for the use of classification in online cataloging systems and in online catalogs". Information Technology and Libraries 4, 2 (June 1985), 91-111.
4. Doszkocs, Tamas E. "CITE NLM: natural-language searching in an online catalog". Information Technology and Libraries 2, 4 (December 1983), 364-380.
5. Furnas, George W. et al. "Statistical semantics: how can a computer use what people name things to guess what things people mean when they name things". Proceedings of the Human Factors in Computer Systems Conference, Gaithersburg, MD. New York: Association for Computing Machinery, March 1982.
6. Harris, L. R. "User oriented data base query with the ROBOT natural language query system". International Journal of Man-Machine Studies 9 (1977), 697-713.
7. Hendrix, G. G. et al. "Developing a natural language interface to complex data". ACM Transactions on Database Systems 3 (1978), 105-147.
8. Hjerppe, R. "Project HYPERCATalog: visions and preliminary conceptions of an extended and enhanced catalog". Proceedings of IRFIS, 6th, Frascati, Italy, September 1985, pp. 15-18.
9. Jacoby, J.; Slamecka, V. Indexer Consistency under Minimal Conditions. Documentation, Inc., Bethesda, MD, 1962.
10. Konolige, Kurt. "User modelling, common-sense reasoning and the belief-desire-intention paradigm". User Modelling Panel, Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, California, August 1985.
11. Noerr, Peter L.; Bivins Noerr, Kathleen T. "Browse and navigate: an advance in database access methods". Information Processing & Management 21, 3 (1985), 205-213.
12. Sager, N. Natural Language Information Processing: A Computer Grammar of English and Its Applications. Addison-Wesley, Reading, MA, 1981.
13. Sleeman, D. "Student models in intelligent tutoring systems". User Modelling Panel, Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, California, August 1985.
14. Stevens, Mary Elizabeth. Automatic Indexing: A State-of-the-art Report. U.S. Government Printing Office, Washington, DC, 1965.
15. Swartout, Bill. "Explanation and the role of the user model: how much will it help?" User Modelling Panel, Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, California, August 1985.
16. Walker, Donald. "The organization and use of information: Contributions of Information Science, Computational Linguistics and Artificial Intelligence". Journal of the American Society for Information Science (September 1981), 347-363.
17. Waltz, D. L. "An English language question answering system for a large relational data base". Communications of the ACM 21 (1978), 526-539.
A Mechanism for Early Piagetian Learning

Gary L. Drescher
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139

Abstract

I propose a mechanism to model aspects of Piagetian development in infants. The mechanism combines a powerful empirical learning technique with an unusual facility for constructing novel elements of representation - elements designating states that are not mere logical combinations of other represented states. I sketch how this mechanism might recapitulate the infant's gradual recognition that there exist physical objects that persist even when the infant does not perceive them. I also report results of a preliminary, partial implementation.¹

¹ This research was done at the MIT AI Laboratory. This work is nonmilitary, but my use of laboratory computers obliges me to state that the laboratory's AI research has been supported in part by the Advanced Research Projects Agency of the Department of Defense under ONR contract N00014-85-K-0124. This does not imply my approval of the United States policy of terrorizing civilians to impose repressive regimes for US strategic or economic advantage.

I. Statement of the Problem

According to Piaget's constructivist theory of mind, the elements of mental representation - even such basic elements as the concept of physical object - are constructed afresh by each individual, rather than being innately supplied [Piaget 1952, 1954]. At first, the infant's conception of the world is virtually solipsist: the infant represents the world only in terms that correspond to basic sensory impressions and motor actions. As the infant interacts with the world, it learns that some actions affect some sensations. But the infant does not understand that there are objects "out there", objects that its actions affect, that can be perceived by sight or touch, and that persist even when not perceived.

Crucially, the infant later transcends this limitation, inventing for itself the idea of physical object, constructing new terms of representation to augment the innate sensorimotor ones. The infant constructs the concept gradually, in stages; along the way, intermediate representations become less subjective, less tied to the infant's own perspective and activity. Progression from subjective to objective or abstract representations is a central theme of Piagetian development; the physical-object concept provides an early, paradigmatic example.

Piaget supplies elaborate observations of characteristic behaviors at each developmental stage as reflections of the infant's underlying representations of the world. But Piaget stops short of explaining what mechanism underpins the development he describes; that is the goal of my present effort. I take Piagetian development as a working hypothesis; trying to implement it is a way to test and refine the hypothesis.

II. The Schema Mechanism

A schema has a context, action, and result. The context is a (possibly empty) set of items, as is the result. A schema asserts that if its context is satisfied - if the designated states obtain - then taking the action makes the result more likely to obtain than if the action weren't taken. A reliable schema asserts, further, that the action makes the result likely (not just more likely); schemas keep track empirically of their reliability.
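The bookkeeping just described can be rendered as a small data structure. The counters and the threshold test below are assumptions about machinery the text leaves unspecified; only the context/action/result triple and the empirical reliability tracking come from the description above.

    from dataclasses import dataclass

    @dataclass
    class Schema:
        context: frozenset   # items (or their negations) required On/Off
        action: str          # one of the ten primitive actions
        result: frozenset    # items asserted to become more likely
        activations: int = 0
        successes: int = 0

        def applicable(self, state):
            # The context is satisfied when its items obtain.
            return self.context <= state

        def record(self, result_obtained):
            self.activations += 1
            self.successes += int(result_obtained)

        def reliable(self, threshold=0.9):
            # A reliable schema makes the result likely, not just more
            # likely; the 0.9 cutoff is an assumed threshold.
            return (self.activations > 0 and
                    self.successes / self.activations >= threshold)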
Only reliable schemas can serve as elements of a "plan" (a set of schemas coordinated to achieve a goal); unreliable schemas are stepping stones to finding reliable ones, as described below. Note that a schema's context is not a precondition for taking the action; the same action might be taken in a number of contexts, with different expected results. Note also that a schema, even if reliable, is not a rule that says to take the action when the context is satisfied; rather, the schema just asserts what would happen if the action were taken then.

The Schema Mechanism lives in a two-dimensional microworld, populated by objects that can be seen, felt, grasped, and moved. The mechanism controls a body that has a hand and an eye. Each primitive item corresponds to a sensory input; for example, for each of 25 regions in the visual field, there is an item that is in the On state whenever an object appears in that region. (This is meant to be analogous to an output of low-level vision in humans, rather than, say, to the state of a retinal cell.) Other visual primitive items provide detail about the appearance of objects at the central, foveal region of the visual field. Tactile primitive items report contact with the hand, and other parts of the body. Finally, proprioceptive primitive items report the body-relative position of the hand, and the glance orientation. For each of 25 glance orientations, there is a visual-proprioceptive item that is On whenever that orientation is current; similarly, there are 25 haptic-proprioceptive items that report hand position. There are ten primitive actions: four actions for moving the hand incrementally forward, back, right, or left; four for incrementally changing where the visual field maps to; and opening and closing the hand.

A schema whose context conditions are currently satisfied competes for activation (having its action taken) based in part on its leading to the satisfaction of some goal. (Also, a schema can suppress its action if the schema predicts an undesirable result in the current situation.) Schemas can form an implicit chain from a current state to a goal state, the result component of each schema in the chain including the elements of the next schema's context; the mechanism's parallel architecture lets such chains be found quickly. The mechanism's built-in goals include mundane ones (e.g., eating), as well as curiosity-based goals, which appeal to heuristic assessments of the usefulness and interestingness of the mechanism's constructs. In addition to built-in goals, some states become valued as goals because of their strategic facilitation of other things of value. I omit further discussion here of criteria for activation and valuation, to emphasize instead the machinery for building new structures.

III. Building Schemas

The Schema Mechanism looks for results that follow from actions; and, if a result follows unreliably, the mechanism seeks conditions under which the reliability improves. The mechanism builds schemas that reflect these discoveries. Typically, the derivation of a reliable schema involves building a series of intermediate ones, which alternate between discovering intermittent results of a schema's activation, and finding additional conditions that must hold for the results to follow reliably. In the beginning, for each primitive action, there is also a built-in schema with that action, and with empty context and result. These initial schemas, which assert nothing, are points of departure for building contentful schemas.
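To make the data structure concrete, here is a minimal sketch of a schema as described above: a context, an action, a result, and running counts from which reliability is estimated empirically. The class names, the counting scheme, and the fixed reliability threshold are illustrative assumptions of mine, not details taken from the paper or from MARCSYST.

from dataclasses import dataclass

@dataclass(frozen=True)
class Item:
    name: str            # e.g. "vf(3,2)" for a visual-field region
    negated: bool = False

@dataclass
class Schema:
    context: frozenset   # Items that must be On (or Off, if negated)
    action: str          # one of the ten primitive actions, e.g. "glance-left"
    result: frozenset    # Items asserted to become more likely after the action
    activations: int = 0 # times the action was taken with context satisfied
    successes: int = 0   # times the result then obtained

    def reliability(self) -> float:
        return self.successes / self.activations if self.activations else 0.0

    def applicable(self, state: dict) -> bool:
        # the context is satisfied when every context item has its
        # designated state in the current world snapshot
        return all(state.get(i.name, False) != i.negated for i in self.context)

RELIABLE = 0.9           # an assumed threshold; the paper fixes no number

def is_reliable(s: Schema) -> bool:
    return s.reliability() >= RELIABLE

Making context and result frozen sets keeps schemas hashable, which is convenient for detecting duplicate spinoffs; that is a design choice of this sketch, not of the mechanism itself.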
The Schema Mechanism builds new schemas from existing ones by extending the context or result of an existing schema. The old schema doesn't change, but a copy, or spinoff schema, appears, with a new item added to its context or result.

Every schema has an extended context and an extended result, in addition to the context and result proper. Each extended context or result has a slot for every item, primitive and nonprimitive, in the mechanism's database. For each schema, each extended result slot keeps track of whether the associated item turns On more often if the schema has just been activated than if not. If so, the mechanism attributes that state transition to the action, and builds a spinoff schema, with that item included in the result. (If a schema's activation makes some item more likely to turn Off, the item's negation joins the result of a spinoff schema.)

A result attributed to a schema's activation may be arbitrarily unlikely to follow the schema's activation; the result must only be significantly more likely than if the schema isn't activated. A spinoff schema can thus be arbitrarily unreliable. But a schema's extended context tries to identify conditions under which the result more reliably follows the action. Each extended context slot keeps track of whether the schema is significantly more reliable when the associated item is On (or Off). When the mechanism thus discovers an item whose state is relevant to the schema's reliability, it adds that item (or its negation) to the context of a spinoff schema. (Extensions of this scheme, described in [Drescher 1985, 1986], increase its sensitivity to certain kinds of context conditions, reduce the proliferation of effectively redundant spinoffs, and suppress otherwise-reliable schemas when exceptional, overriding conditions hold.)

For purposes of execution only, three-part schemas could instead be two-part production rules, context and action collapsing into the left-hand part of a rule. But a bipartite structure is inadequate for building new schemas by marginal attribution, which needs to treat context, action, and result differently.

The Schema Mechanism uses only reliable schemas to pursue goals. But the mechanism needs to be sensitive to intermittent results, because a reliable effect can seem arbitrarily unreliable until the relevant context conditions have been identified. Consider, for example, the action of shifting the glance incrementally to the left. This reliably turns On the item designating an object at, say, the center of the visual field, provided that an object was seen just left of center beforehand. Until that prior condition is recognized as such, the result will be seen to follow from the action only infrequently.

Moreover, the same action, in other contexts, has different results (e.g., making other visual-field items change state); furthermore, the given result often occurs without the action in question, caused instead by another glance action, by a hand action, or by an external event; and, whether or not the result obtains, the action typically accompanies many other, coincidental transitions. Despite all this, the result is more likely to occur at a given moment if the glance-left action is taken than if not (presuming, realistically, that objects' images spend somewhat more time being approximately stationary in the visual field than they spend moving).
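The extended-result bookkeeping lends itself to a short sketch. The Laplace-style priors and the fixed ratio test below are my assumptions; the paper requires only that the result be "significantly more likely" after activation than otherwise, without committing to a particular statistic.

class ExtendedResultSlot:
    def __init__(self):
        self.on_after_activation = 1   # assumed smoothing priors
        self.activations = 2
        self.on_otherwise = 1
        self.other_moments = 2

    def record(self, activated: bool, turned_on: bool):
        # called once per time step for this (schema, item) pair
        if activated:
            self.activations += 1
            self.on_after_activation += turned_on
        else:
            self.other_moments += 1
            self.on_otherwise += turned_on

    def relevant(self, ratio: float = 2.0) -> bool:
        # a transition is attributed to the action when it is markedly
        # more frequent just after activation, even if rare in absolute terms
        p_with = self.on_after_activation / self.activations
        p_without = self.on_otherwise / self.other_moments
        return p_with > ratio * p_without

def maybe_result_spinoff(schema, item, slot):
    # the original schema is unchanged; a spinoff copy appears with the
    # item added to its result (Schema as in the earlier sketch)
    if slot.relevant():
        return Schema(schema.context, schema.action,
                      schema.result | frozenset([item]))
    return None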
Thus, the initial glance-left schema can identify the visual-field-center item as a tentative result; this prompts the construction of a spinoff schema, whose extended context then finds the condition (namely, the visual-field left-of-center item being On) that confers reliability. This discovery spawns another schema, this time a reliable one.

An alternative mechanism might look for reliable results already paired with appropriate contexts, rather than trying to identify infrequent results independently first. But there are too many such pairs to consider them exhaustively. Usually some conjunction of conditions must hold for a result to follow an action reliably; hence, contexts of reliable schemas typically include more than one item. Results, too, must be able to include multiple items, in order to chain to multiple-item contexts. With m actions and n items (and their negations), there are m3^(2n) expressible schemas, since each of the n items may appear in a context asserted, negated, or not at all, and likewise in a result; even if contexts and results were limited to, say, five items each, there would be about m(2n)^10 expressible schemas. If there are to be thousands, or perhaps millions, of actions and items, even m(2n)^10 possibilities are far too many for exhaustive search.

One might try to relieve the combinatoric problem by partitioning actions and items into categories, designing the mechanism to seek connections within categories, not between them. Indeed, it seems plausible that most actions are irrelevant to most items. But I am skeptical that many categories of mutual relevance can be usefully characterized in advance. Among the primitives, for example, hand actions have haptic proprioceptive results, tactile results, and visual results; thus, we can exclude neither inter- nor intra-modal connections. As for nonprimitive items and actions, it seems even less plausible to be able to impose a priori constraints on the mutual relevance of constructs that themselves are not known a priori.

Thus, I propose instead the present marginal attribution scheme, whereby the mechanism can identify an action's contribution to a result before hypothesizing the corresponding context conditions, even if, out of context, the result follows the action only infrequently, and amid many other, irrelevant events. This approach is not inexpensive; exhaustive cross-connectivity between schemas and items may seem an exorbitant, brute-force solution. But it is a bargain compared to the size of the space being searched, the space of expressible schemas.

IV. Composite Actions

The Schema Mechanism builds new actions, called composite actions. Each composite action has a goal state which, like a context or result, is a set of items. A composite action identifies schemas that can help to achieve the goal state: schemas that chain to the goal state from various other states. When a composite action is initiated, it coordinates the successive activation of schemas to reach the goal state (if possible from the initial state); these schemas need not independently compete for activation.

Any newly-achievable result (any conjunction of items that appears for the first time in some reliable schema's result) is a candidate goal state for a new composite action. As with each primitive action, the mechanism builds for each new composite action a schema with empty context and result that uses that action. For example, if the mechanism has built schemas that say how to turn on a lightswitch, then the mechanism could also define a composite action whose goal state is lightswitch-on. The schema with that action can then discover what results from the lightswitch being on.

It is important to be able to represent the action at the right level of abstraction, as lightswitch-on, rather than just as whatever primitive motor action is responsible for pushing the switch on. A schema that looks for results of lightswitch-on per se discovers and represents effects that are independent of the particular motor sequences responsible; hence, in the absence of contrary evidence, the discovery automatically generalizes to other motor implementations of the same higher-level action. Furthermore, the mechanism regards a composite action as having been taken whenever its goal state is satisfied, even if external events are responsible; hence, composite actions let the mechanism look for the effects of external events, not just of its own actions.

V. Synthetic Items

It is important for a learning mechanism to discover relations among existing representational elements, and to organize such knowledge at appropriate levels of abstraction. But a constructivist system's greatest challenge is to synthesize new elements of representation, to designate what had been inexpressible. Synthetic items enable the Schema Mechanism to do this.

Each synthetic item is based on some schema that says, in effect, how to recover a manifestation of something that is no longer shown; we can say that the synthetic item reifies this recoverability, construing the potential-to-recover as a thing in itself.

For example, suppose the Schema Mechanism moves its hand away from some stationary object directly in front of its body (and suppose the mechanism's eye is directed away from the object). Presumably the object is still present; but, at first, the mechanism has no way even to represent this fact, since the object now has no manifestation in the state of any primitive items. The Schema Mechanism, like a four-month-old infant in Piaget's theory, is simply oblivious to the possibility of reaching again for the unperceived object, or of turning to look at it.

But suppose there is a schema with empty context, whose (nonprimitive) action is moving the hand directly in front of the body (as indicated by a haptic proprioceptive item), and whose result is touching-something. (In other words, this schema says: if I reach directly in front of me, I'll touch something there.) Of course, this schema is unreliable; it only works when there is in fact an object sitting there. But, significantly, this schema is locally consistent: if it activates and achieves its result, it is likely to be reliable if activated again in the next little while (because, typically, objects tend to stay put for a while).

The Schema Mechanism keeps track of the observed local consistency of each schema. If a schema is unreliable, but locally consistent, the mechanism constructs a synthetic item for it. This item designates whatever unknown condition in the world governs the schema's success or failure; the schema's local consistency implies that this condition is slow to change state. In the present example, the mechanism creates a synthetic item that designates a palpable object directly in front of the body. While a synthetic item is On, the mechanism regards the item's host schema (the schema for which the item was created) as reliable.
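A sketch of synthetic-item creation, under assumed thresholds and discrete time steps, might look as follows; is_reliable comes from the earlier schema sketch, and the fixed decay window stands in for the empirically established duration of local consistency described below.

class SyntheticItem:
    def __init__(self, host_schema, expected_duration):
        self.host = host_schema
        self.expected_duration = expected_duration  # learned, not fixed,
        self.on = False                             # in the real mechanism
        self.turned_on_at = None

    def host_activated(self, succeeded: bool, now: int):
        # successful activation of the host turns the item On; failure, Off
        self.on = succeeded
        self.turned_on_at = now if succeeded else None

    def tick(self, now: int):
        # revert to Off once the local-consistency window has elapsed
        if self.on and now - self.turned_on_at > self.expected_duration:
            self.on = False

def maybe_synthesize_item(schema, locally_consistent: bool):
    # only an unreliable-but-locally-consistent host earns a synthetic item
    if locally_consistent and not is_reliable(schema):
        return SyntheticItem(schema, expected_duration=20)  # assumed window
    return None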
An item is useful only to the extent that some machinery turns the item On or Off when the condition it designates does or doesn't obtain. Each primitive item, of course, is simply wired to some peripheral apparatus that maintains the item's state. To maintain the state of a given synthetic item, the mechanism exploits three kinds of clues:

- When the item's host schema activates successfully, the item turns On; unsuccessfully, Off. The item reverts to the Off state a while after being turned On; the length of the while is the empirically established expected duration of the host schema's local consistency.
- The host schema's extended context looks for items whose states correlate empirically with the synthetic item's: an item whose being On implies that the synthetic item is On (or Off); or whose being Off implies that the synthetic item is On (or Off). If the mechanism finds a strong correlation, it thereafter turns the synthetic item On or Off according to the correlated item's state.
- The synthetic item, like any other item, may appear in the context or result of subsequently constructed schemas. If the item (or its negation) is in the result of a reliable schema, the mechanism turns the item On (or Off, respectively) when that schema has completed activation.

A synthetic item's state-maintaining criteria bootstrap from one another: as these criteria accumulate, the item becomes more likely to turn On or Off when appropriate, increasing the mechanism's opportunity to discover further correlates of the item's state. The crucial step is the first one: synthesizing an item to reify an unknown condition lets the mechanism start to learn about that condition.

VI. A Hypothetical Scenario

In [Drescher 1985, 1986], I present a detailed hypothetical scenario in which the Schema Mechanism builds its way toward a late-sensorimotor-stage conception of physical objects. First, the mechanism assembles a substrate of spatial knowledge: it builds a network of schemas that denote the adjacency of pairs of visual field items by noting the transformation from an item to an adjacent one by the appropriate incremental-glance action (as in the example above). A similar network shows the relations among visual proprioceptive items, again with respect to glance actions; and another network relates the haptic proprioceptive items via incremental hand-motion actions.

Early intermodal coordinations appear. Some schemas anticipate contact between hand and body when the hand moves incrementally from certain proprioceptively-designated starting places. Other schemas predict visual effects of moving the hand when it is in view; still others anticipate tactile contact when, for example, the hand is seen just left of some object, and moves left. This anticipation corresponds to the earliest form of Piagetian visual-tactile coordination in infants.

The visual-field schemas chain together to enable the mechanism to foveate: to look directly at some object that appears at the visual periphery. The visual proprioceptive schemas chain together to enable the mechanism to shift from any glance orientation to any other; similarly, chains of haptic proprioceptive schemas lead from any hand position to any other. Each proprioceptive item is by now an achievable result; hence, each such item is the goal state of a "positional" (as opposed to incremental) action, the action of moving the hand or eye to a certain orientation.
Positional hand actions facilitate knowing how to move the hand into view: each visual proprioceptive item serves as the context of a schema whose action is moving the hand to a certain position, and whose result is seeing the hand. When an object is in view, schemas for moving the hand into view near the object chain to schemas that say, based on the visual appearance of the hand and object, how to move the hand to touch the object. This coordination extends the mechanism's earlier, cruder visual-tactile coordination. Other schemas chain in the opposite direction, enabling the mechanism to shift its gaze to look at what the hand touches.

The positional actions also facilitate the construction of synthetic items that designate objects at the various positions. For instance, as in the example above, there is a schema with empty context, whose action is moving the hand directly in front of the body, and whose result is touching; the corresponding synthetic item designates a palpable object directly in front of the body. Other synthetic items designate palpable objects at other body-relative positions. Analogous schemas, with positional eye (rather than hand) actions, give rise to synthetic items that designate visible objects at various positions.

At first, nothing prevents, say, a palpable-object synthetic item from being On while the visual-object item for the same position is Off; in that case, the mechanism knows that it can reach back to the object, but is oblivious to the possibility of glancing at it. Later, though, each visual-object item's state-maintaining apparatus recognizes the corresponding palpable-object item's state as an indicator of the item's own state (and vice versa). Later still, the mechanism synthesizes items that designate objects hidden by obstacles; each such item's host schema shows how to recover the object by displacing the obstacle (in contrast with the simpler action of reaching or glancing back to an unhidden object). Each of these elaborations of the concept of physical objects corresponds to a milestone in Piagetian development.

[Jones 1970], [Cunningham 1972], [Becker 1973], and [Klahr, Wallace 1976] propose mechanisms for aspects of Piagetian or sensorimotor development; Cunningham's work, including a detailed sensorimotor scenario, inspired my own effort. Becker's schemas, like mine, have a context, action, and result. None of these systems addresses the combinatoric problem in finding empirical associations; and none except Klahr and Wallace's constructs nontrivially novel elements of representation. Klahr and Wallace's system builds tokens that designate the applicability of production subsystems; these tokens are similar in spirit to synthetic items.

The Schema Mechanism's earliest acquired abilities are probably innate in humans. My proposal of a tabula rasa learning mechanism is not meant to deny the extensive innate domain-specific competence in human peripheral modules. But by the present hypothesis, this competence is not available to the central system, which sees only the outputs of the peripheral modules, presented as "gensyms". The central system must reconstruct in its own scheme of representation much of the innate peripheral competence, as a first step to surpassing the innate competence.

VII. The Implementation

An existing program, called MARCSYST (Marginal Attribution and Representation Construction System), partially implements the Schema Mechanism. So far, MARCSYST builds spinoff schemas, but does not yet implement composite actions or synthetic items. In a typical run of 20,000 simulated seconds, the mechanism builds schemas that accord with the beginning of the hypothetical scenario: there are schemas that comprise much of the eventual visual field network, and the visual and haptic proprioceptive networks; and many schemas that designate hand-eye and hand-body coordination. These results, while quite preliminary, are on the right track.

After accumulating several hundred schemas, the present Lisp Machine version of MARCSYST slows to several real-time seconds per simulated second. I plan to move MARCSYST to a massively parallel machine before completing the implementation.

Acknowledgements

I am grateful to Seymour Papert, and Hal Abelson, Phil Agre, David Chapman, Marvin Minsky, Ron Rivest, Gerry Sussman, and other friends and colleagues for illuminating discussions and support. Ron Rivest showed me how to halve the implementation's memory needs. David Chapman helped with comments on this paper.

References

[Becker 1973] Becker, J. "A Model for the Encoding of Experiential Information", Computer Models of Thought and Language, eds. Schank, R. and Colby, K., pp. 396-434. San Francisco: Freeman, 1973.
[Cunningham 1972] Cunningham, M. Intelligence: Its Origins and Development. New York: Academic Press, 1972.
[Drescher 1985] Drescher, G. The Schema Mechanism: A Conception of Constructivist Intelligence. MS thesis, MIT, 1985.
[Drescher 1986] Drescher, G. "Genetic AI: Translating Piaget into LISP", MIT AI Laboratory Memo 890, 1986.
[Jones 1970] Jones, T. A Computer Model of Simple Forms of Learning. PhD thesis, MIT, 1970.
[Klahr, Wallace 1976] Klahr, D. and Wallace, J. Cognitive Development: An Information Processing View. New York: Lawrence Erlbaum, 1976.
[Piaget 1952] Piaget, J. The Origins of Intelligence in Children. New York: Norton, 1952.
[Piaget 1954] Piaget, J. The Construction of Reality in the Child. New York: Ballantine, 1954.
Integrating Diverse Reasoning Methods in the BB1 Blackboard Control Architecture¹

M. Vaughan Johnson Jr. and Barbara Hayes-Roth
Knowledge Systems Laboratory
Stanford University

Abstract

The BB1 blackboard control architecture has been proposed to enable systems to integrate diverse reasoning methods to control their own actions. Previous work has shown BB1's ability to integrate hierarchical planning and opportunistic focusing. We show how it can integrate goal-directed reasoning as well and demonstrate these capabilities in the PROTEAN system. We also compare BB1 with alternative control architectures.

I. Overview

Many researchers have recognized the need for AI systems to use diverse reasoning methods, individually or together, to control problem-solving actions [Corkill et al., 1982, Davis, 1976, Durfee and Lesser, 1986, Erman et al., 1980, Erman et al., 1981, Genesereth and Smith, 1982, Hayes-Roth, 1985, Hayes-Roth and Hayes-Roth, 1979, McCarthy, 1960, Newell et al., 1959, Stefik, 1981b, Stefik, 1981a, Terry, 1983, Weyrauch, 1980]. In previous papers, we proposed the BB1 blackboard control architecture [Hayes-Roth, 1985], which enables systems to construct control plans for their own actions in real time. We argued that BB1 can accommodate a range of reasoning methods and demonstrated its performance and integration of hierarchical planning and opportunistic focusing in several application systems [Garvey et al., 1987, Hayes-Roth, 1985, Hayes-Roth et al., 1986b, Hayes-Roth et al., 1986a, Tommelein et al., 1987].

In this paper, we extend the empirical evidence for BB1's capabilities. Specifically, we show how BB1 supports goal-directed reasoning and integrates it with hierarchical planning and opportunistic focusing. We demonstrate these capabilities within PROTEAN [Buchanan et al., 1985, Hayes-Roth et al., 1986b], a BB1 application system for protein structure modelling. Although we discuss only PROTEAN, we also have demonstrated these capabilities in other BB1 systems, including the FEATURE system [Altman, 1986] for identifying interesting features of protein structures. In fact, BB1 provides generic control mechanisms to support these three kinds of reasoning individually and in combination, in any BB1 application system.

¹The work was supported by the following contracts and grants: NIH Grant RR-00785; NIH Grant RR-00711; Boeing Grant W266875; NASA/Ames Grant NCC 2-274; DARPA Contract N00039-83-C-0136. We thank Mike Hewett, Alan Garvey (especially for his work on hierarchical strategies in PROTEAN), Bob Schulman, Jeff Harvey, Craig Cornelius, and Reed Hastings for their work on BB1 and PROTEAN. We thank Russ Altman for noticing the need for goal-directed reasoning in his FEATURE system. We thank Bruce Buchanan and Ed Feigenbaum for supporting our work in the Knowledge Systems Laboratory.

II. Reasoning in BB1

A. The BB1 Blackboard Control Architecture

BB1 [Hayes-Roth, 1984, Hayes-Roth, 1985, Hewett and Hayes-Roth, 1987] is a domain-independent architecture based on the blackboard model [Erman et al., 1980, Nii, 1986]. Multiple independent knowledge sources (KSs) post and modify solution elements on a commonly accessible blackboard. Domain KSs solve domain problems on the domain blackboard. Control KSs develop a dynamic control plan on the control blackboard. Both blackboards distinguish solution elements for different solution intervals and different abstraction levels.
All knowledge sources operate simultaneously, generating knowledge source activation records (KSARs) when specified trigger events occur in the context of specified precondition states. A scheduler sequences the execution of pending KSARs in accordance with the current control plan. Thus, BB1 repeatedly executes the following basic cycle:

1. Interpret the action of the scheduled KSAR, producing modifications to the appropriate domain or control blackboard. If the KSAR changes the control blackboard, it may alter the criteria used to rate KSARs in the next step.
2. Update the agenda to include KSARs triggered by the recent blackboard modifications and rate all KSARs against the current control plan.
3. Schedule the highest-rated KSAR.

The next two sections describe how PROTEAN is built in BB1 and how BB1 enables PROTEAN to integrate hierarchical planning and opportunistic focusing.

B. PROTEAN

PROTEAN models the three-dimensional conformations of proteins in accordance with biochemical constraints. PROTEAN's domain-specific knowledge is layered upon ACCORD (a domain-independent framework for the class of arrangement problems), which is layered on BB1. (We refer to the growing set of compatible modules exemplified by these three as BB* [Hayes-Roth et al., 1986a].)

ACCORD supports an incremental assembly method for solving arrangement problems. The problem-solver defines one or more partial arrangements, each comprising a subset of the ...

An example domain KS has the following definition:

Trigger: Did-Include Object (The-Object) in Partial-Arrangement (The-PA).
Preconditions: Has The-PA Anchor (The-Anchor). Is-Constrained-By The-Object The-Anchor with Constraints (The-Constraints).
Action: Anchor The-Object to The-Anchor in The-PA with The-Constraints.

Given the following event and states:

Did-Include RandomCoil4-1 in PA1.
Has PA1 Anchor (Helix1-1).
Is-Constrained-By RandomCoil4-1 Helix1-1 with Constraints (CSetH1R4).

this KS generates the action:

Anchor RandomCoil4-1 to Helix1-1 in PA1 with CSetH1R4.

PROTEAN might rate its action against a control decision to perform certain kinds of actions:

Perform: Position Anchorees in PA1 with Strong Constraints.

by determining that: Anchor is-a Position action. RandomCoil4-1 is-a Anchoree. PA1 is PA1. CSetH1R4 is-a Constraint-Set, containing Constraints. The Constraints in CSetH1R4 are moderately Strong.

C. PROTEAN's Control Reasoning

PROTEAN uses three of BB1's generic control KSs, Initialize-Prescription, Update-Prescription, and Terminate-Prescription, to integrate hierarchical planning and opportunistic focusing. Consider PROTEAN's hierarchical planning during the first eleven cycles of its reasoning about a protein called the lac-repressor headpiece (figure 1).

[Figure 1: BB1/PROTEAN Control Blackboard at Cycle 12. Strategy decisions S1 "Incrementally assemble one PA" and S2 "Define one PA"; focus decisions F1 "Favor control KSARs", F2 "Perform> Create ANY-NAME", and F3 "Perform> Include SECONDARY-STRUCTURE in PA1"; heuristics H1 "Integration and scheduling rules" and H2 "Prefer control KSARs"; all plotted against cycles 0-13.]

On cycle 0, PROTEAN's control KS, Assemble-One-PA, initiates problem-solving activity by posting strategy S1 and focus F1, which favors control KSARs.

On cycle 1, Initialize-Prescription, which was triggered by S1, modifies S1 so that its first prescribed sub-strategy, Define-One-PA, is its Current-Prescription.

On cycle 2, PROTEAN's control KS, Define-the-PA, which was triggered by S1's new Current-Prescription, posts sub-strategy S2.

On cycle 3, Initialize-Prescription, which was triggered by the posting of S2, modifies S2 so that its first prescribed sub-strategy, Create-the-Space, is its Current-Prescription.

On cycle 4, PROTEAN's control KS, Create-the-Space, which was triggered by S2's new Current-Prescription, posts focus F2.

On cycle 5, the scheduler uses F2 to rate pending KSARs and chooses one that creates PA1. This satisfies F2's objective.

On cycle 6, Terminate-Prescription, triggered by satisfaction of F2's objective, deactivates F2.

On cycle 7, Update-Prescription, triggered by deactivation of F2, modifies S2 so that its second prescribed subordinate, Include-All-Structures, is its Current-Prescription.

On cycle 8, PROTEAN's control KS, Include-All-Structures, which was triggered by S2's new Current-Prescription, posts focus F3.

On cycle 9 and each of several subsequent cycles, the scheduler uses F3 to rate pending KSARs and chooses the best rated ones in turn. Eventually, the scheduled KSARs will satisfy F3's objective and trigger Terminate-Prescription. It will deactivate F3 and the process of plan elaboration will resume.

PROTEAN integrates opportunistic focusing with hierarchical planning when other control KSs, triggered by intermediate solution states, insert focus decisions into its evolving control plan. BB1's Terminate-Prescription KS deactivates those focus decisions when their objectives are satisfied, just as it does during hierarchical planning. Section 4 illustrates opportunistic focusing.
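The three-step cycle above can be summarized in a schematic sketch. Every name here is an illustrative assumption of ours (BB1 itself was a Lisp system, and these are not its actual interfaces), but the control flow follows the interpret/update/schedule loop just described.

def bb1_basic_cycle(scheduled_ksar, agenda, control_plan, knowledge_sources):
    # Step 1: interpret the scheduled KSAR's action, modifying a domain
    # or control blackboard; control changes may alter rating criteria.
    events = scheduled_ksar.execute()

    # Step 2: add KSARs triggered by the recent modifications (in the
    # context of their precondition states) and rate all pending KSARs
    # against the current control plan.
    for ks in knowledge_sources:
        for event in events:
            if ks.triggered_by(event) and ks.preconditions_hold():
                agenda.append(ks.make_ksar(event))
    for ksar in agenda:
        ksar.rating = control_plan.rate(ksar)

    # Step 3: schedule the highest-rated KSAR for the next cycle.
    return max(agenda, key=lambda k: k.rating)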
BBl’s generic control knowledge source, Satisfy-Priority- Focus, is triggered whenever no executable KSARs rate highly against an important focus. When executed, it determines what potential actions could rate highly against the focus. If trig- gered (not executable) actions on the agenda match the poten- tial actions, Satisfy-Priority-Focus posts a goal-directed focus decision for each unsatisfied precondition: Perform Actions that Promote: Is-Anchored Helix3-1 to Anchor in PA1 with strong Constraints. Now suppose that the agenda contains no actions that promote the designated state. PROTEAN might continue rea- soning backward to identify a new subgoal. For example, it might determine that the action: Anchor Helix3-1 to Helixl-l in PA1 with CSetH3Hl. would promote the event: the desired state if it were executed, and that Did-Include Helix3-1 in PAl. would trigger that action. PROTEAN could then identify a new subgoal to perform actions that cause the designated event: Perform Actions that Cause : Did-Include Helix3-1 in PAl. As in all goal-directed reasoning, this regression through enabling conditions could continue indefinitely. This simple example illustrates two key aspects of the se- mantics of goal-directed reasoning. First, the goal that initi- ates goal-directed reasoning need not be the ultimate goal of the problem-solving process, but can be any intermediate goal along the way. A good control architecture must be able to in- tegrate goal-directed reasoning capabilities with any other rea- soning method that might produce intermediate goals. Second, a “goal” can b e 1s mguished as a desire to perform an action, d’ t’ cause an event, or promote a state. A good control architecture must exploit knowIedge of the relations between different types of actions, events, and states to guide goal-directed reasoning. The BBl mechanism for goal-directed reasoning meets these two objectives. Promote: <state> If no matching actions appear on the agenda, Satisfy- Priority-Focus identifies knowledge sources that specify a matching action and, for each one, posts a goal-directed focus decision for each KS triggering condition: Cause: <event> If any executable KSARs rate highly against a goal- directed focus posted by Satisfy-Priority-Focus, the BBl sched- uler will choose them. If their actions produce the specified state or event, this will trigger Terminate-Prescription, which will deactivate the goal-directed focus. On the other hand, if no executable actions rate highly against the goal-directed focus, this will retrigger Satisfy-Priority-Focus. It will then post sub- goal foci to trigger and satisfy the preconditions of knowledge sources that could rate highly against the prior goal-directed fo- cus. Thus, Satisfy-Priority-Focus can reason backward through a chain of subgoals to trigger and satisfy the preconditions of knowledge sources that match high-priority foci. We have not yet implemented Satisfy-Priority-Focus. BBl’s generic control knowledge source, Enable-Priority- Action, is triggered whenever a highly rated KSAR has unsat- isfied preconditions. When executed, it posts a goal-directed focus for each unsatisfied precondition: Promote: <state> If any executable KSARs rate highly against a goal- directed focus posted by Enable-Priority-Action, the BBl scheduler will choose them. If their actions produce the spec- ified state, this will trigger Terminate-Prescription, which will deactivate the goal-directed focus. 
BB1's generic control knowledge source, Enable-Priority-Action, is triggered whenever a highly rated KSAR has unsatisfied preconditions. When executed, it posts a goal-directed focus for each unsatisfied precondition:

Promote: <state>

If any executable KSARs rate highly against a goal-directed focus posted by Enable-Priority-Action, the BB1 scheduler will choose them. If their actions produce the specified state, this will trigger Terminate-Prescription, which will deactivate the goal-directed focus. In addition, if the goal-directed focus raises the ratings of any triggered (but not executable) KSARs, this can retrigger Enable-Priority-Action, which will post another goal-directed focus for each of their unsatisfied preconditions. Thus, Enable-Priority-Action can reason backward through a chain of subgoals to satisfy the preconditions of high-priority actions. We have implemented Enable-Priority-Action and used it in several systems, including PROTEAN.

BB1 integrates diverse reasoning methods by permitting independent control knowledge sources to contribute decisions to an explicit, dynamic control plan on the control blackboard. For example, consider PROTEAN's reasoning during part of its work on the lac-repressor headpiece (figure 2).

At cycle 27, PROTEAN has elaborated its hierarchical plan through sub-strategy S3. It is busy anchoring and yoking structured secondary structures in PA1.

[Figure 2: BB1/PROTEAN Control Blackboard and Executable Agenda at Cycle 27. Strategy: S1 "Incrementally assemble one PA", S2 "Define one PA", S3 "Position anchorable STRUCTURED-SECONDARY-STRUCTURE". Foci: F1 "Favor control KSARs"; F3 "Perform> Include SECONDARY-STRUCTURE in PA1"; F4 "Perform> Orient PA1 about long constraining constrained STRUCTURED-SECONDARY-STRUCTURE"; F5 "Perform> Anchor long inflexible constrained constraining STRUCTURED-SECONDARY-STRUCTURE to HELIX1-1 in PA1 with strong CONSTRAINT-SET"; F6 "Perform> Yoke several long inflexible constraining recently-reduced STRUCTURED-SECONDARY-STRUCTURE in PA1 with strong CONSTRAINT-SET". Heuristics: H1, H2. The executable agenda includes KSAR47.]

On cycle 27, PROTEAN has two active hierarchical plan foci, F5 and F6. The BB1 scheduler chooses to execute KSAR47:

Anchor Helix2-1 to Helix1-1 in PA1 with CSetH1H2.

because it is the highest-priority (92) executable KSAR. Actually, KSAR34:

Yoke Helix3-1 and Helix2-1 in PA1 with CSetH2H3.

has a higher priority (196) because of PROTEAN's preference for yoking actions (focus F6), but it is not yet executable because it has an unsatisfied precondition:

Is-Anchored Helix3-1 to Anchor in PA1 with Constraints.

On cycle 28, a number of new KSARs are triggered, including two control KSARs. First, the anchoring of Helix2-1 produced an intractably large family of locations and this triggers PROTEAN's control KS, Now-Restrict. In the context of its hierarchical strategy, PROTEAN occasionally triggers Now-Restrict to post an opportunistic focus for restricting (statistically sampling the legal locations of) positioned objects. PROTEAN introduces this focus only when it identifies an unmanageably large family of locations for an object. Second, KSAR34's high priority triggers Enable-Priority-Action. Since PROTEAN has an overriding preference for control actions (focus F1), the BB1 scheduler chooses to execute these two KSARs on cycles 28 and 29, producing focus decisions F7 and F8 (figure 3). F7 is an opportunistic focus to restrict Helix2-1's family of locations. F8 is a goal-directed focus on actions that can satisfy KSAR34's outstanding precondition.

[Figure 3: BB1/PROTEAN at Cycle 30. The control plan now also includes F7 "Perform> Restrict HELIX2-1 in PA1 with ANY-NAME" and F8 "Promote> Is-Anchored HELIX3-1 to ANCHOR in PA1 with CONSTRAINT-SET".]

On cycle 30, the BB1 scheduler uses all five active foci to rate pending KSARs and chooses to execute KSAR49:

Anchor Helix3-1 to Helix1-1 in PA1 with CSetH1H3.

On cycle 31, the anchoring of Helix3-1 produces the state targeted by the goal-directed focus, F8. This allows goal KSAR34 to become executable and triggers Terminate-Prescription. The BB1 scheduler chooses to execute the Terminate-Prescription KSAR, deactivating F8.

On cycle 32 (figure 4), the scheduler uses the remaining four active foci to rate KSARs. It chooses to execute KSAR59:

Restrict Helix2-1 in PA1 with Sampling-Constraint-2.

[Figure 4: BB1/PROTEAN at Cycle 32.]

On cycle 33, the reduction in Helix2-1's locations achieves the objective of opportunistic focus F7 and triggers Terminate-Prescription. The scheduler chooses to execute it, deactivating F7.

On cycle 34, the BB1 scheduler uses the two remaining strategic foci, F5 and F6, to rate pending KSARs and returns to its planned anchoring and yoking activities. It first chooses the previous goal KSAR34, because of its high priority as determined by F5 and F6.

V. Discussion

We have developed a goal-directed reasoning mechanism that goes beyond the syntactic method of backward-chaining through rules [Buchanan and Shortliffe, 1984]. The BB1 mechanism follows semantic links relating actions, events, and states to determine which actions will achieve a specified goal.

In addition, the goal-directed reasoning mechanism operates in two conceptually different situations: deliberate efforts to perform particular kinds of actions; and detection of opportunities to perform generally desirable actions. The mechanism exploits BB1's control semantics (decisions to perform an action-class, cause an event-class, or promote a state-class in order to achieve specified objectives) and ACCORD's representation of the relations (cause, promote, trigger, enable) among particular actions, events, and states.

Finally, BB1 distinguishes itself from other control architectures in its ability to integrate diverse reasoning methods with a uniform mechanism.
Although some systems permit multiple reasoning methods, they provide separate mechanisms that must be selected for any given problem-solving system [Genesereth and Smith, 1982, Nii and Aiello, 1979] or combined modally within a system [Newell and Simon, 1972, Pohl, 1969, Pohl, 1971, Rosenbloom and Newell, 1982]. For example, Corkill, Lesser, and Hudlicka's vehicle-tracking system [Corkill et al., 1982] performed goal-directed reasoning whenever it could generate a goal, and otherwise performed data-driven reasoning. Although the Hearsay-II blackboard system [Erman et al., 1980] integrated reasoning methods similar to those we have demonstrated in PROTEAN, it engineered each method in a different domain-specific tailoring of its underlying architecture. By unifying these several methods within a principled architecture, we make them available to many different application systems, with whatever form of integration is appropriate.

The utility of a control architecture depends upon three factors: (1) the architecture's functional capabilities; (2) the utility of these capabilities for particular application systems; and (3) the cost of these capabilities. In this paper, we have produced empirical evidence of BB1's value on the first two factors. We have shown that it supports three different reasoning methods (hierarchical planning, opportunistic focusing, and goal-directed reasoning) individually and in fully integrated combinations. We also have shown that at least one application system, PROTEAN, can usefully exploit integrated reasoning with all three methods. Other reports [Hayes-Roth et al., 1986a, Schulman and Hayes-Roth, 1987] demonstrate the utility of BB1's capabilities for explaining and learning about its own actions, both of which rely critically upon its control architecture. Although we do not address the cost of BB1's control reasoning in this paper, that issue is addressed in detail elsewhere [Garvey et al., 1987], with current evidence suggesting that the computational advantages of BB1's control reasoning outweigh the computational costs.

References

[Altman, 1986] R. Altman. FEATURE. Technical Report, Stanford, CA: Knowledge Systems Laboratory, Stanford University, 1986.
[Buchanan and Shortliffe, 1984] B.G. Buchanan and E.H. Shortliffe. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Menlo Park, CA: Addison-Wesley, 1984.
[Buchanan et al., 1985] B.G. Buchanan, B. Hayes-Roth, O. Lichtarge, M. Hewett, R. Altman, P. Rosenbloom, and O. Jardetzky. Reasoning with Symbolic Constraints in Expert Systems. Technical Report, Stanford, CA: Stanford University, 1985.
[Corkill et al., 1982] D.D. Corkill, V.R. Lesser, and E. Hudlicka. Unifying data-directed and goal-directed control: an example and experiments. Proceedings of the AAAI, 143-147, 1982.
[Davis, 1976] R. Davis. Applications of Meta Level Knowledge to the Construction, Maintenance, and Use of Large Knowledge Bases. Technical Report Memo AIM-283, Stanford University Artificial Intelligence Laboratory, 1976.
[Durfee and Lesser, 1986] E. Durfee and V. Lesser. Incremental planning to control a blackboard-based problem solver. Proceedings of the AAAI, 58-64, 1986.
[Erman et al., 1981] L.D. Erman, P.E. London, and S.F. Fickas. The design and an example use of Hearsay-III. Proceedings of the Seventh International Joint Conference on Artificial Intelligence, 409-415, 1981.
[Erman et al., 1980] L.D. Erman, F. Hayes-Roth, V.R. Lesser, and D.R. Reddy. The Hearsay-II speech-understanding system: integrating knowledge to resolve uncertainty. Computing Surveys, 12:213-253, 1980.
[Garvey et al., 1987] A. Garvey, C. Cornelius, and B. Hayes-Roth. Computational Costs versus Benefits of Control Reasoning. Technical Report KSL-87-11, Stanford, CA: Knowledge Systems Laboratory, Stanford University, 1987.
[Genesereth and Smith, 1982] M.R. Genesereth and D.E. Smith. Meta-Level Architecture. Technical Report HPP-81-6, Stanford, CA: Stanford University, 1982.
[Hayes-Roth, 1984] B. Hayes-Roth. BB1: An Architecture for Blackboard Systems that Control, Explain, and Learn about Their Own Behavior. Technical Report HPP-84-16, Stanford, CA: Stanford University, 1984.
[Hayes-Roth, 1985] B. Hayes-Roth. A blackboard architecture for control. Artificial Intelligence Journal, 26:251-321, 1985.
[Hayes-Roth and Hayes-Roth, 1979] B. Hayes-Roth and F. Hayes-Roth. A cognitive model of planning. Cognitive Science, 3:275-310, 1979.
[Hayes-Roth et al., 1986a] B. Hayes-Roth, A. Garvey, M.V. Johnson Jr., and M. Hewett. A Layered Environment for Reasoning about Action. Technical Report KSL-86-38, Stanford, CA: Knowledge Systems Laboratory, Stanford University, 1986.
[Hayes-Roth et al., 1986b] B. Hayes-Roth, B.G. Buchanan, O. Lichtarge, M. Hewett, R. Altman, J. Brinkley, C. Cornelius, B. Duncan, and O. Jardetzky. PROTEAN: deriving protein structure from constraints. Proceedings of the AAAI, 1986.
[Hewett and Hayes-Roth, 1987] M. Hewett and B. Hayes-Roth. The BB1 Architecture: A Software Engineering View. Technical Report KSL-87-10, Stanford, CA: Knowledge Systems Laboratory, Stanford University, 1987.
[McCarthy, 1960] J. McCarthy. Programs with common sense. Proceedings of the Teddington Conference on the Mechanization of Thought Processes, 1960.
[Newell and Simon, 1972] A. Newell and H.A. Simon. Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall, 1972.
[Newell et al., 1959] A. Newell, J.C. Shaw, and H.A. Simon. Report on a general problem-solving program. Proceedings of the International Conference on Information Processing, 1959.
[Nii, 1986] H.P. Nii. Blackboard systems. AI Magazine, 7(3):38-53 and 7(4):82-107, 1986.
[Nii and Aiello, 1979] H.P. Nii and N. Aiello. AGE (attempt to generalize): a knowledge-based program for building knowledge-based programs. Proceedings of the International Joint Conference on Artificial Intelligence, 6:645-655, 1979.
[Pohl, 1969] I. Pohl. Bi-directional and Heuristic Search in Path Problems. Technical Report SLAC-104, Stanford, CA: Stanford University, 1969.
[Pohl, 1971] I. Pohl. Bi-directional search. In B. Meltzer and D. Michie, editors, Machine Intelligence 6, pages 127-140. New York, NY: American Elsevier, 1971.
[Rosenbloom and Newell, 1982] P.S. Rosenbloom and A. Newell. Learning by chunking: summary of a task and a model. Proceedings of the American Association for Artificial Intelligence, 255-258, 1982.
[Schulman and Hayes-Roth, 1987] R. Schulman and B. Hayes-Roth. ExAct: A Module for Explaining Actions. Technical Report KSL-87-8, Stanford University: Knowledge Systems Laboratory, 1987.
[Stefik, 1981a] M. Stefik. Planning and meta-planning (MOLGEN: Part 2). Artificial Intelligence, 16:141-169, 1981.
[Stefik, 1981b] M. Stefik. Planning with constraints (MOLGEN: Part 1). Artificial Intelligence, 16:111-140, 1981.
[Terry, 1983] A. Terry. Hierarchical Control of Production Systems. PhD thesis, UC Irvine, 1983.
[Tommelein et al., 1987] I.D. Tommelein, M.V. Johnson Jr., B. Hayes-Roth, and R.E. Levitt. SIGHTPLAN: a blackboard expert system for the layout of temporary facilities on a construction site. Proceedings of the IFIP WG5.2 Working Conference on Expert Systems in Computer-Aided Design, Sydney, Australia, 1987.
[Weyrauch, 1980] R.W. Weyrauch. Prolegomena to a theory of mechanized formal reasoning. Artificial Intelligence, 13:133-170, 1980.
Incremental Inference: Getting Multiple Agents to ...

Gary C. Borchardt
Jet Propulsion Laboratory
California Institute of Technology
Pasadena, CA 91109

Abstract

This paper presents a symbolic reasoning algorithm for use in the construction of mixed-initiative interfaces; that is, interfaces allowing several human or machine agents to share collectively the control of an ongoing, real-time activity. The algorithm, called Incremental Inference, is based on propositional logic and is related in structure to the Truth Maintenance System; however, the notion of justifications in the Truth Maintenance System is replaced with a simpler notion of recency. Basic properties of the Incremental Inference mechanism are described and compared with those of the Truth Maintenance System, and an example is provided drawn from the domain of SPECTRUM, a knowledge-based system for the geological interpretation of imaging spectrometer data.

I. Introduction

The scenario addressed by this research involves several agents simultaneously in control of a subordinate activity. A flexible partitioning of duties allows any one agent temporarily to control the entire activity or all agents to control various aspects of the activity. A centralized interface serves to coordinate the requests of the agents. This scenario has applications in the shared remote/local control of robot mechanisms in space or undersea, in data interpretation or diagnosis tasks managed jointly by a human operator and one or more expert systems, and in the coordinated supervision of an activity by several expert systems with possibly overlapping areas of expertise.

Several published results relate to this problem, yet few true "mixed-initiative" systems exist to date. Probably the most directly relevant work lies in the area of non-monotonic reasoning, especially Truth Maintenance Systems [Doyle, 1979; McDermott and Doyle, 1980; McAllester, 1980; McAllester, 1982] and the Assumption-based Truth Maintenance System of de Kleer [de Kleer, 1986]. In particular, Doyle discusses the use of the Truth Maintenance System as a medium for dialectic argumentation between two or more agents. De Kleer discusses the utility of the ATMS approach in tracking multiple contexts (e.g., those applying to each of several agents) simultaneously. Also relevant is the work concerning focus in human dialog (e.g., [Grosz, 1977]) and the work in distributed problem solving (e.g., [Davis and Smith, 1983; Corkill and Lesser, 1983; Rosenschein and Genesereth, 1985]).

While the Truth Maintenance System is apparently well-suited to the mixed-initiative interface task, there are also drawbacks to this approach. This is best illustrated by a simple example. If in the course of mixed-initiative control of some process one of the agents brings a particular activity into consideration, and from this it is inferred that a particular plan is now active, then if later that activity is taken out of consideration (perhaps due to its completion), we are obligated within a Truth Maintenance System to retract the inference concerning the associated plan, as there is no longer a measure of well-founded support for this conclusion. This is, of course, merely the process of carrying out truth maintenance. In the context of mixed-initiative interfaces, however, this actually gets in the way, as we would rather keep the plan "active" until we are forced to change its status, thereby minimizing the number of changes to be endorsed by the interested parties. In general, this amounts to a process of taking up inferred values as new default assumptions, retaining these as long as they do not conflict with other values of greater "recency."

The Incremental Inference algorithm amounts to a restructuring of the Truth Maintenance System model to fit the above criteria, replacing the notion of justifications with a simpler notion of recency.¹ This effects a tradeoff in which the ability to reason based on well-founded support is exchanged for a heuristic capability in fluidly tracking a dynamically changing understanding between several parties.

The following sections outline the organization and properties of the Incremental Inference algorithm, discussing the nature of "inference" which may be drawn based on the notion of "recency" and examining parallels between the mechanism and that of the Truth Maintenance System, including a process of conflict resolution for the Incremental Inference mechanism analogous to that of dependency-directed backtracking in Truth Maintenance Systems.

Structurally, the Incremental Inference algorithm is similar to the truth maintenance algorithm used by McAllester in his Reasoning Utility Package [McAllester, 1980; McAllester, 1982]. Inference is based on propositional logic expressions converted to conjunctive normal form. Propositions in the Incremental Inference algorithm amount to binary state variables for control of the subordinate process, however.

¹A preliminary description of the Incremental Inference algorithm appears in [Borchardt, 1987].
In general, this amounts to a process of taking up inferred values as new default assumptions, retain- ing these as long as they do not conflict with other values of greater “recency.” The Incremental Inference algorithm amounts to a res- tructuring of the Truth Maintenance System model to fit the above criteria, replacing the notion of justifications with a simpler notion of recency.’ This effects a tradeoff in which the ability to reason based on well-founded support is exchanged for a heuristic capability in fluidly tracking a dynamically changing understanding between several parties. The following sections outline the organization and pro- perties of the Incremental Inference algorithm, discussing the nature of “inference” which may be drawn based on the notion of “recency” and examining parallels between the mechanism and that of the Truth Maintenance System, including a process of’conflict resolution for the Incremental Inference mechanism analogous to that of dependency- directed backtracking in Truth Maintenance Systems. Structurally, the Incremental Inference algorithm is similar to the truth maintenance algorithm used by McAllester in his Reasoning Utility Package [McAllester, 1980; McAllester, 19821. Inference is based on propositional logic expressions converted to conjunctive normal form. Propositions in the Incremental Inference algorithm amount to binary state vari- ables for control of the subordinate process, however. These ‘A preliminary description of the Incremental Inference algorithm appears in [Borchardt, 19871. 334 Default Reasoning From: AAAI-87 Proceedings. Copyright ©1987, AAAI (www.aaai.org). All rights reserved. variables describe the the pursuit of particular goals, the implementation of plans, the execution of activities, the utili- zation of individual objects and types of objects, the achieve- ment of milestones in the completion of tasks, and so forth. Logical constraints between the propositions involve, for example, the applicability of various plans to various goals, the compatibility of objects and types of objects with particu- lar activities and the precondition restrictions placed on activities by individual milestones. As a matter of convenience, no distinction is made between propositions and literafs in this representation: the negation of a proposition is itself considered a proposition, with certain restrictions placed on the values held by comple- mentary propositions. Thus, each clause may be considered to contain only propositions. Each proposition is associated with two values: a truth value, which may be either true, false or retracted, and a time value, which is a number.2 The truth value indicates whether or not the designated goal, plan, activity, etc. is currently considered “active” or “favored” by a consensus of all agents involved in the mixed-initiative environment. The time value indicates a measure of recency or strength of sup- port for the truth value. Complementary propositions are constrained by the mechanism to have compatible truth values (that is, both retracted or one true and one false) and equal time values. A global current time is maintained, which is always greater than or equal to the most recent time values in the system. Agents communicate with the system in two distinct ways, the first of which involves the submission of requests. 
A request is carried out by incrementing the global current time, assigning a new truth value to a selected proposition and updating the time value for the proposition to the new current time. In this case, the time value corresponds to a "timestamp," recording the point in time at which the requested truth value was assumed. The time value does not serve this function in the case of propositions with truth values derived from those of other propositions, however. This is made clearer in the examples which follow.

Propagation of the effects resulting from a request is computed locally, at the level of individual clauses, by a process of stabilization. Stabilization of a clause attempts to satisfy its implied logical disjunction by modifying the truth and time values of propositions where necessary, always protecting the current status of propositions with newer time values over those with older time values. This is the "heuristic" of the algorithm: while it is not guaranteed that modification of the proposition with least recency is the best choice, in many cases this is indeed a good choice, and at worst it produces a broad search for a new, consistent state, starting with modifications of the least recent values and working back toward the most recent values.

The stabilization process may result in inference, updating propositions to true or false, or retraction, updating propositions to the retracted state. Where inference occurs, the affected proposition is given a time value equal to the minimum time value among the remaining propositions in the clause. Thus, the strength of the inferred truth value is no greater than the weakest strength among the propositions which have made it so. This is somewhat analogous to the recording of justifications for inferred values in a Truth Maintenance System such as that of McAllester. The general rule for stabilization of an individual clause is given below.

STABILIZATION OF A CLAUSE C: (If C contains only one proposition, assume the existence of an additional proposition within C, set to false at the current time.)

1.) (Possible inference.) If there is a single proposition P having the oldest time value in C, and none of the remaining propositions in C are true, update P to true at the minimum time value among the remaining propositions in C.

2.) (Possible retraction.) If several propositions P1, P2, ..., PN (N > 1) share the oldest time value within C and there are no propositions in C which are true, modify those of P1, P2, ..., PN which are false to retracted, leaving their time values unchanged.

A reasonably efficient algorithm for the stabilization of a clause performs an initial scan through the clause, computing the oldest time value among its propositions, the number of propositions having this time value, the second oldest time value among the propositions and the newest time value for a true proposition within the clause. Following the determination of these quantities, it is a simple matter to decide which case applies and to perform the appropriate action.

The other means of interaction between the agents and the mechanism involves a process of refreshing. Since the rule for stabilization of a clause is guided by the heuristic of "recency," propositions become more and more susceptible to change as their time values become less and less current.

2. The time values may be arbitrary as long as they increase monotonically with actual "clock time." Integers are used here for simplicity.
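The clause rule admits a direct rendering in code. The sketch below is a hypothetical illustration only (the paper gives no implementation): the names Prop and stabilize_clause are invented for this example, and the complementary-proposition linkage and the querying of interested agents are omitted.

```python
# A minimal sketch of the clause stabilization rule, under the
# assumptions noted above.
from dataclasses import dataclass

@dataclass
class Prop:
    name: str
    truth: str   # "true", "false", or "retracted"
    time: int

def stabilize_clause(clause, current_time):
    """Stabilize one clause (a list of Props); returns True on change."""
    props = list(clause)
    # A one-proposition clause behaves as if padded with an extra
    # proposition set to false at the current time.
    if len(props) == 1:
        props.append(Prop("_pad", "false", current_time))
    oldest = min(p.time for p in props)
    oldest_props = [p for p in props if p.time == oldest]
    rest = [p for p in props if p.time > oldest]
    if len(oldest_props) == 1:
        # Rule 1 (possible inference): the single oldest proposition is
        # forced to true at the minimum time value among the remaining
        # propositions, provided none of them is already true.
        p = oldest_props[0]
        if not any(q.truth == "true" for q in rest):
            p.truth, p.time = "true", min(q.time for q in rest)
            return True
    elif not any(q.truth == "true" for q in props):
        # Rule 2 (possible retraction): several propositions tie for the
        # oldest time and nothing in the clause is true; the false ones
        # among the oldest become retracted, time values unchanged.
        changed = False
        for p in oldest_props:
            if p.truth == "false":
                p.truth, changed = "retracted", True
        return changed
    return False
```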
As a counteractive force, each agent is allowed to specify an interest in the propositions of any clause, and, given such an interest, the agent is queried prior to all modifications involving propositions within the designated clause (even if generated as a result of stabilizing other clauses). When queried concerning a tentative modification, an "interested" agent may attempt to block the modification by increasing the time value of the targeted proposition.³ In the simplest case, the refreshing process updates a proposition's time value to the global current time. The refreshing device forces the mechanism to reconsider the clause generating the tentative inference or retraction, selecting an alternative action. The refreshing process thus serves to allow agents to protect the status of various goals, plans, activities, etc., of current importance to them. The interests specified by the agents may be changed whenever desired, and constitute a means by which the agents may partition the responsibilities for various aspects of the overall task, independent of the requests made by each of the agents.

As an example, consider the set of two clauses: (A v B v C) and (NOT-A v D v E), with initial truth and time values as follows.

A: (false,5)   NOT-A: (true,5)
B: (true,3)    D: (false,1)
C: (false,2)   E: (false,1)

If one of the participating agents submits a request to update proposition B to false at a new current time of 7, a value of true is inferred for C. The time value assigned to C is the minimum of the time values for A and B. Assuming no refreshing of values occurs, the following state results.

A: (false,5)   NOT-A: (true,5)
B: (false,7)   D: (false,1)
C: (true,5)    E: (false,1)

On the other hand, if an agent "interested" in the first clause blocks the inference by refreshing C to the current time, a reevaluation of the first clause results in an inference of true for proposition A, giving it a time value of 7. This value is echoed in a value of false at 7 for NOT-A, and a stabilization of the second clause results in the retraction of D and E.

A: (true,7)    NOT-A: (false,7)
B: (false,7)   D: (retracted,1)
C: (false,7)   E: (retracted,1)

If all of the above occurs, plus one of the agents interested in the second clause blocks the retraction of E, the following results.

A: (true,7)    NOT-A: (false,7)
B: (false,7)   D: (true,7)
C: (false,7)   E: (false,7)

Finally, if the initial inference for C and the subsequent retractions of D and E are all blocked, the resultant state contains values of retracted at time 7 for all of the propositions. In this case, an intermediate state with all propositions of the second clause set to false at 7 resolves to a state in which these are all retracted at 7. Subsequent stabilization of the first clause then forces a retraction of B and C. As the refreshing of values has overturned even the initially requested value, a suitable action to take is to retreat to a previous "safe" point agreed upon by all agents.

3. As a matter of "streamlining," the querying of interested agents is bypassed where a proposition to be modified already has a time value equal to the current time, is to be updated in time value only, or has an initial truth value of retracted.
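The single-clause sketch from the previous section reproduces the first step of this example; propagation through NOT-A and the second clause would require a full multi-clause loop, which is not modeled here.

```python
# Clause (A v B v C) just after the request setting B false at time 7:
A = Prop("A", "false", 5)
B = Prop("B", "false", 7)
C = Prop("C", "false", 2)
stabilize_clause([A, B, C], current_time=7)
print(C)   # Prop(name='C', truth='true', time=5), as in the text
```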
IV. Properties of the Algorithm

Following a request submitted by one of the agents in the mixed-initiative environment, all propositions whose time values stabilize at the new current time may be taken to have well-founded support, based on the truth value of the proposition designated in the request and all propositions refreshed to the current time, provided these propositions themselves have retained their designated values. Likewise, if the proposition designated in the previous request cycle plus all propositions refreshed during that cycle have retained their designated values, then all propositions with time value equal to the previous current time may be taken to have well-founded support, based on the collective requested and refreshed propositions of the last two cycles, and so on. In general, if we take care to note the time of the most recent request cycle for which either the designated proposition or one of the propositions refreshed during that cycle has been overridden, we may conclude that all propositions with time values newer than this time do indeed have well-founded support, based on the collective propositions designated and refreshed in all cycles since the noted time.

In the context of mixed-initiative interfaces, however, the notion of well-founded support is of lesser concern. Here, in all cases, one may consider a derived truth value for a proposition to be an indication that, in order to protect the status quo for "some other" proposition with equal time value, it was necessary to update the proposition under consideration as indicated.

One noteworthy aspect of the inference/retraction process in the Incremental Inference mechanism involves the nature of the retracted truth value. This value may be thought of as signaling the presence of a contradiction regarding the proposition in question.⁴ In fact, the retracted value serves as a medium through which a process analogous to that of dependency-directed backtracking in Truth Maintenance Systems is carried out. Inspection of the rule for stabilization of a clause reveals that a retracted value assists in the generation of inferences and retractions much as would a value of false. The negation of a retracted proposition, however, also behaves as if it were false in all clauses containing it. The net effect is that inferences and retractions are propagated among the propositions with time values older than a retracted proposition as if the retracted proposition were both true and false.⁵ This continuance of inference/retraction may often result in a resolution of the conflict causing a retraction. This occurs when one or more agents refresh values, blocking either the "true" or "false" aspects of the retracted proposition. In such cases, the refreshed values propagate back toward the retracted proposition in a sort of "reflected wave" motion, forcing the retracted proposition to assume either the true or the false state.

4. It is for this reason that a simpler designation of unknown, as in McAllester's system, was not used.

5. In a similar manner, de Kleer's ATMS also continues to perform inference based on individual facts involved in contradictions, as long as they do not combine in support with their contradictory counterparts.

Despite its lack of the usual apparatus for performing truth maintenance, the Incremental Inference mechanism does retain a limited capability of reasoning based on the notion of well-founded support. This reasoning capability is actually of the monotonic variety; that is, it does not tolerate changes in antecedents. This can be seen by considering the well-foundedness of propositional values starting at the current time and working backwards.
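The retraction half of the clause rule can be exercised with the same toy sketch from earlier; the fragment below reproduces the retraction of D and E in the third state of the example of Section III.

```python
# Second clause (NOT-A v D v E) after NOT-A has become false at time 7:
NOT_A = Prop("NOT-A", "false", 7)
D = Prop("D", "false", 1)
E = Prop("E", "false", 1)
stabilize_clause([NOT_A, D, E], current_time=7)
print(D.truth, E.truth)   # retracted retracted, time values unchanged
```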
Regarding the heuristic nature of the mechanism in tracking the "shifts of attention" generated by requests, it may be noted that increased intricacy of logical constraints tends to promote the retraction process, as it is more likely that multiple propositions within a clause end up with the same time value. In such cases, the mechanism relies more heavily upon the interested agents for direction through the refreshing of values. Where the logical constraints are fairly "loose," the time values tend to be more widely distributed; thus, inference prevails over retraction.

Two additional properties of the mechanism involve questions of completeness for the inference produced and eventual termination of the stabilization process following a new request. Similar to McAllester's clause-based reasoning mechanism, the inference produced by the Incremental Inference system is logically incomplete. That is, in some situations, usually involving "case analysis," the mechanism will fail to make inferences which logically should be made. This can result in global states in which certain propositional truth values are inconsistent with other propositional truth values. As each new focus of attention for the system may be achieved by a sequence of several requests, however, it is possible to work around the incompleteness by gradually approaching a desired global state, resolving conflicts due to previously undetected inconsistencies as they appear, until a global state with all propositions set to true or false exists, for which there can be no inconsistencies. As well, areas subject to incomplete inference may often be "bridged" through the inclusion of additional clauses in the system.

Regarding the termination of the stabilization process, the mechanism is guaranteed to converge upon a new, globally stable state following a new request. This can be seen in the nature of the rule for stabilization of a single clause. A proposition, when updated, is normally given a newer time value. The only exception involves the modification of a proposition from true or false to retracted, in which case the time value remains unchanged. It is thus possible for at most two truth values (true or false, then retracted) to be associated with a proposition before the time value must be incremented. The global current time sets an upper bound on the increase of time values. The example in Section III illustrates this, as the final remaining option is a retraction of all propositions in the clauses at the current time.

A useful extension of the Incremental Inference mechanism is to represent the interests of the participating agents with respect to individual clauses not as external parameters, but as propositions in the mechanism itself. If a proposition representing an interest is true or retracted, the agent is considered to be interested in the specified clause; if it is false, the agent is not interested.
This allows the construction of multilayer Incremental Inference reasoning systems, where a higher-level system is used to reach a consensus regarding interests in a lower-level system describing the current state of affairs. This approach has been taken in the SPECTRUM system, as illustrated in the next section. Using such a device, it is possible to add an additional layer of "interests in interests," so that one agent may, for instance, block another agent's attempt to relinquish interest in a particular area. As well, logical constraints may be set up such that if one agent relinquishes interest in an area of the decision making, some other agent is then forced to take up an equivalent interest.

A second extension, also employed in the SPECTRUM system, is to include a set of higher-level structures, called decision sets, representing groups of related clauses. This device springs from the fact that, whereas the clausal constraint is of the form at least one, a corresponding constraint of the form at most one has an equivalently simple rule for stabilization, as follows.

STABILIZATION OF A DECISION SET D OF TYPE AT MOST ONE:

1.) (Possible inference.) Modify all propositions with time values older than the newest true or retracted proposition(s) in D to false at the time of the newest true or retracted proposition(s).

2.) (Possible retraction.) If several propositions P1, P2, ..., PN (N > 1) share the status of being the newest true or retracted values in D, modify those of P1, P2, ..., PN which are true to retracted, leaving their time values unchanged.

In this case, inference and retraction may both occur during the same stabilization. The above rule is equivalent in effect to the stabilization of the N!/((N-2)!2!) clauses implied by the at most one constraint (e.g., for three propositions A, B and C, this is equivalent to the clauses (NOT-A v NOT-B), (NOT-A v NOT-C) and (NOT-B v NOT-C)). Choosing combinations of the above rule and that given previously for clauses, four types of decision sets are produced: unconstrained, at least one, at most one and exactly one. Decision sets behave in most respects like ordinary clauses; that is, agents may specify interests in particular decision sets, and the decision sets are stabilized as single entities. The rules for stabilization vary according to the type, however. For the exactly one constraint, the stabilization rule for at most one is applied, followed by the rule for at least one.⁶ The exactly one constraint is extremely useful in building compact representations of logical constraints and has been employed in the area of resolution-based inference [Tenenberg, 1985], in the SNePS system [Shapiro, 1979] and in the ATMS model [de Kleer, 1986].

A third useful extension to the Incremental Inference algorithm concerns a variation of the refreshing process. In some cases, especially when the inference or retraction process attempts to update a proposition using a time value much less than the current time, it is convenient to be able merely to "resist" the change instead of unconditionally "rejecting" it. This amounts to a partial refreshing of the time value: just enough to prevent the tentative inference or retraction, but no further.⁷ In this case, the "resisting" agent may again be queried at a later time concerning the modification of the same proposition, given support of greater recency.

6. This ordering is necessary due to an occasional interaction of the rules.

7. For a proposition subject to inference, this is the time value for the inference. For a proposition subject to retraction, this is the same value plus a small increment.
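Continuing the illustrative sketch, the at most one rule admits an equally short rendering; stabilize_at_most_one is an invented name, and an exactly one set would apply this rule followed by the ordinary clause rule.

```python
def stabilize_at_most_one(decision_set):
    """Stabilize an 'at most one' decision set (a list of Props)."""
    marked = [p for p in decision_set if p.truth in ("true", "retracted")]
    if not marked:
        return
    newest = max(p.time for p in marked)
    # Rule 1 (possible inference): everything older than the newest
    # true/retracted proposition is forced to false at that time.
    for p in decision_set:
        if p.time < newest:
            p.truth, p.time = "false", newest
    # Rule 2 (possible retraction): if several propositions tie as the
    # newest true/retracted values, the true ones among them are
    # retracted, with time values left unchanged.
    tied = [p for p in marked if p.time == newest]
    if len(tied) > 1:
        for p in tied:
            if p.truth == "true":
                p.truth = "retracted"
```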
The Incremental Inference mechanism has been incorporated within SPECTRUM, a knowledge-based system for the geological interpretation of imaging spectrometer data. An initial overview of the SPECTRUM application appears in [Borchardt, 1986]. The example described below has been simplified somewhat from the SPECTRUM domain and involves the Incremental Inference algorithm in a mixed-initiative user/system interface for control of a particular segment of the analysis, involving a variant of the Isodata algorithm [Duda and Hart, 1973] for the clustering of multidimensional data points into uniform, distinct classes.

The Isodata algorithm consists of a preliminary activity, initialize, followed by a cyclic repetition of three activities, cluster, extract and merge, with merge occurring zero, one or many times before each subsequent return to the cluster activity. Individual propositions in the Incremental Inference mechanism are used to represent each of the four activities. Two additional propositions, using_a_map and using_some_plots, describe data quantities associated with the activities. A special proposition, handshake, is used by either agent to signal a desire to execute the currently specified activity. Critical factors for mixed-initiative control are the determination of whether or not to perform one or more merging operations prior to each successive iteration, and when to halt the process. A number of decision sets for the interface are thus set up as indicated below. Each entry corresponds to a function call defining a decision set (name, set of elements) or specifying the interest of an agent in a particular decision set (decision set, agent, proposition or constant truth value representing the designated interest).⁸

exactly-one(ds1 [not_isodata_plan initialize cluster extract merge])
at-most-one(ds2 [not_using_a_map initialize cluster extract])
at-most-one(ds3 [using_a_map merge])
at-most-one(ds4 [not_using_some_plots cluster merge])
at-most-one(ds5 [using_some_plots initialize extract])
at-least-one(ds6 [not_isodata_plan user_merge_interest spectrum_merge_interest])
associated-interest(ds6 spectrum true)
at-least-one(ds7 [not_cluster forego_merge])
associated-interest(ds7 user user_merge_interest)
associated-interest(ds7 spectrum spectrum_merge_interest)
unconstrained(ds8 [using_a_map using_some_plots])
associated-interest(ds8 user user_quantity_interest)
unconstrained(ds9 [handshake])
associated-interest(ds9 user true)
associated-interest(ds9 spectrum true)

The following truth and time values serve as a point of departure for the example.

current time = 16

isodata_plan: (true,15)
handshake: (false,16)
initialize: (false,15)
using_a_map: (true,15)
cluster: (false,15)
using_some_plots: (false,15)
extract: (true,15)
forego_merge: (false,14)
merge: (false,15)
user_merge_interest: (true,12)
spectrum_merge_interest: (false,12)
user_quantity_interest: (false,13)

In this state of affairs, the extract activity is currently "in focus" and the user has specified an interest in protecting the opportunity to perform a merge operation at the appropriate time.

8. The syntax here draws from the STAR language used in the implementation of the SPECTRUM system [Borchardt, 1986].
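As a quick check using the toy sketches from earlier (again with invented helper names), the exactly one set ds1 is already stable in this initial state: extract is true at 15, so neither the at most one rule nor the clause rule fires.

```python
# Reconstructing ds1 with the Prop type from the earlier sketches.
props = {n: Prop(n, "false", 15)
         for n in ["not_isodata_plan", "initialize", "cluster",
                   "extract", "merge"]}
props["extract"].truth = "true"
ds1 = list(props.values())

def stabilize_exactly_one(decision_set, current_time):
    stabilize_at_most_one(decision_set)            # at-most-one rule first,
    stabilize_clause(decision_set, current_time)   # then at-least-one

stabilize_exactly_one(ds1, current_time=16)        # no values change
```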
The following is then a possible mixed-initiative control sequence continuing from this point.

1.) Following a request by the user, the current time is incremented to 17 and using_some_plots is updated to true at the new current time. This results in an update of initialize and extract to false at 17, plus an attempted retraction of the three remaining propositions in ds1: not_isodata_plan, cluster and merge. SPECTRUM, having an interest in the proposition isodata_plan, blocks its retraction. The user accepts the retraction of cluster, but when queried about a subsequent retraction of forego_merge, blocks this (that is, the user does not wish to forego the merge activity). Thus, cluster returns to a status of false at a newer time 17. Due to the refreshing operations by SPECTRUM and the user, the third retracted proposition, merge, is then updated to true at time 17. The user requests an update of handshake to true, SPECTRUM allows this update, and the merge operation is executed, followed by a reset of handshake to false.

2.) Next, the user relinquishes interest in protecting merge by requesting that user_merge_interest be set to false. SPECTRUM is queried regarding this change and regarding the subsequent update of spectrum_merge_interest to true. SPECTRUM accepts both updates, completing the exchange of this interest responsibility to SPECTRUM. As a separate request, the user then expresses a new interest in the types of data quantities associated with the activities. Thus, user_quantity_interest receives a new truth value of true.

3.) At this point, SPECTRUM takes the initiative in requesting consideration of the cluster activity. As the user is at present interested only in data quantity types, the user acknowledges only the update of using_a_map to true. SPECTRUM, now posed as the responsible party for forego_merge, must grant permission for the update of this proposition to true. SPECTRUM then issues a request for handshake to be updated to true. The user accepts this update, and the cluster activity is executed, followed by a subsequent reset of handshake to false.

The Incremental Inference algorithm provides a general framework supporting a variety of mixed-initiative interface applications. Alternative configurations may be constructed in which particular agents provide only requests or only react to the requests of other agents. The partitioning of duties may be "lateral," that is, dividing interests according to relatively disjoint portions of the decision space, or it may be more or less "vertical," with certain agents taking an interest in more general issues while other agents take an interest in the specific issues underlying these general issues. The boundaries of responsibility assigned to various agents may change dynamically, allowing sudden shifts in control in unpredicted circumstances.

As a commonsense reasoning mechanism, the Incremental Inference algorithm is interesting due to its retention of inferred values as new default assumptions, held as long as they are consistent with future values. This paper has attempted to explore some of the differences associated with reasoning based on recency as contrasted with the standard notion of well-founded support. This style of reasoning provides a useful heuristic in the realm of mixed-initiative interfaces and may apply equally well in other, related domains.

Acknowledgments

The research described in this paper was carried out by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. The author would like to thank Marc Vilain, Jerry Solomon and Steven Vere for insightful comments regarding the algorithm and its use.
Steven Groom implemented a large portion of the code and discovered many nuances in its behavior.

References

[Borchardt, 1986] Gary C. Borchardt. STAR: a computer language for hybrid AI applications. In J. Kowalik, editor, Coupling Symbolic and Numerical Computing in Expert Systems, pages 169-177, North-Holland, Amsterdam, 1986.

[Borchardt, 1987] Gary C. Borchardt. Mixed-initiative control of intelligent systems. In Proceedings NASA Space Telerobotics Workshop, Pasadena, California, January 1987.

[Corkill and Lesser, 1983] Daniel D. Corkill and Victor R. Lesser. The use of meta-level control for coordination in a distributed problem solving network. In Proceedings IJCAI-83, pages 748-756, Karlsruhe, West Germany, August 1983.

[Davis and Smith, 1983] Randall Davis and Reid G. Smith. Negotiation as a metaphor for distributed problem solving. Artificial Intelligence 20(1):63-109, January 1983.

[de Kleer, 1986] Johan de Kleer. An assumption-based TMS. Artificial Intelligence 28(2):127-162, March 1986.

[Doyle, 1979] Jon Doyle. A truth maintenance system. Artificial Intelligence 12(3):231-272, 1979.

[Duda and Hart, 1973] Richard O. Duda and Peter E. Hart. Pattern Classification and Scene Analysis. John Wiley and Sons, New York, 1973.

[Grosz, 1977] Barbara J. Grosz. The representation and use of focus in a system for understanding dialogs. In Proceedings IJCAI-77, pages 67-76, Cambridge, Massachusetts, August 1977.

[McAllester, 1980] David A. McAllester. An Outlook on Truth Maintenance. AI Memo No. 551, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, August 1980.

[McAllester, 1982] David A. McAllester. Reasoning Utility Package User's Manual, Version One. AI Memo No. 667, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, April 1982.

[McDermott and Doyle, 1980] Drew McDermott and Jon Doyle. Non-monotonic logic I. Artificial Intelligence 13(1-2):41-72, 1980.

[Rosenschein and Genesereth, 1985] Jeffrey S. Rosenschein and Michael R. Genesereth. Deals among rational agents. In Proceedings IJCAI-85, pages 91-99, Los Angeles, California, August 1985.

[Shapiro, 1979] Stuart C. Shapiro. The SNePS semantic network processing system. In N. V. Findler, editor, Associative Networks, pages 179-203, Academic Press, New York, 1979.

[Tenenberg, 1985] Josh D. Tenenberg. Taxonomic reasoning. In Proceedings IJCAI-85, pages 191-193, Los Angeles, California, August 1985.
An Approach to Default Reasoning Based on a First-Order Conditional Logic

James P. Delgrande
School of Computing Science, Simon Fraser University, Burnaby, B.C., Canada V5A 1S6

Abstract

This paper presents an approach to default reasoning based on an extension to classical first-order logic. In this approach, first-order logic is augmented with a "variable conditional" operator ⇒ for representing default statements. Truth in the resulting logic is based on a possible worlds semantics: the default statement α ⇒ β is true just when β is true in the least exceptional worlds in which α is true. This system provides a basis for representing and reasoning about default statements. Inferences of default properties of individuals rely on two assumptions: first, that the world being modelled by a set of sentences is as uniform as consistently possible and, second, that sentences that may consistently be assumed to be irrelevant to a default inference are, in fact, irrelevant to the inference. Two formulations of default inferencing are proposed. The first involves extending the set of defaults to include all combinations of irrelevant properties. The second involves assuming that the world being modelled is among the simplest worlds consistent with the defaults and with what is contingently known. In the end, the second approach is argued to be superior to the first.

1. Introduction

Many commonsense assertions about the real world express default or prototypical properties of individuals or classes of individuals, rather than strict conditional relations. Thus, for example, "birds fly" seems to be a reasonable enough assertion, even though birds with broken wings generally don't fly, and quite probably no penguin flies. The import of "birds fly" then certainly isn't that all birds fly, but rather is more along the lines of "typically birds fly". The issues and problems of such "exception-allowing general statements" have of course been extensively addressed in Artificial Intelligence, most notably with the various default reasoning schemes and with approaches based on various theories of uncertainty.

In [Delgrande 86] and [Delgrande 87a] another alternative was introduced. In this approach, "birds fly" is interpreted as "all other things being equal, birds fly", or "ignoring exceptional conditions, birds fly". For this approach, an operator, ⇒, is introduced into classical first-order logic (FOL). The statement α ⇒ β is interpreted as "in the normal course of events, if α then β". In the resulting logic, called N, one can consistently assert, for example, that:

(x)(Bird(x) ⇒ Fly(x)), Bird(opus), but ¬Fly(opus);

or that:

(x)(Raven(x) ⇒ Black(x)) and (x)((Raven(x) ∧ Albino(x)) ⇒ ¬Black(x));

or that:

(x)(Penguin(x) ⇒ Bird(x)), (x)(Bird(x) ⇒ Fly(x)) and (x)(Penguin(x) ⇒ ¬Fly(x)).

Thus in the first case, all birds normally fly, but opus is a bird that does not fly. In the second and third examples, the sentences are satisfiable while having the antecedents of the conditionals true as well.

An advantage of this approach is that one can represent and reason about defaults. Thus it is a theorem of the system that

◇α ⊃ ((α ⇒ β) ⊃ ¬(α ⇒ ¬β)).

Hence, if α is possible and α ⇒ β is true, then it is not the case that α ⇒ ¬β is true. As a second example, we have the derived rule:

If ⊢N (x)(P(x) ⇒ Q(x)) and ⊢N (x)(Q(x) ⊃ R(x)) then ⊢N (x)(P(x) ⇒ R(x)).

From this it follows that we can say that ravens are normally black; black things are not white; and hence ravens are normally not white.
This approach arguably provides an appropriate basis for representing and reasoning about statements of default properties; in particular, it is meaningful to talk about the consistency of a set of default statements. However, the logic N did not - in fact could not - allow modus ponens as a rule of inference for the variable conditional. For, if it did, then in the first example above we could deduce Fly(opus) and so arrive at an inconsistency. Similarly, in the second example, if we knew Raven(opus) and Albino(opus), then we could conclude both Black(opus) and ¬Black(opus).

The reason that inconsistency does not arise with the above examples is that the truth of α ⇒ β depends not on the present state of affairs, but on "simpler" or "less exceptional" states of affairs. Thus Raven(opus) ⇒ Black(opus) is true if, in the least exceptional states of affairs in which opus is a raven, opus also is black. Hence in such states of affairs, exceptional conditions such as being an albino, being painted red, being in a strong yellow light, etc. are "filtered out". In this way, it is quite possible that Raven(opus) ⇒ Black(opus) is true, even though Raven(opus) ⊃ Black(opus) is not.

However, it nonetheless seems reasonable that if we knew only that Raven(opus) ⇒ Black(opus) and Raven(opus), we should be able to conclude "by default" that Black(opus) is true. One possible way to do so is to translate assertions expressed in N into appropriate statements of some default logic for reasoning deductively about individuals. Thus, the previous formula would have the intuitively acceptable translation

Raven(x) : MBlack(x) / Black(x)

in the formalism of [Reiter 80].

In this paper, a second alternative for reasoning deductively about default and prototypical properties of individuals is described. Consider where we know only that (x)(Raven(x) ⇒ Black(x)), Raven(opus), and Has_wings(opus). Given this information we cannot deduce anything about opus's blackness, simply because it is consistent with what is known that opus may not in fact be black. However, if we pragmatically and a priori decide that the world at hand is one of the least exceptional worlds consistent with what's known, and we decide also that having wings is irrelevant to blackness, then we could conclude Black(opus). In terms of "states of affairs" or "possible worlds" this means that if we assume that the world being modelled is as "normal" as possible consistent with the above sentences, and that having wings is irrelevant to blackness, then Black(opus) must be true at the world being modelled.

The next section provides an overview of related work in AI, while the following section provides a brief description of the logic N. Section 4 introduces the overall approach to default reasoning. Section 5 expands on this, and describes two approaches to default inferencing. Section 6 discusses some examples of default reasoning in this framework, while the last section examines what we have gained from this approach. Further details and proofs of theorems may be found in [Delgrande 87b].
2. Related Work

Much of the work in AI for dealing with defaults and prototypical properties has centred around systems of default and non-monotonic reasoning. McDermott and Doyle, for example, in their augmentation of first-order logic [McDermott and Doyle 80], represent "birds fly" by the statement:

(x)((Bird(x) ∧ MFly(x)) ⊃ Fly(x)).

This can be interpreted as "for every x, if it is true that x is a bird, and it is consistent that x flies, then conclude that x flies". On the other hand, in Reiter's system [Reiter 80], "birds fly" would be represented by the rule:

Bird(x) : MFly(x) / Fly(x)

This can be interpreted as "if something can be inferred to be a bird, and if that thing can be consistently assumed to fly, then infer that that thing flies". Circumscription [McCarthy 80] permits similar inferencing: in this case, one typically circumscribes an "abnormality" predicate to minimise the number of abnormal (with respect to flight) birds.

A general limitation with these approaches is that one cannot generally reason about defaults. Thus in Reiter's approach, if we knew that every penguin had to be a bird and that birds normally fly but that penguins do not normally fly, there is no means within the system of concluding that birds that aren't penguins normally fly. Similarly, in most systems the assertions "penguins are birds" and "typically penguins aren't birds" can be asserted without difficulty - in Reiter's system the default rule is never applied, and in McDermott and Doyle's the truth value of the formula Fly(x) is independent of that of MFly(x). Yet these sentences seem to be inconsistent: if every penguin must necessarily be a bird, then it certainly seems that "typically penguins aren't birds" should be false.

A second, epistemological difficulty with these approaches is that their semantics rests on a notion of consistency with a set of beliefs. Thus, in the above approaches, one would conclude that a bird flies if this does not conflict with other beliefs. However, the issue of whether birds fly or not (or normally fly, or whatever) is a matter that deals with birds and the property of flight, and not with particular believers. Hence the relation between birds and flight, whatever it may be, should be phrased independently of any set of beliefs. Yet, on the other hand, if all that I know is that birds normally fly and that opus is a bird, then it would seem reasonable to assume that, ceteris paribus, opus flies. Thus perhaps these approaches are best viewed as telling us how to consistently extend a belief set, rather than as representing the relation between, say, birds and flight.

Another approach in AI for dealing with default properties is prototype theory [Rosch 78], [Brachman 85]. Here membership in the extension of a term is a graded affair and is a matter of similarity to a representative member or prototype. Prototype theory is concerned generally with descriptions of individuals, or predicting properties of individuals. Hence such approaches appear to address a concern that is somewhat different from ours - perhaps recognising an individual as a bird, based on the fact that it flies, or alternatively, predicting whether an individual flies, given other information about it. However, in the present approach we want to attribute flight as following in the normal course of events from the conditions of being a bird. In such a case, notions of typicality and resemblance to a prototype appear too weak for our requirements.

Finally, Donald Nute [Nute 86] has investigated default reasoning in a conditional logic for representing subjunctives. However, his approach is limited to reasoning with a restricted set of sentences in a propositional logic.

3. A Logic for Representing Defaults

In [Delgrande 87a] a conditional logic [Chellas 75], [Nute 80], called N, for representing default statements was presented. The language of this logic is that of FOL, but augmented with a binary connective ⇒. The intended interpretation of α ⇒ β is "if α then normally β" or "all other things being equal, if α then β". In this logic one can represent statements such as "ravens are normally black" or "albino ravens are normally not black". Truth in the logic is based on a possible worlds semantics. Informally, α ⇒ β is true at a world if, ignoring exceptional conditions, β is true whenever α is. What this amounts to is: if we consider "less exceptional" states of affairs, then α ⇒ β is true just when the least exceptional worlds in which α is true also have β true. The accessibility relation E between worlds in this system is then formulated so that Ew1w2 holds between worlds w1 and w2 just when w2 is at least as uniform, or at least as unexceptional, as w1. In [Delgrande 87a], the following conditions were argued to be required for the accessibility relation E:

Reflexive: Eww for all worlds w.
Transitive: If Ew1w2 and Ew2w3 then Ew1w3.
Forward Connected: If Ew1w2 and Ew1w3 then either Ew2w3 or Ew3w2.

The propositional modal logic corresponding to this accessibility relation is the standard temporal logic S4.3 [Hughes and Cresswell 68]: it subsumes S4 but does not subsume S5.

The language L for representing defaults has the following primitive symbols: denumerably infinite sets of individual variables x, y, z, ..., individual constants a, b, c, ..., and predicate symbols P, Q, R, ... (each with some presumed arity), together with commas, parentheses, and the symbols ¬, ⊃, ⇒, and ∀. Variables and constants together make up the set of terms. The set of well-formed formulae (wffs) is specified in the usual fashion. Where no confusion arises, lower-case words may be used to stand for constants and capitalised words may be used to stand for predicate symbols. As usual, conjunction, disjunction, biconditionality, and the existential quantifier are introduced by definition. The symbols α, β, γ, ... will stand for arbitrary well-formed formulae of L.

Sentences of L are interpreted in terms of a model M = <W, E, DI, V> where W is a set, E is a reflexive, transitive and forward connected binary relation on elements of W, DI is a domain of individuals, and V is a function on terms and predicate symbols such that:

1. for term t, V(t) ∈ DI;
2. for n-place predicate symbol P, V(P) is a set of (n+1)-tuples <t1, ..., tn, w> where each ti ∈ DI and w ∈ W.

Informally, W is a set of possible worlds, E is an accessibility relation on possible worlds, and V maps atomic sentences onto worlds where the sentence is true, and predicate symbols onto relations in worlds. For wff α, the symbolism ||α||M stands for the set of worlds in M in which α is true. The symbolism M,w ⊨ α is used to express that α is true in the model M at world w (or simply true, if some M and w are understood). Validity, denoted ⊨ α, and satisfiability have their usual definitions. For convenience, we define a world selection function f, in terms of which the truth conditions for ⇒ are specified:¹

Definition: f(w, ||α||M) = {w1 | Eww1 and M,w1 ⊨ α, and for all w2 such that Ew1w2 and M,w2 ⊨ α, we also have Ew2w1}.

This function then, given a world w and proposition ||α||M, picks out the least exceptional worlds in which α is true. Given a model M = <W, E, DI, V>, truth at a world w is given by:

Definition:
(i) For n-place predicate symbol P, terms t1, ..., tn, and w ∈ W: M,w ⊨ P(t1, ..., tn) iff <V(t1), ..., V(tn), w> ∈ V(P).
(ii) M,w ⊨ ¬α iff not M,w ⊨ α.
(iii) M,w ⊨ α ⊃ β iff, if M,w ⊨ α then M,w ⊨ β.
(iv) M,w ⊨ α ⇒ β iff f(w, ||α||M) ⊆ ||β||M.
(v) M,w ⊨ (x)α iff for every V′ which is the same as V except possibly V′(x) ≠ V(x), and where M′ = <W, E, DI, V′>, M′,w ⊨ α.

The conditional logic N is the smallest set of sentences of L that contains classical first-order logic and that is closed under the following axiom schemata and rule of inference.²

Axiom Schemata: (x)(α ⇒ β) ⊃ (α ⇒ (x)β), if α contains no free occurrences of x.
Rule of Inference RCM: From β ⊃ γ infer (α ⇒ β) ⊃ (α ⇒ γ).

The notions of theoremhood in N, and of derivability and consistency, are defined in the usual manner. The symbolism Γ ⊢N α means that α is derivable from Γ in N. We obtain:

Theorem: ⊨ α iff ⊢N α.

Soundness is proven by a straightforward inductive argument. Completeness is proven by showing that there is a canonical N-model in which every non-theorem of N is invalid. This proof is an adaptation of the method of canonical models in first-order modal logics [Hughes and Cresswell 84], but modified to accommodate the variable conditional operator.

4. Default Reasoning: Initial Considerations

A default theory T is an ordered pair <D, C> where D is a set of wffs of N and C is a non-empty consistent set of wffs of FOL. D is intended to represent necessary or conditional sentences constraining how the world must be or could be, while C is a set of contingent sentences constraining how the world being modelled is. Thus in D we would include statements such as "all ravens must be birds" and "all ravens are normally black". Included in C would be statements such as "opus is a raven" and "everyone taking CMPT882 this semester is under 6 feet tall". The goal is to define a "default" provability operator which, following [McDermott and Doyle 80], I will write as T |~ p to indicate that p follows by default from T.

The first part of this enterprise is startlingly easy. Consider for example where all that is known is Bird(opus) ⇒ Fly(opus) and Bird(opus). As argued, we should not be able to conclude from this that Fly(opus), simply because, while the truth of Bird(opus) relies on this state of affairs, the truth of Bird(opus) ⇒ Fly(opus) relies on other, less exceptional states of affairs, and there is no necessary connection between this state of affairs and the other states of affairs. Yet nonetheless it does seem reasonable to conclude Fly(opus) "by default". The key point here is that in drawing this default conclusion, one is relying on a tacit assumption: that the world at hand is as unexceptional as possible, consistent with what is known. That is, given the above, it is entirely consistent that opus is a penguin, is tethered, or simply (for no known reason) does not fly. The default conclusion relies on assuming that if none of these exceptional factors are known to hold, they are assumed not to hold. This assumption can be stated as follows:

The Assumption of Normality: The world being modelled is among the least exceptional worlds according to D in which the sentences of C are true.

Thus it seems that we would want to say that T |~ p just when, in the presence of "background information" D, p is true in all least exceptional worlds in which C is true. Hence:

Temporary Definition: T |~ p iff (so it seems) D ⊢N C ⇒ p.

1. Note that the apparent circularity in this and the next definition is benign.
2. I am following the conventions of [Chellas 75] and [Nute 80] for naming axioms and rules of inference.
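The shape of this test can be written out schematically. In the sketch below, entails_N is a hypothetical oracle for derivability in N (the paper supplies no prover, though the quantifier-free fragment is noted later to be decidable), and the formula classes are invented for the example.

```python
# A schematic rendering of the temporary definition of default provability.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass(frozen=True)
class Conditional:     # the variable conditional: antecedent => consequent
    antecedent: object
    consequent: object

@dataclass(frozen=True)
class Conj:            # finite conjunction of formulae
    conjuncts: tuple

def default_provable(D: Sequence, C: Sequence, p,
                     entails_N: Callable[[Sequence, object], bool]) -> bool:
    """Temporary definition: T |~ p iff D |-N (C1 & ... & Ck) => p."""
    return entails_N(D, Conditional(Conj(tuple(C)), p))
```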
This does in fact work in a large class of simple cases. For example, if we have that

D = {Raven(x) ⇒ Black(x), (Raven(x) ∧ Albino(x)) ⇒ ¬Black(x)}, C = {Raven(a)},

then we can make the default conclusion that Black(a). If we have that C = {Raven(a), Albino(a)}, then we can derive by default that ¬Black(a); we cannot now derive Black(a), because we cannot prove {Raven(x) ⇒ Black(x), (Raven(x) ∧ Albino(x)) ⇒ ¬Black(x)} ⊢N (Raven(a) ∧ Albino(a)) ⇒ Black(a). Similarly, if we have D = {Quaker(x) ⇒ Pacifist(x), Republican(x) ⇒ ¬Pacifist(x)} and learn that Quaker(a), then we can conclude Pacifist(a). If we learn also that Republican(a), then we can conclude nothing concerning whether a, by default, satisfies Pacifist.

However, the approach to this point also fails to work for a large class of simple cases. If we have that

D = {Raven(x) ⇒ Black(x)}, C = {Raven(a), Has_wings(a)},

then the relation {Raven(x) ⇒ Black(x)} ⊢N (Raven(a) ∧ Has_wings(a)) ⇒ Black(a) does not hold. In any model, in the simplest worlds where Raven(a) is true, Black(a) is also true. However, there are models where, in the simplest worlds where Raven(a) and Has_wings(a) are true, Black(a) may not be true. Hence (Raven(a) ∧ Has_wings(a)) ⇒ Black(a) is not entailed by D. It seems, however, that based on what is known there is no good reason for supposing that having wings has any effect on blackness. In a word, having wings seems irrelevant to whether a raven is black. This is the second assumption that I make in order to be able to draw default inferences in a default theory. It may be stated as:

Assumption of Relevance: Only those sentences known to bear on the truth value of a conditional relation will be assumed to, in fact, have a bearing on that relation's truth value.

This is of course rather vague, and part of the task in the next section is to make this notion more precise.

5. Two Approaches for Default Reasoning

The general idea in this paper is to use the logic N for representing defaults, and to use metatheoretic considerations to sanction contingent default inferences. To this end, two assumptions were identified in the previous section as being essential for default inferences. In the previous section also, the formal system N was used to suggest an initial approach for default inferencing. As mentioned though, this approach fails for a wide class of simple cases. Two approaches are presented in this section to rectify these difficulties. The general idea in both approaches is to consider only a subset of the models of a default theory T for default inferences. Interestingly, the approaches derive from somewhat complementary intuitions, yet there is a high degree of symmetry between them.

The First Approach: Consider the statement α ⇒ γ. This statement is true at a world w in model M iff f(w, ||α||M) ⊆ ||γ||M. Intuitively, β is irrelevant to the truth of this statement if knowing β doesn't alter our judgement of the truth of the consequent of the conditional. Hence, according to our truth conditions for the conditional, β is irrelevant to α ⇒ γ iff f(w, ||α∧β||M) ⊆ ||γ||M and f(w, ||α∧¬β||M) ⊆ ||γ||M. So one approach is to assume, whenever possible, that a proposition β has no effect on the truth value of α ⇒ γ. Hence, informally, we begin with a set of assertions D and extend this set by iteratively considering each conditional α ⇒ γ in D and each wff β of FOL, and if α∧β ⇒ γ is consistent, adding it to D. Thus if D is

{Raven(x) ⇒ Black(x),
(Raven(x) ∧ Albino(x)) ⇒ ¬Black(x)}

we will add statements including (Raven(x) ∧ Has_wings(x)) ⇒ Black(x) and (Raven(x) ∧ Albino(x) ∧ ¬Has_wings(x)) ⇒ ¬Black(x). However, this isn't quite right. If D is {Q(x) ⇒ P(x), R(x) ⇒ ¬P(x)}, we could consistently add either (Q(x) ∧ R(x)) ⇒ P(x) or (Q(x) ∧ R(x)) ⇒ ¬P(x) (but not both) by this recipe. The solution is to add α∧β ⇒ γ only if there is no other "relevant" conditional that denies γ. This can be accomplished as follows:

Definition: α ⇒ γ is supported in Γ if there is β such that:
1. ⊢FOL α ⊃ β;
2. Γ ⊢N β ⇒ γ;
3. if there is β′ such that ⊢FOL α ⊃ β′ and Γ ⊢N ¬(β′ ⇒ γ), then ⊢FOL β ⊃ β′.

Using this we can define the procedure for forming an extension. If β0, β1, ... is some ordering of the wffs of FOL, we obtain:

Definition: An extension E(D) of D is defined by:
1. E0 = D.
2. Ei+1 = δ, where δ is defined by: initially δ = ∅; for each α ⇒ γ in Ei, δ = δ ∪ {α∧βi ⇒ γ} if α∧βi ⇒ γ is supported in D, and δ = δ ∪ {α∧¬βi ⇒ γ} otherwise.
3. E(D) = ∪i Ei.

The procedure may be thought of as adding an inordinate number of default frame axioms to a set of defaults, in order to say that apparently irrelevant sentences are in fact irrelevant. Clearly only a single extension is produced. We obtain that D ⊆ E(D) and E(E(D)) = E(D) for an extension. Hence, under the process of forming an extension, an extension is a fixed point of the set of defaults. However, if D1 ⊆ D2, it may not be the case that E(D1) ⊆ E(D2). An example is D1 = {α ⇒ γ} and D2 = D1 ∪ {α∧β ⇒ ¬γ}, wherein any E(D1) contains α∧β ⇒ γ but no E(D2) does. We also obtain:

Theorem: E(D) is consistent if D is.

Theorem: For any β ∈ FOL and α ⇒ γ ∈ D, either α∧β ⇒ γ ∈ E(D) or α∧¬β ⇒ γ ∈ E(D).

We can define default provability as we did in the last section, but now incorporating assumptions of relevance via the extension. That is:

Definition: T |~ p iff E(D) ⊢N C ⇒ p.

Thus p follows by default from T if, considering all assumptions of irrelevance, p follows conditionally from the known facts C. This approach yields reasonable default inferences, with one exception. Consider where we have D = {α ⇒ β, β ⇒ γ, α ⇒ ¬γ} and C = {α}. It would seem that in this case the best strategy is to conclude neither γ nor ¬γ. However, since D ⊢N C ⇒ ¬γ, in the extension of D we will also conclude ¬γ. There seems to be no obvious remedy for this difficulty in this approach; fortunately, it does not occur in the next approach.
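One pass of this construction is easy to state in code. The sketch below reuses the toy formula classes from above, adds an invented Neg class, and leaves the "supported" test as an oracle implementing the three-clause definition; it is an illustration, not an implementation of the paper's procedure.

```python
@dataclass(frozen=True)
class Neg:                  # negation of a formula
    formula: object

def extension_step(E_i, D, beta_i, is_supported):
    """E_{i+1}: for each conditional alpha => gamma in E_i, add
    (alpha & beta_i) => gamma when it is supported in D, and
    (alpha & ~beta_i) => gamma otherwise."""
    delta = set()
    for c in E_i:
        strengthened = Conditional(Conj((c.antecedent, beta_i)),
                                   c.consequent)
        weakened = Conditional(Conj((c.antecedent, Neg(beta_i))),
                               c.consequent)
        delta.add(strengthened if is_supported(strengthened, D)
                  else weakened)
    return delta
```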
The Second Approach: This approach is perhaps the complement of the first. Whereas before we added assumptions to D to constrain the models that we wanted to consider for a default inference, here we assume that the world at hand is among the simplest worlds consistent with what is known contingently. Thus, for example, if we know only that

D = {Raven(x) ⇒ Black(x)}, C = {Raven(opus), Has_wings(opus)},

then if the state of affairs modelled by C were among the simplest worlds according to D then, by the definition of ⇒, Black(opus) must be true in that state of affairs. So the idea is to first make whatever conclusions we can about C under the assumption of normality. Given such an extension (or extensions) to C, we can specify that p follows as a default inference from T iff p follows in FOL from all extensions of C.

There is a minor difficulty with this approach, however, arising again from the relative strength of defaults. Consider where we have:

D = {Raven(x) ⇒ Black(x), (Raven(x) ∧ Albino(x)) ⇒ ¬Black(x)}.

Thus in the least exceptional states of affairs in which there are ravens, ravens are black, and in the least exceptional states of affairs in which there are albino ravens, ravens are not black. From this it follows that the states of affairs in which there are ravens are less exceptional than the states of affairs in which there are albino ravens. This means that if we have that C = {Raven(opus), Albino(opus)}, then in extending C we should only consider the second default.

It is interesting also to see how this approach handles possible transitive relations in the defaults. Consider where we have that D = {Quaker(x) ⇒ Pacifist(x), Pacifist(x) ⇒ Vegetarian(x)} and C = {Quaker(a)}. If we assume that the world at hand is among the least exceptional consistent with C, then we can conclude Pacifist(a). However, given this new information, it now also becomes reasonable to conclude Vegetarian(a), barring evidence to the contrary. So effectively we need to "iterate" over default transitivities, while allowing for the fact that particular transitivities may not be warranted. Hence in the above example, if we were to add Quaker(x) ⇒ ¬Vegetarian(x) to D, we would still want to conclude Pacifist(a) but not be able to conclude Vegetarian(a). This is accomplished as follows:

Definition: A maximal contingent extension E(C) of C is defined by:
1. C0 = C.
2. If D ⊢N α ⇒ γ and ⊢FOL Ci ⊃ α, and if for every α′ such that ⊢FOL Ci ⊃ α′ and D ⊢N ¬(α′ ⇒ γ) we have ⊢FOL α ⊃ α′, then Ci+1 = Ci ∪ {γ}.
3. E(C) = ∪i Ci.

This means that γ is added to Ci if Ci implies α and, for any α′ implied by Ci which conflicts with the default conclusion of γ, α′ is implied by α. If we use |~′ for default derivability in this approach, we obtain:

Definition: If T = <D, C> then T |~′ p iff E(C) ⊢FOL p for every maximal contingent extension E(C) of C.

Note that the number of extensions will typically be finite. Two extensions are distinct if and only if there are transitivities in the defaults that conflict; that is, we get more than one extension only if we have defaults of the form α ⇒ γ along with α ⇒ β and β ⇒ ¬γ. We obtain also that, with respect to default derivability, the inferences of the first approach subsume those of the second:

Theorem: If T |~′ p then T |~ p.

The two approaches exhibit a high degree of symmetry. The first approach involves extending D. The basic issue in this approach concerns satisfying the assumption of relevance; the assumption of normality is trivially satisfied. The second approach, on the other hand, involves extending C. The basic issue in this approach concerns satisfying the assumption of normality; the assumption of relevance is trivially satisfied. Of the two approaches, the first is similar, from a technical standpoint, to other procedures for forming maximal sets of formulae. However, it does not appear to lend itself to any straightforward implementation, and in addition it sometimes leads to over-strong inferences. The second appears to have somewhat more promise for providing a basis for an implementation. In addition, the quantifier-free fragment of the logic N is decidable, and so the second approach applied to specific individuals is easily seen to yield a decidable system. The next section describes a set of example default inferences under the second approach.
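A rough computational reading of the second approach, under the same hypothetical oracles as before: the sketch below grows Ci to a fixed point, but simplifies the definition in two ways, iterating only over the conditionals explicitly in D and restricting the α′ test to the antecedents of conflicting defaults (the definition quantifies over all derivable conditionals and all wffs implied by Ci).

```python
def contingent_extension(D, C, entails_FOL, entails_N):
    """Grow C toward one maximal contingent extension (simplified)."""
    C_i = set(C)
    changed = True
    while changed:
        changed = False
        for c in D:                    # c is Conditional(alpha, gamma)
            if c.consequent in C_i or not entails_FOL(C_i, c.antecedent):
                continue
            # applicable antecedents that conflict with gamma must all
            # be implied by alpha (i.e., the default is most specific)
            blockers = [d.antecedent for d in D
                        if entails_FOL(C_i, d.antecedent) and
                        entails_N(D, Neg(Conditional(d.antecedent,
                                                     c.consequent)))]
            if all(entails_FOL([c.antecedent], b) for b in blockers):
                C_i.add(c.consequent)
                changed = True
    return C_i
```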
6. Some Examples

The second approach to default reasoning arguably leads to reasonable and intuitive default inferences. As a first example, assume that we have the default portion of a theory:

D1 = {Adult(x) ⇒ Employed(x), Univ_st(x) ⇒ ¬Employed(x)};

say, adults are typically employed, while university students normally are not. If we knew that someone was an adult, then we could conclude by default that that individual was employed. If we knew that someone was an adult and a university student, then we could draw no conclusion. If, on the other hand, we knew that someone was an adult and was Dutch, then we would still conclude that they were employed. Of course, we also know that university students are typically adults, and so the defaults could be augmented to:

D2 = D1 ∪ {Univ_st(x) ⇒ Adult(x)}.

Now if we were told that someone was an adult and a university student, we would conclude by default that that person was not employed. The reason that we can now draw a conclusion is that in any model of D2, in the simplest worlds in which someone is a university student, that person is not employed (but is an adult). From the logic N, we have the relation

D2 ⊢N Adult(x) ⇒ ¬Univ_st(x),

and so from N we can derive the default that, given D2, adults are normally not university students.

Consider next the defaults:

D3 = {Raven(x) ⇒ Black(x), Raven(x) ⇒ Fly(x), (Raven(x) ∧ Albino(x)) ⇒ ¬Black(x)}.

Not unexpectedly, we can conclude by default that ravens with wings are black, and that ravens that fly (or don't fly) are black. Moreover, albino ravens are concluded by default to fly but to not be black. Consider further where we augment the defaults so that we have:

D4 = D3 ∪ {Bear(x) ⇒ Black(x), (Bear(x) ∧ Has_illness_X(x)) ⇒ ¬Black(x)}.

The default conclusions in D3 go through as before. However, now if we learn that a particular raven has illness X, we would not conclude by default that the raven was not black; rather, we would still conclude that the individual was black. The reason for this is that, by our notion of relevance, illness X has no apparent connection with the colouring of ravens, even though it clearly does for bears.

Transitive relations among the defaults appear to be handled correctly. If we have:

D5 = {Quaker(x) ⇒ Pacifist(x), Pacifist(x) ⇒ Vegetarian(x)},

and we know contingently that Quaker(a) and Republican(a), then we could conclude by default that Vegetarian(a). If we were to augment D5 with either Republican(x) ⇒ ¬Pacifist(x) or Republican(x) ⇒ ¬Vegetarian(x), then in neither case could we form the default conclusion Vegetarian(a). Nor could we if ¬(Republican(x) ⇒ Pacifist(x)) were added. If, on the other hand, we have:

D6 = {Quaker(x) ⇒ Pacifist(x), Pacifist(x) ⇒ Vegetarian(x), (Pacifist(x) ∧ Republican(x)) ⇒ ¬Vegetarian(x)},

and we know that Quaker(a), then we could conclude Vegetarian(a). If we knew that Quaker(a) and Republican(a), then we could conclude that Pacifist(a). However, since we have that pacifists are normally vegetarian, but that republican pacifists are normally not vegetarian, we would conclude ¬Vegetarian(a).

7. Discussion

The logic N, together with the approaches described in this paper, provides a basis for representing, and reasoning about, default statements, and for performing default inferencing. Arguably, the properties of the logic conform to commonsense intuitions concerning default statements. Arguably also, the logic is more appropriate for representing information about defaults than default logics or non-monotonic logics, in that its semantics does not rest on the notion of consistency with a given set of assertions. Thus the relation between ravens and blackness, whatever it may be, is phrased independently of any particular set of beliefs. Conditional logics closely related to N have also been proposed for counterfactuals [Lewis 73], for subjunctives, and for notions of obligation [van Fraassen 72].
7. Discussion

The logic N, together with the approaches described in this paper, provides a basis for representing, and reasoning about, default statements, and for performing default inferencing. Arguably, the properties of the logic conform to commonsense intuitions concerning default statements. Arguably also, the logic is more appropriate for representing information about defaults than default logics or non-monotonic logics, in that its semantics does not rest on the notion of consistency with a given set of assertions. Thus the relation between ravens and blackness … obligation [van Fraassen 72]. The techniques presented herein should then, with simple modification, be applicable to default inferences concerning counterfactuals, subjunctives, and notions of obligation. Thus, as an example, if we had the statements "if John comes, it will be a good party" and "if John and Sue come, it will be a dull party" represented in one of the logics of [Lewis 73], and if we also knew that only John would be going to the party, then using techniques similar to those of this paper it should be possible to formalise the reasoning that would let us conclude that (likely) it will be a good party.

Acknowledgements

I would like to thank Robert Hadley and Sharon Hamilton for their very helpful comments on an earlier draft of this paper. This research was supported by the Natural Science and Engineering Research Council of Canada grant A0884.

References

[1] R.J. Brachman, "'I Lied About the Trees' or Defaults and Definitions in Knowledge Representation", The AI Magazine 6(3), 1985, pp. 80-93.
[2] B.F. Chellas, "Basic Conditional Logic", Journal of Philosophical Logic 4, 1975, pp. 133-153.
[3] M. Davis, "The Mathematics of Non-Monotonic Reasoning", Artificial Intelligence 13, 1980, pp. 73-80.
[4] J.P. Delgrande, "A Propositional Logic for Natural Kinds", Proc. AI-86, Canadian Society for Computational Studies of Intelligence Conference, May 1986.
[5] J.P. Delgrande, "A First-Order Logic for Prototypical Properties", Artificial Intelligence (to appear), 1987.
[6] J.P. Delgrande, "A Formal Approach to Default Reasoning Based on a Conditional Logic (Extended Report)", LCCR TR 87-9, School of Computing Science, Simon Fraser University, 1987.
[7] G.E. Hughes and M.J. Cresswell, An Introduction to Modal Logic, Methuen and Co. Ltd., 1968.
[8] G.E. Hughes and M.J. Cresswell, A Companion to Modal Logic, Methuen and Co. Ltd., 1984.
[9] D. Lewis, Counterfactuals, Harvard University Press, 1973.
[10] J. McCarthy, "Circumscription - A Form of Non-Monotonic Reasoning", Artificial Intelligence 13, 1980, pp. 27-39.
[11] D. McDermott and J. Doyle, "Non-Monotonic Logic I", Artificial Intelligence 13, 1980, pp. 41-72.
[12] D. Nute, Topics in Conditional Logic, Philosophical Studies Series in Philosophy, Volume 20, D. Reidel Pub. Co., 1980.
[13] D. Nute, "A Non-Monotonic Logic Based on Conditional Logic", ACMC Research Report 01-0007, Advanced Computational Methods Center, The University of Georgia, 1986.
[14] R. Reiter, "A Logic for Default Reasoning", Artificial Intelligence 13, 1980, pp. 81-132.
[15] E. Rosch, "Principles of Categorisation", in Cognition and Categorisation, E. Rosch and B.B. Lloyd, eds., Lawrence Erlbaum Associates, 1978.
[16] B.C. van Fraassen, "The Logic of Conditional Obligation", Journal of Philosophical Logic 1, 1972, pp. 417-438.
Mark A. Derthick
Department of Computer Science
Carnegie-Mellon University

Abstract

Most of the effort AI has put into common sense reasoning has involved inference by sequential rule application. This approach is most effective in well characterized domains where any valid chain of inference from a set of observations leads to an acceptable interpretation. In more realistic cases where there are multiple consistent interpretations that are not equally good, or where there are no consistent interpretations, it seems more natural to choose the best alternative based on the interpretations themselves rather than the chains of inference used to derive them. μKLONE is a connectionist network which uses simulated annealing to search the space of interpretations, or models. Inconsistent theories lead to generation of models which come as close as possible to satisfying all of the axioms, so counterfactual reasoning can be accomplished by the same mechanism as factual reasoning. An example involving conflicting information is presented for which μKLONE finds an intuitively plausible interpretation.

The model based approach described below embodies three key ideas taken from other work on common sense reasoning. In a possible worlds semantics, a counterfactual implication A > B is true if B holds in the most plausible world where A is true [Ginsberg, 1986]. The hard part of reasoning this way is finding the appropriate world. This is the task of constructing a vivid knowledge base examined by [Levesque, 1986], which suggests using defaults and other heuristics. [Johnson-Laird, 1983] also finds one or a few models of a scenario, and improves the efficiency of the search by using models whose structure is analogous to the problem domain. Such models are called direct [Hayes, 1985]. For a system that can find plausible models, this kind of reasoning is easier than ordinary implication, which would require checking whether B holds in all consistent models where A does. μKLONE finds plausible analog models as determined by a continuous evaluation function which maximizes the number of assertions in the knowledge base (KB) that hold in the model. All assertions can therefore be treated as defaults.

Expert systems such as ISIS [Fox, 1983] also use real valued constraints to guide the search for a good solution, but the search is over paths to solutions rather than the solutions themselves. The disadvantage is that the search control knowledge does not give a process independent semantics for characterizing the correct solution.

Previous spreading activation models have been less expressive than μKLONE. In finding similarities between words, the algorithm of [Quillian, 1968] spreads activation along all types of links identically. [Shastri, 1985] treats concepts as atomic propositions rather than predicates, and can simultaneously consider only a single token of any type.

The remainder of this paper presents an example of counterfactual reasoning, describes how it can be accomplished within the model based framework, and gives a detailed explanation of how this kind of reasoning can be implemented on a connectionist architecture.

At a Newport bar, June meets Ted, who is dressed like a sailor. Ted is excited about the approaching television season, and tells June how the schedule reflects the evolution of TV programming. June concludes that Ted is a sailor, and that he must spend a lot of time becalmed to be so interested in television.
The next week she sees Ted's picture in the newspaper with the caption "Millionaire Playboy Ted Turner." June concludes that sailing must be only a hobby of Ted's, since millionaires don't have manual labor jobs but often have ostentatious pastimes. They also are unlikely to spend all day watching TV, so perhaps Ted has a job as a high level television executive.

Appendix I contains the full μKLONE description used to approximate June's beliefs before seeing the newspaper. In the knowledge base Ted is asserted to be a (professional) sailor, and it is asserted that one of Ted's interests is a television-related activity. Sailors are defined to be people one of whose jobs is sailing. Millionaire-playboys are defined to be people who have an expensive hobby; it is asserted that all of their jobs are armchair-activities, and they must have at least one job.

The following is an abstract description of the way the final model is chosen by μKLONE. The description is abstract because it refers to high level rule-like causal relations which do not correspond in a simple way to the changing relations among unit states, determined by the simulated annealing search algorithm [Smolensky, 1986].

When a μKLONE network constructed from the knowledge base is asked "If Ted were a millionaire-playboy, what would his job and hobby be?"¹ the system must try to reconcile being a millionaire-playboy with its previous knowledge about Ted, that he is a sailor and is interested in TV. The counterfactual premise conflicts with the knowledge base because sailing is a vigorous-activity, and the jobs of millionaire-playboys must be armchair-activities. The initial impact of this conflict on the selection of a model is that sailing is likely to still be one of Ted's interests, but perhaps not his job. Since millionaire-playboys must have expensive hobbies and only two activities known to require expensive equipment are in the KB, flying and sailing are the most likely candidates. Sailing is chosen because it is already thought to be an interest. The plausible substitution that sailing is Ted's job rather than his hobby is made because HAS-JOB and HAS-HOBBY are both subsumed by HAS-INTEREST, making it relatively easy to slip between them.

Millionaire-playboys must have a job that is an armchair-activity and a profitable-activity. Both TV-network-management and Corporate-Raiding fit this category, but the former is chosen because it is known that Ted is interested in television. TV-acting is rejected because it is not an armchair-activity, and TV-watching is rejected because it is not a profitable-activity.

If the knowledge base did not specify that millionaire-playboys had expensive hobbies, the bias towards having sailing as an interest would not be sufficient for its being picked out as a hobby. Similarly, if millionaire-playboys did not have to have jobs, none would be picked out. And if the query had been simply "What are Ted's job and hobby?" no contradictory information would have been introduced. The answer, that sailing is Ted's job and he has no hobbies, would be constructed from knowledge in the KB alone.

μKLONE answers wh- questions of the form A > B(x). A is a set of propositions and B(x) is a set of proposition templates in which either predicate symbols or individual constants are left out, to be filled in by the system.
The system searches for the most plausible model in which A holds, and answers by filling in the missing predicates and individuals in B(x). Since the response is filling in rather than assenting, the yes/no questions of [Ginsberg, 19861 must be recast into wh- form: ‘“If Ted were a millionaire-playboy, what would his job be?” rather than ‘“If Ted were a millionaire-playboy, would he be a sailor?’ ,&LONE is able to make very fine grained distinc- tion tween models because the definitions and assertions in the are decomposed into many constraints among micro- features [Hinton, 19811. Axioms which mention defined pred- icates are expanded by replacing the predicate with its defini- tion, and all axioms with conjunctions on the right hand side are broken up into multiple axioms. For instance, the similarity of HAS-JOB and HAS-HOBBY necessary for answering the exam- ple query is evidenced in the micro-features they both excite: <domain animal>, <range activity> and primitive class has- interest>. In addition, HAS-JOB has the micro-features <range proJitable-activity> and <primitive class has-job> while HAS- HOBBY has the micro-feature qkmitive elms has-hobby>. Because defined concepts are expanded out before the connectionist network is built, definitional knowledge is not represented explicitly. Instead, it is represented directly, in the relationships between patterns. Direct representations [Hayes, 19851 have properties isomorphic to formal properties of the entities they represent. PICLONE directly represents explicitly defined subsumption relations among both concepts and roles, and subset relations among sets of role fillers (see section IV I%.). Thus it is impossible for a @LONE network to represent that, for instance, Ted is a MILLIONAIRE-PLAYBOY but not a PERSON. In addition to making definitions (as distinct from assertions) non-defeasible, it improves the efficiency of the system because certain contradictory models are eliminated from the search space. The query language is highly constrained in that all pred- icates in both the premise and the consequent must be about the. same individual (Ted in the example). This way inhibition can be hard-wired between, for instance, the value restriction that all jobs be armchair activities, that sailing is a job, and that sailing is a vigorous activity. This would be inappro- priate if the restriction applied to one individual, but another was the sailor. &u-d-wiring units to enforce very specific con- straints produces a simple network topology, maximizes the independence of the units, and increases the effectiveness of parallelism. The constraints implement the model evaluation function, and are of two main types: each axiom has some associated cost for violation; and there is a penalty for including tuples in the extension of a predicate. Each axiom or tuple contributes additively to the evaluation function independently of which other axioms hold. Using this cost function, I.LKLONE’S sim- ulated annealing search algorithm generally finds a good ap- proximate solution early, which it then refines. In simulated annealing, all constraints are continually considered and con- tribute in accordance with their strength. The importance of satisfying constraints of any given strength is gradually raised. This way, more important constraints are generally satisfied first, except when contradicted by a number of less important ones. An advantage of the annealing search is that models are not evaluated entirely in isolation. 
At a given moment, the state of the system may represent a superposition of models. To the extent that two models overlap, they reinforce one an- other, so that models incorporating propositions which hold in the greatest number of competing interpretations are pre- ferred. Even if the “wrong” model is chosen, this maximizes the probability that individual beliefs are correct. This heuristic is necessary for correctly answering the example query. Mod- els in which flying is Ted’s interest are evaluated as plausible only when this is necessary to fulfill the hobby requirement of playboys. There are two groups of plausible models in which Ted’s interest is sailing: those in which it fulfills the hobby 1 ,LXLQIWE uses a formal query language, but English paraphrases are used in this paper. The formal version of this query is given in Appendix I. Detihick 347 requirement of playboys and those in which it fulfills the job requirement of sailors. Therefore, more minimal models in which sailing is Ted’s interest are plausible. This biases the evaluation toward Ted’s hobby being sailing. Unfortunately there is no process independent semantics to model the effect of this heuristic. 11 System Implementation A. Architecture A $CLONE KB is compiled into a connectionist network of very simple processors. Each processor has an graded activity level in [0, l] which it asynchronously updates based on the activity levels of its neighbors. Vectors of activity levels have meaningful interpretations in terms of concepts, roles, and in- dividuals. The architecture of the network supports direct rep- resentations of an individual and a set of other individuals to which it is related, together with the relations involved. There are five important modules: The subject module represents the individual the query is about; the subject-type module represents the subject’s type, which is a concept; the role- fillers module represents the set of individual/role pairs for all individuals directly related to the subject; the role-filler- type-restrictions module represents the set of value restric- tions imposed on role fillers by the concept represented in the subject-type module; the role-filler-types module represents the type of each individual in the role-fillers module, whether or not it is currently filling any roles. The communication path- ways between modules are shown in figure 1. The constraints among the role-filler-types, role-fillers, and role-filler-type- restrictions modules are too complex to be captured directly with pairwise links, so there is another module to mediate the interaction.2 There are also modules which do not par- ticipate in the direct representation, but serve only for input and output. The shape of the modules in figure 1 is a clue to what is represented: the modules with a single row of units represent a single entity, either a concept in the subject-type module, or an individual in the subject module. The mod- ules with multiple rows represent a set of pairs of entities: the role-fillers module represents sets of individual/role pairs; the role-filler-type-restrictions module represents sets of con- cept/role pairs; the role-filler-types module represents sets of individual/concept pairs. B. Representations Sets of Individuals Each individual maps to a unique bit in patterns representing sets of individuals. For the example domain there are seven individuals, so the patterns have seven bits. Set containment, an important relation in PKLONE, is represented directly. 
In figure 2 it is evident that {TV-Watching Flying Sailing} contains {TV-Watching Flying} because the pattern for the former contains that for the latter. 21 plan to use a more powerful connectionist model in the future which will not require this module. Role micro-features domain ACTIVITY domain ANIMAL range INANIMATE-OBJECT range PROFITABLE-ACTIVITY range ACTIVITY range EXPENSIVE-ACTIVITY range EXPENSIVE-ITEM range TV-RELATED-ACTIVITY P rimitive class HAS-HOBBY prim tive class HAS-EQUIPMENT primitive class HAS-JOB primitive class HAS-INTEREST 1 SUBJECT-TYPE C----ISUEUECTI ROLE-FILLER- RESTRICTIONS Individuals Ted , / , Figure 1: pKLONE has five important modules. Those modules which directly constrain one another are connected. The meaning of each unit can be deduced from the printed descriptions. For the subject and subject-type modules, the meaning is the description above the unit. For the other modules, the meaning involves the conjunction of the descriptions in the unit’s row and column. For instance, the top left unit in the role-fillers module means that Ted is filling a role whose domain is ACTIVITY. Roles The KB is examined to find all the properties which are either used to define a role or asserted to be true of one. Each role corresponds to a subset of these properties, or micro- features. Role patterns have one bit for each micro-feature. For the example domain there are 12 role micro-features, among which are <range inanimate-object> and <primitive class has-interest>. For roles, the relation of subsumption is directly repre- sented as the relation of set containment of patterns. For ex- ample, the pattern representing the role HAS-HOBBY is the set of micro-features {<primitive class has-hobby>) while HAS- EXPENSIVE-HOBBY has the micro-features {<primitive class has-hobby> <range expensive-activity>). HAS-HOBBY sub- sumes HAS-EXPENSIVE-HOBBY, and its pattern is contained by the pattern for HAS-EXPENSIVE-HOBBY, as illustrated in figure 2. [Hinton, 19811 originated this technique, which results in very efficient representations. Information relevant to a micro- feature need only be attached locally to a single unit, and yet it is used by all concepts having the property associated with the micro-feature. This avoids the dilemma facing conventional semantic network implementations: either cache information together with each concept that needs it, requiring duplication of information, or search up the inheritance hierarchy each time the information is needed. 348 Default Reasoning Pattern f Figure 2: Two examples illustrating how pairs of patterns are conjunctively coded. The first example (black) combines the two bit pattern for {TV-Watching Flying} with the one bit pattern for HAS-HOBBY, producing a 2x 1 bit pattern in the role-fillers module representing the fact that {TV-Watching flying} is the set of fillers of the HAS-HOBBY role. The second example (gray) combines the three bit pattern for {TV-Watching Rying Sailing} with the two bit pattern for HAS-EXPENSIVE-HOBBY, producing a six bit pattern in the role-fillers module. Since the former value permission necessarily follows from the latter, its two bit pattern is contained by the six bit pattern for the latter. Pair Representations Figure 2 illustrates the technique used for representing pairs of patterns. The size of the module required to represent a pair of entities is the product of the pattern lengths of the two entities. 
To represent the pairing AB, for each i and j the unit at coordinates i, j is turned on if and only if the ith bit in the pattern for A is on and the jth bit in the pattern for B is on. To store multiple pairs, the patterns for each pair are superimposed. This is a variation on the technique of cuarse coding [Hinton et al., 19861. The direct relation between subsumption and set contain- ment of patterns carries over to the modules representing pairs as well. Figure 2 illustrates this for the the role-fillers mod- ule. If {TV-watching Flying Sailing} is the set of fillers of the HAS-EXPENSIVE-HOBBY role, it automatically becomes the case that, for example, each of {TV-watching Flying} is filling the HAS-HOBBY role. This works because the implications of set A filling role R involve subsets of A and subsumers of R, either of which have patterns with fewer units on. There are five types of constraints that must hold between mod- ules (see figure l), and five more types that must hold within modules. One illustrative constraint of each type is described here. See [Derthick, 19871 for the complete description. ConstraiiTnts ithin Subject-Type odule These con- straints ensure that a coherent concept is represented. If the pattern for a concept is present, then the patterns for all con- cepts asserted to subsume it must be present, and the patterns for all concepts asserted to be disjoint from it must be ab- sent. This can usually be done with pairwise links among the units in a group. For example, an inhibitory link between the <primitive chss animal> and <primitive class activity> micro-features expresses that ANI~MAL and ACTIVITY are dis- joint. When more than one bit is required for discrimination, extra units are created to express the constraint. With the more powerful connectionist architecture mentioned in section A., these extra units will not be required. s, ~~le-~~l~er-Ty~e~~ and d&s These constraints ensure that all type restrictions on a role are satisfied by fillers of the role, and are the most complicated part of pKLONE. A ~Q~~-~~~e~~ty~es micro-feature represents the conjunction of an individual and a concept micro-feature. The former deter- mines the relevant column in the role-fillers module, and the latter determines the relevant column in the role-Giller-type- restrictiom module. If the individual is filling a role at least as specific as the one to which the value restiction applies, then the role-filler-types micro-feature must come on. This condition can be determined by first ORiag the two columns together, and then ANDing the resulting column. If the result is true, then the value restriction applies to this individual. When the network building algorithm is given the KB of Ap- pendix I, a Hopfield and Tank network [Hopfield, 19841 with 2531 units and 16,959 connections results. Empirically it was found that an annealing schedule exponentially increasing the gain for 500 time steps was sufficient for answering the queries mentioned above. (One time step involves updating the state of each unit.) This takes about ten minutes of CPU time on a Symbolics 360. The number of units scales as the third power of the size of the knowledge base, and the number of links scales as the fourth power. With the envisioned, more powerful connection- ist model, this will be reduced to the second power and third power, respectively. 
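The conjunctive pair coding described above is easy to make concrete. The following is an illustrative sketch only, not the paper's implementation: the particular micro-feature assignments are invented, loosely following the Figure 2 example, and only the outer-product coding and OR-superposition themselves follow the text.

```python
# Sketch of conjunctive (outer-product) coding of entity/role pairs,
# with superposition of several pairs by elementwise OR.

def outer_and(a, b):
    """Unit (i, j) is on iff bit i of `a` and bit j of `b` are on."""
    return [[ai & bj for bj in b] for ai in a]

def superimpose(m1, m2):
    """Store several pairs in one module by OR-ing their patterns."""
    return [[x | y for x, y in zip(r1, r2)] for r1, r2 in zip(m1, m2)]

# Individual bits: [TV-Watching, Flying, Sailing]
TVW_FLY      = [1, 1, 0]
TVW_FLY_SAIL = [1, 1, 1]
# Role micro-features: [primitive class has-hobby, range expensive-activity]
HAS_HOBBY           = [1, 0]
HAS_EXPENSIVE_HOBBY = [1, 1]      # contains the HAS_HOBBY pattern

pair1 = outer_and(TVW_FLY, HAS_HOBBY)                   # 2 units on
pair2 = outer_and(TVW_FLY_SAIL, HAS_EXPENSIVE_HOBBY)    # 6 units on
module = superimpose(pair1, pair2)

# Subsumption appears directly as containment of unit sets: every unit
# on in pair1 is also on in pair2, so asserting the more specific
# pairing automatically asserts the more general one.
print(all(u1 <= u2 for r1, r2 in zip(pair1, pair2)
                   for u1, u2 in zip(r1, r2)))   # True
print(module == pair2)                           # True, since pair1 is contained
```

Containment is preserved by the outer product because a bitwise-contained individual pattern combined with a bitwise-contained role pattern can only turn on a subset of the units.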
The only known theoretical bound on the number of time steps in the annealing schedule required for good performance is exponential, however if pKLONE gener- ally produces networks with smooth energy surfaces the results may be much better. Only two KBs have been compared to date: a 34% increase in KB size required a 16% increase in the annealing schedule length. Derthick 349 v. cess A more detailed description of the inference process outlined in section II. can now be given. The input/output modules ex- cite the Ted unit in the subject module, which in turn excites SAILOR in the subject-type module. Meanwhile, MILLIONAIRE- PLAYBOY also receives external excitation. Although incom- patible, these concepts were not explicitly made disjoint in the KB, and so no links were built within the subject-type module to inhibit the combination of the two patterns. In @LONE, re- lationships between concepts are maintained indirectly through the effect of each on the model anyway, so there is no need to precompute them. The SAILOR pattern contains the <permission has-job sailing> micro-feature, which excites the pattern for HAS- JOB in the sailing column of the role-fillers module. The MILLIONAIRE-PLAYBOY pattern contains micro-features for <minimum has-expensive-hobby I > and <restriction has-job annchair-activity>. The latter excites a pattern in the role- filler-type-restrictions module. At this point, the micro- feature in the role-filler-types module for <sailing is an armchair-activity> has a problem. On the one hand, sail- ing is known to be a VIGOROUS-A~TVITY, so the <sailing is a vigorous-activity> micro-feature in the role-filler-types module has a positive bias and is active. Since VIGOROUS- ACTMTY and ARMCHAIR-ACTMTY ate disjoint, the <sailing is an armchair-activity> micro-feature is inhibited. But the relevant columns in the role-fillers module and the role-filler- type-restrictions module indicate that sailing must indeed be an armchair activity. There is no way to satisfy all the constraints simultane- ously. The system’s choice depends on the relative strengths of the constraints, which are free parameters chosen by the ex- perimenter. Logically they are part of the MB, but as of now they are constants hidden in Lisp code. In a connectionist sys- tem these strengths can, in principle, be learned automatically. I have adjusted the strengths of the links so the constraint from the <permission has-job sailing> micro-feature in the subject- type module to the “sailing is a HAS-JOB" pattern in the role- fillers module is weakest. Therefore, the pattern for has-job in the sailing column of the role-fillers module is not sustain- able. From this point, the choice of Sailing as Ted’s hobby and TV-Network-Management as his job result from the similarity of their patterns, independent of any constraint strengths. The unit that differentiates HAS-JOB from HAS-INTEREST is forced off, but the remaining activation of the “sailing is a HAS- INTEXFlST" pattern leads to the eventual choice of sailing as Ted’s hobby. Space limitations prevent a detailed description of the selection of TV-Network-Management as Ted’s job. This paper introduced a novel semantics for question answer- ing based on finding an explicit partial model which plau- sibly reconciles long term knowledge with situation specific information. The Ted Turner example demonstrates that this method is effective for a non-trivial problem involving coun- ter-factual reasoning. 
Finding a plausible model is well suited to parallel constraint satisfaction using a special purpose ar- chitecture. The structure of the solution is constant so the models can take advantage of direct representations to reduce the search space. Explicitly represented constraints contribute independently to the evaluation function, so parallelism can be used effectively with units connected heterogeneously to en- force particular constraints. Representing concepts and roles as sets of micro-features results in many more constraints in the connectionist network than there are statements in the KB. This, along with the continuous activation levels of units, in- creases the smoothness of the evaluation function so that sim- ulated annealing is a good search technique. Future work will examine: knowledge bases of many sizes to better determine empirically how the search time scales; learning as an alternative to setting weights by hand; and the possibility of giving a semantics to the weights in terms of probabilities of models. cknowledgments Geoff Hinton and Dave Touretzky have been very helpful with the design of pKLONE and the preparation of this paper. Dis- cussions with Ron Brachman resulted in a more coherent KB language. I thank Oren Etzioni, Craig Knoblock, David Plaut, Roni Rosenfeld, David Steier, and the anonymous referees for providing useful comments. This research is supported by NSF grants IST-8520359 and IST-8516330, and an ONR Graduate Fellowship. The following input was used by the network building algo- rithm to produce a Hopfield and Tank network for answer- ing queries. The syntax derives from that of KL2’s defini- tion language [Vilain, 19851. Three ontological categories are used: concepts are classes of individuals. Roles are classes of two-place relations between individuals. DEFCONCEPT and DEFRQLE statements normally give necessary and sufficient conditions for determining whether an individual instantiates a concept or whether an ordered pair of individuals instantiates a role. Alternatively, if the language is not powerful enough to provide sufficient conditions for recognizing membership, a concept or role can be defined to be primitive. In this case, the extension of the concept or role must be explicitly de- clared using INSTANTIATE-CONCEPT or INSTANTIATE- ROLE statements. Conditions which necessarily hold of in- stances of concepts or roles, but are not part of the recognition criteria are asserted with ASSERT-CONCEPT or ASSERT- ROLE statements. (DEFCONCEPT Aniial (PIUMITIVE)) ;ANIMAL is a natural kid - you can’t defiue it (DEFCONCEPT Person (PRIMITIVE)) (ASSERT-CONCEPT Person (SPECIALIZES Animal)) ;PERSONS always turn out to be ANIMALS (DEFCONCEPT Millionaire-Playboy (SPECIAIJZES Person) (SOME Has-Hobby Activity-Requiring-Expensive-Equipment)) ;a PLAYBOY must have some HOBBY 350 Default Reasoning ;whichis ~~ACT~TY-REQ~JRING-EXPENSIVE-EQUIPMENT (ASSERT-CONCEPT Millionaire-Playboy (MIN Has-Job 1) ;a PLAYBOY must have a JOB (RESTRICTION Has-Job Armchair-Activity)) ;a PLAYBOY'sJOBs must be ARMCHAIR-ACTIVMYS (DEFCONCEPT Sailor (SPECIALIZES Person) (PERMISSION Has-Job Sailing)) ;sailing must be one of a SAILOR'S JOBS (DEFCONCEPT TV-Buff (SOME Has-Interest Television-Related-Activity)) (ASSERT-CONCEPT. 
TV-Buff (SPECIALIZES Person)) (DEFCONCEPT Activity (PRIMITIVE)) (ASSERT-CONCEPT Activity (DISJOINT Inanimate-Object) (DISJOINT Auimal)) (DEFCONCEPT Activity-Requiring-Expensive-Equipment (SPECIALIZES Activity) (SOME Has-Equipment Expensive-Item)) (DEFCONCEPT Armchair-Activity (PRIMITIVE)) (ASSERT-CONCEPT Armchair-Activity (SPECIALIZES Activity)) (DEFCONCEPT Vigorous-Activity (SPECIALIZES Activity) (DISJOINT Armchair-Activity)) (DEFCONCEPT Profitable-Activity (PRIMITIVE)) (ASSERT-CONCEPT Profitable-Activity (SPECIALIZES Activity)) (DEFCONCEPT IJnProfitable-Activity (DISJOINT Profitable-Activity) (SPECIALIZES Activity)) (DEFCONCEPT Television-Related-Activity (PRIMITIVE)) (ASSERT-CONCEPT Television-Related-Activity (SPECIAL~S Activity)) (DEFCONCEPI’ Inanimate-Object (PRIMITIVE)) (ASSERT-CONCEPT Inanimate-Object (DISJOINT Animal)) (DEFCONCEPT Expensive-Item (PRIMITIVE)) (ASSERT-CONCEPT Expensive-Item (SPECIALIZES Inanimate-Object)) (DEFROLE Has-Interest (PRIMITIVE)) (ASSERT-ROLE Has-Interest (DOMAIN Animal) ;ody ANIMALS canhave INTERESTS (RAN6E Activity)) ;O~~ACTIVITYSC~~ be INTERESTS (DEFROLE Has-Job (PRIMITIVE)) (ASSERT-ROLE Has-Job (SPECIALIZES Has-Interest) (RANGE Profitable-Activity)) (DEFROLE Has-Hobby (PRIMITIVE)) (ASSERT-ROLE Has-Hobby (SPECIALIZES Has-Interest) (DISJOINT Has-Job)) (DEFROLE Has-Equipment (PRIMHIVE)) (ASSERT-ROLE Has-Equipment (DOMAIN Activity) (RANGE Inanimate-Object)) (INSTANTIATE-CONCEPT (Activity-Rcquiring- Expensive-Equipment Vigorous-Activity) Sailing) (INSTANTIATE-CONC Activity-Requiting-Expensive-Equipment Flying) (INSTANTIATE-CONCEPT (Profitable-Activity Armchair-Activity) Corporate-Raiding) (INSTANTIATE-CONCEPT (Television-Related-Activity Armchair-Activity UuProfitable-Activity) TV-Watching) (INSTANTIATE-CONCEPT (Television-Related-Activity Vigorous-Activity Profitable-Activity) TV-Acting) (INSTANTIATE-CONCEPT (Television-Related-Activity Armchair-Activity Profitable-Activity) TV-Network-Management) (INSTANTIATE-CONCEPT (Sailor TV-Buff) Ted) The query discussed in the paper, “If Ted were a millionaire-playboy, what would his job and hobby be?” is written: ((SUBJFET Ted) (SUBJECT-TYPE IvIillionaire-Playboy) (WITH (ROLE Has-Hobby) (FILLERS ?)) (WITH (ROLE Has-Job) (FILLERS ?))) [Derthick, 1987-J Mark A. Derthick. A Model Based Approach to Knowledge Representation and Reasoning. Technical Report, CMU, Pittsburgh, PA, 1987. Forthcoming. [Fox, 19831 Mark Fox. Constraint-directed search: a case study of job-shop scheduling. PhD thesis, CMU, 1983. [Ginsberg, 19861 M. Ginsberg. Counterfactuals. Artijkial In- telligence, 30:35-79, 1986. [Hayes, 19851 l? J. Hayes. Some problems and non-problems in representation theory. In Ronald J. Brachman and Hec- tor J. Levesque, editors, Readings in Knowledge Repre- sentation, Morgan Kaufmann, 1985. [Hi&on, 19811 G. E. Hinton. Implementing semantic net- works in parallel hardware. In Parallel Models of As- sociative Memory, Erlbaum, Hillsdale, NJ, 1981. [Hinton et al., 19861 G. E. Hinton, J. L. McClelland, and D. E. Rumelhart. Distributed representations. In ParaZZeZ distributed processing: Explorations in the microstruc- ture of cognition., Bradford Books, Cambridge, MA, 1986. [Hopfield, 19841 J. J. Hopfield. Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences U.S.A., 81:3088-x)92, May 1984. [Johnson-Lair-d, 19831 Philip N. Johnson-Laird. Mental Mod- els. Harvard University Press, 1983. [Levesque, 19861 Hector J. Levesque. 
Making believers out of computers. Artificial Intelligence, 30:81-108, 1986.

[Quillian, 1968] M. R. Quillian. Semantic memory. In M. Minsky, editor, Semantic Information Processing, MIT Press, Cambridge, Mass., 1968.

[Shastri, 1985] Lokendra Shastri. Evidential Reasoning in Semantic Networks: A Formal Theory and its Parallel Implementation. PhD thesis, University of Rochester, September 1985. Available as TR 166.

[Smolensky, 1986] P. Smolensky. Foundations of harmony theory: cognitive dynamical systems and the subsymbolic theory of information processing. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Bradford Books, Cambridge, MA, 1986.

[Vilain, 1985] M. B. Vilain. The restricted language architecture of a hybrid representation system. In IJCAI-85, Morgan Kaufmann, August 1985.
More On Inheritance Hierarchies with Exceptions: Default Theories and Inferential Distance

David W. Etherington¹
Artificial Intelligence Principles Research Department
AT&T Bell Laboratories
600 Mountain Avenue
Murray Hill, NJ 07974-2070
ether%allegra@btl.csnet

Abstract

In Artificial Intelligence, well-understood reasoning systems and tractable reasoning systems have often seemed mutually exclusive. This has been exemplified by nonmonotonic reasoning formalisms and inheritance-with-exceptions reasoners. These have epitomized the two extremes: the former not even semidecidable, the latter completely ad hoc. We previously presented a formal mechanism for specifying inheritance systems, and minimal criteria for acceptable inheritance reasoning. This left open the problem of realizing an acceptable reasoner. Since then, Touretzky has developed a reasoner that appears to meet our criteria. We show that his reasoner is formally adequate, and explore some of the implications of this result vis-à-vis the study of nonmonotonic reasoning.

1. Introduction

Nonmonotonic reasoning formalisms have been the subject of much interest lately (cf. [AI 1980], [AAAI 1986]). They provide principles for representing and reasoning with rules that generally hold but are subject to exceptions. Although the ability to reason with such rules appears to be a central facet of intelligence, the formalisms developed to this point have been intractable. For example, in the general case default logic is not even semidecidable.

Motivated by a need to build systems with good computational properties, many researchers have sacrificed formal precision. Faced with the worst-case intractability of formal systems, they have despaired of formalism altogether. While this has sometimes led to very fast "inference" mechanisms, there has often been little more than vague intuitions about exactly what these mechanisms infer. A canonical example of this has been the use of inheritance reasoning in AI systems.

Inheritance reasoners represent a system's knowledge as a connected set of nodes. The nodes represent classes and/or individuals, with associated sets of properties. The connections indicate the flow (or inheritance) of properties from "more general" to "less general" nodes. Such systems frequently make provision for exceptions to inheritance, allowing "peculiar" individuals or classes to preempt the normal flow of properties.

In the absence of adequate semantic characterizations of inheritance systems, correct inference has typically been defined (to the extent it has been defined at all) in terms of intuitions and the behaviour of particular systems. This has led to anomalous results, including mismatches between intuition and system performance (see [Etherington 1987b] or [Touretzky 1986] for examples).

In earlier work [Etherington & Reiter 1983; Etherington 1987b], we presented a formal mechanism for specifying inheritance systems, and minimal criteria for acceptable inheritance reasoning. This left open the problem of realizing an acceptable reasoner. Since then, Touretzky has developed a reasoner that appears to meet our criteria. We show that his reasoner is formally adequate, and explore some of the implications of this result vis-à-vis the study of nonmonotonic reasoning.

¹ Parts of this work were done at the University of British Columbia, and supported in part by an I.W. Killam Predoctoral Scholarship and by NSERC grant A7642.
2. The Inheritance Language

For the purposes of this paper, we adopt Touretzky's [1986] network representation, which differs from that in [Etherington 1987b].² This representation has four link types, shown in Figure 1. Each link has one interpretation if it originates from an individual-node, and another if from a class-node. Relational links have a third interpretation when they connect two individual-nodes. (We use upper and lower case letters for classes and individuals, respectively.)

² Specifically, strict links and exception links are not treated, and relational links have been added.

3. The Inferential Distance Algorithm

Example 1

We can illustrate IS-A and ISN'T-A links with an example from [Fahlman et al. 1981]. Consider the following facts about invertebrates:

Molluscs are normally shell-bearers. Cephalopods are Molluscs, but normally are not shell-bearers. Nautili are Cephalopods and are shell-bearers. Fred is a Nautilus.

Our network representation of these facts is given in Figure 2.

Figure 2 - Network representing facts about Molluscs.

Given the intuitive link definitions, above, one can see the correspondence between the facts contained in the English description and the links in Figure 2. What remains to be done is to describe how such structures can be used to retrieve the information that common sense suggests is contained in the example. For example, are Cephalopods Shell-bearers?

To show how the information in Example 1 is actually encoded in Figure 2, we must briefly describe how conflicting inheritance is resolved. Looking at Figure 2, Cephalopods could be Shell-bearers by virtue of the chain of IS-A links through Mollusc. On the other hand, they might not be, because of the ISN'T-A link. The usual approach decides conflicting inheritance by choosing the value most closely connected with the node in question, but this "shortest-path heuristic" can lead to anomalous results [Etherington 1982, 1987b; Touretzky 1986].

A better approach is the inferential distance algorithm,³ which arbitrates such conflicts by appealing to a topological view of the network. This approach avoids the failings of the shortest-path heuristic, yet remains faithful to the intuitions that make inheritance networks appealing. In particular, more specific facts prevail over those less specific. Essentially, if an individual could inherit property P because she IS-A B, and property ¬P because she IS-A C, then the ambiguity is resolved by considering relationships between B's and C's. If C IS-A B and not vice versa, ¬P is inherited; otherwise, if B IS-A C and not vice versa, P is inherited; otherwise, neither is inherited.

As an illustration, consider the network of Figure 2. Because Nautilus is a subclass of Cephalopod, which is a subclass of Mollusc, inferential distance gives the desired results: Nautili, such as Fred, are Shell-bearers, while Cephalopods not known to be Nautili are not. In the network of Figure 3, however, neither Republican nor Quaker is a subclass of the other, so inferential distance sanctions no conclusions about whether Nixon is a Pacifist.

³ Due to space limitations, we can only present an oversimplified approximation of the algorithm. The interested reader is referred to Touretzky's [1986] dissertation.

(1) IS-A:
A → B: Normally A's are B's, but there may be exceptions.
a → B: The individual a belongs to the class B.

(2) ISN'T-A:
A ↛ B: Normally A's are not B's.
a ↛ B: a is not a B.

(3) RELATED:
A =R⇒ B: Normally A's are related by R to B's.
a =R⇒ B: Normally a is related by R to B's.
a =R⇒ b: a is related by R to b.

(4) UNRELATED:
A ≠R⇒ B: Normally A's are not related by R to B's.
a ≠R⇒ B: Normally a is not related by R to B's.
a ≠R⇒ b: a is not related by R to b.

Figure 1 - Links and informal semantics.

Figure 3 - A genuinely ambiguous inheritance net.

Touretzky [1985, 1986] also explores the applications of inferential distance to "inheritable relations". These are relations between classes and/or individuals that, like class-membership, may be inherited by subclasses/instances and may be subject to exceptions. For example, consider Example 2, whose corresponding network is shown in Figure 4.

Example 2

Citizens dislike crooks. Elected crooks are crooks. Gullible citizens are citizens. Gullible citizens don't dislike elected crooks. Dick is an elected crook. Fred is a gullible citizen.

In this example, citizens generally dislike crooks, and hence elected crooks. However, Fred, the gullible citizen, doesn't dislike Dick, the elected crook.

Figure 4 - Inheritable relations.

The inheritance mechanism for inheritable relations is similar to that for property inheritance except that, in addition to exceptions to IS-A inheritance, exceptions to relations (such as gullible citizens not disliking elected crooks) must be accounted for.

4. Default Logic

In the spirit of [Etherington & Reiter 1983], we present a translation from Touretzky's inheritance networks to default logic [Reiter 1980]. The proof-theory of default logic then provides minimal criteria that inference algorithms for inheritance systems should satisfy. Unfortunately, this presupposes a familiarity with default logic, which could not reasonably be presented to the neophyte in the space available. We can only present a sketchy refresher.

For our purposes, a (normal) default is a rule of inference of the form

α(x̄) : β(x̄) / β(x̄),

which can be interpreted as saying that if α(x̄) is known and it is consistent to believe β(x̄), then it is reasonable to assume β(x̄). The defaults can be viewed as specifying preferred ways of extending one's knowledge about the world.

5. Default Logic and Inheritance Networks

In section 3, we presented a number of links that could be used to create inheritance networks. We now interpret these, using defaults and first-order formulae, as theories of default logic. Depending on whether it originates from a class A or an individual a, an IS-A link to B is interpreted by

A(x) : B(x) / B(x)   or   B(a),

respectively. Similarly, ISN'T-A links from classes or individuals are identified, respectively, with

A(x) : ¬B(x) / ¬B(x)   or   ¬B(a).

The three forms of RELATED link for relation R (class-class, individual-class, and individual-individual) are represented, respectively, by

A(x) ∧ B(y) : R(x,y) / R(x,y),   B(y) : R(a,y) / R(a,y),   and   R(a,b).

Finally, the three forms of UNRELATED link respectively yield

A(x) ∧ B(y) : ¬R(x,y) / ¬R(x,y),   B(y) : ¬R(a,y) / ¬R(a,y),   and   ¬R(a,b).

These mappings allow Touretzky's inheritance networks to be interpreted as default theories.
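Because the mapping is a simple case analysis on link types, a network can be compiled to its default theory mechanically. The sketch below is hypothetical code, not from the paper: it covers only IS-A and ISN'T-A links, uses the Mollusc network of Example 1, and its string rendering of normal defaults (prerequisite : justification / consequent) is an assumed convention.

```python
# Sketch: compiling IS-A / ISN'T-A links into a normal default theory.
# A link is a triple (origin, kind, target); class origins yield
# defaults, individual origins yield ground first-order facts.

CLASSES = {"M", "C", "N"}          # Mollusc, Cephalopod, Nautilus

def compile_link(origin, kind, target):
    neg = "-" if kind == "isnt" else ""
    if origin in CLASSES:
        # class origin: a normal default  A(x) : [-]B(x) / [-]B(x)
        return f"{origin}(x) : {neg}{target}(x) / {neg}{target}(x)"
    return f"{neg}{target}({origin})"   # individual origin: ground fact

network = [
    ("M", "isa", "Sb"),     # Molluscs are shell-bearers
    ("C", "isa", "M"),      # Cephalopods are molluscs...
    ("C", "isnt", "Sb"),    # ...but normally not shell-bearers
    ("N", "isa", "C"),      # Nautili are cephalopods
    ("N", "isa", "Sb"),     # ...and are shell-bearers
    ("fred", "isa", "N"),   # Fred is a Nautilus
]
for link in network:
    print(compile_link(*link))
# The five class links print the defaults D of the theory shown next;
# the last line prints the ground fact making up W.
```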
For example, the default logic representation of the network in Figure 2 is:

D = { M(x) : Sb(x) / Sb(x),  C(x) : M(x) / M(x),  N(x) : C(x) / C(x),  C(x) : ¬Sb(x) / ¬Sb(x),  N(x) : Sb(x) / Sb(x) }

W = { N(fred) }

(using the obvious abbreviations).

Since the extensions of normal default theories represent orthogonal sets of beliefs that might be justified given the first-order world-description and the defaults, we clearly must require that the conclusions drawn by an inheritance reasoner lie within a single extension of the corresponding default theory. This does not provide a complete characterization of an inheritance reasoner, however, since it does not specify which extension should be chosen in cases where there is more than one. Happily, inferential distance also provides a mechanism for choosing among multiple extensions.

6. Relating Inferential Distance and Default Logic

Touretzky [1984] considers the possibility of applying the inferential distance topology to default theories, and gives some examples. The idea is that the notions which lead one link to be preferred to another in a network might also be applicable to help resolve conflicting defaults in default theories, without changing the forms of the defaults themselves (as in [Reiter and Criscuolo 1983]). It is not clear from Touretzky's presentation, however, exactly how the results of this application correspond to the results sanctioned by default logic. In this paper, we begin to explore this question.

Conceptually, the inferential distance algorithm eliminates those extensions that violate the "hierarchical" nature of the representation, then draws those conclusions that hold in the remaining extensions. This approach captures the semantic intuition that properties associated with subclasses should override those associated with superclasses, which is the fundamental raison d'être for inheritance representations. That it also avoids the pitfalls of incorrect behaviour that curse shortest-path inference algorithms is shown by the following theorem.⁴

Theorem: In the absence of "no-conclusion" links, the ground facts returned by the inferential distance algorithm lie within a single extension of the default theory that corresponds to the inheritance network in question.

⁴ The proof of the theorem is given in [Etherington 1987a].

The theorem begins to determine the connections between Touretzky's work and default logic, by showing that ground facts returned by inferential distance (e.g., "Clyde is an elephant", or "Clyde loves Fred") belong to a common extension of the corresponding default theory. However, inferential distance also sanctions normative conclusions, such as "Albino-elephants are [typically] herbivores". We have begun to explore the relationship such statements inferred under inferential distance bear to the underlying default theory, but our results are only preliminary.

Touretzky also allows "no-conclusion" links, which allow inheritance to be blocked without explicit cancellation. Default logic has no analogue for the no-conclusion link, and we have not considered them here. It appears straightforward to add a similar capacity to the logic, assuming such links prove useful. The proof of the theorem suggests that its generalization to networks with no-conclusion links vis-à-vis such an extended logic would present no problems.

Touretzky [1986] explores the properties of inferential distance inheritance reasoning in detail.
He also provides a constructive mechanism for deter- mining the ‘grounded expansions’ (analogous to exten- sions) of a network. Many of his results bear a super- ficial similarity in form and proof to the correspond- ing results for default logic. We speculate (as has Touretzkv) that this is no accident. In the next sec- tion, we suggest that the two approaches are so closely related that an inferential distance reasoner can be viewed as a restricted default logic theorem-prover. 7. Tractability As we mentioned earlier. tractabilitv is the rock on which formalism founders. Logic-based approaches in AI tend to be semi-decidable or worse, and so do not lend themselves to implementation. Conversely, informal systems often have attractive computational complexities (e.g., O(N) for inheritance in a hierar- chy). Furthermore, there has been an expectation that massively-parallel machine architectures could yield a further logarithmic improvement. Still, we argue that it is not particularly useful to do “I-don’t-know-what”, very quickly. Can principled commonsense reasoning be done quickly? The answer appears to be “yes, although perhaps not as quickly as unprincipled reasoning”. For exam- ple, one proposed parallel architecture for inheritance reasoning involves parallel marker-passing machines [Fahlman 19793. Touretzky shows that there are net- works for which parallel marker-passing algorithms cannot derive the conclusions sanctioned by the inferential distance algorithm. However, he also shows that any network can be “conditioned”, by adding logically-redundant links, in such a way that a parallel marker-passing algorithm caiz return correct results. Unfortunately, this conditioning, which must be done each time the network is modified, is expen- sive (Touretzky [1986] gives a polynomial-time algo- rithm that adds 0(IV2) links in the worst case) and is apparently not amenable to parallel marker-passing Etherington 355 implementation [Touretzky 1982: personal communica- tion; 19831. Compared with the worst-case undecidabilitv of II default theories, however, a polvnomial-time/space update algorithm and a linear-time inference algo- rithm do not seem entirelv unattractive. Given the _ theorem above, such algorithms correctly determine inheritance in the presence of exceptions. Thus, they Another discrepancy between the two approaches is that there are no strict (exception-free) links in Touretzkv’s scheme. Brachman [1985] and others have argued that this is a serious shortcoming in a knowledge representation system. We have not con- sidered the feasibility of adding such links. It is clearly possible, but we have no idea what the compu- tational cost would be. algorithm for computing with such default theories. 8. Discussion If we are suggesting that inferential distance can be used to reason with some class default theories, we should at least consider how close the relationship between the two approaches is. For example, while we showed that all conclusions reached by inferential distance lie within a single extension of the underlying default theory, those conclusions mav actuallv be more tightly constrained. In the Quakerikepublican exam- ple no conclusion is reached about Dick’s being a Paci- fist. The default theory has 2 extensions, however, one supporting each possibility. Inferential distance? in this case, returns conclusions that lie in the inter- section of the extensions. It seems that this is a general situation. 
Certain extensions are ruled out altogether (those correspond- ing to possible inferences clearly superceded by infer- ences associated with subclasses). It appears that con- clusions are returned that lie in the intersection of those extensions that reflect genuine ambiguities in the network. This remains to be proved. Since the default theories that represent hierar- chies are normal. it would seem that a result analo- gous to Reiter’s [1980] “semimonotonicity” theorem might be expected for networks. This result guaran- tees that adding new defaults to a normal default theory never causes extensions to “go away”, so a rea- soner committed to one extension may remain so com- mitted on discovering new default information. Unfortunately, this is not the case. While the set of extensions for the underlying theory does not contract, some that may have been preferred initially may not remain so given new defaults, and vice versa. This is not unexpected, given Touretzky’s observation that networks must be “reconditioned” after each update. 9. Conclusions We have explored a correspondence between default theories and inheritance networks with excep- tions. Using the proof-theory of default logic as minimum correctness criteria for inheritance- determination, we showed that Touretzky’s inferential distance algorithm is a satisfactory inheritance rea- soner. More importantly, we were able to turn our notion of satisfactory around and find that Touretzky’s algorithm provides a tractable proof-theory for certain classes of default theories. Such tractable algorithms are welcome not only for their own sake. but because they suggest that intractability may not be the inevit- able cost of formal adequacy in commonsense reason- ing. The formality/tractability controversy has long divided AI, with little communication (beyond epithets) between the camps. Recently there has been interest in exploring the terrain between the encamp- ments. Early reports, including this one, suggest that the ground is fertile. Perhaps the natives are even friendly! Acknowledgements I am grateful to David Touretzky, Raymond Reiter, and Robert Mercer. for helping to sharpen my insights into these problems. References AAAI [ 19861, Proc. American Assoc. for Artificial Iiztelligerzce-86, Philadelphia, PA, August 11-15. 406-410. AI [ 19801. Special issue on non-monotonic logic. Artifi- cial Intelligence 13, North-Holland. Brachman, R.J. [1985], “I lied about the trees. or defaults and definitions in knowledge representa- tion”. AZ Magazine 6(3), 80-93. Etherington, D.W. [ 19821, Finite Def arrlt Theories. M.Sc. thesis, Dept. Computer Science. University of British Columbia. Etherington, D.W. [1987a], Reasorhg from Incomplete hf ormation. Pitman Research Notes in Artificial Intelligence, Pitman Publishing Limited, London. 356 Default Reasoning Etherington. D.W. [ 1987b]. “Formalizing nonmono- tonic reasoning systems”. Art$iciai Irztelligerzce 31, North-Holland, 41-85. Etherington. D.W., and Reiter, R. [1983]. “On inheri- tance hierarchies with exceptions”, Proc. Anzericau Assoc. for Artificial Irltelligeuce-83. Washington. DC.. August 24-26. 104-108. Fahlman. S.E. [ 19791, NETL: A Swtem for Represejztirlg am Using Real-World Knowledge, MIT Press. Cam- bridge, Mass. Fahlman. SE., Touretzkv. D.S.. and van Roggen, W. [1981], “Cancellation in a parallel semantic net- work”, 13~0~‘. Seventh Interrlatioml Joiut Conf ererzce 01’1 Artificial Iiztelligence. Vancouver. B.C., Aug. 24-25, 257-263. Reiter, R. 
[1980], "A logic for default reasoning", Artificial Intelligence 13, North-Holland, 81-132.

Reiter, R. and Criscuolo, G. [1983], "Some representational issues in default reasoning", Int'l J. Computers and Mathematics 9, 1-13.

Touretzky, D.S. [1983], Multiple Inheritance and Exceptions, unpublished manuscript, Department of Computer Science, Carnegie-Mellon University.

Touretzky, D.S. [1984], "Implicit ordering of defaults in inheritance systems", Proc. American Assoc. for Artificial Intelligence-84, 322-325.

Touretzky, D.S. [1985], "Inheritable relations: a logical extension to inheritance hierarchies", Proc. Theoretical Approaches to Natural Language Understanding, Halifax, 28-30 May, 55-60.

Touretzky, D.S. [1986], The Mathematics of Inheritance Systems, Pitman Research Notes in Artificial Intelligence, Pitman Publishing Limited, London.
John F. Horty
Philosophy Department
University of Maryland
College Park, MD 20742

Richmond H. Thomason
Linguistics Department
University of Pittsburgh
Pittsburgh, PA 15260

David S. Touretzky
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213

Abstract: This paper describes a new approach to inheritance reasoning in semantic networks allowing for multiple inheritance with exceptions. The approach leads to a definition of inheritance that is both theoretically sound and intuitively attractive: it yields unambiguous results applied to any acyclic semantic net, and these results conform to our own intuitions in the cases in which the intuitions themselves are firm and unambiguous. Since, however, the definition provided here is based on an alternative, skeptical view of inheritance reasoning, it does not always agree with previous definitions when it is applied to nets about which our intuitions are unsettled, or in which different reasoning strategies could naturally be expected to yield distinct results.

1. Introduction

This paper describes a new approach to inheritance reasoning in semantic networks allowing for multiple inheritance with exceptions. Like the previous approaches of [Touretzky, 1986] and [Etherington, 1987], but unlike many others, such as [Roberts and Goldstein, 1977] or [Fahlman, 1979], the approach presented here leads to a definition of inheritance which is both theoretically sound and intuitively attractive: it yields unambiguous results applied to any acyclic semantic net, and the results conform to our intuitions in the cases in which our intuitions themselves are firm and unambiguous. Since, however, the definition provided here is based on an alternative, skeptical view of inheritance reasoning, it does not always agree with these previous definitions when it is applied to nets about which intuitions are unsettled, or in which different reasoning strategies could naturally be expected to yield distinct results.

We do not attempt in this paper to provide any systematic comparison of our approach to nonmonotonic inheritance either with those of [Touretzky, 1986] and [Etherington, 1987], or with other similar approaches to nonmonotonic reasoning. This project of comparison and evaluation is begun in [Touretzky et al., 1987a] and [Touretzky et al., 1987b], where we set out a partial design space for the classification of inheritance systems and investigate the consequences of various design decisions. However, we will note here that while the credulous reasoners of Touretzky and Etherington may produce an exponential number of extensions from a single network, the kind of skeptical reasoner we describe always produces a unique extension. Skepticism may therefore prove to be more practical in some applications.

This material is based on work supported by the National Science Foundation under Grant No. IST-8516313.

2. Notation

Letters from the beginning of the alphabet (a, b, c) will represent objects, and letters from the middle of the alphabet (p, q, r) will represent kinds of objects. We use letters from the end of the alphabet (u, v, w, x, y, z) to range over both objects and kinds.

An assertion will have the form x → y or x ↛ y, where y is a kind. If x is an object, such an assertion should be interpreted as an ordinary atomic statement: a → p and b ↛ p, for instance, are analogous to Pa and ¬Pb in logic; they might represent statements like 'Tweety is a bird' and 'Jumbo isn't a bird'.
If x is a kind, these assertions should be interpreted as generic statements: p → q and r ↛ q, for example, might represent the statements "Birds fly" and "Mammals don't fly". There is nothing in ordinary logic very close in meaning to generic statements like these, since they can be true even in the presence of exceptions. In particular, "Birds fly" can't be interpreted to mean ∀x[Px ⊃ Qx], and "Mammals don't fly" doesn't mean anything like ∀x[Rx ⊃ ¬Qx]; for detailed argumentation on this point, with supporting linguistic evidence, see [Carlson, 1982].

Capital Greek letters will represent nets, where a net consists of a set I of individuals and a set K of kinds, together with a set of positive links and a set of negative links, both subsets of (I × K) ∪ (K × K). We identify the positive and negative links in a net with our positive and negative assertions.

Lower case Greek letters will range over sequences of links, among which we single out for special consideration the paths, defined inductively as follows: each assertion is a path; and if σ → p is a path, then both σ → p → q and σ → p ↛ q are paths. As this notation indicates, paths are special kinds of link sequences: joined, in the sense that the end node of any link in a path is identical with the initial node of the next link. It follows from their definition that paths are subject also to two further constraints. First, a negative link can occur in a path, if at all, only at the very end: a → p ↛ q is a path, but a ↛ p → q isn't. Second, an individual can occur only as the initial node of a path: p → a → q isn't a path.

Paths will be said to enable assertions, or statements, much in the way that proofs enable their conclusions: a path of the form x → σ → y is said to enable the assertion x → y, and likewise, a path of the form x → σ ↛ y is said to enable the assertion x ↛ y. As this suggests, it is often natural to understand a path, like a proof, as representing a particular chain of reasoning behind the assertion it enables. The path a → p → q, for example, might enable the assertion "Tweety flies", while representing an argument like "Tweety flies because he is a bird and birds fly."

3. Inheritance

Since we identify the links in a net with assertions, a net can be viewed as a set of hypotheses, or axioms. Let us say that an assertion A is supported by a net Γ if we can reasonably conclude that A is true whenever all the links in Γ are true, that is, if the information contained in Γ would naturally lead to the conclusion that A. We want to know what we can conclude from a given net; so our object is to define the general conditions under which a net Γ supports an assertion A.

In the context of ordinary deductive logic, we often find ourselves in a similar situation, when we want to know what statements are deducible from a given set of hypotheses. There, it is a common practice to approach the question in a roundabout way. Instead of defining the relation of deducibility directly, one first characterizes the deductions, sequences of statements representing certain kinds of arguments, or chains of reasoning, and then defines a statement as deducible from a set of hypotheses if those hypotheses permit a deduction of that statement.

Of course, the process of drawing conclusions from a set of hypotheses through inheritance reasoning is quite different from the process of drawing conclusions through deduction. Still, we find it helpful in the case of inheritance to follow a similar kind of roundabout strategy in describing the consequences of a set of hypotheses. Instead of trying to specify directly the statements supported by a given net, we first characterize the arguments or chains of reasoning, represented, now, by paths, that are permitted by a net. As in the case of ordinary deducibility, this relation between sets of hypotheses and the chains of reasoning they permit is really the central idea; and it will be the primary focus of our attention. Once we have identified the paths that a net permits, it is natural to describe the statements supported by a net by stipulating that a net supports a statement just in case it permits a path enabling that statement.

4. Motivation

In this section we examine several simple examples of nets and the paths they should permit, in order to illustrate the principles underlying our general characterization of the permission relation, which is then presented in Section 5.

Consider, first, the simplest kind of case imaginable, a linear net Γ1 (Figure 1). Just to fix an interpretation, let a = Tweety, p = Canaries, q = Birds, and r = Flying Things. Γ1 explicitly contains the information, then, that Tweety is a canary, that canaries are birds, and that birds fly. Now given just this information, we would certainly want to allow a chain of reasoning along the lines of "Since Tweety is a canary, a kind of bird, and birds fly, Tweety flies"; so we want the net Γ1 to permit the compound path a → p → q → r, representing this argument. In just the same way, we want the net Γ2 (Figure 2), with b = Jumbo, s = Royal Elephants, t = Elephants, and u = Flying Things, to permit the path b → s → t ↛ u, which represents an argument something like "Jumbo is a royal elephant, a kind of elephant, and elephants don't fly; so Jumbo doesn't fly."

Figure 1: Γ1. Figure 2: Γ2.

These examples illustrate some of the compound reasoning paths that can be constructed by assembling the direct links contained in a net, but they don't yet tell us, when we think of the construction as proceeding inductively, how these paths are to be assembled. There are, of course, two natural options for assembling compound paths from direct links: roughly, top-down and bottom-up. Most treatments of inheritance reasoning, including that of [Touretzky, 1986], presume the top-down approach. They are guided, more or less explicitly, by a picture of inheritance according to which properties are imagined to flow downward through the semantic net, from more general to more specific kinds and then finally to individuals, unless the flow is interrupted, somehow, by an exception. Formally, this "property flow" picture leads to the construction of compound paths through the process of backward chaining, according to which, at the inductive step, a compound path of the form x → y → σ is assembled by adding the direct link x → y to the path y → σ.
The present treatment, on the other hand, is intended to capture a kind of bottom-up approach to inheritance reasoning. This approach seems especially natural when one wants to push the analogy, as we do, between paths and arguments, since arguments, at least as they are usually represented (say, by proof sequences), tend to move from the beginning forward. Formally, the bottom-up approach leads to the construction of compound paths through the process of forward chaining: at the inductive step, the compound path σ → x → y is assembled by adding the direct link x → y to the path σ → x; and likewise, the compound path σ → x ↛ y is assembled by adding the direct link x ↛ y to the path σ → x. This adherence to forward chaining is one of the central principles guiding our approach. Not only does it embody a different metaphor for inheritance reasoning ("argument construction" instead of "property flow"), but it leads also to different technical results, as illustrated by our discussion of the net Γ7 in Section 6, below.

In our approach, then, compound permitted paths are assembled through forward chaining, but of course, not every path constructible through forward chaining from the materials in a given net should be permitted by that net. Conflicts can interfere, as in the net Γ3 (Figure 3). This net has come to be known as the Nixon Diamond, because of the interpretation under which a = Nixon, q = Quakers, r = Republicans, and p = Pacifists. What Γ3 tells us explicitly, under this interpretation, is that Nixon is both a Quaker and a Republican, that Quakers are pacifists, and that Republicans are not pacifists. Unrestricted forward chaining would allow us to construct from this information both the paths a → q → p and a → r ↛ p. But since these two paths conflict, enabling the contradictory statements a → p and a ↛ p, we don't want Γ3 to permit both these paths at once. Given just the information contained in Γ3, we wouldn't want to conclude both that Nixon is a pacifist and that he isn't.

Figure 3: Γ3.

What you say about inheritance depends crucially on your treatment of nets, like this Nixon Diamond, which contain compound conflicting paths. One option is to suppose, although you can't permit both of two such paths, that it is always reasonable to permit one or the other. In the case of the Nixon Diamond, for example, this strategy would lead us to the conclusion that either the path a → q → p or the path a → r ↛ p should be permitted. What lies behind this strategy is a kind of credulity or belief-hunger: the idea that it's best to draw as many conclusions as possible from a given net, even at the cost of making arbitrary choices among conflicting arguments. As developed in [Touretzky, 1986], this strategy involves associating with each net containing compound conflicting paths a number of consistent extensions, reminiscent of the "fixed points" of [McDermott and Doyle, 1980], or the "extensions" of [Reiter, 1980]. For this reason, because they can consistently be associated with a number of different extensions, nets like these are often described as "ambiguous."

We take a different point of view. Rather than supposing that an inheritance reasoner should try to conclude as much as possible from a given net, we adopt a broadly skeptical attitude, according to which conflicting arguments tend to neutralize each other. We begin with the idea, which will have to be explained in more detail, that a compound argument is neutralized by any conflicting argument which is not itself preempted. Given just the information in the Nixon Diamond, for example, our inheritance reasoner won't conclude either that Nixon is a pacifist or that he isn't. It won't conclude that he is a pacifist, since the information contained in the net provides the materials for constructing an argument to the contrary; it won't conclude that he isn't a pacifist, since the net also provides the materials for constructing an argument that he is.
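Continuing our illustrative Python sketch (this is our own code, not the authors' program), unrestricted forward chaining of the kind just described can be written as a simple loop; run on the Nixon Diamond, it produces both conflicting paths, which is exactly the behavior the skeptical definition of Section 5 will have to constrain.

```python
def is_path(seq, individuals):
    """The constraints from Section 2: links joined end to end, a negative
    link only in final position, individuals only as the initial node."""
    for i, link in enumerate(seq):
        if link.dst in individuals:
            return False              # individuals are never end nodes
        if i > 0 and (seq[i - 1].dst != link.src or not seq[i - 1].positive):
            return False              # not joined, or negative link not at end
    return True

def forward_chain(net):
    """Unrestricted forward chaining: extend each positive path sigma -> x
    with every direct link out of x.  Assumes an acyclic net; the skeptical
    restrictions of Section 5 are deliberately absent here."""
    paths = [[l] for l in net.links]
    frontier = list(paths)
    while frontier:
        new = [p + [l] for p in frontier if p[-1].positive
               for l in net.links if l.src == p[-1].dst]
        new = [p for p in new if is_path(p, net.individuals)]
        paths.extend(new)
        frontier = new
    return paths

def show(path):
    out = path[0].src
    for l in path:
        out += (" -> " if l.positive else " -/-> ") + l.dst
    return out

# The Nixon Diamond: a = Nixon, q = Quakers, r = Republicans, p = Pacifists.
nixon = Net(individuals={"a"}, kinds={"p", "q", "r"},
            links={Link("a", "q"), Link("a", "r"),
                   Link("q", "p"), Link("r", "p", positive=False)})
for p in forward_chain(nixon):
    print(show(p))   # output includes both a -> q -> p and a -> r -/-> p
```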
Although our approach is based, generally, on the skeptical idea that such paths tend to neutralize each other, the special brand of skepticism we adopt here is restricted in two ways. First, we suppose that only compound paths can be neutralized; and second, that paths can be neutralized only by conflicting paths which are not themselves preempted. Both of these restrictions are important; we examine them in turn.

As an example of a net containing non-compound conflicting paths, consider Γ4 (Figure 4). (Again, take a = Nixon and p = Pacifists.) According to the definition we provide, Γ4 will permit both the conflicting paths a → p and a ↛ p: our reasoner will conclude from Γ4 both that Nixon is a pacifist and that he isn't. This may seem odd, especially in light of our cautious, skeptical approach to Γ3. It may appear, from a certain point of view, that Γ4 presents us with nothing but a limiting case of the phenomenon found in Γ3, so that consistency of principle should lead us to conclude, if Γ3 doesn't permit either the path a → q → p or the path a → r ↛ p, that Γ4, likewise, shouldn't permit either of the paths a → p or a ↛ p.

But it is also possible to isolate a point of view from which our different treatment of the conflicting paths in Γ3 and Γ4 seems just right. Remember, we are talking about the design of an inheritance reasoner, a mechanism for drawing conclusions from a certain kind of database: a set of statements that can be represented as the set of links in a net. Now when we think of the net Γ3 as a database, it is, of course, consistent: in fact, under the Nixon interpretation, all of the statements contained in Γ3 are true. Obviously, no one would want a reasoning mechanism to draw inconsistent conclusions from consistent information; so it follows at once that Γ3 can't permit both the paths a → q → p and a → r ↛ p, since these two paths enable the contradictory statements that Nixon is a pacifist (a → p) and that he isn't (a ↛ p). On the other hand, when you look at Γ4 as a database, it already contains both of these statements; so in this case, we are faced with the problem of drawing the appropriate conclusions from information that is already inconsistent. This is a notoriously difficult problem, but we find that it is both possible and useful to adopt in the context of inheritance reasoning a proposal that was originally formulated, in [Belnap, 1977a] and [Belnap, 1977b], as a guide for deductive reasoning in the presence of inconsistency. As a general principle, then, we propose that a reasoner ought to be able to conclude from a set of statements every statement actually contained in that set, at least, even if the set is inconsistent. It follows, of course, that if our inheritance reasoner were actually provided with the information contained in Γ4, that Nixon both is and isn't a pacifist, then it ought to conclude from this information both that Nixon is a pacifist and that he isn't.
Thinking of deductive reasoning, Belnap argues that the presence of inconsistent information shouldn't enable a mechanical reasoner to derive arbitrary conclusions, as it would in the case of a theorem prover using classical logic. We have shown in [Thomason et al., 1986], however, that this much of the motivation behind relevance logic is already built into inheritance reasoning, even in the simple case of monotonic inheritance. Thus, the reasoner we describe here will conclude from Γ4 both that Nixon is a pacifist and that he isn't, but it won't then go on to draw irrelevant conclusions from this contradiction: it won't conclude, for instance, that Nixon is a Democrat.

Figure 4: Γ4. Figure 5: Γ5.

The second restriction on our broadly skeptical outlook is the idea that even compound arguments are neutralized only by those conflicting arguments that are not themselves preempted. This idea, that certain compound arguments can be, as we say, preempted by others, really lies at the heart of our approach, allowing us to transform a simplistic and dogmatic skepticism into something much more interesting. Again, we begin with an example, the net Γ5 (Figure 5). This net results from adding the link p ↛ r to Γ1, and the interpretations of these two nets will overlap as well. Just as before, we take a = Tweety, q = Birds, and r = Flying Things; but now let's shift the earlier interpretation so that p = Penguins, giving some plausibility to the new link p ↛ r. If things are like this, what should we conclude about Tweety: does he fly or not? Well, there are two paths to consider: a → p → q → r, which enables the conclusion that Tweety flies, and a → p ↛ r, which enables the opposite conclusion. Since both of these paths are compound, and they enable conflicting conclusions, simple skepticism would bar us from reaching any conclusion at all. But evidently, in this case, we should reach a conclusion: we should conclude, in fact, that Tweety doesn't fly, since he is a penguin, and penguins don't fly.

The reason we are able to conclude here that Tweety doesn't fly, even though he is a bird, and birds fly, is that penguins happen to be a specific kind of bird, so that, in case of conflicts, the information we have about Tweety in virtue of his being a penguin should override whatever we would otherwise suppose to be true of him simply because he is a bird. This illustrates the central intuition behind preemption: that information about specific kinds should be allowed to override information about more general kinds. As we define it, a path will be preempted in a net, roughly, when the net provides the materials for constructing a conflicting argument based on more specific information. In the case of Γ5, for example, we will want to say that the path a → p → q → r (telling us that Tweety flies because he is a bird) is preempted, since: (i) the net permits the path a → p (telling us that Tweety is a penguin), (ii) p's are a specific kind of q (penguins are a specific kind of bird), and (iii) the net contains the direct link p ↛ r (telling us directly that penguins don't fly). Focusing on (ii), it is easy to see in terms of the net topology that what makes p more specific than q, according to Γ5, is simply the fact that this net permits a path (a direct link, in this case) from p to q.
So, restating in a way that combines (i) and (ii), we can say that the path a → p → q → r is preempted in Γ5 precisely because there is a certain kind, p (penguins), such that Γ5 both permits the path a → p → q (telling us that Tweety is a penguin and that penguins are a specific kind of bird) and contains the direct link p ↛ r. In this form, the idea of preemption can easily be generalized to apply to arbitrary nets and paths. We will say that a path of the form x → τ → v → y (telling us that x's, as v's, are y's) is preempted in a net Γ just in case there is a node z (z ≠ v) such that Γ both permits a path of the form x → τ1 → z → τ2 → v (telling us that x's are z's, a more specific kind of v's) and contains the link z ↛ y (telling us that z's, in particular, are not y's). With exact symmetry, we will say also that a path of the form x → τ → v ↛ y is preempted in Γ if there is a node z (z ≠ v) such that Γ both permits a path of the form x → τ1 → z → τ2 → v and contains the link z → y.

5. The definition

Let's use the symbol '⊨' to stand for the permission relation, so that 'Γ ⊨ σ' means that the net Γ permits the path σ. We have now considered the central principles underlying our approach to this idea: forward chaining, along with a certain kind of restricted skepticism. It remains only to organize these principles into a rigorous definition. Our adoption of forward chaining suggests that a bottom-up, inductive definition should be possible. In order to frame such a definition, however, we need to be able to associate with each path σ some measure of its "complexity" in a given net Γ, in such a way that it can be decided whether Γ ⊨ σ once it is known whether Γ ⊨ σ' for each path σ' less complex in Γ than σ itself.

The natural thing to think is that we might be able to identify the complexity of a path, in this sense, with its length; but this won't work, since shorter paths can be neutralized by longer, conflicting paths. To see what will work, we first introduce an auxiliary idea. As we recall from Section 2, a path is a joined sequence of links containing a negative link, if at all, only at the very end. Let's say, now, that a generalized path is a sequence of links joined like an ordinary path, except that it can contain negative links anywhere, and perhaps more than one. Formally, we can catch this idea by specifying that each assertion is a generalized path, and that a generalized path can be extended at its end by any joined link, positive or negative. The degree of a path σ in Γ, written degr(σ), can then be taken as the length of the longest generalized path in Γ connecting the initial node of σ with its end node.

As it turns out, this idea of degree provides just the right notion of path "complexity" for an inductive definition of ⊨, the permission relation between nets and paths: it can be decided whether Γ ⊨ σ entirely on the basis of information regarding paths whose degree in Γ is less than that of σ, along with information about the direct links contained in Γ itself. On the other hand, in order to assure that degr(σ) should always be well-defined, we need to restrict our attention to nets which are acyclic, in the sense that they contain no generalized paths whose initial nodes are identical with their end nodes. (This is a common restriction; much of the analysis in [Touretzky, 1986], for instance, also applies only to acyclic nets.) Given this idea of degree, then, and restricting ourselves to acyclic nets, we can now present our definition of the permission relation.

Although the definition is inductive at heart, it has the overall structure of a definition by cases: it deals separately with compound paths and direct links (non-compound paths).
Only in the case of compound paths is there any need to resort to induction; direct links can be handled all at once, as follows.

Case I: σ is a direct link. Then Γ ⊨ σ iff σ ∈ Γ.

It is important to note that even if σ is a direct link, it could easily turn out that degr(σ) > 1, since Γ might contain a compound generalized path from the initial node of σ to its end node. On the other hand, if degr(σ) = 1, then the path σ has to be a direct link. Thus, in addition to taking care of all the direct links at once, whatever their degree, Case I serves also as the basis clause for the induction on degree which extends the permission relation from direct links to compound paths. The inductive clause is as follows.

Case II: σ is a compound path with, say, degr(σ) = n. As an inductive hypothesis, we can suppose it is settled whether Γ ⊨ σ' whenever degr(σ') < n. There are then two subcases to consider, depending on the form of σ.

σ is a positive path, of the form x → σ1 → u → y. Then Γ ⊨ σ iff
(a) Γ ⊨ x → σ1 → u,
(b) u → y ∈ Γ,
(c) x ↛ y ∉ Γ,
(d) for all v such that Γ ⊨ x → τ → v with v ↛ y ∈ Γ, there exists z (z ≠ v) such that Γ ⊨ x → τ1 → z → τ2 → v and z → y ∈ Γ.

σ is a negative path, of the form x → σ1 → u ↛ y. Then Γ ⊨ σ iff
(a) Γ ⊨ x → σ1 → u,
(b) u ↛ y ∈ Γ,
(c) x → y ∉ Γ,
(d) for all v such that Γ ⊨ x → τ → v with v → y ∈ Γ, there exists z (z ≠ v) such that Γ ⊨ x → τ1 → z → τ2 → v and z ↛ y ∈ Γ.

It should be clear that this definition of the permission relation accurately represents the general approach to inheritance reasoning described in Section 4. Case I tells us that any statement actually contained in a net should be permitted by that net. The two subcases of Case II, dealing respectively with positive and negative compound paths, are perfectly symmetric. In each subcase, the clauses (a) and (b) capture the idea of forward chaining: compound paths are permitted by a net only if they can be constructed by adding direct links from the net to initial permitted segments of those paths. The clauses (c) and (d) take care of conflicts. What (d) says is that, even if a compound path is constructible through forward chaining, it can be permitted only if each potentially conflicting compound path is preempted. Of course, only compound conflicting paths can actually be preempted, since preemption involves the intermediate nodes of a path, and direct links have no intermediate nodes; but if, for skeptical reasons, we don't want a path to be permitted which conflicts with an unpreempted compound path, we certainly don't want to permit a path that conflicts with a direct link. This is the force of the clause (c). Both the clauses (a) and (d) in the inductive step refer to other paths of a certain form permitted by the net; but this is no problem, because at any step in the induction, paths of this form will always have a degree less than that of the path being considered.
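One way to transcribe the two cases into executable form is sketched below, again continuing our illustrative Python classes. It is our own reading of the definition, not the implementation described in Section 7; in particular, letting the preempting node z range over every non-final node of a permitted positive path from x to v is our interpretation of the form x → τ1 → z → τ2 → v, and termination of the recursion rests on the paper's degree argument rather than on any explicit ordering in the code.

```python
from functools import lru_cache

def make_permitted(net):
    """Permission checker |= for a fixed acyclic net, transcribing the
    Case I / Case II definition above.  A path is a tuple of Links.
    Every path consulted in clauses (a) and (d) has strictly smaller
    degree, so the recursion is well founded on acyclic nets."""
    links = net.links

    def positive_paths(x, v):
        """All positive paths from x to v (finite, since the net is acyclic)."""
        out = []
        stack = [(l,) for l in links if l.positive and l.src == x]
        while stack:
            p = stack.pop()
            if p[-1].dst == v:
                out.append(p)
            stack += [p + (l,) for l in links
                      if l.positive and l.src == p[-1].dst]
        return out

    @lru_cache(maxsize=None)
    def permitted(path):
        if len(path) == 1:                        # Case I: direct links
            return path[0] in links
        x, last = path[0].src, path[-1]
        sign, y = last.positive, last.dst
        if not permitted(path[:-1]):              # (a) permitted initial segment
            return False
        if last not in links:                     # (b) final direct link in net
            return False
        if Link(x, y, not sign) in links:         # (c) no opposing direct link
            return False
        # (d) every conflicting node v must be preempted by some z != v
        for v in {l.src for l in links if l.dst == y and l.positive != sign}:
            vias = [p for p in positive_paths(x, v) if permitted(p)]
            if not vias:
                continue                          # no permitted path reaches v
            # z may be any non-final node of a permitted path from x to v;
            # z != v holds automatically in an acyclic net
            if not any(Link(z, y, sign) in links
                       for p in vias
                       for z in [x] + [l.dst for l in p[:-1]]):
                return False
        return True

    return permitted
```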
6. Some examples

This definition of the permission relation yields the advertised results applied to the nets Γ1 through Γ5 from Section 4. In order to highlight some of the interesting features of our definition, we consider here the paths permitted by a couple of more complicated nets.

We mentioned in Section 4 that credulous (or belief-hungry) inheritance reasoners would tend to associate with nets containing compound conflicting paths a number of different consistent extensions, or fixed points. It is tempting, therefore, to suppose that the set of paths permitted by a given net under the present skeptical analysis might simply be the intersection of the various extensions associated with that net according to the credulous analysis provided by [Touretzky, 1986]. However, nets like Γ6 (Figure 6), which have the topology of nested Nixon Diamonds, show that this is not so. In this case, we have Γ6 ⊨ a → p ↛ q (the potentially conflicting path a → s → t → q poses no problem; this path is not permitted, since its initial segment a → s → t is itself neutralized by the path a → r ↛ t). But the path a → p ↛ q isn't contained in all the Touretzky extensions associated with this net; some contain instead the path a → s → t → q.

Figure 6: Γ6.

The net Γ7 (Figure 7) illustrates a different feature of our definition, resulting not so much from our particular brand of skepticism as from our adherence to forward chaining. Here, we have Γ7 ⊨ a → p → q → s. The potentially conflicting path a → p → r ↛ s poses no problem, since its compound initial segment a → p → r conflicts with the direct link a ↛ r. On the other hand, though Γ7 permits a → p → q → s, and so supports the statement a → s, the net does not permit the path p → q → s, and indeed does not support the statement p → s.

Figure 7: Γ7.

This kind of situation can seem a bit anomalous if one's ideas about inheritance reasoning are conditioned by the top-down or "property flow" approach, according to which individuals are supposed to inherit their properties strictly in virtue of belonging to certain classes of things (their ancestors in the network) which possess those properties. The problem is that, while Γ7 supports the statement that the individual a is an s, it is unclear how a could have inherited this property. After all, the only immediate ancestor of a in the network is the node p. According to the top-down approach, then, a must have inherited all the positive properties it does inherit simply in virtue of being a p; if it possesses any particular property, such as being an s, this could only be due to the fact that p's possess that property. But as we have seen, Γ7 doesn't support the statement that p's are s's.

Against the background of the bottom-up or "argument construction" view of inheritance reasoning, however, the situation presented by this example is perfectly coherent. Since Γ7 contains the materials for constructing unpreempted, compound arguments enabling both the conclusion that p's are s's and the conclusion that p's are not s's, our broadly skeptical point of view forces us to withhold judgment, endorsing neither of these conclusions. The individual a, though, is a particular p for which the general kind of argument enabling the conclusion that p's are not s's is blocked: that argument depends on the information that p's are r's, but Γ7 tells us explicitly that a is not an r. Since the general argument that p's are not s's is explicitly blocked for this particular individual, then, it cannot conflict in the case of a with the argument that p's are s's; so we conclude that a is an s.
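As a quick check, our sketch from the previous section reproduces exactly this behavior on Γ7 (this assumes the Link, Net, and make_permitted definitions given earlier; the node names mirror the discussion above):

```python
# Gamma_7: links a -> p, a -/-> r, p -> q, p -> r, q -> s, r -/-> s.
g7 = Net(individuals={"a"}, kinds={"p", "q", "r", "s"},
         links={Link("a", "p"), Link("a", "r", positive=False),
                Link("p", "q"), Link("p", "r"),
                Link("q", "s"), Link("r", "s", positive=False)})
permitted = make_permitted(g7)

a_is_s = (Link("a", "p"), Link("p", "q"), Link("q", "s"))
p_is_s = (Link("p", "q"), Link("q", "s"))
print(permitted(a_is_s))  # True:  Gamma_7 supports a -> s
print(permitted(p_is_s))  # False: but not p -> s (the conflict is unpreempted)
```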
7. Implementations

The theory described here has been implemented as a Common Lisp program. The algorithm is a line-by-line translation of the definition in Section 5, except that, for reasons of efficiency, the degree degr(σ) of each path σ in Γ is not actually computed. Instead, the program performs a topological sort on the graph and orders potential paths according to the number T(x) which the topological sort assigned to the last node x of each path. It is easily shown that if σ1 = x1 → τ1 → y1 and σ2 = x2 → τ2 → y2 and T(y1) < T(y2), then either degr(σ1) < degr(σ2), or there is no generalized path from x1 through y2 to y1. Therefore, a program whose notion of path complexity is based on topological order will always produce results in agreement with our definition. It may consider paths in a different order than a definition based on degree, but it will only do so in situations where this cannot affect the result.

In addition, we have been exploring parallel marker propagation inheritance algorithms. Purely parallel nonmonotonic inheritance, skeptical or credulous, is not possible on a marker propagation machine due to the necessity of handling preemption. (By "purely parallel" we mean in time bounded by a constant times the depth of the graph.) However, a marker propagation machine can quickly find all relevant paths and make the uncontested inferences; it can then fall back on a serial algorithm to handle the difficult cases. We have developed a hybrid (parallel-serial) inference algorithm for answering particular queries about whether a net Γ supports statements of the form x → y or x ↛ y. This algorithm runs in time proportional to D(x, y) · [1 + N_c(x, y)], where D(x, y) is the depth of the query (the length of the longest path between x and y) and N_c(x, y) is the number of nodes contested with respect to the query. (A contested node is any node z on a path from x to y such that paths x → τ1 → z and x → τ2 ↛ z exist; they need not be permitted paths.) The algorithm will be described in detail, and its correctness proved, in [Horty et al., 1987], a more complete version of this paper.
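The paper describes this topological-sort optimization only in outline; under the same illustrative classes as before, the numbering T might be computed as follows (our own sketch). Ordering candidate paths by T at their end nodes then agrees with the degree-based definition wherever the order can matter, by the fact quoted above.

```python
def topological_order(net):
    """Assign numbers T so that T(x) < T(y) whenever a generalized path
    (any joined link sequence) runs from x to y; the net must be acyclic.
    Computed by a depth-first reverse postorder over all links."""
    nodes = net.individuals | net.kinds
    succ = {n: {l.dst for l in net.links if l.src == n} for n in nodes}
    T, seen = {}, set()

    def visit(n):
        if n in seen:
            return
        seen.add(n)
        for m in succ[n]:
            visit(m)
        T[n] = len(nodes) - 1 - len(T)   # descendants receive larger numbers

    for n in nodes:
        visit(n)
    return T
```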
8. Conclusion

We have presented in this paper a new, skeptical theory of inheritance reasoning in nonmonotonic semantic networks. As far as we know, this theory represents the first significant alternative to the analysis of nonmonotonic inheritance reasoning presented in [Touretzky, 1986]. (A less radical alternative is described in [Sandewall, 1986]; although it differs in some ways from Touretzky's, Sandewall's is nevertheless a credulous theory.) The fact that there should be distinct but, perhaps, equally well-motivated accounts of correct reasoning in this context comes as something of a surprise; it is reminiscent of the situation in philosophical logic, where there are rival logics embodying distinct conceptions of correct deductive reasoning.

In the context of inheritance reasoning, the existence of these distinct approaches has a number of theoretical consequences, which we are exploring in our current research. Much of this research is focused more or less directly on inheritance theory: we are studying the relations among the different analyses of nonmonotonic inheritance reasoning [Touretzky et al., 1987a], [Touretzky et al., 1987b], and working to extend some of these analyses to more expressive nonmonotonic network languages. However, it is also possible that this research will shed some light on more general treatments of nonmonotonic reasoning. It has been shown in [Etherington, 1987], for example, that the default logic of [Reiter, 1980] can be used to provide a specification for correct inheritance reasoning in nonmonotonic semantic networks: Etherington establishes a close correspondence between these networks and certain kinds of default theories ("network default theories"). But these results, linking default logic to nonmonotonic inheritance, presuppose a credulous analysis of inheritance reasoning; this bias towards a credulous approach to nonmonotonic reasoning is in fact built into both Reiter's default logic and the nonmonotonic logic of [McDermott and Doyle, 1980]. Since, as we have shown, there turns out to be an equally well-motivated skeptical theory of nonmonotonic reasoning, at least in the case of semantic networks, it might be useful at this point to seek a weaker version of default or nonmonotonic logic, exhibiting instead a bias toward skepticism, or perhaps a more general logic that is neutral between the credulous and skeptical approaches.

References

[Belnap, 1977a] N. Belnap. How a computer should think. In G. Ryle (ed.), Contemporary Aspects of Philosophy, Oriel Press, 1977, pp. 30-56.
[Belnap, 1977b] N. Belnap. A useful four-valued logic. In J. Dunn and G. Epstein (eds.), Modern Uses of Multiple-Valued Logic, D. Reidel, 1977, pp. 8-37.
[Carlson, 1982] G. Carlson. Generic terms and generic sentences. Journal of Philosophical Logic, vol. 11, 1982, pp. 145-181.
[Etherington, 1987] D. Etherington. Formalizing nonmonotonic reasoning systems. Artificial Intelligence, vol. 31, 1987, pp. 41-85.
[Fahlman, 1979] S. Fahlman. NETL: A System for Representing and Using Real-World Knowledge. The MIT Press, 1979.
[Horty et al., 1987] J. Horty, R. Thomason, and D. Touretzky. A skeptical theory of inheritance in nonmonotonic semantic networks. Forthcoming technical report, Computer Science Department, Carnegie Mellon University, 1987.
[McDermott and Doyle, 1980] D. McDermott and J. Doyle. Non-monotonic logic I. Artificial Intelligence, vol. 13, 1980, pp. 41-72.
[Reiter, 1980] R. Reiter. A logic for default reasoning. Artificial Intelligence, vol. 13, 1980, pp. 81-132.
[Roberts and Goldstein, 1977] R. Roberts and I. Goldstein. The FRL manual. AI Memo No. 409, MIT Artificial Intelligence Laboratory, 1977.
[Sandewall, 1986] E. Sandewall. Non-monotonic inference rules for multiple inheritance with exceptions. Proceedings of the IEEE, vol. 74, 1986, pp. 1345-1353.
[Thomason et al., 1986] R. Thomason, J. Horty, and D. Touretzky. A calculus for inheritance in monotonic semantic nets. Technical Report CMU-CS-86-138, Computer Science Department, Carnegie Mellon University, 1986.
[Touretzky, 1986] D. Touretzky. The Mathematics of Inheritance Systems. Morgan Kaufmann, 1986.
[Touretzky et al., 1987a] D. Touretzky, J. Horty, and R. Thomason. Issues in the design of nonmonotonic inheritance systems. Forthcoming technical report, Computer Science Department, Carnegie Mellon University, 1987.
[Touretzky et al., 1987b] D. Touretzky, J. Horty, and R. Thomason. A clash of intuitions: the current state of nonmonotonic multiple inheritance systems. Forthcoming in Proceedings IJCAI-87.
Circumscriptive Theories: A Logic-Based Framework for Knowledge Representation (Preliminary Report)

Vladimir Lifschitz
Computer Science Department, Stanford University, Stanford, CA 94305

Abstract

The use of circumscription for formalizing commonsense knowledge and reasoning requires that a circumscription policy be selected for each particular application: we should specify which predicates are circumscribed, which predicates and functions are allowed to vary, what priorities between the circumscribed predicates are established, etc. The circumscription policy is usually described either informally or using suitable metamathematical notation. In this paper we propose a simple and general formalism which permits describing circumscription policies by axioms, included in the knowledge base along with the axioms describing the objects of reasoning. This method allows us to formalize some important forms of metalevel reasoning in the circumscriptive theory itself.

1. Introduction

The logic approach to the problem of knowledge representation, proposed by John McCarthy (1960), stresses the analogy between a knowledge base and an axiomatic theory. Knowledge about the world can be expressed by sentences in a logical language, and an intelligent program should be able to derive new facts from the facts already known, as a mathematician can derive new mathematical results from the facts already proved.

Further research has shown that formal theories like those used for the formalization of mathematics are not adequate for representing some important forms of commonsense knowledge. The facts that serve as a basis for default reasoning require more powerful representational languages. Several extensions of the classical concept of a first-order theory have been proposed to resolve this difficulty. We concentrate here on one of these extensions, the concept of circumscription (McCarthy 1980, 1986).

Here is a simple blocks world example illustrating McCarthy's approach to formalizing default reasoning. If there is no information to the contrary, we assume that a given block is located on the table and that its color is white. Block B is red. We want a formalization of these assumptions to allow us to conclude by default that all blocks other than B are white (since B is the only block which is known to be an exception) and that all blocks are on the table (since no information to the contrary is available). We can express the given assumptions using two abnormality predicates ab1 and ab2, as follows:

¬ab1 x ∧ block x ⊃ ontable x, (1)
¬ab2 x ∧ block x ⊃ white x (2)

(the blocks that are not abnormal are on the table; the blocks that are not abnormal in a certain other sense are white). The axiom set will also include the formulas

block B, red B, ¬(white x ∧ red x). (3)

Axioms (1)-(3) are not sufficiently strong for justifying the desired results about the positions and colors of blocks, because they do not say whether there are few or many "abnormal" objects. McCarthy's method consists in circumscribing ab1 and ab2, i.e., assuming their "minimality" subject to the restrictions expressed by the axioms. There are several different minimality conditions that can be applied in conjunction with a given axiom set. Each kind of minimization corresponds to a different "circumscription policy".

* This research was partially supported by DARPA under Contract N0039-82-C-0250.
In the existing literature on applications of circumscription, these "policies" are described either informally, or using some kind of metamathematical notation, or by establishing a standard convention, as in "simple circumscriptive theories" of (McCarthy 1986).

It has been suggested, on the other hand, that circumscription policies may be described by axioms included in the knowledge base along with the axioms describing the objects of reasoning (Lifschitz 1986), (Perlis 1987). We propose in this paper a simple but powerful formalism for expressing such "policy axioms", which leads us to the concept of a circumscriptive theory. Our circumscriptive theories are similar to McCarthy's simple circumscriptive theories in the sense that such a theory is completely determined by its axioms, without any additional metamathematical specifications. But this formalism is substantially more expressive; for instance, we can axiomatically describe priorities in the sense of (McCarthy 1986).

In this preliminary report we discuss only a special case of circumscriptive theories: theories with "propositional" policies. Most applications of circumscription discussed in the literature use policies of this kind. The final version of the paper will present the general case and will also contain proofs of theorems and the discussion of methods used for determining the effect of circumscription in the examples.

The paper does not assume familiarity with previous work on circumscription. A better understanding of the motivation behind the theory of circumscription can be gained by reading (McCarthy 1980, 1986).

2. Theories with Propositional Policies

A circumscriptive theory T is defined by a finite set of formulas, axioms. We will sometimes identify T with the conjunction of (the universal closures of) its axioms. Thus T can be viewed as a sentence. If F ∪ P = {C1, ..., Cl} then we will also write this sentence as T(C1, ..., Cl).

The language of a circumscriptive theory is defined in the same way as a first-order language, i.e., by a finite set F of function constants and a finite set P of predicate constants. (We treat object constants as 0-ary function constants.)

Consider an m-ary predicate constant P ∈ P. In any model of a given axiom set, P is interpreted as a mapping from U^m, where U is the universe of the model, into {false, true}. By the minimality of P at a point (x1, ..., xm) we understand the impossibility of replacing the value of P at that point by a smaller value (i.e., changing it from true to false) without losing the properties expressed in the axioms.¹ To make this condition precise, we should specify which additional changes in the values of P and in the values of other predicates or functions, if any, we are allowed to make along with changing P(x1, ..., xm) to false.
In other words, for any n-ary function or predicate constant C ∈ F ∪ P and any point (y1, ..., yn), a circumscription policy should define whether the value C(y1, ..., yn) is allowed to vary in the process of minimizing P(x1, ..., xm). This can be done formally by introducing, for each predicate constant P ∈ P and each constant C ∈ F ∪ P, an additional predicate constant V_PC, whose arity is the sum of the arities of P and C. "Policy axioms" will contain subformulas of the form

V_PC(x1, ..., xm, y1, ..., yn)

expressing that, when P is minimized at point (x1, ..., xm), C may be varied at (y1, ..., yn). In simple applications, however, we think of each predicate V_PC as being either identically true or identically false, so that the propositional symbol corresponding to the universal closure of this formula will suffice. In this preliminary report we discuss only such cases.²

Accordingly, we build formulas of a circumscriptive theory from the symbols included in F and P and the additional propositional constants V_PC (P ∈ P, C ∈ F ∪ P) using the symbolism of second-order predicate logic. (Thus formulas may include function and predicate variables of any arities; we need such variables for expressing minimality.) V_PC reads: C is varied as P is minimized.

¹ This understanding of minimality is in the spirit of the "pointwise" approach to circumscription (Lifschitz 1986).
² The generality afforded by treating V_PC as a predicate is needed for describing some circumscriptions with multiple minima, as in Example 2 from (McCarthy 1980), and for formalizing temporal minimization, as in Section 6 of (Lifschitz 1986).

3. The Semantics of Circumscriptive Theories

To define the semantics of circumscriptive theories, we will construct, for every predicate constant P ∈ P, a sentence which expresses the minimality of P. Let c1, ..., cl be distinct variables similar to C1, ..., Cl (i.e., if Ci is an n-ary function (predicate) constant then ci is an n-ary function (predicate) variable). Recall that P is among the constants C1, ..., Cl, so that there is a corresponding predicate variable p in the list c1, ..., cl. Then the minimality condition for P in T is

¬∃x̄ c1 ... cl [P(x̄) ∧ ¬p(x̄) ∧ (¬V_PC1 ⊃ c1 = C1) ∧ ... ∧ (¬V_PCl ⊃ cl = Cl) ∧ T(c1, ..., cl)],

where x̄ is a tuple of distinct object variables. This formula will be denoted by Min_P. It says that it is impossible to change the interpretations of the constants C1, ..., Cl so that the value of P at some point x̄ will change from true to false, the interpretations of the constants Ci which are not allowed to vary as P is minimized will remain the same, and the new interpretations of C1, ..., Cl will still satisfy T. For example, if the language of T has two unary predicate constants, P and Q, and no other predicate or function constants, then Min_P is

¬∃x p q [P(x) ∧ ¬p(x) ∧ (¬V_PP ⊃ p = P) ∧ (¬V_PQ ⊃ q = Q) ∧ T(p, q)]. (4')

A model of a circumscriptive theory T is any model M of the axioms of T which satisfies Min_P for each P ∈ P. A theorem of T is any sentence which is true in every model of T.

We have defined theorems in model-theoretic terms, not in terms of proofs. In view of the incompleteness of second-order logic, a definition based on deduction in second-order predicate calculus would not be equivalent to the one given above. We will develop the theory entirely in model-theoretic terms. In particular, the expressions "A implies B", "B follows from A", where A, B are sentences or sets of sentences, will mean that every model of A is a model of B.

The definition of a model given above includes a minimality condition for each predicate constant P in the language. This looks like a serious limitation: in applications, it is often desirable to minimize only some of the available predicates. But minimizing a predicate P remains nominal unless the circumscription policy at least allows us to vary P itself. It is easy to see, for instance, that the assumption ¬V_PP makes the first 3 conjunctive terms in (4') inconsistent, and thus makes the whole formula trivially true.
We can use the axiom V_PP to say that P is in fact among the predicates which we want to minimize.³

The minimality conditions Min_P have a simple model-theoretic meaning. Denote the interpretation of a symbol C in a model M by M[C]. In particular, for each propositional constant V_PC, M[V_PC] is a truth value, true or false. We are interested in the models of the axioms of T with a fixed universe U. Let Mod(T, U) be the set of all such models. For every predicate constant P ∈ P and every ξ ∈ U^m, where m is the arity of P, we define a reflexive and transitive relation (preorder) ≤_P,ξ on Mod(T, U) as follows: M1 ≤_P,ξ M2 if

(i) M1[V_QC] = M2[V_QC] for all Q ∈ P, C ∈ F ∪ P,
(ii) for all C ∈ F ∪ P, if M1[V_PC] = false then M1[C] = M2[C],
(iii) M1[P](ξ) ≤ M2[P](ξ).

Symbol ≤ in part (iii) of this definition refers to the usual ordering of truth values (false < true). Notice that, in view of (i), M1[V_PC] in (ii) can be equivalently replaced by M2[V_PC].

Proposition 1. A model M ∈ Mod(T, U) is a model of T iff it is minimal in Mod(T, U) relative to each preorder ≤_P,ξ.

³ This use of V_PP was suggested by John McCarthy.

4. Example

To formalize the blocks world example from the introduction, we take F = {B}, P = {block, ontable, white, red, ab1, ab2}. Formulas of a circumscriptive theory with these function and predicate constants may also contain 6 × (1 + 6) = 42 propositional symbols V_PC (P ∈ P, C ∈ F ∪ P). Let the axiom set of T consist of formulas (1)-(3) plus the axioms

V_ab1,ab1, (5)
V_ab1,ontable, (6)
V_ab2,ab2, (7)
V_ab2,white, (8)
V_ab2,red. (9)

Axioms (5) and (7) tell us that ab1 and ab2 are minimized. According to (6), the interpretation of the predicate constant ontable is allowed to vary in the process of minimizing ab1. This postulate is motivated by the fact that ab1 is introduced for the purpose of describing the locations of blocks. Axioms (8) and (9) are motivated by similar considerations: ab2 is used for characterizing the colors of blocks.

It can be proved that the formulas

ab1 x ≡ false, ab2 x ≡ x = B (10)

are theorems of T. These formulas, along with axioms (1) and (2), imply the desired conclusions:

block x ⊃ ontable x, block x ∧ x ≠ B ⊃ white x.

Remark 1. We decided for each abnormality predicate separately which predicates are varied when that particular abnormality is minimized. This is different from the use of circumscription in (McCarthy 1986), where the set of varying predicates is the same for all kinds (aspects) of abnormality. Our approach appears to make formalizations more modular.

Remark 2. The predicates V_PC used in this example do not have function symbols among their subscripts P, C. Our formalism allows C to be a function; this would have been essential, for instance, if we introduced the function symbol color instead of the predicates white and red.
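Because the domain here can be taken finite, the effect of this circumscription can be checked by brute force. The following sketch is ours, not Lifschitz's: assuming a two-element universe {B, C} with block true everywhere, it enumerates interpretations of the remaining predicates, keeps those satisfying axioms (1)-(3), and applies the pointwise minimality test of Proposition 1 with the varied predicates dictated by axioms (5)-(9). Its output agrees with theorems (10).

```python
from itertools import product

DOM = ("B", "C")                                  # one extra block besides B
PREDS = ("ontable", "white", "red", "ab1", "ab2")
VARIED = {"ab1": {"ab1", "ontable"},              # axioms (5), (6)
          "ab2": {"ab2", "white", "red"}}         # axioms (7), (8), (9)

def satisfies(m):
    """Axioms (1)-(3); block x is taken to hold for the whole domain."""
    if not m["red"]["B"]:
        return False                              # red B
    for x in DOM:
        if not m["ab1"][x] and not m["ontable"][x]:
            return False                          # (1)
        if not m["ab2"][x] and not m["white"][x]:
            return False                          # (2)
        if m["white"][x] and m["red"][x]:
            return False                          # (3)
    return True

def interpretations():
    for bits in product((False, True), repeat=len(PREDS) * len(DOM)):
        it = iter(bits)
        yield {p: {x: next(it) for x in DOM} for p in PREDS}

def minimal(m, models):
    """Pointwise minimality of ab1 and ab2 in the sense of Proposition 1."""
    for p, varied in VARIED.items():
        for x in DOM:
            if not m[p][x]:
                continue
            # a strictly smaller model agrees on every non-varied constant
            if any(not n[p][x]
                   and all(n[q] == m[q] for q in PREDS if q not in varied)
                   for n in models):
                return False
    return True

models = [m for m in interpretations() if satisfies(m)]
for m in models:
    if minimal(m, models):
        print({p: m[p] for p in ("ab1", "ab2")})
# Prints a single line: ab1 false everywhere, ab2 true exactly at B,
# in agreement with theorems (10).
```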
But in this example we have a very simple generating set: a finite set of first- order (actually, even universal) formulas. Finding a sim- ple generating set for a given circumscriptive theory is important, because the predicates Vpc play an auxiliary role, and we are primarily interested in V-free theorems. Methods for computing simple generating sets for some classes of circumscriptive theories based on the ideas of (Lifschitz 1985) will be p resented in the final version of the paper. 6. Policy Axioms The axioms of a circumscriptive theory which are not V-free will be called its policy axioms. The policy axioms used in Section 4 tell us that some of the propositions Vpc are true, but say nothing about the others. We could have included the negations of any of the remaining propositions Vpc in the list of axioms, and that would not have changed the set of V-free theo- rems of ‘7. This is a special case of the following theorem: Proposition 2. If V pc does not occur in the axioms of a circumscriptive theory T then the circumscriptive theory obtained from T by adding 1Vpc to the axiom set has the same V-free theorems as T. Thus only “positive” information about Vpc is es- sential. We will include only such information in axiom sets. The following notation is useful for specifying policy axioms. If M c P and C c 3 U P then V[M : C] stands for the conjunction APE-, cEc Vpc (expressing that the predicates and functions in C are varied when the predicates in M are minimized). For instance, axioms (5)-(g) can be written in this notation as The set of theorems of a circumscriptive theory T depends on the set of its axioms non-monotonically: some theorems of T may be lost if axioms are added to T. For instance, if we add the formula Tontable B to the axioms of the theory from Section 4 then the first of theorems (10) will be lost ( 1 g a on with its corollary ontable B). In this extended theory abl, like ab2, is equivalent to x = B. There is an important special case when adding an axiom makes the set of theorems bigger, as in first-order theories. We say that a policy axiom is pure if it contains no symbols from 3 U P. For instance, axioms (5)-(g) are pure policy axioms. Proposition 3. If a circumscriptive theory T2 is ob- tained from a circumscriptive theory Tl by adding pure policy axioms then (i) every model of T2 is a model of Tl, (ii) every theorem of Tl is a theorem of TX. Add, for instance, Vblock,&,& to the axiom set of the theory T from Section 4. The new axiom expresses our intention to minimize the predicate block. The new the- ory has some theorems that are not theorems of T, such as block x = x = B. But, according to Proposition 3, no theorems of T are lost. 8. Priorities In some cases, the axioms imply a “negative corre- lation” between two minimized predicates, so that one of them can be minimized only at the price of increasing the values of the other. It may be desirable to establish relative priorities between the tasks of minimizing such “conflicting” predicates (McCarthy 1986). For example, when circumscription is used for describing an inheritance hierarchy with exceptions, it may be necessary to assign a higher priority to minimizing exceptions to “more spe- cific” default information. In the formalism of this paper, assigning a higher priority to P E P than to Q E P can be expressed by the axiom V[P : Q] (i.e., T/p*). With this axiom, minimiza- tion guarantees that no change in the interpretation of Q would make it possible to change a value of P from true to false. 
Consider, for instance, the following facts about the ability of birds to fly (McCarthy 1986). Things in general, normally, cannot fly; birds normally can. But ostriches are birds which normally cannot fly. Symbolically, V[abl : abl, ontable], (11) lab1 x > Tflies x, (13) V[ab2 : ab2, white, red], (12) Tab2 x A bird x 3 flies x, (14) (we drop the braces around the elements of M and C). ostrich x > bird x, (15) Lifschitz 367 Tab3 x A ostrich x > Tflies x. We would like to get the theorems abl x G bird x A -ostrich x, (16) l(happened A A happened B). (22) The circumscriptive theory with axioms (19)-(22) plus the policy axioms V[abl : abl, happened], (23) ab2 x s ostrich x, (17) V[ab2 : ab2, happened], (24) ab3 x s false. Natural candidates for the policy axioms are V[abl : abl, flies], V[ab2 : ab2, flies], V[ab3 : ab3,fZiesl. (18) But these axioms do not lead to the desired result, be- cause there is a conflict between minimizing ab2 on the one hand and minimizing abl and ab3 on the other. This can be fixed by assigning different priorities to the abnor- mality predicates .* We assign the highest priority to ab3, because the corresponding axiom, (16), gives the “most specific” information, and the lowest priority is given to abl. Formally, (18) is replaced by the following set of policy axioms: V[abl : abl, flies], V[ub2 : abl, ab2, flies], V[ub3 : abl, ub2, ab3, f lies]. (18’) This makes the theory stronger (Proposition 3). The cir- cumscriptive theory with axioms (13)-( 16) and (18’) has the desired property: it is generated by formulas (17). 9. Reasoning about Priorities All policy axioms in the examples above are (con- junctions of) atoms. The use of more complex policy axioms allows us to formalize some forms of metalevel reasoning in the circumscriptive theory itself. Consider the following example. Imagine that we have two sources of information about the world, and that we assume by default that any event reported by any of the sources has in fact happened: Tab1 x A reported1 x > happened x, (19) -mb2 x A reported2 x > happened x. Two announcements made by different sources contradict has models of two kinds: in some of them happened A is true, in the others happened B. Giving a higher priority to one of the predicates ubl, ub2 would allow us to arrive at a definite conclusion about which event has actually happened. If, for instance, we consider the first source more reliable then we can add V[abl : ub2] to the axiom set. In the extended theory we can prove happened A and Ihappened B. The reasoning leading to the choice of a prioritiza- tion can be formalized in the following way. Using the propositional symbols pref erred1 and pref erred2, we can describe our approach to establishing priorities in this example by the axioms preferred1 > V[ubl : ub2], (25) preferred2 > V[ub2 : abl]. (26) In the theory with the axioms (19)-(26) we can prove pref erred1 > happened A, pref erred2 > happened B. Adding the axiom preferred1 would make happened A and 7 happened B provable. In this formulation, the choice of priorities is established by logical deduction. Acknowledgements I am grateful to Benjamin Grosof and John Mc- Carthy for comments and constructive criticism. References Lifschitz, V., Computing circumscription, Proc. IJCAI- 85 1, 1985, 121-127. Lifschitz, V., Pointwise circumscription: Preliminary re- port, Proc. AAAI-86 1, 1986, 406-410. 
McCarthy, J., Programs with common sense, in Proceedings of the Teddington Conference on the Mechanization of Thought Processes, Her Majesty's Stationery Office, London, 1960.

McCarthy, J., Circumscription - a form of non-monotonic reasoning, Artificial Intelligence 13 (1980), 27-39.

McCarthy, J., Applications of circumscription to formalizing commonsense knowledge, Artificial Intelligence 28 (1986), 89-118.

Perlis, D., Circumscribing with sets, Artificial Intelligence 31 (1987), 201-211.
Embracing Causality in Formal Reasoning*

Judea Pearl
Cognitive Systems Laboratory, UCLA Computer Science Department, Los Angeles, CA 90024

Abstract

The purpose of this note is to draw attention to certain aspects of causal reasoning which are pervasive in ordinary discourse yet, based on the author's scan of the literature, have not received due treatment by logical formalisms of common-sense reasoning. In a nutshell, it appears that almost every default rule falls into one of two categories: expectation-evoking or explanation-evoking. The former describes association among events in the outside world (e.g., Fire is typically accompanied by smoke.); the latter describes how we reason about the world (e.g., Smoke normally suggests fire.). This distinction is consistently recognized by people and serves as a tool for controlling the invocation of new default rules. This note questions the ability of formal systems to reflect common-sense inferences without acknowledging such distinction and outlines a way in which the flow of causation can be summoned within the formal framework of default logic.

Let A and B stand for the following propositions:

A -- Joe is over 7 years old.
B -- Joe can read and write.

Case 1: Consider a reasoning system with the default rule

defB: B → A.

A new fact now becomes available,

e1 -- Joe can recite passages from Shakespeare,

together with a new default rule:

def1: e1 → B.

Case 2: Consider a reasoning system with the same default rule, defB: B → A. A new fact now becomes available,

e2 -- Joe's father is a Professor of English,

together with a new default rule,

def2: e2 → B.

(To make def2 more plausible, one might add that Joe is known to be over 6 years old and is not a moron.)

Figure 1: the inference chains of Case 1 and Case 2. Both run through "Joe can read and write" (B) to "Joe is over 7 years old" (A); Case 1 starts from "Joe recites Shakespeare" (e1), Case 2 from "Joe's father is an English professor" (e2).

Common sense dictates that Case 1 should lead to conclusions opposite to those of Case 2. Learning that Joe can recite Shakespeare should evoke belief in Joe's reading ability (B) and, consequently, a correspondingly mature age (A). Learning of his father's profession, on the other hand, while still inspiring belief in Joe's reading ability, should NOT trigger the default rule B → A because it does not support the hypothesis that Joe is over 7. On the contrary: whatever evidence we had of Joe's literary skills could now be partially attributed to the specialty of his father rather than to Joe's natural state of development. Thus, if a belief were previously committed to A, and if measures of belief were permitted, it would not seem unreasonable that e2 would somewhat weaken the belief in A.

From a purely syntactic viewpoint, Case 1 is identical to Case 2. In both cases we have a new fact triggering B by default. Yet, in Case 1 we wish to encourage the invocation of B → A while, in Case 2, we wish to inhibit it. Can a default-based reasoning system distinguish between the two cases?

The advocates of existing systems may argue that the proper way of inhibiting A in Case 2 would be to employ a more elaborate default rule, where more exceptions are stated explicitly. For example, rather than B → A, the proper default rule should read:

B → A UNLESS e2.

Unfortunately, this cure is inadequate on two grounds. First, it requires that every default rule be burdened with an unmanageably large number of conceivable exceptions. Second, it misses the intent of the default rule defB: B → A, the primary aim of which was to evoke belief in A whenever the truth of B can be ascertained. Unfortunately, while correctly inhibiting A in Case 2, the UNLESS cure would also inhibit A in many other cases where it should be encouraged.

* This work was supported in part by the National Science Foundation, Grant DCR 8644931.
Second, it misses the intent of the default rule defB: B → A, the primary aim of which was to evoke belief in A whenever the truth of B can be ascertained. Unfortunately, while correctly inhibiting A in Case 2, the UNLESS cure would also inhibit A in many other cases where it should be encouraged. For example, suppose we actually test Joe's reading ability and find out that it is at the level of a 10-year-old child, unequivocally establishing the truth of B. Are we to suppress the natural conclusion that Joe is over 7 on the basis of his father being an English professor? There are many other conditions under which even a 5-year-old boy can be expected to acquire reading abilities; yet, these should not be treated as exceptions in the default-logical sense, because those same conducive conditions are also available to a seven-year-old and, consequently, they ought not to preclude the natural conclusion that a child with reading ability is, typically, over 7. They may lower, somewhat, our confidence in the conclusion but should not be allowed to totally and permanently suppress it.

To summarize, what we want is a mechanism that is sensitive to how B was established. If B is established by direct observation or by strong evidence supporting it (Case 1), the default rule B → A should be invoked. If, on the other hand, B was established by EXPECTATION, ANTICIPATION or PREDICTION (Case 2), then B → A should not be invoked, no matter how strong the expectation.

The asymmetry between expectation-evoking and explanation-evoking rules is not merely that of temporal ordering, but is more a product of human memory organization. For example, age evokes expectations of certain abilities not because it precedes them in time (in many cases it does not) but because the concept called "child of age 7" was chosen by the culture to warrant a name for a bona-fide frame, while those abilities were chosen as expectational slots in that frame. Similar asymmetries can be found in object-property, class-subclass and action-consequence relationships.

Consider the following two sentences:

1. Joe seemed unable to stand up; so, I believed he was injured.
2. Harry seemed injured; so, I believed he would be unable to stand up.

Any reasoning system that does not take into account the direction of causality or, at least, the source and mode by which beliefs are established is bound to conclude that Harry is as likely to be drunk as Joe. Our intuition, however, dictates that Joe is more likely to be drunk than Harry, because Harry's inability to stand up, the only indication for drunkenness mentioned in his case, is portrayed as an expectation-based property emanating from injury, and injury is a perfectly acceptable alternative to drunkenness. In Joe's case, on the other hand, not-standing-up is described as a primary property supported by direct observations, while injury is brought up as an explanatory property, inferred by default.

Note that the difference between Joe and Harry is not attributed to a difference in our confidence in their abilities to stand up. Harry will still appear less likely to be drunk than Joe when we rephrase the sentences to read:

1. Joe showed slight difficulties standing up; so, I believed he was injured.
2. Harry seemed injured; so, I was sure he would be unable to stand up.
Notice the important role played by the word "so." It clearly designates the preceding proposition as the primary source of belief in the proposition that follows. Natural languages contain many connectives for indicating how conclusions are reached (e.g., therefore, thus, on the other hand, nevertheless, etc.). Classical logic, as well as known versions of default logic, appears to stubbornly ignore this vital information by treating all believed facts and facts derived from other believed facts on equal footing. Whether beliefs are established by external means (e.g., noisy observations), by presumptuous expectations or by quest for explanation does not matter.

But even if we are convinced of the importance of the sources of one's belief, the question remains how to store and use such information. In the Bayesian analysis of belief networks [Pearl, 1986], this is accomplished using numerical parameters; each proposition is assigned two parameters, π and λ, one measuring its accrued causal support and the other its accrued evidential support. These parameters then play decisive roles in routing the impacts of new evidence throughout the network. For example, Harry's inability to stand up will accrue some causal support, emanating from injury, and zero evidential support, while Joe's story will entail the opposite support profile. As a result, having observed blood stains on the floor would contribute to a reduction in the overall belief that Joe is drunk but would not have any impact on the belief that Harry is drunk. Similarly, having found a whiskey bottle nearby would weaken the belief in Joe's injury but leave no impact on Harry's.

These inferences are in harmony with intuition. Harry's inability to stand up was a purely conjectural expectation based on his perceived injury, but it is unsupported by a confirmation of any of its own, distinct predictions. As such, it ought not to pass information between the frame of injury and the frame of drunkenness. The mental act of imagining the likely consequences of a hypothesis does not activate other, remotely related, hypotheses just because the latter could also cause the imagined consequence. For an extreme example, we would not interject the possibility of a lung cancer in the context of a car accident just because the two (accidents and cancer) could lead to the same eventual consequence -- death.

The causal/evidential support parameters are also instrumental in properly distributing the impact of newly-observed facts among those propositions which had predicted the observations. Normally, those propositions which generated strong prior expectations of the facts observed would receive the major share of the evidential support imparted by the observation. For example, having actually observed Harry unable to stand up would lend stronger support to Harry's injury than to Harry's drunkenness. Harry's injury, presumably supported by other indicators as well, provides strong predictive support for the observation, which Harry's drunkenness, unless it accrues additional credence, cannot "explain away."

Can a non-numeric logic capture and exploit these nuances? I think, to some degree, it can. True, it cannot accommodate the notions of "weak" and "strong" expectations, nor the notion of "accrued" support, but this limitation may not be too severe in some applications, e.g., one in which belief or disbelief in a proposition is triggered by just a few decisive justifications.
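The routing asymmetry carried by the two parameters can be caricatured in a few lines of code. The sketch below is only a toy illustration, not Pearl's propagation algorithm; the Node structure, the explains_away rule and all numbers are our own assumptions.

# A minimal caricature of the pi/lambda bookkeeping: a rival cause
# discounts competing explanations of an effect only in proportion to
# the effect's own evidential support.  All names and numbers here are
# illustrative assumptions, not part of the paper.

class Node:
    def __init__(self, name):
        self.name = name
        self.pi = 0.0    # accrued causal support (flows from perceived causes)
        self.lam = 0.0   # accrued evidential support (flows from observations)

def explains_away(effect, rival_cause_strength):
    """Amount by which new support for a rival cause of `effect`
    weakens the competing explanations of `effect`."""
    return effect.lam * rival_cause_strength

# Joe: "could not stand up" was directly observed -> evidential support.
joe_cant_stand = Node("joe_cant_stand")
joe_cant_stand.lam = 0.9

# Harry: "unable to stand up" was merely predicted from injury -> causal support.
harry_cant_stand = Node("harry_cant_stand")
harry_cant_stand.pi = 0.9

# A whiskey bottle supports the rival cause "drunk" with strength 0.5.
print(explains_away(joe_cant_stand, 0.5))    # 0.45: weakens Joe's "injury"
print(explains_away(harry_cant_stand, 0.5))  # 0.0 : leaves Harry's "injury" intact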
What we can still maintain, though, is an indication of how a given belief was established -- by expectational or evidential considerations, or both -- and we can use these indications for deciding which default rules can be activated in any given state of knowledge.

Let each default rule in the system be labeled as either C-def (connoting "causal") or E-def (connoting "evidential"). The former will be distinguished by the symbol →c, as in "FIRE →c SMOKE," meaning "FIRE causes SMOKE," and the latter by →e, as in "SMOKE →e FIRE," meaning "SMOKE is evidence for FIRE." Correspondingly, let each believed proposition be labeled by a distinguishing symbol, "→c" or "→e". A proposition P is E-believed, written →e P, if it is a direct consequence of some E-def rule. If, however, all known ways of establishing P involve a C-def rule as the final step, it is said to be C-believed, written →c P, i.e., supported solely by expectation or anticipation.

The semantics of the C-E distinction are captured by the following three inference rules:

(a) P →c Q, →c P ⟹ →c Q
(b) P →c Q, →e P ⟹ →c Q
(c) P →e Q, →e P ⟹ →e Q

Note that we purposely precluded the inference rule

P →e Q, →c P ⟹ →e Q,

which led to counter-intuitive conclusions in Case 2 of Joe's story. These inference rules imply that conclusions can only attain E-believed status by a chain of purely E-def rules. →c conclusions, on the other hand, may be obtained from a mix of C-def and E-def rules. For example, an E-def rule may (viz., (c)) yield a →e conclusion which can feed into a C-def rule (viz., (b)) and yield a →c conclusion. Note, also, that the three inference rules above would license the use of loops such as A →c B and B →e A without falling into the circular reasoning trap. Iterative application of these two rules would never cause a C-believed proposition to become E-believed, because at least one of the rules must be of type C.

The distinction between the two types of rules can be demonstrated using the following example (see Figure 2). Let P1, P2, Q, R1, and R2 stand for the propositions:

P1 -- "It rained last night"
P2 -- "The sprinkler was on last night"
Q -- "The grass is wet"
R1 -- "The grass is cold and shiny"
R2 -- "My shoes are wet"

[Figure 2. A causal network: P1 and P2 are potential causes of Q; R1 and R2 are consequences of Q.]

The causal relationships between these propositions would be written:

P1 →c Q,  Q →e P1
P2 →c Q,  Q →e P2
Q →c R1,  R1 →e Q
Q →c R2,  R2 →e Q

If Q is established by an E-def rule such as R1 →e Q, then it can trigger both P1 and R2. However, if Q is established merely by a C-def rule, say P2 →c Q, then it can trigger R2 (and R1) but not P1.

The essence of the causal asymmetry stems from the fact that two causes of a common consequence interact differently than two consequences of a common cause; the former COMPETE with each other, the latter SUPPORT each other. Moreover, the former interact when their connecting proposition is CONFIRMED, the latter interact only when their connecting proposition is UNCONFIRMED.

Let us see how this C-E system resolves the problem of Joe's age (see Fig. 1). defB and def1 will be classified as E-def rules, while def2 will be proclaimed a C-def rule. All provided facts (e.g., e1 and e2) will naturally be E-believed. In Case 1, B will become E-believed (via rule (c)) and, subsequently, after invoking defB in rule (c), A, too, will become E-believed.
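Rules (a)-(c), together with the precluded rule, lend themselves to a simple forward-chaining implementation. The sketch below is our own illustrative rendering (the rule-tuple syntax, the propagate helper and the label names are assumptions, not notation from the paper); it reproduces the two cases of Joe's story.

# Forward chaining over C-def ('c') and E-def ('e') rules.  Labels:
# 'e' = E-believed, 'c' = C-believed; E-status dominates C-status.

def propagate(facts, rules):
    """facts: atoms given as E-believed; rules: (premise, kind, conclusion)."""
    belief = {f: 'e' for f in facts}
    changed = True
    while changed:
        changed = False
        for p, kind, q in rules:
            if p not in belief:
                continue
            if kind == 'e' and belief[p] != 'e':
                continue                 # precluded rule: ->c P blocks P ->e Q
            new = 'c' if kind == 'c' else 'e'
            old = belief.get(q)
            if old is None or (old == 'c' and new == 'e'):
                belief[q] = new          # rules (a),(b) yield ->c; rule (c) yields ->e
                changed = True
    return belief

rules = [("e1", "e", "B"),   # def1: reciting Shakespeare is evidence of reading
         ("e2", "c", "B"),   # def2: a professor father merely predicts reading
         ("B",  "e", "A")]   # defB: reading is evidence of being over 7

print(propagate({"e1"}, rules))  # Case 1: {'e1': 'e', 'B': 'e', 'A': 'e'}
print(propagate({"e2"}, rules))  # Case 2: {'e2': 'e', 'B': 'c'} -- A undetermined

In Case 2 the proposition A never enters the belief set at all, mirroring the analysis in the text.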
In Case 2, however, B will only become C-believed (via rule (b)) and, as such, cannot invoke defB, leaving A undetermined, as expected.

The C-E system in itself does not solve the problem of retraction; that must be handled by the mechanism of exceptions. For example, if in Case 1 of Joe's story we are also told that

e3 -- "Joe is blind and always repeats what he hears,"

we should be inclined to retract our earlier conclusion that Joe can read and write, together with its derivative, that Joe is over 7 years old. However, the three inference rules above will not cause the negation of B unless we introduce e3 as an exception to def1, e.g., e1 →e B UNLESS e3. In the next section, we will touch on the prospects of implementing retraction without introducing exceptions.

E-believed status is clearly more powerful than C-believed status. The former can invoke both C-def and E-def rules, while the latter, no matter how strong the belief, invokes only C-def rules. The question may be raised whether one shouldn't dispose of this inferior, "C-rated" form of belief altogether and restrict a reasoning system to deal with beliefs based only on genuine evidential support.(1) The answer is that C-def rules, as weak as they sound, serve two functions essential for common-sensical reasoning: predictive planning and implicit censorship.

(1) In MYCIN [Shortliffe, 1976], for example, rules are actually restricted in this way, leading always from evidence to hypotheses.

Planning is based on the desire to achieve certain expectations which can be predicted from one's current knowledge. The role of C-def rules is to generate those predictions from current C-believed and E-believed propositions. For example, if we consider buying Joe a birthday gift and we must decide between a book or a TV game, it would obviously be worth asking if we believe Joe can read. Such belief will affect our decision even if it is based on inferior, "C-rated" default rules, such as "If person Z is over 7 years old, then Z can read" or, weaker yet, "If Z's father is an English professor, then Z can read." Prediction facilities are also essential in interpretive tasks such as language understanding, because they help explain the behavior of other planning agents around. Such facilities could be adequately served by the C-E system proposed earlier.

However, the prospect of using C-def rules as implicit censors of E-def rules is more intriguing, because it is pervasive even in purely inferential tasks (e.g., diagnosis) involving no actions or planning agents whatsoever. Consider the "frame problem" in the context of car-failure diagnosis with the E-def rule: "If the car does not start, the battery is probably dead." Obviously, there are many exceptions to this rule, e.g., "... unless the starter is burned," "... unless someone pulled the spark plugs," "... unless the gas tank is empty," etc., and, if any of these conditions is believed to be true, people would censor the invocation of alternative explanations for having a car-starting problem. What is equally obvious is that people do not store all these hypothetical conditions explicitly with each conceivable explanation of car-starting problems but treat them as unattached, implicit censors, namely, conditions which exert their influence only upon becoming actively believed and, when they do, would uniformly inhibit every E-def rule having "car not starting" as its sole antecedent.
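The censorship behavior falls out of the same label propagation. The toy run below repeats the propagate sketch from above in condensed form and contrasts the burned-starter case with the testimony case; all rule names are hypothetical, and, as the following paragraphs argue, a full treatment would license only strong C-def rules to censor.

def propagate(facts, rules):
    belief = {f: 'e' for f in facts}
    changed = True
    while changed:
        changed = False
        for p, kind, q in rules:
            if p not in belief or (kind == 'e' and belief[p] != 'e'):
                continue
            new = 'c' if kind == 'c' else 'e'
            old = belief.get(q)
            if old is None or (old == 'c' and new == 'e'):
                belief[q] = new
                changed = True
    return belief

rules = [("burned_starter", "c", "car_wont_start"),  # causal prediction
         ("wife_testifies", "e", "car_wont_start"),  # evidential report
         ("car_wont_start", "e", "dead_battery")]    # E-def explanation

print(propagate({"burned_starter"}, rules))
# car_wont_start is C-believed -> "dead_battery" is censored on the fly
print(propagate({"wife_testifies"}, rules))
# car_wont_start is E-believed -> "dead_battery" is evoked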
But if the list of censors is not prepared in advance, how do people distinguish a genuine censor from one in disguise (e.g., "I hear no motor sound")? I submit that it is the causal directionality of the censor-censored relationship which provides the identification criterion. By what other criterion could people discriminate between the censor "The starter is burned" and the candidate censor "My wife testifies, 'The car won't start'"? Either of these two inspires strong belief in "the car won't start" and "I'll be late for the meeting;" yet, the burned-out starter is licensed to censor the conclusion "the battery is dead," while my wife's testimony is licensed to evoke it. It is hard to see how implicit censorship could be realized, had people not been blessed with a clear distinction between explanation-evoking and expectation-evoking rules. So, why blur the distinction in formal reasoning systems?

Note how convenient such a censorship scheme would be. No longer would we need to prepare the name of each potential censor next to that of a would-be censored; the connection between the two will be formed "on the fly," once the censor becomes actively believed. The mere fact that a belief in a proposition B is established by some C-def rule would automatically and precisely block all the rules we wished censored. More ambitiously, it could also lead to retracting all conclusions drawn from premature activation of such rules, as in Truth-Maintenance Systems [Doyle, 1979]. True, to implement such a scheme we would need to label each believed proposition with the name of its (active) justifications and to augment our inference rules with instructions to correctly handle propositions which are both E-believed and C-believed. For example, Q could be C-believed due to P1 and later become E-believed due to R1, in which case (unlike purely E-believed propositions in inference rule (c)), no Q →e P2 rule should fire. However, this extra bookkeeping would be a meager price to pay for a facility that inhibits precisely those rules we wish inhibited and does so without circumscribing in advance under what conditions a given proposition would constitute an exception to any given rule. This is one of the computational benefits offered by the organizational instrument called causation and is fully realizable using Bayesian inference. Can it be mimicked in non-numeric systems as well?

Unfortunately, the benefit of implicit censorship is hindered by a more fundamental issue, and it is not clear how it might be realized in purely categorical systems which preclude any sort of symbols for representing the degree of support that a premise imparts to a conclusion. Treating all C-def rules as implicit censors would be inappropriate, as was demonstrated in the starting theme of this note. In Case 1 of Joe's story, we correctly felt uncomfortable letting his father's profession inhibit the E-def rule CAN-READ(JOE) →e OVER-7(JOE), while now we claim that certain facts (e.g., burned starter), by virtue of having such compelling predictive influence over other facts (e.g., car not starting), should be allowed to inhibit all E-def rules emanating from the realization of such predictions (e.g., dead battery). Apparently there is a sharp qualitative difference between strong C-def rules such as

NOT-IN(Z, SPARKPLUGS) →c WON'T-START(Z)

and weak C-def rules such as

ENGLISH-PROFESSOR(father(Z)) →c CAN-READ(Z)

or

IN(Z, OLD-SPARKPLUGS) →c WON'T-START(Z).
Strong C-def rules, if invoked, should inhibit all E-def rules emanating from their consequences. On the other hand, weak C-def rules should allow these E-def rules to fire (via rule (c)). This distinction is exactly the role played by the parameter π which, in Bayesian inference, measures the accrued strength of causal support. It is primarily due to this strong vs. weak distinction that Bayesian inference rarely leads to counter-intuitive conclusions, and this is also why it is advisable to consult Bayesian analysis as a standard for abstracting more refined logical systems which incorporate both degrees of belief and causal directionality. However, the purpose of this note is not to advocate the merits of numerical schemes but, rather, to emphasize the benefits we can draw from the distinction between causal and evidential default rules. It is quite feasible that with just a rough quantization of rule strength, the major computational benefits of causal reasoning could be tapped.

Conclusion

The distinction between C-believed and E-believed propositions allows us to properly discriminate between rules that should be invoked (e.g., Case 1 of Joe's story) and those that should not (e.g., Case 2 of Joe's story), without violating the original intention of the rule provider. While the full power of this distinction can, admittedly, be unleashed only in systems that are sensitive to the relative strength of the default rules, there is still a lot that causality can offer to systems lacking this sensitivity.

Acknowledgments

I thank H. Geffner, V. Lifschitz, D. McDermott, J. Minker, D. Perlis and C. Petrie for their comments on an earlier version of this paper.

References

[Doyle, 1979] Jon Doyle. "A Truth Maintenance System," Artificial Intelligence, 12:231-272, 1979.

[Pearl, 1986] Judea Pearl. "Fusion, Propagation and Structuring in Belief Networks," Artificial Intelligence, 29(3):241-288, September 1986.

[Shortliffe, 1976] Edward H. Shortliffe. Computer-Based Medical Consultation: MYCIN, Elsevier, 1976.
The Logic of Representing Dependencies by Directed Graphs*

Judea Pearl & Thomas Verma
Cognitive Systems Laboratory, UCLA Computer Science Department, Los Angeles, CA 90024-1600

*This work was supported in part by the National Science Foundation, Grant #DCR 85-01234.

Abstract

Data-dependencies of the type "x can tell us more about y given that we already know z" can be represented in various formalisms: Probabilistic Dependencies, Embedded Multi-Valued Dependencies, Undirected Graphs and Directed Acyclic Graphs (DAGs). This paper provides an axiomatic basis, called a semi-graphoid, which captures the structure common to all four types of dependencies and explores the expressive power of DAGs in representing various types of data dependencies. It is shown that DAGs can represent a richer set of dependencies than undirected graphs, that DAGs completely represent the closure of their specification bases, and that they offer an effective computational device for testing membership in that closure as well as inferring new dependencies from given inputs. These properties might explain the prevailing use of DAGs in causal reasoning and semantic nets.

The notion of relevance or informational dependency is basic to human reasoning. People tend to judge the 3-place relationships of mediated dependency (i.e., x influences y via z) with clarity, conviction and consistency. For example, knowing the departure time of the last bus is considered relevant for assessing how long we are about to wait for the next bus. Yet, once we learn the current whereabouts of the next bus, the former no longer provides useful information. These common-sensical judgments are issued qualitatively and reliably and are robust to the uncertainties which accompany the assessed events. Consequently, if one aspires to construct common-sensical reasoning systems, it is important that the language used for representing knowledge should facilitate a quick detection of mediated dependencies by a few primitive operations on the salient features of the representation scheme.

Making effective use of information about dependencies is a computational necessity, essential in any reasoning. If we have acquired a body of knowledge z and now wish to assess the truth of proposition x, it is important to know whether it would be worthwhile to consult another proposition y, which is not in z. In the absence of such information, an inference engine would spend precious time on derivations bearing no relevance to the task at hand. A similar necessity exists in truth maintenance systems. If we face a new piece of evidence, contradicting our previously held assumptions, we must retract some of these assumptions and, again, the need arises of quickly identifying those that are relevant to the contradiction discovered.

But how would relevance information be encoded in a symbolic system? Explicit encoding is clearly impractical because the number of (x, y, z) combinations needed for reasoning tasks is astronomical. Relevance or dependencies are relationships which change dynamically as a function of the information available at any given time. Acquiring new facts may destroy existing dependencies as well as create new ones. For example, learning a child's age destroys the dependency between the size of his shoes and his reading ability, while learning that a patient suffers from a given symptom creates new dependencies between the diseases that could account for the symptom. What logic would facilitate these two modes of reasoning?

B. Why Logic?
In probability theory, the notion of informational relevance is given precise quantitative underpinning using the device of conditional independence, which successfully captures our intuition about how dependencies should change with learning new facts. A variable x is said to be independent of y, given the information z, if

P(x, y | z) = P(x | z) P(y | z).   (1)

Clearly, x and y could be marginally dependent (i.e., dependent when z is unknown) and become independent given z, and, conversely, x and y could be marginally independent and become dependent only upon learning the value of z. These dynamics are also captured by the qualitative notion of Embedded Multivalued Dependencies (EMVD) in relational databases. Thus, in principle, probability and database theories could provide the machinery for identifying which propositions are relevant to each other in any given state of knowledge. Yet, it is flatly unreasonable to expect people or machines to resort to numerical equalities or relational tables in order to extract relevance information. Human behavior suggests that such information is inferred qualitatively from the organizational structure of human memory. Accordingly, it would be interesting to explore how assertions about relevance can be inferred qualitatively and whether assertions similar to those of probabilistic or database dependencies can be derived logically without references to numbers or tables. Preliminary work related to probabilistic dependencies has been reported in [Pearl and Paz, 1986] and is extended in this paper to the qualitative notion of EMVD.

Having a logic of dependency might be useful for testing whether a set of dependencies asserted by an expert is self-consistent and might also allow us to infer new dependencies from a given initial set of such relationships. However, such logic would not, in itself, guarantee that the inferences required would be computationally tractable or that any sequence of inferences would be psychologically meaningful, i.e., correlated with familiar mental steps taken by humans. To facilitate this latter feature, we must also make sure that most derivational steps in that logic correspond to simple local operations on structures depicting common-sensical associations. We call such structures dependency graphs.

The nodes in these graphs represent propositional variables, and the arcs represent local dependencies among conceptually-related propositions. Graph representations are perfectly suited for meeting the requirements of explicitness, saliency and stability, i.e., the links in the graph permit us to qualitatively encode dependence relationships, and the graph topology displays these relationships explicitly and preserves them, in fact, under any assignment of numerical parameters.

It is not surprising, therefore, that graphs constitute the most common metaphor for describing conceptual dependencies. Models for human memory are often portrayed in terms of associational graphs (e.g., semantic networks [Woods, 1975], constraint networks [Montanari, 1974], inference nets [Duda, Hart and Nilsson, 1976], conceptual dependencies [Schank, 1972] and conceptual graphs [Sowa, 1983]). Graph-related concepts are so entrenched in our language (e.g., "threads of thoughts," "lines of reasoning," "connected ideas," "far-fetched arguments," etc.)
that one wonders whether people can, in fact, reason any other way except by tracing links and arrows and paths in some mental representation of concepts and relations. Therefore, a natural question to ask is whether the intuitive notion of informational relevancy or the formal notions of probabilistic and database dependencies can be captured by graphical representation, in the sense that all dependencies and independencies in a given model would be deducible from the topological properties of some graph.

Despite the prevailing use of graphs as metaphors for communicating and reasoning about dependencies, the task of capturing dependencies by graphs is not at all trivial. When we deal with a phenomenon where the notion of neighborhood or connectedness is explicit (e.g., family relations, electronic circuits, communication networks, etc.), we have no problem configuring a graph which represents the main features of the phenomenon. However, in modeling conceptual relations such as causation, association and relevance, it is often hard to distinguish direct neighbors from indirect neighbors; so, the task of constructing a graph representation then becomes more delicate. The notion of conditional independence in probability theory provides a perfect example of such a task. For a given probability distribution P and any three variables x, y, z, while it is fairly easy to verify whether knowing z renders x independent of y, P does not dictate which variables should be regarded as direct neighbors. In other words, we are given the means to test whether any given element z intervenes in a relation between elements x and y, but it remains up to us to configure a graph that encodes these interventions. We shall see that some useful properties of dependencies and relevancies cannot be represented graphically, and the challenge remains to devise graphical schemes that minimize such deficiencies.

Ideally, we would like to represent dependency between elements by a path connecting their corresponding nodes in some graph G. Similarly, if the dependency between elements x and y is not direct and is mediated by a third element, z, we would like to display z as a node that intercepts the connection between x and y, i.e., z is a cutset separating x from y. This correspondence between conditional dependencies and cutset separation in undirected graphs forms the basis of the theory of Markov fields [Lauritzen, 1982], and has been given an axiomatic characterization in [Pearl and Paz, 1986].

The main weakness of undirected graphs stems from their inability to represent nontransitive dependencies; two independent variables will end up being connected if there exists some other variable that depends on both. As a result, many useful independencies remain unrepresented in the graph. To overcome this deficiency, one can employ directed graphs and use the arrow directionality to distinguish between dependencies in various contexts. For instance, if the sound of a bell is functionally determined by the outcomes of two coins, we will use the network coin1 → bell ← coin2, without connecting coin1 to coin2. This pattern of converging arrows is interpreted as stating that the outcomes of the two coins are normally independent but may become dependent upon knowing the outcome of the bell (or any external evidence bearing on that outcome).
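The coin/bell pattern can be verified by brute enumeration against the independence criterion of Eq. (1). The snippet below is a toy check of ours; the choice of an exclusive-or bell mechanism is an illustrative assumption.

# Two fair coins; the bell rings iff the coins disagree.
from itertools import product

joint = {(c1, c2, c1 ^ c2): 0.25 for c1, c2 in product([0, 1], repeat=2)}

def p(pred):
    return sum(pr for w, pr in joint.items() if pred(w))

# Marginally, the coins satisfy Eq. (1):
assert p(lambda w: w[0] == 1 and w[1] == 1) == \
       p(lambda w: w[0] == 1) * p(lambda w: w[1] == 1)

# Given that the bell rang, the coins become dependent:
p_bell = p(lambda w: w[2] == 1)
p_c1_given_bell = p(lambda w: w[0] == 1 and w[2] == 1) / p_bell
p_c1_given_bell_c2 = (p(lambda w: w[0] == 1 and w[1] == 1 and w[2] == 1)
                      / p(lambda w: w[1] == 1 and w[2] == 1))
print(p_c1_given_bell, p_c1_given_bell_c2)   # 0.5 vs 0.0 -- dependence induced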
This facility of directed graphs forms the basis of causal networks, which have a long tradition in the social sciences [Kenny, 1979] and have also been adopted for evidential reasoning tasks [Pearl, 1986].

This paper treats directed graphs as a language for expressing dependencies. Section II presents formal definitions for two models of data dependencies (Probabilistic and EMVD) and two models of graphical dependencies (undirected and directed). An axiomatic definition is then provided for a relational structure called a semi-graphoid which covers all four models, thus formalizing the general notion of mediated dependence. Section III compares the expressive power of directed graphs to that of undirected graphs and shows the superiority of the former. Section IV explores the power of directed graphs to cover data dependencies of the type produced by probabilistic or logical models. The main contribution of the paper lies in showing that directed acyclic graphs (DAGs) are powerful tools for encoding and inferring data dependencies of both types, identifying the source of that power, and highlighting its limitations.

Definition: A Dependency Model M over a set of objects U is any subset of triplets (X, Z, Y) where X, Y and Z are three disjoint subsets of U. The triplets in M represent independencies; that is, (X, Z, Y) ∈ M asserts that X and Y interact only via Z, or "X is independent of Y given Z". This statement is also written I(X, Z, Y), with an optional subscript to clarify the type of the dependency when necessary.

Definition: A Probabilistic Dependency model (PD) M_P is defined in terms of a probability distribution P over some set of variables U, i.e., a function mapping any instantiation of the variables in U to a non-negative real number such that the sum over the range of P is unity. If X, Y and Z are three subsets of U and x, y and z any instantiation of the variables in these subsets, then by definition

I(X, Z, Y)_P iff P(x | z, y) = P(x | z).   (2)

Definition: An Undirected Graph Dependency model (UGD) M_G is defined in terms of an undirected graph G. If X, Y and Z are three disjoint subsets of nodes in G, then by definition I(X, Z, Y)_G iff every path between nodes in X and Y contains at least one node in Z. In other words, Z is a cutset separating X from Y. A complete axiomatization of UGD is given in [Pearl and Paz, 1986].

Definition: A Directed Acyclic Graph Dependency model (DAGD) M_G is defined in terms of a directed acyclic graph (DAG) G. If X, Y and Z are three disjoint subsets of nodes in G, then by definition I(X, Z, Y)_G iff there is no bi-directed path from a node in X to a node in Y along which every node with converging arrows either is or has a descendant in Z, and every other node is outside Z.

The latter condition corresponds to ordinary cutset separation in undirected graphs, while the former conveys the idea that the inputs of any causal mechanism become dependent once the output is known. This criterion was called d-separation in [Pearl, 1986]. In Figure 1, for example, X = {2} and Y = {3} are d-separated by Z = {1} (i.e., (2, 1, 3) ∈ M_G) because knowing the common cause 1 renders its two possible consequences, 2 and 3, independent. However, X and Y are not d-separated by Z' = {1, 5}, because learning the value of the consequence 5 renders its causes 2 and 3 dependent, like opening a pathway along the converging arrows at 4.
[Figure 1. A DAG displaying d-separation: (2, 1, 3) ∈ M_G while (2, {1,5}, 3) ∉ M_G.]

The probabilistic definition (2) is equivalent to that given in (1) and conveys the idea that, once Z is fixed, knowing Y can no longer influence the probability of X [Dawid, 1979].

Definition: A dependency model M is said to be in PD, M ∈ PD, if there exists a probability distribution P such that the definition above (Eq. (2)) holds for every triplet (X, Z, Y) in M. Thus, PD (and, similarly, PD−, UGD, DAGD, and SG defined below) represents a class of dependency models, all sharing a common criterion for selecting triplets in M.

Definition: A Non-Extreme Probabilistic Dependency model (PD−) is any model M_P in PD where the range of P is restricted to the positive real numbers (i.e., excluding 0's and 1's).

Definition [Fagin, 1977]: An Embedded Multivalued Dependency model (EMVD) M_R is defined in terms of a database R over a set of attributes U, i.e., a set of tuples of values of the attributes. The notation <a1 a2 ... an> is conventionally used to denote that the tuple is in the relation R. If X, Y and Z are three disjoint subsets of U and x1, x2, y1, y2, z any instantiations of the corresponding attributes in X, Y and Z, then by definition

I(X, Z, Y)_R iff <x1 y1 z> & <x2 y2 z> ⟹ <x1 y2 z>.   (3)

In other words, the existence of the subtuples <x1 y1 z> and <x2 y2 z> guarantees the existence of <x1 y2 z>. EMVD is a powerful class of dependencies used in databases, and it conveys the idea that, once Z is fixed, knowing Y cannot further restrict the range of values permitted for X. This definition was also used by Shenoy and Shafer [1986] to devise a "qualitative" extension of probabilistic dependencies.

Definition: An I-map of a dependency model M is any model M' such that M' ⊆ M. For example, the undirected graph X--Z--Y is an I-map of the DAG X → Z ← Y.

Definition: A D-map of a dependency model M is any model M' such that M' ⊇ M. For example, if a relation R contains all tuples having non-zero probability in P, then M_R is a D-map of M_P.

Definition: A Perfect-map of a dependency model M is any model M' such that M' = M. For example, the undirected graph X--Z--Y is a perfect map of the DAG X → Z → Y.

We will be primarily interested in mapping data dependencies into graphical structures, where the task of testing connectedness is easier than that of testing membership in the original model M. A D-map guarantees that vertices found to be connected are, indeed, dependent; however, it may occasionally display dependent variables as separated vertices. An I-map works the opposite way: it guarantees that vertices found to be separated always correspond to genuinely independent variables, but does not guarantee that all those shown to be connected are, in fact, dependent.
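The d-separation criterion defined above is mechanically testable. The sketch below uses the standard reduction -- restrict to the ancestral subgraph, moralize, then test ordinary separation -- which is equivalent to the bi-directed-path criterion; the reduction is well-known material of the field rather than a construction from this paper, and the coding choices are ours.

def d_separated(parents, X, Y, Z):
    """parents: dict node -> set of parents (a DAG).  True iff I(X, Z, Y)."""
    # 1. Restrict attention to the ancestors of X | Y | Z.
    relevant, stack = set(), list(X | Y | Z)
    while stack:
        n = stack.pop()
        if n not in relevant:
            relevant.add(n)
            stack.extend(parents.get(n, ()))
    # 2. Moralize: connect co-parents, then drop arrow directions.
    adj = {n: set() for n in relevant}
    for n in relevant:
        ps = parents.get(n, set()) & relevant
        for p in ps:
            adj[n].add(p); adj[p].add(n)
        for p in ps:
            for q in ps:
                if p != q:
                    adj[p].add(q)
    # 3. Ordinary separation: no path from X to Y avoiding Z.
    seen, stack = set(), [n for n in X if n not in Z]
    while stack:
        n = stack.pop()
        if n in Y:
            return False
        if n not in seen:
            seen.add(n)
            stack.extend(m for m in adj[n] if m not in Z)
    return True

# The DAG of Figure 1: 1 -> 2, 1 -> 3, 2 -> 4, 3 -> 4, 4 -> 5.
dag = {1: set(), 2: {1}, 3: {1}, 4: {2, 3}, 5: {4}}
print(d_separated(dag, {2}, {3}, {1}))      # True:  (2, 1, 3) in M_G
print(d_separated(dag, {2}, {3}, {1, 5}))   # False: 5 opens the path at 4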
With the exception of UGD, none of the specialized dependency classes possesses complete parsimonious ax- iomatization similar to that of semi-graphoids. EMVD is known to be non-axiomatizable by a bounded set of Horn clauses [Parker, 19801, and a similar result has recently been reported for DAGD [Geiger, 19871. PD is conjectured to be equivalent to SC (i.e., M E PD <=> M E SG) but no proof (nor disproof) is in sight. Figure 2.Hierarchy among six classes of dependency models. Definition: Let M be a dependency model from some class 44 of dependency models. A subset B GM of triplets is a M-basis of M iff every model M’ E M which contains $3 -also contains M. Thus, a basis provides a complete encoding of the information contained in M; knowing B and M enables us, in principle, to decide what triplets belong to ~4. One of the main advantages of graphical representa- tions is that they posses extremely parsimonious bases and ex- tremely efficient procedures for testing membership in the cloy sures of these bases. For example, to encode all dependencies inferable from a given undirected graph G = (V, E) we need only specify the set of neighbors N(x) for each node x in G, and this corresponds to specifying a neighborhood basis: BN =i(x,N(x), U -x -N(x)),Vx E v) (6) Definition: A graphoid is any semi-graphoid M which is also closed under the following property: Intersection: (X,zY,W)h (X,zw,Y)E M =s(X,Z,Yw)E M (5) Testing membership of an arbitrary triplet (X , Z, Y) in the clo- sure of BN simply amounts to testing whether Z is a cutset of G separating the nodes in X from those in Y. It is straight forward to show that classes PD-, UGD, and DAGD are all graphoids. Only EMVDs and pure PDs do not comply to this axiom. DAGs also possess efficient bases; to encode all dependencies inferable from a given DAG G, we need only specify the parents PA(x) for each node x E G. To encode those in the form of a basis we arrange the nodes in any total - in UG, and permit the construction of graphical I-maps from The most important properties of graphoids [Pearl and local dependencies. By connecting each variable x to any sub- set of variables which renders x conditionally independent of Paz, 19861 are that they possess unique edge-minimal I-maps all other variables in U, we obtain a graph that is an I-map of u. Such local construction is not guaranteed for semi- graphoids. The reason this paper focuses on semi-graphoids is to include dependency models representing logical, functional and definitional constraints; such constraints are excluded from PD-. In Section IV, we will show that the use of DAG’s provides a local construction of I-maps for every semi- graphoid. The relationships between the six classes of depen- dency models are shown in the hierarchy of Figure 2, where order xi,. . . B =i(xi,PA(Xi),~Xxlt..., Xi-I]-PA(Xi)) I i=l,..., n), (7) ,x, consistent with the arrows of G and construct the stratified set of triplets: stating that PA (Xi) d-separates xi from its other predecessors. B is a DAG-basis of G since the closure of B coincides with the independencies displayed by G . One would normally expect that the introduction of direc- tionabty into the language of graphs would render them more expressive, capable of portraying a more refined set of depen- dencies, e.g., non-transitive. Thus, it is natural to ask: arrows stand for set inclusions. 1. Are all dependencies representable by undirected graphs also representable by a DAG? 2. 
One would normally expect that the introduction of directionality into the language of graphs would render them more expressive, capable of portraying a more refined set of dependencies, e.g., non-transitive ones. Thus, it is natural to ask:

1. Are all dependencies representable by undirected graphs also representable by a DAG?
2. How well can DAGs represent the type of data dependencies induced by probabilistic or logical models?

The second question will be treated in Section IV, while the answer to the first question is, clearly, negative. For instance, the dependency structure of the diamond-shaped graph of Fig. 3(a) asserts the two independencies I(A, BC, D) and I(B, AD, C). No DAG can express these two relationships simultaneously and exclusively. If we direct the arrows from A to D, we get I(A, BC, D) but not I(B, AD, C); if we direct the arrows from B to C, we get the latter but not the former. This limitation will always be encountered in non-chordal graphs, i.e., graphs containing a chordless cycle of length ≥ 4 [Tarjan & Yannakakis, 1984]; no matter how we direct the arrows, there will always be a pair of non-adjacent parents sharing a common child, a configuration which yields independence in undirected graphs but dependence in DAGs. This problem does not exist in chordal graphs and, consequently, we have

Theorem 1: UGD and DAGD intersect in a class of dependency models representable by chordal graphs.

Non-chordal graphs represent the one class of dependencies where undirected graphs exhibit expressiveness superior to that of DAGs. However, this superiority can be eliminated by the introduction of auxiliary variables. Consider the diamond-shaped graph of Figure 3(a). Introducing an auxiliary variable E in the manner shown in Figure 3(b) creates a DAG model on five variables which also asserts I(C, B, D).

[Figure 3. Expressing the dependencies of an undirected graph (a) by a DAG (c) using auxiliary nodes.]

If we "clamp" the auxiliary variable E at some fixed value E = e1, as in Figure 3(c), the dependency structure projected on A, B, C, D is identical to the original structure of Figure 3(a), i.e., I(A, BC, D) and I(B, AD, C). In general, since every arc C--D in an undirected graph is equivalent to the bi-directed path C → E ← D (with E "clamped"), we have:

Theorem 2: Every dependency model expressible by an undirected graph is also expressible by a DAG, with some auxiliary nodes.
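Chordality itself is mechanically testable in linear time. The sketch below implements the maximum-cardinality-search test of Tarjan and Yannakakis [1984] cited above -- a standard algorithm, with coding details that are our own -- and applies it to the diamond graph of Figure 3(a).

def is_chordal(adj):
    """adj: dict node -> set of neighbors (undirected)."""
    # Maximum-cardinality search: repeatedly number an unnumbered vertex
    # with the most already-numbered neighbors.
    order, numbered = [], set()
    while len(order) < len(adj):
        v = max((n for n in adj if n not in numbered),
                key=lambda n: len(adj[n] & numbered))
        order.append(v); numbered.add(v)
    pos = {v: i for i, v in enumerate(order)}
    # The graph is chordal iff the reverse of `order` is a perfect
    # elimination ordering: each vertex's earlier-numbered neighbors
    # must all be adjacent to the latest-numbered one among them.
    for v in order:
        earlier = {u for u in adj[v] if pos[u] < pos[v]}
        if earlier:
            m = max(earlier, key=pos.get)
            if not (earlier - {m}) <= adj[m]:
                return False
    return True

diamond = {'a': {'b', 'c'}, 'b': {'a', 'd'}, 'c': {'a', 'd'}, 'd': {'b', 'c'}}
print(is_chordal(diamond))                      # False: chordless 4-cycle
diamond['b'].add('c'); diamond['c'].add('b')    # add a chord
print(is_chordal(diamond))                      # True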
IV.

Suppose someone (e.g., an expert) provides us with a list L of positive and negative triplets, representing a set of independencies and dependencies in some (undisclosed) dependency model M of a known class. Several questions arise:

1. How can we test whether L is consistent and/or non-redundant?
2. How can we deduce all the implications of L or, at least, test whether a given triplet is logically implied by L?
3. What additional triplets are required to make the model completely specified?

These questions are extremely difficult to answer if M does not possess a convenient basis or if L does not coincide with that basis. Even in a neatly axiomatized system such as semi-graphoids, the answers to these questions involve intractable proof procedures. Graph representations can be harnessed to alleviate these difficulties; we construct a graph model G that entails L and draw inferences from G instead of L. The quality of inferences will depend, of course, on how faithfully G captures the closure of L. The following results (see [Verma, 1987] for proofs) uncover the unique powers of DAGs in performing this task.

Let U_θ(n) represent the set of elements smaller than n under some total ordering θ on the elements of U, i.e., {u ∈ U : θ(u) < θ(n)}.

Definition: A stratified protocol L_θ of a dependency model M is any set of pairs (x, S_x) such that

(x, S_x) ∈ L_θ ⟺ (x, S_x, U_θ(x) − S_x) ∈ M.   (9)

Intuitively, L_θ lists, for each x ∈ U, a set of predecessors S_x of x which renders x conditionally independent of all its other predecessors (in the order θ). In causal modeling, L_θ specifies the set of direct causes of event x. For example, the causal model of Figure 1 is specified by the protocol:

L_θ = {(1, ∅), (2, {1}), (3, {1}), (4, {2, 3}), (5, {4})}.

Stratified protocols were used in [Pearl, 1986] to construct DAG representations (called Bayesian Networks) of probabilistic dependencies by connecting the elements in S_x as direct parents of x. The following results justify this construction and generalize it to any semi-graphoid, including, in particular, the qualitative dependencies of EMVD.

The first result states that the DAG constructed in this fashion can faithfully be used to infer dependency information; any independence inferred from that DAG must be true in M and, furthermore, every independence which is implied by the protocol will be displayed in the DAG.

Theorem 3: If M is any semi-graphoid, then the DAG generated from any stratified protocol L_θ of M is an I-map of M.

Corollary: If L_θ is any stratified protocol of some dependency model M, the DAG generated from L_θ is a perfect map of the semi-graphoid closure of L_θ.

Another interesting corollary of Theorem 3 is a generalization of the celebrated Markov-chain property. It states (informally) that if in a sequence of variables X1, X2, ..., Xi, ... each Xi "shields" its successor Xi+1 from the influence of its predecessors, then each Xi is "shielded" from all other variables by its two nearest neighbors, Xi−1 and Xi+1. (The converse holds only in graphoids.) This property has been used extensively in probability theory, and Theorem 3 permits its application to qualitative dependencies as well.

Note that, since the topology of the DAG depends only on the set of child-parent pairs contained in the protocol, the order θ used in generating L_θ need not be known; Theorem 3 holds for any generating order, and the only consistency requirement on the structure of L is that {(y, x) : y ∈ S_x} constitutes a partial order.

The second result states that every independence in a semi-graphoid can be inferred from at least one stratified protocol.

Theorem 4: If M is any semi-graphoid, then the set of DAGs generated from all stratified protocols of M is a perfect map of M. (The criterion for separation relative to a set of DAGs is that d-separation must exist in at least one of the DAGs.)

Thus, even though every triplet in a stratified protocol asserts an independency relative to a singleton element, the sum total of such triplets is sufficient to encode all the set-to-set independencies embedded in the semi-graphoid.
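The construction is easy to exercise in code. The sketch below (our own illustration; the d_separated helper condenses the one given earlier) builds the DAG dictated by the stratified protocol of Figure 1 and checks a basis triplet of Eq. (7) together with an instance of the Markov-shielding corollary.

protocol = [(1, set()), (2, {1}), (3, {1}), (4, {2, 3}), (5, {4})]

# Each pair (x, S_x) makes S_x the parents of x; by Theorem 3 the
# resulting DAG is an I-map of any semi-graphoid admitting this protocol.
parents = {x: set(Sx) for x, Sx in protocol}

def d_separated(parents, X, Y, Z):
    relevant, stack = set(), list(X | Y | Z)
    while stack:
        n = stack.pop()
        if n not in relevant:
            relevant.add(n); stack.extend(parents.get(n, ()))
    adj = {n: set() for n in relevant}
    for n in relevant:
        ps = parents.get(n, set()) & relevant
        for p in ps:
            adj[n].add(p); adj[p].add(n)
        for p in ps:
            for q in ps:
                if p != q: adj[p].add(q)
    seen, stack = set(), [n for n in X if n not in Z]
    while stack:
        n = stack.pop()
        if n in Y: return False
        if n not in seen:
            seen.add(n); stack.extend(m for m in adj[n] if m not in Z)
    return True

# Basis triplet for x = 5: PA(5) = {4} d-separates 5 from {1, 2, 3}.
print(d_separated(parents, {5}, {1, 2, 3}, {4}))   # True
# Markov shielding: 4 is shielded from 1 by its parents {2, 3}.
print(d_separated(parents, {4}, {1}, {2, 3}))      # True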
This feature helps explain the pre- vailing use of DAGs in causal models and semantic nets. pawid, 19791 A.P. Dawid. Conditional Independence in Sta- tistical Theory, J.R. Statist.B., 41 (l):l-33, 1979. [Duda et al., 19761 Richard.0. Duda, Peter.E. Hart and Ni1s.J. Nilsson. Subjective Bayesian IMethods for Rule-Based Inference Systems, Proceedings, 1976 National Computer Conference (AFIPS Conference Proceedings), 45: 1075- 1082,1976. [Fagin, 19771 Ronald Fagin. Multivalued Dependencies and a New Form for Relational Databases, ACM Transactions on Database Systems, 2, 3:262-278, September 1977. [Geiger, 19871 Daniel Geiger. The Non-Axiomatizability of Dependencies in Directed Acyclic Graphs, Technical Report R-83, Cognitive Systems Laboratory, UCLA, 1987. [Kenny, 19791 David A. Kenny. Correlation and Causality. John Wiley and Sons, 1979. [Lauritzen, 19821 S. L. Lauritzen. Lectures on Contingency Tables, University of Aalborg, Aalborg, Denmark, 1982. [Montanari, 19741 Ugo Montanari. Networks of Constraints, Information Science, 7:95- 132, 1974. [Parker and Parsay, 19801 Stott Parker and Kamran Parsay. Inferences Involving Embedded Multivalued Dependencies and Transitive Dependencies, In Proceedings International Conference on Management of Data (ACM-SIGMOD), pages 52-57, 1980. [Pearl, 19861 Judea Pearl. Fusion, Propagation and Structur- ing in Belief Networks, Artificial Intelligence, 29, (3):241- 288, September 1986. [Pearl and Paz, 19861 Judea Pearl and Azaria Paz. GRA- PHOIDS: A Graph-based Logic for Reasoning about Relevance Relations, In Proceedings, ECAI-86, Brighton, United Kingdom, June 1986. [Schank, 19721 Roger Schank. Conceptual Dependency: A Theory of Natural Language Understanding, Cognitive Psychology, 3(4), 1972. [Shenoy and Shafer, 19861 Prakash P. Shenoy and Glen Shafer. Propagating Belief Functions with Local Computa- tions” IEEE Expert, 1(3):43-52, 1986. [Sowa, 19831 John F. Sowa. Conceptual Structures: Infor- mation Processing in Mind and Machine, Addison-Wesley, Reading, Massachusetts, 1983. [Tatjan and Yannakakis, 19841 Robert. E. Tarjan and M. Yau- nakakis. Simple Linear-Time Algorithms to Test Chordal@ of Graphs, Test Acyclic@ of Hypergraphs and Selectively Reduce Acyclic Hypergraphs,” SIAM J. Computing, 13:566-579, 1984. [Ve=% 19871 Thomas S. Verma. Causal Networks: Seman- tics and Expressiveness, Technical Report R-65, Cognitive Systems Laboratory, UCLA, 1987. woods, 19751 William A. Woods. What’s in a Link? Foun- dations for Semantic Network, Bobrow and Collins (Eds.), Representation and Understanding, Academic Press, 1975. Pearl and Verma 379
Chern H. Seet
Information Technology Institute, National Computer Board
71 Science Park Drive, Singapore 0511

Abstract

The thesis of this paper is that default reasoning can be accomplished rather naturally if an appropriate strategy of belief revision is employed. The idea is based on the premise that new beliefs introduced into a situation change the structure of current beliefs to accommodate the new beliefs as exceptions. It is easy to characterise these exceptions in beliefs if we extend the belief language to include some modal operator and prefix the exceptions with the operator. This serves to make the exceptions syntactically explicit, which can then be processed in a routine way by a default reasoning theorem prover.

I. Introduction

Default reasoning tries to model the phenomenon of human reasoning that makes us jump to conclusions that are typical of what we know. Paraphrasing a classical example, a case of default reasoning is: given that Tweety is a bird, and that birds typically fly, we are led to conclude that Tweety can fly in the absence of evidence of an exception, such as the fact that Tweety might be a penguin. There are many approaches that have been taken to formalize default reasoning, among which are the following three: the nonmonotonic logic of [McDermott and Doyle, 1980], the default logic of [Reiter, 1980], and circumscription [McCarthy, 1980].

Belief revision basically concerns the maintenance of a knowledge base to reflect changes made to it. Some major works in this field are the truth (or reason) maintenance systems of [Doyle, 1979] and [de Kleer, 1986], where the emphasis is on justifying beliefs. In this paper, our emphasis is on devising a strategy of modifying beliefs (given as a set of sentences) syntactically in a way that will support default reasoning.

The thesis of this paper is that default reasoning can [...] that new beliefs introduced [...] change the structure of a situation's (or world's) beliefs to accommodate [...]. In other approaches, exceptions are not directly derivable from defaults in the set of sentences denoting [...]. A broad description of our approach [...]

[...] in the previous world. [...] We now turn our attention to the belief language and how default reasoning is carried out. (In what follows, ⊢ denotes ordinary provability, ∘⊢ denotes the default provability defined by the strategy below, and ⊬ and ∘⊬ are their negations.)

[...]

3. A □-term □p that has no more free variables is removed from the clause if, by "spawning" another resolution refutation process to prove p from B, the proof fails. If □p was the last item in the clause and the proof failed, then as usual the resolvent is the empty clause, which denotes the success of the refutation.

The rationale for this strategy is that we try to refute terms like □p, i.e., disprove B ⊢ p, in the same way that we try to refute ordinary literals through unification. Hence the □-terms may be viewed as "literals" where the "unification" process is the spawning of a separate resolution refutation process.

Example: The resolvent ∃y(Qay ∧ □¬Ryb) ∨ Rab obtained above can be reduced to Rab if we can refute B ⊢ ∃y(Qay ∧ ¬Ryb).

Example: Let B be these beliefs:

Bird(x) ∧ ¬□Penguin(x) → Fly(x)
Penguin(x) → ¬Fly(x)
Penguin(x) → Bird(x)
Bird(Tweety)
Penguin(Penny)

Then strict(B) is the same theory but with the first clause replaced by Bird(x) ∧ ¬Penguin(x) → Fly(x).
We then have the following (free variables assumed universally quantified):

(a) B ⊢ Bird(x) ∧ ¬Penguin(x) → Fly(x) -- birds that are not penguins can fly.
(b) B ⊬ Bird(x) → Fly(x) -- we cannot conclude that all birds can fly, or, strictly speaking, not all birds can fly.
(c) B ∘⊢ Bird(x) → Fly(x) -- by default, all birds can fly, or, generally speaking, all birds can fly.
(d) B ⊬ Fly(Tweety) -- strictly speaking, we cannot conclude that Tweety can fly.
(e) B ∘⊢ Fly(Tweety) -- by default (or probably), Tweety can fly.
(f) B ∘⊬ Fly(Penny) -- we cannot conclude that by default Penny can fly.
(g) B ∘⊬ Fly(Chirpy) -- we cannot conclude that by default Chirpy can fly.

We now explain the derivation of those cases above involving ∘⊢.

Case (c). To prove B ∘⊢ Bird(x) → Fly(x) by resolution refutation, we resolve the negated goal Bird(h) and ¬Fly(h) (where h is a skolem constant) with the first clause of B to get the resolvent □Penguin(h). Now we spawn a resolution refutation process to refute B ⊢ Penguin(h) (note that the skolem constant h is distinct from Penny). The spawned proof fails, so the resolvent □Penguin(h) is reduced to the empty clause, and we have thus proved B ∘⊢ Bird(x) → Fly(x).

Case (e). To prove B ∘⊢ Fly(Tweety), we resolve ¬Fly(Tweety) with the first clause of B to get ¬Bird(Tweety) ∨ □Penguin(Tweety). [...]

[...] Hence B ∘⊬ Fly(Penny).

Case (g). Similarly to the above two cases, we encounter the resolvent ¬Bird(Chirpy) ∨ □Penguin(Chirpy). This reduces to ¬Bird(Chirpy), since we can refute B ⊢ Penguin(Chirpy). But we cannot refute ¬Bird(Chirpy). Hence B ∘⊬ Fly(Chirpy).

In Section I, we mentioned that belief revision occurs when the new belief β is inconsistent with the current beliefs B_C. In the context of ∘⊢, belief revision occurs if B ∪ {β} ∘⊢ ¬φ for some formula φ. We shall remark on a more relaxed policy for belief revision later; first we need some definitions. The unification condition of [...]: the unification condition of ¬Rab and Rxy is (x = a ∧ b = y).

For the new belief β = [...]

[...]

¬Bird(x) ∨ Fly(x) ∨ □Penguin(x)
¬Ostrich(x) ∨ ¬Fly(x) ∨ □Superbreed(x)
Ostrich(Ossie)
¬Superbreed(x) ∨ Fly(x)

[...] allows us in the future world to deduce that ostriches do not fly, B_F ∘⊢ Ostrich(x) → ¬Fly(x), unless they are superbreed, which can fly: B_F ⊢ Superbreed(x) → Fly(x).

The second point is that the strategy has an "undo" property. Suppose B_C ⊬ Ostrich(x) → ¬Fly(x), i.e., we [...] from B_C our past belief that, strictly speaking, ostriches cannot fly. Now, B_F consists of

¬Bird(x) ∨ Fly(x) ∨ □Penguin(x) ∨ □Superbreed(x)
¬Ostrich(x) ∨ ¬Fly(x) ∨ □Superbreed(x)
Ostrich(Ossie)
¬Superbreed(x) ∨ Fly(x) ∨ □Superbreed(x)
¬Superbreed(x) ∨ ¬Fly(x)

where the last clause is β, and the first and fourth clauses are weakened from the corresponding clauses in B_C. It can be shown that B_F ⊢ Ostrich(x) → ¬Fly(x), i.e., we have recovered our past belief that, strictly speaking, ostriches cannot fly.

Our approach invites comparison with the modal operator M of nonmonotonic logic, McCarthy's approach of designating a predicate to stand for abnormal circumstances (circumscription), and Reiter's default logic. The fundamental difference is that in our approach, the exceptions to defaults are syntactically explicit.

My thanks to the two referees for their useful critique of an earlier draft.
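The prover strategy can be caricatured in a propositional toy. The sketch below is our own drastic simplification -- propositional Horn rules instead of first-order resolution, and strict provability approximated by ignoring every clause that carries a box-term -- but it reproduces the behavior of cases (e), (f) and (g) of the example.

# Rules: (head, positive body atoms, boxed exception atoms).
# A boxed exception box(p) is discharged iff p is NOT strictly provable.

def strict(B, goal, seen=frozenset()):
    """Strict provability: clauses carrying box-terms are unusable
    (an approximation of the paper's strict(B))."""
    if goal in seen:
        return False
    for head, pos, boxed in B:
        if head == goal and not boxed and \
                all(strict(B, p, seen | {goal}) for p in pos):
            return True
    return False

def default(B, goal, seen=frozenset()):
    """Default provability (the paper's 'o |-'): a box-term is removed
    when the spawned strict proof of its atom fails."""
    if goal in seen:
        return False
    for head, pos, boxed in B:
        if (head == goal
                and all(default(B, p, seen | {goal}) for p in pos)
                and all(not strict(B, q) for q in boxed)):
            return True
    return False

B = [("fly_tweety",  ["bird_tweety"],    ["penguin_tweety"]),
     ("fly_penny",   ["bird_penny"],     ["penguin_penny"]),
     ("bird_tweety", [],                 []),
     ("bird_penny",  ["penguin_penny"],  []),
     ("penguin_penny", [],               [])]

print(default(B, "fly_tweety"))   # True : no proof that Tweety is a penguin
print(default(B, "fly_penny"))    # False: Penny is provably a penguin
print(default(B, "fly_chirpy"))   # False: Chirpy is not even known to be a bird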
Wlodek W. Zadrozny
IBM T. J. Watson Research Center, P.O. Box 704, Yorktown Heights, NY 10598

We propose a theory of default reasoning satisfying a list of natural postulates. These postulates imply that knowledge bases containing defaults should be understood not as sets of formulas (rules and facts) but as collections of partially ordered theories. As a result of this shift of perspective we obtain a rather natural theory of default reasoning in which priorities in interpretation of predicates are the source of nonmonotonicity in reasoning. We also prove that our theory shares a number of desirable properties (completeness, soundness, etc.) with the theory of normal defaults of R. Reiter. We limit our discussion to logical properties of the proposed system and prove some theorems about it.

Modal operators or second order formulas do not appear in our formalization. Instead, we augment the usual, two-part logical structures consisting of a metalevel and an object level, with a third level -- a referential level. The referential level is a partially ordered collection of defaults; it contains the more permanent part of a knowledge base. Current situations are described on the object level. The metalevel is a place for rules that can eliminate some of the models permitted by the object level and the referential level.

We begin by introducing and justifying a list of five postulates we believe a theory of default reasoning should satisfy.

THE POSTULATES:

(D1) A theory of default reasoning should take into account the fact that predicates have different interpretations in different situations. The number of such interpretations is potentially infinite, but not all of them are equally plausible.

(D2) The structure of defaults should be compatible with a hierarchical organization of a knowledge base. In particular, it should admit inheritance of properties and exception handling.

(D3) The structure of defaults should allow existence of coarse and subtle versions of the same problem; the passage from coarse to subtle versions should be possible by effectively computable rules.

(D4) The theory should distinguish between local and global consistency of a knowledge base. This means it should postulate a structure of defaults such that an inconsistency does not imply any formula.

(D5) Interpretations of data should be effectively computable.

These postulates are natural. We argue briefly for D1-D4, and then discuss effective computability (D5).

We take an interpretation (or a meaning) of a fact to be the set of its logical consequences in a certain context. Since we want to investigate logical properties of default reasoning, we naturally assume a context to be given as a collection of formulae (i.e., a theory). Then D1 should be assumed, since defaults, which are supposed to express what is normal in a given situation, are not all equally plausible. (Cf. also the arguments of D. Marr, 1977; and Reiter, 1980, p. 130.)

D2: A hierarchical organization and inheritance of properties make knowledge representation systems more efficient. It is also recognized that any general rule must have exceptions. Since standard logic does not provide means of expressing exceptions efficiently, nonmonotonic mechanisms have been proposed to deal with this problem.

D. Marr (1977, and 1982, pp. 335-361) argued for D3.
We believe that the coarse and subtle versions of the same problem should depend not only on syntactic or efficiency considerations like the number of resolution steps or depth of search, but also on semantic properties, like plausibility or importance. That is, a coarse version should have fewer facts than a subtle one, but it should have the important facts.

Effective computability

A minimal formal assumption assuring effective computability of default conclusions is

A Principle of Finitism: All considered theories have finite models.

This is not a radical postulate, because (a) it is possible to base a semantics of a large fragment of natural language on finite Herbrand models (cf. Kamp, 1981); (b) Ehrenfeucht et al. (1972) prove that if a first order sentence is classically consistent, then it has a *-model whose domain is finite. Also, the theory FIN of Mycielski (1981) is strong enough to develop mathematical analysis, yet each finite part of it has finite models. (c) P.N. Johnson-Laird's (1980, 1983) mental models are finite structures.

Finiteness of the universes makes default provability decidable. Intractability of classical implication means that finding an interpretation of a fact cannot depend on all the facts and all the defaults. Hence further restrictions on default theories are needed if we want to have a practicable theory of reasoning. We believe that two ideas may prove useful: the "vivid representations" of Levesque (1986), and restrictions on the expressive power of the language in which defaults and object theories are formulated (cf. Levesque, 1984; Levesque and Brachman, 1985; Frisch, 1985; Patel-Schneider, 1985).

Current theories of default reasoning

The nonmonotonicity of default reasoning is usually captured by extending the set of inference rules of classical logic. This is true for the standard formalizations of default reasoning: the nonmonotonic logic of J. Doyle and D. McDermott (1980) (cf. also McDermott, 1982), the logic of default reasoning of R. Reiter (1980), and the circumscription of J. McCarthy (1980, 1986). To different degrees, postulates D1 and D2 are satisfied in all these systems. Circumscription, for instance, makes it possible to represent exceptions by declaring them abnormal; minimizing abnormality has the effect of saying "the general rule is correct, except for these special cases". Touretzky (1984) argues that the default logic of Reiter cannot handle exceptions in a proper way. Etherington (1987) argues in the other direction.

Effective computability (D5) could be addressed in these systems by restricting the classes of formulae dealt with, e.g. to universal, function-free sentences. It would be difficult, however, to express in any of these systems the fact that one default is more plausible than another (D1). Similarly, we do not see any natural extensions of these systems which would allow a distinction between local and global inconsistency (postulate D4). Neither do we see how the semantic distinctions between coarse and subtle versions (D3) could be incorporated into them. In effect, we conclude that there is no clear way of extending the discussed default logics to satisfy D1-D5.

We plan now to derive a model theoretical structure of defaults from the postulates D1-D5. We do this explicitly in a series of observations and conclusions. The conclusions make D1-D5 more precise.
We don’t main- tain however that they are the only ones possible to draw. The arguments for D4 (cf. Levesque, 1984) imply that a large knowledge base cannot be considered as a col- lection of facts and rules. An intuitively appealing alter- native is then to treat large bodies of knowledge as collections of theories . Each theory should be consistent, but they may contradict each other. Conclusion ries. 1. Knowledge bases are collections of theo- In other words, instead of KB c Sent we have KB c @( Sent ) ; KB means ‘a knowledge base’ , Sent stands for all sentences in a given formal language, ZP is the standard powerset operator . After this change a knowledge base KB may be only locally consistent, i.e. all its elements (the subtheories) are consistent, and it doesn’t have to be consistent as a whole. But the diffi- culty is now in deciding when a theory should apply to a situation. Moreover we need a definition of derivability, i.e. of the meaning of KB C 9. Butweknow already that such a provability relation must be nonmonotonic. Let’s make an analysis of defaults then. Whatever they are, (by Dl) they are not equally plausible. But, if a de- fault dl is more plausible than d2 then d2 is not as plau- sible as dl. Also, plausibility is transitive. Thus A plausibility relation on defaults is a Par- Conclusion 2. tial ordering. We argued that knowledge bases should be considered sets of theories. But a description of a situation is a the- ory. This difference in set theoretical types is one of log- ical reasons to separate the level of object theory from the level of reference, which is a collection of theories that constitute a more permanent part of an agent’s body of knowledge. Then the only place for defaults can be on the referential level. Notice that defaults should not be a part of a metalevel, since the metalevel formalizes knowledge about knowledge - autoepistemic knowledge, for instance. Defaults work not because they are about what is known, but because predicates expressing know- ledge about a current situation actually refer to them as to a background information and use them to eliminate logically possible but implausible interpretations (cf. Doyle, 1985). Conchdsion 3. There exists a separate logical level - & referential level - which contains a relatively permanent part of knowledge in the form of a partially ordered col- lection of theories. Defaults constitute this level. We are now in a position to give a technical definition of defaults. The function of a default is to provide addi- tional, but often only conjectural information. We express this function by assuming that, for a formula 4, a default is a theory T+ , which can be added to a logical de- scription of a current situation whenever 9 appears. This is expressed as 9 + T+ - From this and Con- clusions 1 - 3 we get: 386 Default Reasoning DEFINITION. A referentiai level (or - a referential ode2 ) is a structure = f ( , < + ) : $ f Formulae ) where, for each 1c, , <+ is a partially ordered ( by a relation of plausibility) collection of defaults (i.e. of $ + T+ ‘s ) for $ . We assume also that all sentences have the least preferred empty inteyetation 0 . We also suppose that interpretations are additionally ranked according to the canonical partial ordering on subformulas. 
This provides a natural method of dealing with exceptions, as in the case of finding an interpretation of φ & ψ with R containing (φ & ψ) → (¬γ) and ψ → (γ), where ¬γ would be preferred to γ - if both are consistent, and both defaults are equally preferred.

The following set of theories may be a part of a referential level. It is easily seen that R is only locally consistent. We will use this example later to explain the notions of a default proof and a default model.

ψ → Tψ:
  adult(x) → employed(x) & married(x)                               (a1)
  adult(x) & ¬employed(x) → dropout(x)                              (a2)
  adult(x) & ¬employed(x) → student(x)                              (a3)
  adult(x) & ¬employed(x) → ¬has(x,car)                             (a4)
  employed(x) → adult(x) & has(x,car)                               (e1)
  employed(x) → taxpayer(x)                                         (e2)
  dropout(x) → ¬student(x)                                          (d1)
  student(x) → ¬employed(x) & adult(x) & ¬married(x) & ¬dropout(x)  (s1)
  student(x) → employed(x) & married(x) & ¬dropout(x)               (s2)

The partial ordering is given by the figure below; one should also remember that we have supposed that special cases are preferred to general rules.

[Figure: Hasse diagram of the plausibility ordering - a1, a2, a4 on top; e1, s1, s2 below them; e2 and the empty interpretations ∅ at the bottom.]

Default models and proofs

In this section we continue the development of a theory of default reasoning that satisfies postulates D1-D5. We define the notion of a model (extension), and proof procedures for deciding whether a formula is a consequence of a system of defaults.

We have already discussed the structure and ontology of defaults. In effect we have decided to augment the usual, two-part logical structures consisting of a metalevel and an object level, with a third level - a referential level. The referential level is a collection of defaults. Thus instead of formal structures of the form (M, T, ⊢), where M is a metarule (e.g. M = "formula circumscription"), T is an object theory to which M is applicable (some "simple abnormality theory", for instance), and ⊢ is a provability relation which possibly extends classical provability by using M, we consider structures that in addition contain a referential level R of partially ordered defaults.

We follow the exposition of Reiter (1980) since there are some similar features in both systems, and from now on we will abbreviate his logic as RDL.

To define a semantics of default models we need some logical notions:

DEFINITION.
- A theory is a finite conjunction (or - equivalently - a finite set) of formulae.
- A deductive closure operator is a function Th : 𝒫(Sent) → 𝒫(Sent) such that (a) T ⊆ Th(T), for any T; (b) Th(Th(T)) = Th(T); (c) Th(T) is finite, for finite T.
- A theory T is consistent if there is no formula φ such that both φ and ¬φ belong to Th(T).

We do not require Th(T) to be closed under modus ponens and substitution instances of tautologies. This allows us to consider deductive closures with respect to nonstandard logics (cf. Levesque, 1984; Frisch, 1985; Patel-Schneider, 1985). Moreover, since we are interested only in theories which have finite models, the deductive closure of a first order theory can be identified with the ground disjunctions which are provable in this theory; and up to subsumption there are only finitely many of these.

DEFINITION. Let T be an object theory, R a set of partially ordered defaults. A consistent theory M is an extension of T if

1. T ⊆ M.
2. If ψ ∈ M, r ∈ R, r = (ψ, <ψ), and ψ → Tψ is a most preferred element of <ψ consistent with M, then ψ & Tψ ∈ M. (In other words, if a most preferred piece of information about a formula ψ is consistent with M, then it must have been already assumed.)
3. M is deductively closed.
4. No subtheory of M satisfies 1-3. (This assumption isn't really necessary, but it allows us to eliminate complicated, and interesting, situations in which some default information cannot be used because of a method of representing facts in T or R.)
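To make the machinery concrete, here is a small executable rendering of part of this referential level. It is only a sketch under simplifying assumptions of ours: formulas are flattened to ground literals, "consistency" is the absence of complementary literals (a crude stand-in for Th), and each <ψ is linearized to a total order, so incomparable defaults such as a2 and a4 are not treated faithfully.

```python
# Toy rendering of the referential level above.  A "theory" is a set of
# ground literals (predicate, truth-value); DEFAULTS maps a trigger literal
# to its defaults, listed from most to least preferred (a total order here,
# although the paper only requires a partial one).

DEFAULTS = {
    ("adult", True): [
        ("a1", {("employed", True), ("married", True)}),
    ],
    ("employed", False): [
        ("a2", {("dropout", True)}),
        ("a3", {("student", True)}),
        ("a4", {("has_car", False)}),
    ],
    ("employed", True): [
        ("e1", {("adult", True), ("has_car", True)}),
        ("e2", {("taxpayer", True)}),
    ],
    ("dropout", True): [
        ("d1", {("student", False)}),
    ],
    ("student", True): [
        ("s1", {("employed", False), ("adult", True),
                ("married", False), ("dropout", False)}),
        ("s2", {("employed", True), ("married", True),
                ("dropout", False)}),
    ],
}

def consistent(theory):
    return not any((p, not v) in theory for (p, v) in theory)

def extend(theory):
    """Clause 2 of the extension definition, under the total-order
    simplification: for each literal, add the most preferred default
    whose consequent keeps the theory consistent."""
    out = set(theory)
    for lit in sorted(theory):
        for _label, consequent in DEFAULTS.get(lit, []):
            if consistent(out | consequent):
                out |= consequent
                break
    return out

def closure(theory):
    cur = set(theory)
    while True:
        nxt = extend(cur)
        if nxt == cur:
            return cur
        cur = nxt

U = {("adult", True), ("employed", False)}
print(sorted(closure(U)))
# a1 is blocked (it contradicts ~employed); a2 and then d1 fire, giving
# {adult, ~employed, dropout, ~student}
```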
( This assumption isn’t really necessary, but it allows us to eliminate complicated, and interesting, situ- ations in which some default information cannot be used because of a method of representing facts in T orR. ) ]it is easily seen &at the definition is similar to this one of Reiter, except that in our case defaults have to be chosen according to the partial orderings. Also, a default is not applied if it leads to an inconsistency. This allows us to obtain as a direct consequence of the definition : PROPOSITION 3.1. ( Soundness) Any consistent ob- ject theory has an extension. Proof procedure and basic Lqyical rem&s We present now a construction of partial models PM,(T) of an object theory T which, as we prove, converge to a default model (a set of extensions) DM(T) . The method of constructing the models PM, and DM is similar to this of RDL, except that the partial orderings on associated theories are taken into account. This new structure changes the mechanism of default reasoning. DEPINTTION. A partial model of a formula consisting of a sequence of subformulas (possibly one element) is a conjunction of their most preferable interpretations. It must be however consistent. More formally, let $I be a formula and +i, llilm, its subformulas. For each i, let <i be a partially ordered collection of theories of J/i : <i. = ((pi * Tb, pi* T’,,..., Jli -P T: ), <i). Let wd4 = <i ilrn = ( f : f(i) = #i * T; , where i 2 m and I 5 ni ) , fk+) = IfEw@): A f(i) is i<m consistent with + I. Let < be the partial order induced on fi(+) by the orderings of associated defaults and the canonical order- ing of subformulas. We define then the partial models PM(+) of a formula #B as the most likely theories of 9 given by bk#), < ) : PM(+) = ( (P &@ : @ = A f(i) and f is a &m minimal element of (fi(+), < ) 1. The partial models pick up from the referential level the most obvious, or - perhaps - most important, information about + . This immediate information may be insuffi- cient to decide the truth of the formulae of (9 . For in- stance, if cp = bird(Tweety) ,md PM(+) = (bird(x) + has(x,wings) ) u ( $ ) , but only PM ( I’M ( + 1 1 contains the formula has(x,wings) + flies(x) , then iteration of the PM op- eration is needed to decide whether Tweety flies. DEFINITION. Let t , t, , . . . , tk be theories. Then PM,(t) = PM(t) PM(( tl , . . . , tk 1) = PM(t,) u . . . u PM(t,) PMn+,W = PM 1 Th(m) : m E PMn(t) )) Pv&) = U (PM,(t) : n < 00 ) . PMJt) is a set of many models that interpret t . It will be infinite even if all the PM”(t) are one element sets. Clearly, we are interested in those elements of PM,(t) which contain maximum of information. DEFlINITION. We define the default models of t DM(t) = (m E PM,(t) : m is maximal under c ) . It is easy to check that PROPOSITION 3.2. DM(t) is a collection of least fixpoints of PM. We are now in a position to define two notions of provability : a weak provability corresponding to provability isl RDL, and strong provability, which is more like the classical one. The notion of a default proof of a formula cy from T and is defined similarly to Reiter( 1980). The difference is that the set of prerequisites of D is defined for most preferred sets of defaults D ,only. Also we re- quire that all available information be used. Notice that, under the definition below, given (4) + 4 <* (G * 8) 9 with a preferred as a de- fault for cp, /3 doesn’t have a default proof unless cy is inconsistent with default consequences of the object theory. DEFINITION. 
DEFINITION. (weak provability) T *⊢ φ iff there exist a sequence m0, ..., mk and a sequence D0, ..., Dk such that

1. φ ∈ Th(T ∪ Dk),
2. D0 = T, m0 ∈ PM(T),
3. Di+1 = mi, mi+1 ∈ PM(Di+1).

Results parallel to those of RDL can be proven. The proofs extend Reiter's techniques by taking into account our new definitions. We use ⊨ to denote classical satisfaction in Hintikka or Herbrand models. Then m ⊨ φ iff φ ∈ m, when m is deductively closed.

PROPOSITION 3.3. (completeness of weak provability) T *⊢ φ iff there exists m ∈ DM(T) such that m ⊨ φ.

THEOREM 3.4. (cf. Theorem 2.1 of RDL). Let E be a set of sentences. Let E0 = T,

Ei+1 = Th(Ei) ∪ ∪{ ω : (α, ω) is the most preferred element of <α, α ∈ Ei, and ω is consistent with E }.

Then E is an extension of T iff E is the union of the Ei's.

THEOREM 3.5. E is an extension of T iff E is one of the default models DM(T) of T.

As a corollary to Theorem 3.5 we obtain:

THEOREM 3.6. (default completeness) All facts in an extension are weakly provable.

The class of provable formulas, as defined above or in RDL, corresponds to a set of beliefs an agent may entertain about a situation T given defaults R. These beliefs may be inconsistent. But it is possible to define a stronger notion of provability, according to which no two inconsistent formulae are provable. Since all our models are finite and there are only finitely many of them for finite R's, we can express the strong provability as follows:

DEFINITION. (strong provability) T ⊩ φ iff there exists a k such that for any sequence m0, ..., mk-1, where m0 ∈ PM(T) and mi+1 ∈ PM(mi), there exists a sequence D0, ..., Dk such that D0 = T, Di+1 ⊆ mi, and φ ∈ Th(T ∪ Dk).

PROPOSITION 3.7. (completeness of strong provability) T ⊩ φ iff φ is true in all models m ∈ DM(T).

Changed preferences and metarules

We also need a definition of provability with metarules.

DEFINITION. We define T ⊢M φ iff M(m) ⊢ φ for all models m ∈ DM(T). I.e., φ is provable from R and T under the metarule M if M applied to any default model yields φ.

We have defined the basic notions of our theory. We explain them now using the example from Section 2. We show how changed preferences modify default theories and are a source of nonmonotonicity in our formalization of default reasoning. We will also see that the strong and the weak provability differ. Finally, we say a few words about the metalevel.

EXAMPLE (continued). Consider the following two object theories:

U. adult(John) & ¬employed(John)
S. student(John)

Their partial models are described below¹: PM(U) = {U1, U2}, where U1 = {u, a2, e1} and U2 = {u, a4, e1}; iterating PM merges in d1, a4, a2 and finally s1 or s2, and PMk(U) = PM4(U) for k ≥ 4.

DM(U) = { {u, a2, e1, d1, a4, s1}, {u, a2, e1, d1, a4, s2} }
DM(S) = { {s, s1, d1, a3, a4, e2}, {s, s2, d1, a1, e1} }

The following facts hold (assuming the standard Th):

S ⊢ has(John,car) & employed(John) ∨ ¬has(John,car) & ¬employed(John).
S *⊢ has(John,car) and S *⊢ ¬has(John,car).

It is possible to think of S as information complementing U. In this case:

DM(U + S) = { {s, s1, a3, e1, a4, u, d1} }
U + S ⊢ ¬has(John,car) & ¬dropout(John) and U ⊢ dropout(John), while U + S ⊢ ¬dropout(John).

We observe then the nonmonotonicity of the theory: a theory (U + S) does not prove all theorems of its subtheory (U), although the same set of preferences serves as a referential model.

¹ Assuming that the theories (a1) ... (s2) constitute the whole referential level.
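Weak and strong provability then differ exactly as in the John example: a literal is weakly provable if it holds in some default model and strongly provable if it holds in all of them (Propositions 3.3 and 3.7, collapsed to ground literals in the toy encoding of the two previous sketches).

```python
# Weak vs. strong provability over the toy default models above.

def weakly_provable(theory, lit):
    return any(lit in m for m in DM(theory))

def strongly_provable(theory, lit):
    return all(lit in m for m in DM(theory))

S = {("student", True)}
print(weakly_provable(S, ("has_car", True)),    # True: holds via s2 and e1
      weakly_provable(S, ("has_car", False)))   # True: holds via s1 and a4
print(strongly_provable(S, ("has_car", True)))  # False: not in every model
```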
As expected, metarules like the generalized CWA (Minker, 1982) allow us to prove stronger results than a combination of an object theory with a referential level alone: it is not true that U ⊢ ¬has(John,car), but we have U ⊢GCWA ¬has(John,car).

We have shown that it is possible to develop a natural theory of default reasoning based on the separation of the referential level from the object level and the metalevel; in this theory defaults are logical theories partially ordered by a relation of plausibility. We've demonstrated how priorities in interpretation of predicates on the level of reference can be the source of nonmonotonicity in reasoning. We've also proven that our theory shares a number of desirable properties with the theory of normal defaults. But additionally it satisfies the five postulates D1-D5. Namely, in our approach, consistency of a knowledge base is checked quite often but only with respect to a small part of it; a knowledge base may contain incompatible information (global inconsistency), but contradictions shall not appear in the same default model (local consistency). Exception handling is particularly easy - an exception is just another theory; adding an exception means adding a new theory to the referential level. The differences between coarse (PM) and subtle (DM) versions of a problem are semantically justifiable: one can expect that - due to the ordering of defaults - important information will appear in the very first iterations of PM. Moreover, the existence of different theories of the same situation supports the principle of finitism.

The existence of a referential level is a very natural postulate. Collections of relational databases can be "vivid" referential levels for knowledge based systems; natural language in the form of (on-line) dictionaries, grammars, etc. can be taken as the referential level for commonsense reasoning (Zadrozny, 1987). The "ubiquity of preference rule systems" (Jackendoff, 1983; Rock, 1983) also gives psychological plausibility to the proposed model.

Acknowledgements. I want to thank Ken McAloon and Van Nguyen for their comments on an earlier draft, and the referees for suggestions which led to the restructuring of this paper.

R.J. Brachman, H.J. Levesque (eds.), Readings in Knowledge Representation, Morgan Kaufmann, Los Altos, 1985.
J. Doyle, Circumscription and Implicit Definability, Journal of Automated Reasoning, 1, 1985, pp. 391-405.
A. Ehrenfeucht, J. Geiser, C. Gordon and D.H.J. de Jongh, *-models: A semantics for noniterated local observation, Journal of Symbolic Logic, 37, 1972, pp. 779-780.
D.W. Etherington, Formalizing Nonmonotonic Reasoning Systems, Artificial Intelligence 31, 1987, pp. 41-85.
A. Frisch, Using Model Theory to Specify AI Programs, Proc. IJCAI-85, 1985, pp. 148-154.
R. Jackendoff, Semantics and Cognition, MIT Press, 1983.
P.N. Johnson-Laird, Mental Models in Cognitive Science, Cognitive Science 4, 1980.
P.N. Johnson-Laird, Mental Models, Cambridge University Press, 1983.
H. Kamp, A theory of truth and semantic representation, in: J.A.G. Groenendijk et al. (eds.), Formal Methods in the Study of Language, 1, Mathematisch Centrum, Amsterdam, 1981.
H.J. Levesque, A Logic of Implicit and Explicit Beliefs, Proc. AAAI-84, 1984, pp. 198-202.
H.J. Levesque, Making Believers out of Computers, Artificial Intelligence 30, 1986, pp. 81-108.
H.J. Levesque and R.J. Brachman, A Fundamental Tradeoff in Knowledge Representation and Reasoning, in: R.J. Brachman, H.J. Levesque (eds.), 1985.
D. Marr, Artificial Intelligence - A Personal View, Artificial Intelligence 9, 1977, pp. 37-48.
D. Marr, Vision, W.H. Freeman and Co., San Francisco, 1982.
J. McCarthy, Circumscription - A Form of Non-Monotonic Reasoning, Artificial Intelligence, 13, 1980, pp. 27-39.
J. McCarthy, Applications of Circumscription to Formalizing Common-Sense Knowledge, Artificial Intelligence, 28, 1986, pp. 89-116.
D.V. McDermott and J. Doyle, Non-Monotonic Logic I, Artificial Intelligence, 13, 1980, pp. 41-72.
D. McDermott, Non-Monotonic Logic II: Nonmonotonic Modal Theories, Journal of the ACM, 29, 1982, pp. 33-57.
J. Minker, On indefinite data bases and the Closed World Assumption, Proc. 6th Conference on Automated Deduction, Springer, 1982.
J. Mycielski, Analysis without actual infinity, Journal of Symbolic Logic, 46, No. 3, 1981, pp. 625-633.
P.S. Patel-Schneider, A Decidable First Order Logic for Knowledge Representation, Proc. IJCAI-85, 1985, pp. 455-458.
R. Reiter, A Logic For Default Reasoning, Artificial Intelligence, 13, 1980, pp. 81-132.
I. Rock, The Logic of Perception, MIT Press, 1983.
D.S. Touretzky, Implicit Ordering of Defaults in Inheritance Systems, Proc. AAAI-84, 1984, pp. 322-325.
W. Zadrozny, Intended models, circumscription and commonsense reasoning, Proc. IJCAI-87, 1987.
A Multiprocessor Architecture for Production System Matching

Michael A. Kelly, Rudolph E. Seviora
Department of Electrical Engineering
University of Waterloo
Waterloo, Ontario, Canada

ABSTRACT

This paper presents a new, highly parallel algorithm for OPS5 production system matching, and a multiprocessor architecture to support it. The algorithm is based on a partitioning of the Rete algorithm at the comparison level, suitable for execution on an array of several hundred processing elements. The architecture provides an execution environment which optimizes the algorithm's performance. Analysis of existing production systems and results of simulations indicate that an increase in match speed of two orders of magnitude or more over current implementations is possible.

1. Introduction

The recent popularity of expert systems in a variety of application areas demonstrates their value as problem solving tools. This technology, however, does not come without a price; executing expert systems is computationally very expensive. Expert systems are often written using production languages such as OPS5 [Forg81] and OPS83 [Forg85]. Production system execution consists of repeatedly matching the conditions of IF-THEN rules to the problem solving state and firing the most appropriate rule - which in turn alters the problem solving state. A majority of the computational expense is in the matching phase of the production cycle.

Efforts have been made to define customized processors to speed matching, but invariably bus bandwidths and device speeds limit their performance. Several multiprocessor designs have been put forward to deal with the amount of computation that matching requires. They too provide limited benefit, but more due to algorithmic considerations than the boundaries imposed by physics; the inherent granularity of the match operation does not allow effective use of more than a small number of processors.

The paper describes a new partitioning of the established matching algorithm for OPS5, leading to a much higher potential for parallel execution than previous versions. An architecture to support this new algorithm is presented along with some initial simulation results. The simulations show that a high degree of parallelism can be effectively exploited.

2. Requirements of Matching - The Rete Algorithm

The match phase of a production cycle consists of a many-pattern/many-object matching of rules to problem state information. The condition pattern of a rule is a set of, possibly interdependent, condition elements. The problem state is represented by a set of independent working memory elements. Matching results in a set (the conflict set) of rule instantiations, which are rules whose conditions have been satisfied by a particular set of working memory elements. Conflict resolution consists of choosing the most appropriate rule instantiation for firing in the act phase of the production cycle.

A very efficient matching algorithm, the Rete algorithm [Forg82], takes advantage of two observed characteristics of expert system execution to speed matching. One is that the set of rules for a particular application will have many similarities in their condition patterns. The other is that the problem state changes slowly, i.e. firing a rule changes only a small subset of the working memory. The efficiency of this algorithm is the reason it was chosen as the basis for the parallel matching algorithm described in the next section.
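As a point of reference, the recognize-act cycle just described can be written as a short loop. This sketch and its names are ours, not from OPS5; the three phase functions are parameters, since only the match phase is this paper's subject.

```python
# Schematic production-system cycle: match, conflict resolution, act.
# `match`, `resolve`, and `act` are supplied by the implementation; this
# paper is concerned with making `match` fast.

def production_cycle(rules, working_memory, match, resolve, act):
    while True:
        conflict_set = match(rules, working_memory)  # the expensive phase
        if not conflict_set:
            return working_memory                    # no rule is satisfied
        instantiation = resolve(conflict_set)        # pick one instantiation
        act(instantiation, working_memory)           # fire it: update WM
```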
The Rete algorithm requires compiling the condition patterns of a system's rules into a network of condition and memory nodes. An example of this compilation is shown in Figure 1. Working memory elements are fed into one end of the network and filter through to emerge from the other end as entries to the conflict set. The memory nodes store partial match information as the match proceeds. This means that when a rule is fired, only the changes it makes in the working memory need be presented to the network. A new conflict set results from applying the changes indicated by the network's output to the set which existed before the rule was fired.

Matching, as described, consists of a set of node activations. Each node activation means receiving and storing a token (representing a piece of partial match information) generated by a previous node activation. An attribute value is extracted from the new token and is compared against corresponding attribute values extracted from each of the complementary tokens already stored at that node. Successful comparisons, as defined by the node condition, result in the generation of new partial match information, in the form of tokens, sent to subsequent nodes.

[Figure 1: Example Rete Network. The rule conditions compiled into the network are:
1) (C1 ^attr1 12 ^attr6 <= 7) (C2 ^attr2 > 5) (C4)
2) (C1 ^attr1 12 ^attr2 <X>) (C3 ^attr3 <X>) (C4)
3) (C2 ^attr2 > 5 ^attr3 <Y>) (C4 ^attr1 <Y> ^attr3 >= <Y>)
Working memory changes enter at the top and conflict set changes leave at the bottom; node types are one-input, memory, two-input, and terminal.]
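A single two-input node activation, as just described, is essentially a one-token join against the opposite memory. A minimal sketch under our own encoding (tokens as attribute dictionaries; an equality test on one attribute pair stands in for the compiled node condition):

```python
# A two-input node activation: an attribute value from the incoming token is
# compared against the corresponding value in every token stored on the
# opposite side, and each successful test emits a new (joined) token.

def activate(node, incoming, opposite_memory):
    """Return the new partial-match tokens produced by one activation."""
    left_attr, right_attr = node["test"]          # e.g. ("attr2", "attr3")
    out = []
    for stored in opposite_memory:                # avg. 10, up to 870 tokens
        if incoming[left_attr] == stored[right_attr]:
            out.append({**incoming, **stored})    # join the two tokens
    return out

node = {"test": ("attr2", "attr3")}
print(activate(node, {"attr2": 5}, [{"attr3": 5}, {"attr3": 9}]))
```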
3. Sources of Parallelism

One potential source of parallelism in the execution of production systems is in the act phase, which directly affects match execution. Normally, working memory changes caused by a rule firing are introduced to the matching mechanism separately. If these changes, typically two or three, are processed in parallel, an added degree of parallelism of two to three can be realized.

A second, more extensive source of parallelism is in the match phase via a partitioning of the Rete algorithm. One method considered is to execute node activations in a Rete network as individual processes. The degree of parallelism available is roughly estimated to be equal to the number of two-input node activations caused by a working memory change, about 35 [Gupt83], since these activations represent the bulk of the processing required. A scheme involving this approach is discussed in [Stol84] and [Gupt84]. The overall expected speedup from this proposal is small because individual node activations can represent a considerable amount of processing. Data from [Gupt83] shows that a two-input node activation involves comparing a token with an average of 10 - and up to 870 - others.

The above discussion suggests that, if the advantages of the Rete algorithm are to be exploited, a partition involving entities smaller than nodes is required. A possible solution is to partition at the level of token-token comparisons. A node is split into several copies, each associated with a single token from the original memory nodes. Each node copy and its single token, from either a left-side or right-side memory, represents a separate process. As a group, the independent copies perform the same match function as the complete network. As individuals, they can be distributed over a large set of independent processing elements. This distribution poses some unique communication problems but offers a high degree of parallelism, in the order of 350 (the number of node activations expected times the average number of token comparisons to be done for each).

Partitioning at the comparison level is being considered (in somewhat dissimilar forms) in other research projects, [Gupt86] and [Ramn86], both of which agree that substantial speedup is possible. The relatively high potential for comparison-level partitioning is the reason it was chosen for the architecture described here. A difficulty of this approach is the definition of a partitioning algorithm that both preserves match correctness and keeps communication overhead to a manageable level. Also of concern when dealing with multiprocessors is the problem of load balancing. The next section presents the partitioned match algorithm and a dynamic load balancing algorithm.

4. A Fine Grain Match Algorithm

Two algorithms are presented in this section. The first algorithm is a distributed Rete algorithm based on a discrimination network slightly different from the one employed by the original matching algorithm. It distributes the network condition information at the node level, and the match state information at the token level. This results in 'comparison level' distribution of match processing, suitable for a system containing a large number of processing elements. The second algorithm operates in cooperation with the first to manage the workload at each processing element. It redistributes portions of the match process between match phases to ensure a balanced load over the entire set of processing elements.

4.1. A Distributed Rete Algorithm

This algorithm is targeted for a machine consisting of a large number of processing elements, each with its own local storage. The one-input nodes in the network can be distributed over a set of processors without alteration since they use no stored data. A process representing a one-input node receives working memory changes and transmits tokens representing the ones that have passed the node's constant tests. In order to partition the rest of the network, it is necessary to provide separate memories for each node. An example network where this has been done is shown in Figure 2.
[Figure 2: Network from Figure 1 with Split Memory Nodes]

For this network, a two-input node and its two associated memories can be processed independently. To isolate individual tokens for comparison, the two-input nodes are separated into a number of node copies, each associated with a single token from one of the original memory nodes. A node copy contains comparison information for a left- or right-handed test, destination information for new tokens it forms, and some status information. Figure 3 shows how a single node with its left and right memories is split up. Duplication and splitting of memory nodes increases the storage required by a factor of approximately three.

[Figure 3: Example of Node Splitting - a node (#7) with left tokens lt1-lt4 and right tokens rt1-rt4 is split into one copy of the node per stored token.]

Processing of the node/token pairs resulting from two-input nodes with two positive inputs is done as one of two operations (see the sketch at the end of this subsection): 1) If a token arrives at a node on the opposite side to its stored one, the node test is performed and may result in the generation of a new token. 2) If a token arrives at a node on the same side as the stored token, the node's response will depend on whether the token represents the addition (positive) or deletion (negative) of a piece of partial match information. A negative of the stored token will cause the node/token pair to be deleted. A new positive token will cause the generation of a new node copy to which the new token is attached. One of the existing node copies, the 'generative' copy, is made responsible for this so that only one new copy is generated. (The generative node copy is not deleted if a negative of its token arrives, so that node information is not lost.)

Processing of Not nodes, which have one positive input and one negative input, is a little more difficult because of the asymmetry involved, and the fact that node responses are based on information about the entire contents of the token memory on the negated side. This dependence on information that does not exist at any one node copy is overcome by making a stipulation about the communication system over which tokens will be transported. The stipulation is that tokens will always reach their destinations in the order they were generated. There are several ways this could be performed; one is to force all tokens over a single, linearizing channel. The architecture chosen includes this characteristic, as is discussed in the next section of this paper.

Node operation is as follows: node copies storing tokens from the positive side of the original two-input node operate similarly to the node copies from nodes where both inputs are positive, the exception being that new tokens are generated with the opposite polarity to the ones received, i.e. the addition of a new token on the negative side causes the deletion of a token previously generated by the positive side, and vice versa. To deal with the negated input, node copies storing tokens from the negative side of the original node are placed in the path of tokens generated by their positive counterparts. If a positive-side node copy generates a new positive token, the negative-side copy will receive it and has the option of generating a cancelling token if it contains information that negates it. The negating token will always arrive at destinations later than the positive token since it was generated later. This ensures the correctness of the cancelling operation.

Terminal nodes can be eliminated since their function is effectively one of addressing rule instantiations to the conflict set, which can be performed by the two-input nodes at the bottom of the Rete network.

This algorithm has the inherent property that it is relatively unaffected by the ratio of program size to data size. That is, activity over a large number of Rete network nodes involving few tokens each is similar to activity concentrated at a few nodes associated with many tokens each. Another advantage is that node copies involve only one token each and so they represent similar amounts of processing. Nodes are also self managing in that the sizes of their images (in terms of the number of copies which exist) change to reflect the total amount of processing they require.
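The two operations above determine everything a node copy must do, so a copy can be expressed as a tiny state machine. A sketch for the all-positive-input case; the class and method names are ours, not the paper's.

```python
# One stored token of a split two-input node (both inputs positive).
# `test` is the compiled node condition; `side` is "L" or "R".

class NodeCopy:
    def __init__(self, side, token, test, generative=False):
        self.side, self.token, self.test = side, token, test
        self.generative = generative
        self.alive = True

    def receive(self, in_side, token, polarity):
        """React to one token (+1 = addition, -1 = deletion).
        Returns (spawned_copy_or_None, emitted_token_or_None)."""
        if in_side != self.side:                  # 1) opposite side: run test
            if self.test(self.token, token):
                return None, (polarity, {**self.token, **token})
        elif polarity < 0 and token == self.token:
            if not self.generative:               # generative copy survives,
                self.alive = False                # ordinary copies are deleted
        elif polarity > 0 and self.generative:    # 2) only the generative copy
            return NodeCopy(self.side, token, self.test), None
        return None, None

copy = NodeCopy("L", {"attr2": 5}, lambda l, r: l["attr2"] == r["attr3"],
                generative=True)
print(copy.receive("R", {"attr3": 5}, +1))   # emits a joined token
print(copy.receive("L", {"attr2": 9}, +1))   # spawns a copy for the new token
```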
4.2. A Dynamic Load Balancing Algorithm

An architecture involving a large number of processing elements implies that each is small; local memory areas can become easily overloaded. The small size of process entities, node/token pairs, allows them to be moved between processing elements to alleviate this problem. The following algorithm describes a method for spreading node copies over a large set of processors while ensuring that a correct match takes place:

When a generative node copy receives a positive token on the same side as its own stored token, it generates a new, marked, copy of itself to hold the new token. The marked copy becomes immediately active. At the end of the match phase, the marked copy is passed to another processor. The first new copy created by the generative copy becomes the new generative copy at the beginning of the next production cycle, while the original copy ceases to be generative. This action causes active nodes in the Rete network to continually diffuse away from busy areas of the processor array.

If a processing element becomes filled with too many marked copies before the match phase is complete, it can force a premature end-of-cycle. This process involves halting the match until all marked node copies are passed between processors. This alleviates the memory shortage at the over-full processor. The match phase continues after this pause as if a new production cycle was starting. The frequency of these forced end-of-cycle pauses is related to the loading of processing elements, resulting in a graceful decline in performance of the match phase as loading increases.
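The end-of-phase diffusion step can be sketched directly: every marked copy spawned during the match phase is handed to one of the four mesh neighbours, with toroidal wrap-around. The neighbour choice below is illustrative; the paper does not fix one.

```python
# End-of-match-phase diffusion of marked node copies on the toroidal mesh.
# `grid` maps (row, col) -> list of resident node copies.

from itertools import cycle

def diffuse(grid, marked, size):
    """marked: list of ((row, col), copy) pairs spawned this match phase."""
    directions = cycle([(0, 1), (1, 0), (0, -1), (-1, 0)])
    for (r, c), node_copy in marked:
        dr, dc = next(directions)
        dest = ((r + dr) % size, (c + dc) % size)  # opposite edges are linked
        grid[(r, c)].remove(node_copy)
        grid[dest].append(node_copy)

grid = {(r, c): [] for r in range(4) for c in range(4)}
grid[(0, 0)] = ["copy-A", "copy-B"]
diffuse(grid, [((0, 0), "copy-A"), ((0, 0), "copy-B")], size=4)
print(grid[(0, 1)], grid[(1, 0)])   # ['copy-A'] ['copy-B']
```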
5. Architecture

Two issues must be addressed in defining an appropriate architecture for the algorithms of the previous section. One is the organization of processing elements to be used; this is based on the communication required between processors for the algorithms described. The second consideration is the internal structure of a processing element. These are influenced by the requirements of both the match operation and the communication systems defining the organization of processors.

5.1. Organization of Processing Elements

After considering several alternatives, a processor organization consisting of a single uniform layer of processing elements was chosen. There are three main reasons for this choice. First, a single array of processing elements allows effective response to node activation requests which occur simultaneously at various depths in the network. This means a fast response time to widely used match information. Secondly, the amount of data to be stored at any one node in the Rete network varies from system to system, and dynamically as a system runs. Allowing all node information to be spread over all processing elements avoids the memory balancing problems that a segmented system might incur. And thirdly, nodes involving negated rule conditions imply examining the node's entire set of tokens before a response can be made. Using a single array of processing elements, and the communication system described below, makes a solution to this problem compatible with non-negated node activity.

An effective way to send match information to all processing elements quickly is with a broadcast system in the form of a word-width distribution tree. Such a system provides a high bandwidth path but avoids high fanout at any one node. It also allows some asynchronism of data flow between various parts of the tree. This asynchronism helps accommodate varying processing speeds at different processing elements. A similar structure with data flowing from leaves to root can be used to collect responses (new tokens) from the processing elements. Individual processing elements will respond infrequently but, as a whole, the array will produce responses roughly equal in volume to the original input. This means that the collection tree requires a root with bandwidth equal to that of the broadcast tree, but limb bandwidths can reduce toward the processing elements, terminating in serial connections. Responses may be required by other processing elements and so the roots of the two trees are joined, creating a data path loop through the processing elements. Responses that are meant for the host, i.e. conflict set changes, are separated out and redirected at this joining point.

The broadcast/collection system described provides a path for match information flow as well as a point of contact for the host, which executes the conflict resolution and act phases of the production cycle. The load balancing algorithm also requires communication between processing elements, but the broadcast system is inappropriate for this. Unlike match information, which must be sent to all processing elements, a packet carrying a new node/token pair is only required at one - the one that will store it. The broadcast system is not suited to this type of transfer since it treats all destinations equally. Also, node/token packets have no particular destination; short trips over a local, low bandwidth communication network will suffice. The design considered here uses a square network, connecting a processing element to each of its four closest neighbours (this degree of interconnect may be reconsidered based on simulation results). Figure 4 shows an example of the organization of processing elements chosen for 16 elements. (The local communication mesh is completed with links between processing elements on opposite edges, forming a uniform toroid.) The broadcast system uses a word-width path while the local links are serial. The number of local links, and the expected number of links traversed by a node/token packet before seating, give the local communication system an effective bandwidth roughly equal to the broadcast system bandwidth.

[Figure 4: Processor Organization for 16 processing elements - A: filter/interface node; B: broadcast network; C: PE array and local network; D: collection network.]
The BSM is a state machine connected to the broadcast port. Its responsibility is to store relevant incoming tokens into the local RAM. A CAM, accessi- ble to both the central processor and the BSM, con- tains the IDS of nodes which have copies in local RAM. The CAM supplies present/not-present responses to the BSM. The central processor has full access to the CAM which also contains, as data, pointers to the node copies in RAM. The QSM, when triggered by the central processor, transmits new match information onto the collection half of the broadcast system. The LOSM, when triggered by the central processor, transmits new node/token packets onto one of the local communication links. The LISMs either store incoming node/token packets, or pass them through the processing element, depending on the amount of free space available. It is estimated that a processing element, less memory blocks, is about the same complexity as a sim- ple 8-bit microprocessor. 6. Simulations A first pass design of the architecture has been com- pleted and a register transfer level simulator has been written. The purpose of simulations is two-fold : 1) The detail of simulation allows the verification of the algorithms, using a small problem. They also provide some initial performance values, for the small problem considered. 2) The simulations also provide accurate timing information for more elaborate simulations involving larger problems, which will not be done at the regis- ter transfer level. 6.1. The Simulator The form of the simulator allows the simulation of 1, 2, 4, 8, and 16 processing element arrays. (The 4x4 array was the largest simulated due to the computa- tional expense of such detailed simulations.) The fairly mature CMOS technology available at the University is assumed: Processor clock speed is 10 MHz. ROM, RAM and CAM access times are 200, 250, and 500 nS, respectively. In the simulations, the conflict resolution and act phases do not take place. Changes to the working memory are fed into the array as if a rule firing had taken place, and response time is observed. The simulation subject is a program loop performed by a single production. The production contains four condition elements and two actions; the condition ele- ments contain two constant values along with the class type, and one or two variables each (all typical values). 40 Al Architectures The characteristics of execution are : 1) A rule firing causes 2 working memory changes. 2) Each working memory change causes cessful) one-input node activations. 3 (all suc- 3) Each working memory node act ivat ions. And change causes 6 two input 4) The match conflict set. phase generates two changes to the Simulations were performed using RAM area to avoid memory shortages. a large enough 6.2. SirnuPation Resullts Simulations for arrays using 1, 2, 4, 8, and 16 proces- sors were performed until the cycle time stabilized - a steady state was reached. Table 1 shows the match times recorded for these simulations. The match phase execution time for the problem simulated running on a VAX 11/780 using an OPS83 compiler is 3.26 mS. (The cycle time on the VAX was 3.62 mS. The con- flict resolution and act phases of the cycle consisted of a simple ‘for’ loop containing a ‘fire 1’ statement. It was assumed that this would take a maximum of 10% of the cycle time. This corresponds to a match time of approximately 3.26 mS.) r & of Processors I Execution Time (mS) 1 1 16 1 1.000 I Table 1: Match Execution Times 7. 
7. Discussion

The results of these simulations show that the level of parallelism exploited by the distributed Rete algorithm is approximately four. The theoretical limit is 6 to 8 (from the number of tokens at each level), ignoring the effects of the one-input nodes. Node level parallelism for the simulated rule is 1 to 2 (from the widths of the two layers of two-input nodes in the network), again ignoring the effect of the one-input nodes. The advantage of the distributed Rete algorithm increases with program size. As discussed, the level of parallelism available in an average production system is in the order of 300. Another consideration is variation in the number of tokens stored at the nodes in a network. This has little impact on the distributed Rete algorithm but has a strong effect on the execution of an algorithm based on node level parallelism.

Another matter to be considered in analyzing the results of the simulations is the set of technology parameters used. They are based on a particular process available at the University, and trail the leading edge by a factor of about 3. The simulation serves to compare the effects of varying the processor array size; the comparison of match times shown versus the VAX processing time could be improved by using more advanced technology parameters.

Also available from the simulator is an all but complete set of software for a processing element. This has allowed a second analysis of instruction set requirements. It is apparent that some improvement is possible in this area as well.

With some additions to the model, further simulation and analysis will determine the effects of such things as larger overall system size, and the possibility of multiple rule firing systems. These will impact on the size of processing array required, and so the requirements of the communication systems - particularly the broadcast/collection system. Also to be determined is the performance penalty associated with production systems that approach the available memory size.

Investigations are also under way to extend the matching capabilities of the architecture to those of OPS83, which includes simple function calls. One approach that may be taken is the use of a non-homogeneous array, using the processing elements described along with arithmetic processing elements for the execution of mathematical functions. This will add to the flexibility of the programming environment at a minimum of cost in match execution time.

8. References

[Forg81] Forgy, C.L., "OPS5 User's Manual", Technical Report CMU-CS-81-135, Carnegie-Mellon University, 1981.
[Forg82] Forgy, C.L., "Rete: A Fast Algorithm for the Many Pattern/Many Object Pattern Match Problem", Artificial Intelligence 19, pp. 17-37, September 1982.
[Forg85] Forgy, C.L., "OPS83 User's Manual and Report", Production Systems Technologies Incorporated, 1985.
[Gupt83] Gupta, A., Forgy, C.L., "Measurements on Production Systems", Technical Report CMU-CS-83-167, Carnegie-Mellon University, 1983.
[Gupt84] Gupta, A., "Implementing OPS5 Production Systems on DADO", Technical Report CMU-CS-84-115, Carnegie-Mellon University, 1984.
[Gupt86] Gupta, A., Forgy, C.L., Newell, A., Wedig, R., "Parallel Algorithms and Architectures for Rule-Based Systems", Proceedings 13th International Symposium on Computer Architecture, pp. 28-37, 1986.
[Ramn86] Ramnarayan, R., Zimmermann, G., Krolikoski, S., "PESA-1: A Parallel Architecture For OPS5 Production System", Proceedings of the Nineteenth Annual Hawaii International Conference on System Sciences, pp. 201-205, 1986.
[Stol84] Stolfo, S.J., "Five Parallel Algorithms for Production System Execution on the DADO Machine", Proceedings National Conference on Artificial Intelligence, pp. 300-307, 1984.
Intention = Choice + Commitment¹

Philip R. Cohen
Artificial Intelligence Center and CSLI²
SRI International
Menlo Park, CA 94025

Hector J. Levesque³
Department of Computer Science
University of Toronto
Toronto, Ontario, Canada M5S 1A4

Abstract

This paper provides a logical analysis of the concept of intention as composed of two more basic concepts, choice (or goal) and commitment. By making explicit the conditions under which an agent can drop her goals, i.e., by specifying how the agent is committed to her goals, the formalism provides analyses for Bratman's three characteristic functional roles played by intentions [Bratman, 1986], and shows how agents can avoid intending all the foreseen side-effects of what they actually intend. Finally, the analysis shows how intentions can be adopted relative to a background of relevant beliefs and other intentions or goals. By relativizing one agent's intentions in terms of beliefs about another agent's intentions (or beliefs), we derive a preliminary account of interpersonal commitments.

By now, it is obvious to all interested parties that autonomous agents need to infer the intentions of other agents - in order to help those agents, hinder them, communicate with them, and in general to predict their behavior. Although intent and plan recognition has become a major topic of research for computational linguistics and distributed artificial intelligence, little work has addressed what it is these intentions are. Earlier work equated intentions with plans [Allen and Perrault, 1980; Cohen and Perrault, 1979; Schmidt et al., 1978; Sidner and Israel, 1981], and recent work [Pollack, 1986] has addressed the collection of mental states agents would have in having a plan. However, many properties of intention are left out, properties that an observer can make good use of. For example, knowing that an agent is intending to achieve something, and seeing it fail, an observer may conclude that the agent is likely to try again. This paper provides a formal foundation for making such predictions.

¹ This research was made possible in part by a gift from the Systems Development Foundation, in part by support from the Natural Sciences and Engineering Research Council of Canada, and in part by support from the Defense Advanced Research Projects Agency under Contract N00039-84-K-0078 with the Naval Electronic Systems Command. The views and conclusions contained in this document are those of the authors and should not be interpreted as representative of the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency, the United States Government, or the Canadian Government. An expanded version of this paper appears in Reasoning about Actions and Plans: Proceedings of the 1986 Workshop, at Timberline Lodge (sponsored by AAAI), Morgan Kaufmann Publishers, Inc., 1987.
² Center for the Study of Language and Information.
³ Fellow of the Canadian Institute for Advanced Research.

I. Intention as a Composite

We model intention as a composite concept specifying what the agent has chosen and how the agent is committed to that choice. First, consider agents as choosing from among their (possibly inconsistent) desires those they want most.⁴ Call what follows from these chosen desires, loosely, goals. Next, consider an agent to have a persistent goal if she has a goal that she believes currently to be false, and that remains chosen at least as long as certain conditions hold.
Persistence involves an agent's internal commitment over time to her choices.⁵ In the simplest case, a "fanatic" will drop her commitment only if she believes the goal has been achieved or is impossible to achieve. Finally, intention is modelled as a kind of persistent goal - a persistent goal to do an action, believing one is about to do it.

Both beliefs and goals are modelled here in terms of possible worlds. Thus, our formalism does not deal with the actual chosen desires of an agent directly, but only with what is true in all chosen worlds, that is, worlds that are compatible with those desires. As usual, this type of coarse-grained model will not distinguish between logically equivalent goals (or beliefs). Moreover, we assume that these chosen worlds are all compatible with the beliefs of an agent, which is to say that if she has chosen worlds in which p holds, and she believes that p implies q, then she has chosen worlds in which q holds.

Despite these severe closure conditions, a crucial property of intention that our model does capture is that an agent may or may not intend the expected consequences of her intentions. Consider the case of taking a drug to cure an illness, believing that as a side-effect, one will upset one's stomach. In choosing to take the drug, the agent has surely deliberately chosen stomach distress. But that was not her intention; she is not committed to upsetting her stomach. Should she take a new and improved version of the drug that does not upset her stomach, all the better.⁶ A system that cannot distinguish between the two cases is likely to be more of a hindrance than a help.

In the next sections of the paper we briefly develop elements of a formal theory of rational action, leading up to a discussion of persistent goals. Then, we discuss the logic of persistent goals and define a notion of intention. We also extend the concept of a persistent goal to a more general one - one in which the agent's commitments can depend on arbitrary propositions. Finally, we lead up to a theory of rational interaction and communication by showing how agents can have interlocking commitments.

⁴ Chosen desires are ones that speech act theorists claim to be conveyed by illocutionary acts such as requests.
⁵ This is not a social commitment. It remains to be seen if the latter can be built out of the former.
⁶ If the agent were truly committed to gastric distress, for instance as her indicator that the drug was effective, then if her stomach were not upset after taking the drug, she would ask for a refund.

II. Elements of a Formal Theory

Below, we give an abbreviated description of the theory of rational action upon which we erect a theory of intention. Further details of this logic can be found in [Cohen and Levesque, 1987].

A. Syntax

The language we will use has the usual connectives of a first-order language with equality, as well as operators for the propositional attitudes and for talking about sequences of events: (BEL x p) and (GOAL x p) say that x has p as a belief or goal respectively; (AGT x e) says that x is the only agent of the sequence of events e; e₁ ≤ e₂ says that e₁ is an initial subsequence of e₂; and finally, (HAPPENS a) and (DONE a) say that a sequence of events describable by an action expression a will happen next or just happened respectively.
An action expression here is built from variables ranging over sequences of events using the constructs of dynamic logic: a;b is action composition; a|b is nondeterministic choice; p? is a test action; and finally, a* is repetition. The usual programming constructs like IF/THEN actions and WHILE loops can easily be formed from these. We will use e as a variable ranging over sequences of events, and a and b for action expressions.

For simplicity, we adopt a logic with no singular terms, using instead predicates and existential quantifiers. However, for readability, we will often use constants. The interested reader can expand these out into the full predicative form if desired.

B. Semantics

We shall adapt the usual possible-worlds model for belief to goals and events. Informally, a possible world is a string of events temporally extended infinitely in the past and future, and characterizing a possible way the world could have been and could be. Because things will naturally change over a course of events, the truth of a proposition in our language depends not only on the world in question, but on an index into that course of events (roughly, a time point). B(σ, x, n, σ*) holds if the world σ* is compatible with what x believes in world σ at point n (and similarly for G and goals). Turning this around, we could say that a G-accessible world is any course of events that an agent would be satisfied with, and that goals are just those propositions that are true in all such worlds (and analogously for beliefs). Finally, to complete the semantic picture, we need a domain of quantification D that includes all people and finite sequences of events, and a relation Φ, which at every world and index point assigns to each k-place predicate symbol a k-ary relation over D. These sets, functions, and relations together make up a semantic structure.

Assume that M is a semantic structure, σ one of its possible worlds, n an integer, and v a set of bindings of variables to objects in D. We now specify what it means for M, σ, v, n to satisfy a wff p, which we write as M,σ,v,n ⊨ p. Because of formulas involving actions, this definition depends on what it means for a sequence of events described by an action expression a to occur between index points n and m. This we write as M,σ,v,n[a]m, and it is itself defined in terms of satisfaction. The definitions are as follows:7

1. M,σ,v,n ⊨ (BEL x p) iff for all σ* such that B(σ,v(x),n,σ*), M,σ*,v,n ⊨ p.

2. M,σ,v,n ⊨ (GOAL x p) iff for all σ* such that G(σ,v(x),n,σ*), M,σ*,v,n ⊨ p.

3. M,σ,v,n ⊨ (AGT x e) iff v(e) = e1e2...em and for every i, Agt(ei) = v(x). Thus x is the only agent of e.

4. M,σ,v,n ⊨ (e1 ≤ e2) iff v(e1) starts v(e2).

5. M,σ,v,n ⊨ (HAPPENS a) iff ∃m, m ≥ n, such that M,σ,v,n[a]m. That is, a describes a sequence of events that happens "next" (after n).

6. M,σ,v,n ⊨ (DONE a) iff ∃m, m ≤ n, such that M,σ,v,m[a]n. That is, a describes a sequence of events that just happened (before n).

Turning now to the occurrence of actions, we have:

1. M,σ,v,n[e]n+m (where e is an event variable) iff v(e) = e1e2...em and σ(n+i) = ei, 1 ≤ i ≤ m. Intuitively, e denotes some sequence of events of length m which appears next after n in the world σ.

2. M,σ,v,n[a;b]m iff ∃k, n ≤ k ≤ m, such that M,σ,v,n[a]k and M,σ,v,k[b]m. The action described by a and then that described by b occurs.

3. M,σ,v,n[a|b]m iff M,σ,v,n[a]m or M,σ,v,n[b]m. Either the action described by a or that described by b occurs within the interval.

4. M,σ,v,n[p?]n iff M,σ,v,n ⊨ p. The test action, p?, involves no events at all, but occurs if p holds, or "blocks" (fails) when p is false.

5. M,σ,v,n[a*]m iff ∃n1,...,nk where n1 = n and nk = m and for every i such that 1 ≤ i ≤ k−1, M,σ,v,ni[a]ni+1. The iterative action a* occurs between n and m provided only a sequence of what is described by a occurs within the interval.

A wff p is satisfiable if there is at least one M, world σ, index n, and assignment v such that M,σ,v,n ⊨ p. A wff p is valid iff for every M, world σ, event index n, and assignment of variables v, M,σ,v,n ⊨ p.

7For conciseness, we omit that part of the definition that deals with the constructs of first-order logic with equality.
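To make the occurrence relation n[a]m concrete, the following is a minimal Python sketch of our own (it is not part of the paper's formalism) that checks whether an action expression describes the events between two index points of a fixed finite world. Primitive events are strings, test actions consult a caller-supplied predicate on index points, and all names are illustrative.

```python
from dataclasses import dataclass
from typing import Callable, List, Union

Event = str  # a primitive event

@dataclass
class Seq:        # a;b : composition
    a: "Action"
    b: "Action"

@dataclass
class Choice:     # a|b : nondeterministic choice
    a: "Action"
    b: "Action"

@dataclass
class Test:       # p?  : test action; no events, "blocks" if p is false
    p: Callable[[int], bool]

@dataclass
class Star:       # a*  : repetition
    a: "Action"

Action = Union[Event, Seq, Choice, Test, Star]

def occurs(world: List[Event], n: int, act: Action, m: int) -> bool:
    """Does `act` describe the events of `world` between index points n and m?"""
    if isinstance(act, str):                     # a single primitive event
        return m == n + 1 and n < len(world) and world[n] == act
    if isinstance(act, Seq):                     # some k with n[a]k and k[b]m
        return any(occurs(world, n, act.a, k) and occurs(world, k, act.b, m)
                   for k in range(n, m + 1))
    if isinstance(act, Choice):                  # n[a]m or n[b]m
        return occurs(world, n, act.a, m) or occurs(world, n, act.b, m)
    if isinstance(act, Test):                    # n[p?]n iff p holds at n
        return m == n and act.p(n)
    if isinstance(act, Star):                    # zero or more iterations of a
        return m == n or any(occurs(world, n, act.a, k) and occurs(world, k, act, m)
                             for k in range(n + 1, m + 1))
    raise TypeError(f"not an action expression: {act!r}")

w = ["pay", "drive"]
assert occurs(w, 0, Seq("pay", Choice("drive", "walk")), 2)
```

On this encoding, (HAPPENS a) at point n corresponds to searching for some m ≥ n with occurs(w, n, a, m), and (DONE a) to the symmetric search below n.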
We will adopt the following abbreviations:

Actions: (DONE x a) =def (DONE a) ∧ (AGT x a), and (HAPPENS x a) =def (HAPPENS a) ∧ (AGT x a).

Eventually: ◇p =def ∃e (HAPPENS e;p?). ◇p is true if there is something that happens (including the null action) after which p holds, that is, if p is true at some point in the future.8

Later: (LATER p) =def ¬p ∧ ◇p.

Always: □p =def ¬◇¬p. □p means that p is true throughout the course of events from now on.

Before: (BEFORE p q) =def ∀c (HAPPENS c;q?) ⊃ ∃a (a ≤ c) ∧ (HAPPENS a;p?). The wff p will become true no later than q.

Know: (KNOW x p) =def p ∧ (BEL x p).

Competence: (COMPETENT x p) =def (BEL x p) ⊃ p. Agents that are competent with respect to some proposition have only correct beliefs about it.9

C. Properties and Assumptions

It is not too difficult to establish that action expressions as defined here have their dynamic logic interpretation. For example,

⊨ (HAPPENS p?;(b|c)) ≡ [p ∧ ((HAPPENS b) ∨ (HAPPENS c))].

So a test action followed by a nondeterministic action happens iff the test is true and one of the two actions happens next. Moreover, HAPPENS and DONE interact, as in

⊨ (HAPPENS a) ≡ (HAPPENS a;(DONE a)?)
⊨ (DONE a) ≡ (DONE (HAPPENS a)?;a).

So, for example, if an action happens next, then immediately afterwards, it is true that it just happened.

Note that there is a sharp distinction between action expressions and primitive events. Examples of the latter might include moving an arm, exerting force, and uttering a word or sentence. Action expressions are used to characterize sequences of primitive events that satisfy certain properties. For example, a movement of a finger may result in a circuit's being closed, which may result in a light's coming on. We will say that one primitive event happened, which can be characterized by various complex action expressions.

Turning now to the attitudes, they can be shown to satisfy the usual closure conditions:

⊨ (BEL x p) ∧ (BEL x (p ⊃ q)) ⊃ (BEL x q).
If ⊨ p then ⊨ □(BEL x □p)

(and similarly for GOAL). In addition we make the following assumptions:10

Agents Know: ⊨ (HAPPENS x e) ⊃ (BEL x (HAPPENS e)). A primitive event performed by an agent will occur only if its agent realizes it will. Accidental or unanticipated events are possible, but these are considered to happen to an agent. Note that this assumption does not apply to arbitrary action expressions here, since an agent may obviously achieve some state of affairs unknowingly.

Consistency: ⊨ (GOAL x p) ⊃ ¬(GOAL x ¬p). There is always at least one world compatible with the goals of an agent. Because of Realism below, this also applies to belief.

Realism: ⊨ (BEL x p) ⊃ (GOAL x p). Every chosen world is compatible with an agent's beliefs. This is not to say that an agent cannot simultaneously believe that p is false and want p to be true at some later point; however, if an agent (that does not engage in wishful thinking) believes that p is false now, her chosen worlds all reflect this fact.

No infinite deferral: ⊨ ◇¬(GOAL x (LATER p)). Agents eventually drop all "achievement" goals, goals they believe are currently false but want to be true later. These either become "maintenance" goals, goals the agent believes are currently true and need only be kept true, or are dropped completely (for example, if the agent comes to believe they are unachievable).

Together, these assumptions imply that achievement goals must be consistent, compatible with all beliefs about the future, and of limited duration. At this point, we are finished with the foundational level, having briefly described agents' beliefs and goals, events, and time. Further discussion can be found in [Cohen and Levesque, 1987].

8Note that ◇p and ◇¬p are jointly satisfiable.

9It is reasonable to assume that agents are competent with respect to their own beliefs, goals, and their having done primitive events.

10In other words, we only deal with semantic structures where these propositions come out true.

III. Persistent Goals

To capture one grade of commitment (fanatical) that an agent might have toward her goals, we define a persistent goal, P-GOAL, to be one that the agent will not give up until she thinks it has been satisfied, or until she thinks it will never be true.11 Specifically, we have

Definition 1 (P-GOAL x p) =def (GOAL x (LATER p)) ∧ (BEL x ¬p) ∧ (BEFORE [(BEL x p) ∨ (BEL x □¬p)] ¬(GOAL x (LATER p))).

Notice the use of LATER, and hence ◇, above. P-GOALs are achievement goals; the agent's goal is that p be true in the future, and she believes it is not currently true. As soon as the agent believes it will never be true, we know the agent must drop her goal (by Realism), and hence her persistent goal. Moreover, as soon as an agent believes p is true, the belief conjunct of P-GOAL requires that she drop the persistent goal that p be true. Thus, these conditions are necessary and sufficient for dropping a persistent goal. However, the BEFORE conjunct does not say that an agent must give up her simple goal when she thinks it is satisfied, since agents may have goals of maintenance. Thus, achieving one's persistent goals may convert them into maintenance goals.

A. The Logic of P-GOAL

The logic of P-GOAL is weaker than one might expect. We have the following:

1. (P-GOAL x p∧q) ≢ (P-GOAL x p) ∧ (P-GOAL x q)
2. (P-GOAL x p∨q) ≢ (P-GOAL x p) ∨ (P-GOAL x q)
3. ⊨ (P-GOAL x ¬p) ⊃ ¬(P-GOAL x p)

First, (P-GOAL x p∧q) does not imply (P-GOAL x q) because, although the antecedent is true, the agent might believe q is already true, and thus cannot have q as a P-GOAL.12 Conversely, (P-GOAL x p) ∧ (P-GOAL x q) does not imply (P-GOAL x p∧q), because (GOAL x (LATER p)) ∧ (GOAL x (LATER q)) does not imply (GOAL x (LATER p∧q)); p and q could be true at different times. Similar analyses can be given for the other properties of P-GOAL.
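As an informal illustration of these drop conditions, here is a small Python sketch of ours (not the authors') that runs a "fanatical" agent over a sequence of belief states and reports when Definition 1 permits the persistent goal to be dropped; the field names are invented for the example.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class BeliefState:
    believes_p: bool        # (BEL x p): the goal is now satisfied
    believes_never_p: bool  # (BEL x []~p): the goal can never be satisfied

def may_drop(s: BeliefState) -> bool:
    # Definition 1: these are the only conditions under which
    # (P-GOAL x p) may be given up.
    return s.believes_p or s.believes_never_p

def fanatic_drop_point(trace: List[BeliefState]) -> Optional[int]:
    """Index of the first state at which the commitment may be dropped,
    or None if the agent stays committed throughout the trace."""
    for i, s in enumerate(trace):
        if may_drop(s):
            return i
    return None

# The agent keeps the goal until step 2, when she believes p holds.
trace = [BeliefState(False, False),
         BeliefState(False, False),
         BeliefState(True, False)]
assert fanatic_drop_point(trace) == 2
```

Note that reaching the drop point forces the agent to give up the persistent goal (the belief conjunct fails), though the simple goal may survive as a maintenance goal, as discussed above.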
We now give a crucial theorem:

Theorem 1 From persistence to eventualities: If someone has a persistent goal of bringing about p, p is always within her area of competence, and the agent will only believe that p will never occur after she drops her goal, then eventually p becomes true:

⊨ (P-GOAL y p) ∧ □(COMPETENT y p) ∧ ¬[BEFORE (BEL y □¬p) ¬(GOAL y (LATER p))] ⊃ ◇p.

If an agent who is not competent with respect to p adopts p as a persistent goal, we cannot conclude that eventually p will be true, since she could forever create incorrect plans. If the goal is not persistent, we also cannot conclude ◇p since she could give up the goal without achieving it. If the goal actually is impossible for her to achieve, but she does not know this and commits to achieving it, then we know that eventually, perhaps after trying hard to achieve it, she will come to believe it is impossible and give up.

As the formalism now stands, once an agent has adopted a persistent goal, she will not be deterred. For example, if agent x receives a request from agent y, and decides to cooperate by adopting a persistent goal to do the requested act, y cannot "turn x off." This is clearly a defect that needs to be remedied. The remedy depends on the following definition:

Definition 2 (P-R-GOAL x p q) =def (GOAL x (LATER p)) ∧ (BEL x ¬p) ∧ (BEFORE [(BEL x p) ∨ (BEL x □¬p) ∨ (BEL x ¬q)] ¬(GOAL x (LATER p))).

That is, a necessary condition to giving up a P-R-GOAL is that the agent believes it is satisfied, or believes it is impossible to achieve, or believes ¬q. Such propositions q form a background that justifies the agent's intentions. In many cases, such propositions constitute the agent's reasons for adopting the intention. For example, an agent could adopt the persistent goal to buy an umbrella relative to her belief that it will rain. That agent could consider dropping her persistent goal should she come to believe that the forecast has changed. One can prove a theorem analogous to Theorem 1: If someone has a persistent goal of bringing about p, relative to q, and, before dropping her goal, p remains within her area of competence, and the agent will not believe that p will never occur or believe that q is false, then eventually p becomes true.

12For example, I may be committed to your knowing q, but not to achieving q itself.

At this point, we are ready to define intention. There are two forms of intention: intending actions and intending to achieve some state of affairs. For this brief paper, we only present the former; see [Cohen and Levesque, 1987] for the latter.

Typically, one intends to do actions. Accordingly, we define INTEND1 to take an action expression as its argument.

Definition 3 (INTEND1 x a) =def (P-GOAL x [DONE x (KNOW x (HAPPENS a))?;a]).

Let us examine what this says. First of all, (fanatically) intending to do an action a is a special kind of commitment (i.e., persistent goal) to have done a. However, it is not a commitment just to doing a, for that would allow the agent to be committed to doing something accidentally or unknowingly. It seems reasonable to require that the agent be committed to believing she is about to do the intended action, and then doing it.

Secondly, it is a commitment to success, to having done the action. As a contrast, consider the following inadequate definition of INTEND1:

(INTEND1 x a) =def (P-GOAL x (KNOW x (HAPPENS x a))).

This would say that an intention is a commitment to being on the verge of doing a (knowingly).
Of course, being on the verge of doing something is not the same as doing it; any unforeseen obstacle could permanently derail the agent from ever performing the intended act. This would not be much of a commitment.

Just as we refined our analysis of persistent goal to allow the commitment to be relative to the agent's believing arbitrary states-of-affairs, so too can we extend the above definition of intention:

Definition 4 (INTEND1 x a q) =def (P-R-GOAL x [DONE x (KNOW x (HAPPENS a))?;a] q).

In this section we show how various properties of the commonsense concept of intention are captured by our analysis based on P-GOAL. First, we consider how our definitions characterize the functional roles that intentions are thought to play in the mental lives of agents [Bratman, 1984, Bratman, 1986].

1. Intentions normally pose problems for the agent; the agent needs to determine a way to achieve them. If the agent intends an action as described by an action expression, then she knows in general terms what to do. However, the action expression may have disjunctions or conditionals in it. Hence, she need not know at the time of forming the intention exactly what will be done. But unless she comes to believe the action is unachievable, she must sooner or later correctly believe that she is about to do something that accomplishes the action (by Theorem 1). Now by the Agents Know assumption, a primitive event will occur only if its agent believes it will. So sooner or later, the agent will decide on a specific thing to do. Thus, agents are required to convert non-specific intentions into specific choices of primitive events to that end.

2. Intentions provide a "screen of admissibility" for adopting other intentions. If an agent has an intention to do b, and the agent (always) believes that doing a prevents the achievement of b, then the agent cannot have the intention to do a;b (or even the intention of doing a before doing b):

Theorem 2 ⊨ (INTEND1 x b) ∧ □(BEL x [(DONE x a) ⊃ □¬(DONE x b)]) ⊃ ¬(INTEND1 x a;b).

Thus our agents cannot intentionally act to make their persistent goals unachievable. For example, if they have adopted a time-limited intention, they cannot intend to do some other act knowing it would make achieving that time-limited intention forever false.

3. Agents "track" the success of their attempts to achieve intentions. In other words, agents keep their intentions after failure. Assume an agent has an intention to do a, and then does something, b, thinking it would bring about the doing of a, but then comes to believe it did not. If the agent does not think that a can never be done, the agent still has the intention to do a:

Theorem 3 ⊨ (BEL x ¬(DONE x a)) ∧ ¬(BEL x □¬(DONE x a)) ∧ (DONE x [(INTEND1 x a) ∧ (BEL x (HAPPENS x a))]?;b) ⊃ (INTEND1 x a).

Because an agent cannot give up an intention until it is believed to have been achieved or to be unachievable, the agent here keeps the intention.

Other writers have proposed that if an agent intends to do a, then

4. The agent does not believe she will never do a. This principle is embodied directly in the assumptions of Consistency and Realism. If an agent forms the intention to do a, then in her chosen worlds, she eventually does a. But this is not realistic if she believes she will never do a.

5. The agent believes that a can be done. We do not have a modal operator for possibility, but we do have the previous property, which may be close enough for current purposes.
6. Sometimes, the agent believes she will in fact do a. This is a consequence of Theorem 1, which states conditions under which a P-GOAL will eventually come to be true. So given that an agent believes both that she has the intention to do a and that these conditions hold, she will also believe ◇(DONE x a).

7. Agents need not intend the expected side-effects of their intentions. Recall that in an earlier problem, an agent intended to cure an illness believing that the necessary medicine would upset her stomach. The fact that the agent knowingly chooses to upset her stomach without intending to do so is accommodated in our scheme since (INTEND1 x a;p?) ∧ (BEL x □(p ⊃ q)) does not imply (INTEND1 x a;q?). The reason is that although there is a belief that p is inevitably accompanied by q, this belief could change over time (for example, if the agent finds out about new medicine). Under these circumstances, although p remains a persistent goal, q can now be realistically dropped. Thus, q was not a truly persistent goal after all, and so there was no intention.

However, with □(BEL x □(p ⊃ q)) as the initial condition, q can no longer be dropped, and so our formalism now says that q is intended. But this is as it should be. If the agent always believes, no matter what, that stomach upset is required by effective treatment, then in her commitment to such treatment, she will indeed be committed to upsetting her stomach, and track her attempts at that, just like any other intention.

We can also demonstrate that our notion of intention avoids McDermott's "Little Nell" problem [McDermott, 1982], in which an agent drops her intention precisely because she believes it will be successful. The problem can occur with any concept of intention (like ours) that satisfies the following two plausible principles:

1. An intention to achieve p can be given up when the agent believes that p holds.

2. Under some circumstances, an intention to achieve p is sufficient for the agent to believe that p will eventually be true.

The problem arises when the intention p is of the form ◇q. By the second principle, in some cases, the agent will believe that eventually ◇q will be true. But ◇◇q is equivalent to ◇q, and so, by the first principle, the belief allows the intention to be given up. But if the agent gives it up, ◇q need not be achieved after all! Our theory of intention based on P-GOAL avoids this problem because an agent's having a P-GOAL requires that the goal be true later and that the agent not believe it is currently true. In particular, an agent never forms the intention to achieve anything like ◇q: because (LATER ◇q) is always false, so is (P-GOAL x ◇q).

Finally, our analysis supports the observation that intentions can (loosely speaking) be viewed as the contents of plans (e.g., [Bratman, 1986, Cohen and Perrault, 1979, Pollack, 1986]). Although we have not given a formal analysis of plans here (see [Pollack, 1986] for such an analysis), the commitments one undertakes with respect to an action in a plan depend on the other planned actions, as well as the pre- and post-conditions brought about by those actions. If x adopts a persistent goal p relative to (GOAL x q), then necessary conditions for x's dropping her goal include her believing that she no longer has q as a goal. Thus, (P-R-GOAL x p (GOAL x q)) characterizes an agent's having a persistent subgoal p relative to the supergoal q. An agent's dropping a supergoal is now a necessary (but not sufficient) prerequisite for her dropping a subgoal. Thus, with the change to relativized persistent goals, we open up the possibility of having a complex web of interdependencies among the agent's goals, intentions, and beliefs. We always had the possibility of conditional P-GOALs. Now, we have added background conditions that could lead to a revision of one's persistent goals/intentions.
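To see how relativized commitment changes the picture, the earlier sketch can be extended (again, our illustration rather than the authors' formalism) with the escape clause of Definition 2; here p and q are instantiated to the umbrella example, and the field names are invented.

```python
from dataclasses import dataclass

@dataclass
class BeliefState:
    believes_p: bool        # (BEL x p): goal achieved
    believes_never_p: bool  # (BEL x []~p): goal impossible
    believes_not_q: bool    # (BEL x ~q): background condition gone

def may_drop_relativized(s: BeliefState) -> bool:
    # Definition 2: same conditions as Definition 1, plus one
    # extra way out, believing ~q.
    return s.believes_p or s.believes_never_p or s.believes_not_q

# p = "own an umbrella", q = "it will rain".  The forecast changes at
# step 1, so the P-R-GOAL may be dropped even though p was never achieved.
trace = [BeliefState(False, False, False),
         BeliefState(False, False, True)]
assert [may_drop_relativized(s) for s in trace] == [False, True]
```

With q itself a goal of the agent, as in (P-R-GOAL x p (GOAL x q)), the same check models subgoals: dropping the supergoal licenses, but does not force, dropping the subgoal.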
V. Conclusion

Autonomous agents need to be able to reason not only about the plans that other agents have, but also about their state of commitment to those plans. If one agent finds out that another has failed in attempting to achieve something, the first should be able to predict when the other will try again. The first agent should be able to reason about the other agent's intentions and commitments rather than be required to simulate the other agent's planning and replanning procedures.

This research has developed a formal theory of intention that shows the intimate relationship of intention to commitment. Whereas other logics have related belief and knowledge to action, we have explored the consequences of adding another modality for goals, and have examined the effects of keeping goals over time. The logic of intention derives from this logic of persistent goal, and is finer-grained than one might expect from our use of a possible-worlds foundation. It provides a descriptive foundation for reasoning about the intentions of other agents, without yet making a commitment to a reasoning strategy. Finally, it serves as the foundation for a theory of speech acts and communication [Cohen and Levesque].

VI. Acknowledgements

James Allen, Michael Bratman, Jim des Rivieres, Joe Halpern, David Israel, Joe Nunes, Calvin Ostrum, Ray Perrault, Martha Pollack, and Moshe Vardi provided many valuable suggestions. Thanks to you all.

References

[Allen and Perrault, 1980] J. F. Allen and C. R. Perrault. Analyzing intention in dialogues. Artificial Intelligence, 15(3):143-178, 1980.

[Bratman, 1984] M. Bratman. Two faces of intention. The Philosophical Review, XCIII(3):375-405, 1984.

[Bratman, 1986] M. Bratman. Intentions, plans, and practical reason. 1986. Harvard University Press, in preparation.

[Cohen and Levesque] P. R. Cohen and H. J. Levesque. Rational interaction as the basis for communication. In preparation.

[Cohen and Levesque, 1987] P. R. Cohen and H. J. Levesque. Persistence, Intention, and Commitment. Technical Report 415, Artificial Intelligence Center, SRI International, Menlo Park, California, February 1987. Also appears in Proceedings of the 1986 Timberline Workshop on Planning and Practical Reasoning, Morgan Kaufman Publishers, Inc., Los Altos, California.

[Cohen and Perrault, 1979] P. R. Cohen and C. R. Perrault. Elements of a plan-based theory of speech acts. Cognitive Science, 3(3):177-212, 1979. Reprinted in Readings in Artificial Intelligence, Morgan Kaufman Publishing Co., Los Altos, California, B. Webber and N. Nilsson (eds.), pp. 478-495, 1981.

[McDermott, 1982] D. McDermott. A temporal logic for reasoning about processes and plans. Cognitive Science, 6(2):101-155, April-June 1982.

[Pollack, 1986] M. E. Pollack. Inferring Domain Plans in Question Answering. PhD thesis, Department of Computer Science, University of Pennsylvania, 1986.

[Schmidt et al., 1978] C. F. Schmidt, N. S. Sridharan, and J. L. Goodson.
The plan recognition problem: an intersection of artificial intelligence and psychology. Artificial Intelligence, 10:45-83, 1978.

[Sidner and Israel, 1981] C. Sidner and D. Israel. Recognizing intended meaning and speaker's plans. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence, Vancouver, B.C., 1981.
Thomas Y. Galloway
USC/Information Sciences Institute1
4676 Admiralty Way
Marina del Rey, CA 90292-6695
Telephone: 213-822-1511
GALLOWAY@VAXA.ISI.EDU

Abstract

The task of constructing knowledge bases is a difficult one due to their size and complexity. A useful aid for this task would be a system which has both knowledge about a particular knowledge representation scheme and tools with which to manipulate the representation's components. Such a system would be a knowledge manipulation system (KMS). This paper describes a KMS called TAXI which is used to manipulate knowledge in the form of a taxonomic knowledge representation scheme. The particular taxonomic representation used is discussed, along with support for the usefulness of a KMS for this particular representation scheme. Tools provided in the TAXI system are described, as are possible applications for the system.

This paper describes TAXI, a first version of a Knowledge Manipulation System (KMS) for taxonomic representation. A KMS is a system which contains knowledge about some representation scheme and its components, along with tools allowing users to manipulate and explore domain knowledge represented by that scheme.

In order to most effectively utilize large amounts of domain knowledge, it is essential that the knowledge be organized in some manner. In addition, the organization process enables a person to better understand the domain and relationships among the knowledge being organized. The usefulness of this meta-knowledge and experimentation within a domain while trying to organize its information is shown in the work of Swartout and Papert [Swartout, 1981] [Papert, 1979].

The taxonomic representation scheme was selected for several reasons. Taxonomies are useful for organizing declarative knowledge, which is used in many knowledge based systems, particularly diagnostic systems.

Constructing a taxonomy creates a classification process for a domain so that each domain object is uniquely defined by its attributes. A set of objects representing a domain are defined by associating a value with each of a set of attributes for each object. The domain is further subdivided into classes of objects which have the same value for each of a given set of attributes.

The final taxonomy will be the result of numerous decisions about attributes, values, application order of attributes, etc. Only by participating in these decisions will a user, whether human or machine, be fully aware of the implicit knowledge behind the final taxonomic structure. Thus, the construction process will provide the constructor with a better understanding of the domain.

Except for already well defined and understood domains, it is currently not possible to fully automate taxonomy construction [Swartout, 1981]. This is due to taxonomy construction being a meta-classification problem, requiring that large amounts of domain knowledge be possessed before construction begins. Automatic construction is also not practical since users will not obtain the better understanding of the domain from performing the construction.

However, due to the needs of detecting generalities among objects, keeping track of dependencies, determining useful classification attributes, and constantly reforming the taxonomy, large taxonomies are difficult for people to construct.

Thus, the hybrid approach of a KMS, used as a taxonomic assistant in this case, appears appropriate. A person can contribute the domain knowledge and control the construction process while the assistant can keep track of dependencies, perform the grunge work of bookkeeping, and detect generalities in the knowledge uncovered by use of the taxonomic representation. Taxonomy construction is thus easier, more efficient, and allows for more experimentation during the design phase. Taxonomy editing, which is required unless dealing with a static domain, will also be easier.

Finally, a taxonomic KMS is useful for learning about a domain. By using the provided tools to alter the structure of an existing taxonomy, users can examine a domain from many perspectives, discovering the consequences of different classifications and obtaining a better understanding of how objects and classes are interrelated than can be obtained via a static taxonomy such as the biological taxonomy used in schools.

The remainder of the paper discusses taxonomic representation, TAXI's design goals, its component tools, and possible applications.

1This work was done while the author was at the University of Pennsylvania. It was supported by the author while on a teaching assistantship, and by Penn via use of machines. No grants or contracts at all.

Representation Scheme

Taxonomies can be represented in several ways. For example, a powerful representation results from the use of a directed acyclic graph or lattice as the base data structure. This representation allows nodes to have several direct subsumers, and is used by the KL-ONE and NIKL representation languages [Brachman, 1985].

A simpler data structure for taxonomic representation is the discrimination tree: a set of nodes partially ordered by a subsumption relation to form a tree structure. Each node can have only one subsumer (parent). Attributes and their values are used as discriminators to divide sets of objects into subsets.

There are four reasons why TAXI uses the discrimination tree structure.

1. Computational simplicity: For the first version of a taxonomic assistant, it was desirable to have an easily implemented representation. Many design decisions involving user interface design, tool selection, and which taxonomic components are important would be the same no matter what representation was used.

2. Conceptual simplicity: Among other aims, TAXI is intended to be used as a learning tool. The discrimination tree representation is easier to comprehend for most users, making it easier to perceive taxonomy component interrelationships and dependencies.

3. Familiarity: Many people are already familiar with the discrimination tree structure from their exposure to the biological taxonomy in school, and thus will already understand the underlying data structure. Having to explain a more complex representation scheme to a user will serve as one more obstacle to system use.

4. Graphical representation: Discrimination trees are simple to represent graphically. A graphic based user interface is important because it conveys information in a direct and easily understood manner. Using a discrimination tree, most users already possess the intuitive understanding that objects/classes which are close together are relatively similar, and that the further down a tree a class is, the less general it is.
TAXI's design paradigm is similar to a good text editor's. It contains knowledge about discrimination tree structure, taxonomy components, their interrelationships and dependencies, and tools for structure and domain manipulation.

The final result of a taxonomy construction should be a tree where each object in the domain set occupies a unique leaf node. Objects are distinguished by defining them in terms of attributes and associated values. Attributes are used as discriminators at levels in the tree.

Taxonomy construction is essentially a trial and error process, and is both incremental and non-modular. Decisions must be made as to which attributes are "best" for the desired taxonomy, and in what order they are applied as discriminators. In order to experiment with these decisions, the taxonomy must be reformed with each new experiment. However, since TAXI automatically reforms the taxonomy for users, experimentation is much easier.

TAXI possesses knowledge of six taxonomic components: Objects, Attributes, Types, Classes, Discriminators, and the Taxonomy itself. TAXI has tools for the creation and editing of each component, as well as other tools for manipulating overall taxonomic structure and interrelationships. While individual tools are not very powerful, the entire set (currently about 40) provides users with control over the construction, editing, exploration, and experimental processes for discrimination tree taxonomies. In this section, I discuss the structure of a TAXI taxonomy, and some of the more powerful and useful tools.

A taxonomy's initial state is a single class, containing all domain objects, which is the root of the tree. This initial class/node is divided into subclasses, which are the nodes at the next level of the tree, by selecting an attribute and creating a subclass for each possible value of the attribute. In the case of attributes which can have a large, or even infinite, number of possible values (ones which take numeric values, for example), the subclasses are limited to values currently used in the taxonomy's objects.

Objects consist of a name and a definition. The definition is the set of defined attributes and their values for that object. Classes contain objects as members, and are defined as the set of attributes and associated values which are the same for all members. Attributes and their values define classes and objects.

For computational efficiency and to let users easily determine possible values, each attribute has a type which defines all the possible values for the attribute. Types can be associated with more than one attribute.

Untyped attributes have been suggested due to difficulty in anticipating all possible values when initially defining an attribute [Silverman, 1984]. However, TAXI provides a tool to easily edit type definitions. The usefulness of types for formalization of attribute definitions and detection of invalid values overcomes the minor inconvenience of editing or examining a type definition.

TAXI currently has two meta-types, numeric and non-numeric, of which all types are one or the other. Non-numeric types may contain numeric values, but numeric types can only contain numbers. Meta-types exist only for implementation reasons. It is difficult to represent in a menu infinite or very large numbers of possible values, such as are possible for attributes which possess numeric values. By limiting the possible values for an attribute to numbers, it is possible to ask the user to merely type in a number when assigning a value, as opposed to trying to display an infinite number of possible values in a menu. Numeric types may be restricted to allow values only between user-defined ranges; for example, a type might allow only values between 1 and 100 as well as between 3000 and 4000.
By limiting the possible values for an attribute to numbers, it is possible to ask the user to merely type in a number when assigning a value, as opposed to trying to display an infinite number of possible values in a menu. Numeric types may be re- stricted to allow values only between user-defined ranges; for example a type might allow only values between 1 and 100 as well as between 3000 and 4000. 41% Knowledge Representation Types automatically have *NONE* included as one of their values. *NONE* is used when an attribute is not ap- plicable to an object, such as the attribute hair-color for the object Yul-Brenszer. Of course, TAXI has knowledge about the meaning of the *NONE* value. The current implementation of TAXI requires that every object in the taxonomy have a value for each attribute used as a dis- criminator in the tree structure. Changes to type definitions can have large impacts on taxonomies. For example, if a value is deleted from a type, all objects with that value for an attribute of that type will be affected, as will all classes based on that at- tribute/value. Users are prompted for a new value for each instance of an affected attribute, and affected objects are automatically reclassified. If a type is deleted, the effects can propagate. Users are asked whether to also delete the attributes of that type, and if not, to provide a new type(s). This will fur- ther affect the taxonomy, as discussed later. Types can be defined using other types by taking a subset of values from an existing type, or via an intersec- tion or union operation on a set of existing types. New values can be added to the result of these methods as well. Types are assigned during attribute definition. A new type may be declared when the first attribute of the type is declared. Immediately after an attribute is declared, users are queried about the attribute’s value for each ex- isting object. The attribute/value is incorporated into the existing object definitions. When a new attribute is defined, users are asked if it should be incorporated immediately into the taxonomy as a discriminator. A user is free to decline. A list of defined attributes which are not used as discriminators is maintained by TAXI and can be examined at any time. If an attribute is used in the taxonomy as a discrim- inator, changes to attribute values will cause TAXI to automatically reclassify affected objects, which will alter the membership of the appropriate classes. Whether an attribute is used as a discriminator, and at what position in the tree it is used, is usually left to users, unless assistance is requested as discussed later. When an attribute is used, the user must specify the level it is to be used at in the tree. TAXI then splits all classes on the chosen level into subclasses based on the member objects’ associated value for that attribute. The taxonomy is then reformed by applying the discriminators used at levels be- low the newly added discriminator to these subclasses in the same order in which they were previously applied. The order in which attributes are used as discrimina- tors determines whether objects will be contained in the same class(es) at points other than at the top of the tree. For example, if the objects man and ostrich are both in a taxonomy, and the first discrimination is Number-of-Legs, then they’ll be in the same class on level 2, while if the dis- crimination is Has-Feathers, they would not. 
Since ideally each object in the taxonomy should be the sole class member in a leaf node of the tree, and since people intuitively believe that the closer two discrimination tree nodes are together, the more similar they are, the choice of what order to apply discriminators can have a large effect on how users will perceive the relative similarity of represented objects.

Therefore, TAXI allows users to change the level at which an attribute is used as a discriminator. By altering the discriminator levels, users can observe taxonomies from several perspectives and notice which objects tend to share class membership (or which don't), learning which objects are relatively similar.

Since attributes define objects and classes, their selection is important. TAXI provides a tool for attribute selection called Attribute Aid, based on Personal Construct Theory [Boose, 1984]. Attribute Aid encourages users to consider small subsets of the object set. In one form, users are presented with a randomly selected set of three objects on which Attribute Aid has not been previously used. They are then asked to define an attribute which would distinguish one object from the others; one of the objects would have a different value for that attribute than the other two. By remaining in Attribute Aid, such an attribute can be determined for every object. It is also possible to continue the process until all objects are uniquely defined, and as such are all represented as the sole member of a class which is a terminal node in the discrimination tree.

Attribute Aid can also present users with a class consisting of more than one object which is a leaf node in the tree. In other words, a class consisting of objects which have yet to be uniquely defined. Until all objects in the class are so distinguished, Attribute Aid will query the user for an attribute which will distinguish an object from the other members of that class.

TAXI can also suggest a "best" attribute for a level. Users may request either an even or skewed distribution for the next level. For even distribution, TAXI determines which attribute, among those not used above that level as discriminators, would cause the least standard deviation among subclass sizes if so used. For skewed, TAXI finds the attribute which would produce the largest standard deviation. Even distributions create relatively balanced trees, while skewed distributions are a quick method to cause singular terminal nodes to form quickly, but which form an unbalanced tree. TAXI also allows users to begin with a set of objects, and then order TAXI to continue to find either even or skewed discriminations until either all objects are uniquely defined, or all attributes have been used as discriminators.

It is also possible to define dependencies among objects or attributes such that changes to the definition of one will affect the definition of the other. Users are informed of such changes, although they occur automatically.

The preceding has been an abbreviated description of some of the tools provided by TAXI for manipulation of the taxonomic structure supplied by a discrimination tree. While each tool's power is limited, the combined power of the entire set allows users to quickly and easily construct, modify, or experiment with a taxonomy represented in the form of a discrimination tree.
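As an illustration of the reforming and "best attribute" behavior just described, here is a small Python sketch of our own; it is not TAXI itself, and the function and attribute names are invented. Objects are attribute-to-value dictionaries, a taxonomy is rebuilt from scratch for a given discriminator order, and the even/skewed choice compares standard deviations of subclass sizes.

```python
import statistics
from collections import defaultdict

NONE = "*NONE*"   # marks "attribute not applicable"

def split(objects, attribute):
    """Split a class into subclasses, one per value of `attribute`."""
    subclasses = defaultdict(list)
    for obj in objects:
        subclasses[obj.get(attribute, NONE)].append(obj)
    return dict(subclasses)

def build_tree(objects, discriminators):
    """Reform the taxonomy for a given discriminator order."""
    if not discriminators or len(objects) <= 1:
        return objects                              # a leaf class
    first, rest = discriminators[0], discriminators[1:]
    return {value: build_tree(members, rest)
            for value, members in split(objects, first).items()}

def best_attribute(objects, unused, skewed=False):
    """'Even' minimizes the standard deviation of subclass sizes;
    'skewed' maximizes it."""
    def spread(attr):
        sizes = [len(members) for members in split(objects, attr).values()]
        return statistics.pstdev(sizes) if len(sizes) > 1 else 0.0
    return (max if skewed else min)(unused, key=spread)

animals = [
    {"name": "man",     "Number-of-Legs": 2, "Has-Feathers": "no"},
    {"name": "ostrich", "Number-of-Legs": 2, "Has-Feathers": "yes"},
    {"name": "dog",     "Number-of-Legs": 4, "Has-Feathers": "no"},
]
print(build_tree(animals, ["Number-of-Legs", "Has-Feathers"]))
```

Because the tree is rebuilt from the objects on each call, moving a discriminator to a different level is just a reordering of the list passed to build_tree, which mirrors the automatic reforming described above.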
TAXI's research goals were to design and build a knowledge manipulation system which could serve as a taxonomic assistant. The system would help users increase their knowledge about a domain by enabling them to manipulate elements of the taxonomic representation, in addition to assisting in the taxonomic construction process. In this section of the paper, we discuss possible applications for a taxonomic assistant.

TAXI has several practical applications. The most obvious would be for it to be used as a tool in the construction or reformulation of large taxonomies. One such area in which it would be quite useful would be the current effort to redefine the biological taxonomy in terms of genetic material and differences [Francis, 1985].

TAXI could also be used to determine previously unrealized similarities between objects in a domain. This information could then be incorporated into knowledge-based diagnostic systems for the domain. For example, a taxonomy of the intended domain for a diagnostic system could be constructed using TAXI. Then, by reforming the taxonomy by using different orderings or combinations of attributes, those diseases which are often located close together in the taxonomy are those which are likely to be confused with each other. A knowledge engineer could incorporate this information in the system in the form of special procedures for distinguishing between the discovered similar cases.

Finally, TAXI's exploratory tools could improve knowledge engineers' domain understanding. A domain expert could use TAXI to create a taxonomy of various domain concepts. A knowledge engineer could then use TAXI's tools to examine, explore, and play with that taxonomy, enabling him/her to obtain a better intuitive understanding of the domain and how its objects interact before beginning work on the knowledge-based system.
Brachman, R., On the Epistemological Status of Semantic Net- works, in Readings in Knowledge Regresenta- tion, Brachman R. & Levesque H. (eds), Morgan Kaufman Publishing Company, 1985. 3. Francis, K., Personal communication, 1985. 4. Papert, S., Mindstorms, Houghton-Mifflen, 1979. 5. Silverman, D., An Interactive, Incremental Classifier, Technical Report MS-CIS-84-19, University of Pennsylvania, 1984. 6. Swartout WV., Explaining and Justifying Expert Consulting Pro- grams, IJCAI Conference Proceedings, 1981. 420 Knowledge Representation
Complexity in Classificatory Reasoning

Ashok Goel, N. Soundararajan, and B. Chandrasekaran
Department of Computer and Information Science
The Ohio State University
Columbus, Ohio 43210

Abstract

Classificatory reasoning involves the tasks of concept evaluation and classification, which may be performed with use of the strategies of concept matching and concept activation, respectively. Different implementations of the strategies of concept matching and concept activation are possible, where an implementation is characterized by the organization of knowledge and the control of information processing it uses. In this paper we define the tasks of concept evaluation and classification, and describe the strategies of concept matching and concept activation. We then derive the computational complexity of the tasks using different implementations of the task-specific strategies. We show that the complexity of performing a task is determined by the organization of knowledge used in performing it. Further, we suggest that the implementation that is computationally the most efficient for performing a task may be cognitively the most plausible as well.

1. Introduction

Classificatory reasoning is a type of knowledge-using reasoning that deals with performance of the classification task, and has received significant attention in research on knowledge-using problem-solving systems [Clancey, 1985; Gomez and Chandrasekaran, 1984]. Given a taxonomy of concepts in a domain, and a set of data describing a situation in the domain, the classification task is to determine which concepts are present in the situation. In diagnosing a device in some situation for instance, the classification task is to determine which device malfunctions are present in the situation, while in assessing an event in some situation the classification task may be to find which threats to some system are present in the situation. A task may be performed with use of a knowledge-using strategy appropriate for the task. The classification task may be performed with use of the strategy of concept activation, which is to activate concepts in the taxonomy for evaluation of their presence in a given situation.

Concept evaluation is a task by itself since it may "occur" not only in classificatory reasoning but also in other types of knowledge-using reasoning such as plan selection. Given a concept in a domain and a set of data describing a situation in the domain, the task of concept evaluation is to determine whether the concept is appropriate for the situation. The sense in which a concept is appropriate for a situation depends on the type of the concept; if the concept is a device malfunction for instance, then the concept is appropriate for the situation if it is present in it, and if the concept is a plan to thwart a threat then the concept is appropriate for the situation if it is applicable to it. Concept evaluation may be performed with use of the strategy of concept matching, which is to match a knowledge structure for the concept with the description of the situation, and determine a likelihood that the concept is appropriate for the situation by the degree of the match [Berliner and Ackley, 1982; Bylander and Johnson, 1987]. A strategy might be implemented in more than one way, where an implementation may be characterized by the organization of knowledge and the control of information processing it uses.
An important issue in classificatory reasoning is the computational complexity of performing the tasks of classification and concept evaluation. The complexity of performing a task depends on the implementation of the strategy used to perform it. In this paper we derive the computational complexity of concept evaluation and classification for different implementations of concept matching and concept activation, respectively. We show that the complexity of performing a task is determined by the organization of knowledge used in performing it. Further, we suggest that the implementation that is computationally the most efficient for performing a task may be cognitively the most plausible as well.

2. Complexity of Concept Evaluation

2.1. Definitions

Let c be a concept in a given domain. Let U be a set of p discrete values, where p is some small integer. A value u ∈ U represents the likelihood that the concept c is appropriate for a specific situation in the domain. A high likelihood value implies that c is appropriate for the situation; a low value implies that c is not appropriate for the situation; and middle-range values imply various levels of uncertainty.

Let D be a finite set of n data d_i, i = 1,2,...,n in the domain of c. Let Q be a set of q truth values. Let v be a map, v : D → Q, which assigns a value from Q to each d ∈ D. A datum d ∈ D corresponds to an assertion about some feature in a specific situation in the domain, and v(d) is the truth value of the assertion in a q-valued truth system Q. If some d ∈ D asserts that feature z has a discrete value y in some situation for example, then in the case q is three, v(d) may be True, False, or Unknown depending on whether the assertion is known to be true, false, or it is not known whether the assertion is true or false.

We assume that for a given c, specification of D and v for any situation in a class of situations is a necessary and sufficient condition to determine u ∈ U for c. We define concept evaluation as a five-tuple (c, U, D, v, f_ce), where c, U, D, and v are as defined above, and f_ce is a function that takes D and v as inputs, and outputs a u ∈ U.

Concept evaluation may be performed with use of the strategy of concept matching, which is to match a conceptual structure with the description of the situation, and determine a likelihood that the concept is appropriate for the situation by the degree of the match. Concept matching may be viewed as an instantiation of f_ce. We now describe different implementations of concept matching, and derive the computational complexity of concept evaluation using these implementations. We assume an oracle for testing the value of one datum. We express the time complexity as the number of calls to the oracle, and the space complexity as the number of tests which have to be encoded in the knowledge base.

2.2. Table Look-up

A first implementation of concept matching is table look-up. Knowledge is organized as a q^n × 2 table. The first column of each row in the table contains a different entry from the q^n possible combinations of v(d_i), i = 1,2,...,n, and the second column contains the corresponding value of u. The control of information processing is row by row.
Starting with the first row in the table, the entry in the first column of the row is matched with the input; if the match succeeds then the entry in the second column of the row is the output, else the entry in the first column of the next row is matched with the input, and so on. The time and space complexities, T_ce1 and S_ce1 respectively, are given by

T_ce1 = O(n · q^n)
S_ce1 = O(n · q^n)

2.3. Tree Traversal

A second implementation of concept matching is tree traversal. Knowledge is organized as nodes in a discrimination tree. The top node in the tree corresponds to d_1 and has q branches coming out of it, one for each of the q possible values that d_1 may take. The branches lead to q different nodes, each of which corresponds to d_2 and has q branches coming out of it. This organization of knowledge is repeated until d_i, i = 1,2,...,n have been represented on the tree. Thus, there is one node at the first level, q nodes at the second level, q^2 nodes at the third level, and so on. There are q^n branches coming out of the q^(n-1) nodes at the nth level, each of which leads to a value of u.

The control of information processing is top-down. Starting with the root node, the branch that matches v(d_1) in the input is taken, and the next node is reached, where the branch that matches v(d_2) in the input is taken, and so on until v(d_i), i = 1,2,...,n in the input have been matched. The match of v(d_n) leads to the value of u which is the output. The time and space complexities, T_ce2 and S_ce2 respectively, are given by

T_ce2 = O(n)
S_ce2 = O(q^n)

The space complexity is the sum of the geometric series of q^i from i = 0 to i = n−1.
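Spelling out that remark (our arithmetic, not additional content from the paper), the tree has q^i nodes at level i+1, so

```latex
S_{ce2} \;=\; \sum_{i=0}^{n-1} q^{i} \;=\; \frac{q^{n}-1}{q-1} \;=\; O(q^{n}),
\qquad
T_{ce2} \;=\; \underbrace{1 + 1 + \cdots + 1}_{n \text{ oracle calls, one per level}} \;=\; O(n).
```

Tree traversal thus trades nothing away in space against table look-up while reducing the number of oracle calls from exponential to linear in n.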
2.4. Structured Matching

We may view D and c as characterizing two different levels of abstraction in a given domain. Let us introduce l−2 intermediate levels of abstraction G_j, j = 1,2,...,l−2 between the D and c levels. Let us consider n_1 features at the G_1 level, n_2 features at the G_2 level, and so on, with n_1 ≈ n/k, n_2 ≈ n_1/k, and so on, where n is the number of data in D, and k is some small constant greater than one. The number of intermediate levels of abstraction depends on k; l ≈ log_k(n). Let us assume that it is possible to form n_1 disjoint subsets of the values v(d_i), i = 1,2,...,n, with no more than k values in any subset, such that each such subset may be used to abstract the value of some feature at G_1. Let us assume also that it is possible to form n_2 disjoint subsets of the values of features at G_1, with no more than k features in any subset, such that each such subset may be used to abstract the value of some feature at G_2. This process may be repeated until the value u for c is abstracted from the values of features at the G_(l−1) level. We may call the hierarchy thus formed a feature hierarchy. The idea of hierarchical feature abstractions was first developed by Samuel in his work on game playing programs [Samuel, 1967].

We now describe a third implementation of concept matching called structured matching [Bylander and Johnson, 1987]. Knowledge is organized in a feature hierarchy as above. At any level in the hierarchy a small number of strongly interrelated features are grouped, evaluated, and abstracted together to a higher level feature, and weakly interrelated features are evaluated and abstracted in different groups. The interactions between two groups of features at some level are taken into account at a higher level in the hierarchy. k represents the upper bound on the number of features that may be grouped together at any level in the hierarchy.

The task of abstracting the value of a feature at some level from the values of features at the lower level in the hierarchy may be performed by a simple matcher that uses table look-up. Notice that in going from one level of abstraction to another it is not important if the range of likelihood values p does not equal q. In the case q is three for instance, if the likelihood value for a feature is high then the truth value of the feature may be taken as True, if the likelihood value for the feature is low then the truth value may be False, and if the likelihood value is in middle range then the truth value of the feature may be Unknown.

The control of information processing is top-down. The information processing starts by invocation of the simple matcher corresponding to the concept c, which is at the top node in the feature hierarchy. Since the simple matcher requires the values of the features input to it, it invokes the simple matchers at the next lower level in the hierarchy. The invocations of the simple matchers proceed downwards through the hierarchy until the level of abstraction just above the D level is reached. Since the values of input features at this level are known, the feature abstractions may begin. The feature abstractions flow upwards in the hierarchy until u is computed at the top node.

Since each simple matcher in the hierarchy uses the strategy of table look-up with no more than k values, the time and space complexities for each simple matcher are both O(k · q^k), which is a constant. There is one simple matcher on the top level, k simple matchers on the second level, k^2 on the third level, and so on for the l levels. The time and space complexities are the sum of the geometric series of k^i from i = 0 to i = l−1, where l ≈ log_k(n). Thus, the time and space complexities of concept matching using the strategy of structured matching, T_ce3 and S_ce3 respectively, are given by

T_ce3 = O(n)
S_ce3 = O(n)
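A minimal Python sketch of a structured matcher in this style (ours; the medical feature names are invented): each node abstracts the values of at most k children through a table of at most q^k entries, so evaluating the whole hierarchy touches each of the O(n) matchers exactly once.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple, Union

@dataclass
class Matcher:
    """One node of the feature hierarchy: a 'simple matcher' that
    abstracts its children's values via table look-up."""
    children: List[Union["Matcher", str]]   # sub-matchers or raw datum names
    table: Dict[Tuple[str, ...], str]       # at most q**k entries

    def evaluate(self, data: Dict[str, str]) -> str:
        # Invocation flows top-down; abstracted values flow back up.
        vals = tuple(data[c] if isinstance(c, str) else c.evaluate(data)
                     for c in self.children)
        return self.table[vals]

# k = 2, q = 3: two raw data abstracted into a likelihood for one feature.
fever = Matcher(["temperature", "chills"],
                {("high", "yes"): "high", ("high", "no"): "medium",
                 ("normal", "yes"): "low", ("normal", "no"): "low"})
concept = Matcher([fever],   # a realistic hierarchy would group several features
                  {("high",): "high", ("medium",): "medium", ("low",): "low"})

assert concept.evaluate({"temperature": "high", "chills": "yes"}) == "high"
```

The constant-size tables play the role of the "chunks" discussed later: no matcher ever considers more than k inputs at once.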
We now describe different implementations of concept activation, and derive the computational complexity of classification for these implementations. We assume an oracle for concept evaluation. We express the time complexity of classification as the number of calls to the oracle for concept evaluation, and the space complexity as the number of concepts in the knowledge base.

3.2. Direct Activation

One implementation of concept activation is direct activation. Knowledge is distributed among the m conceptual structures, and the control of problem solving is activation of concepts for evaluation, one by one. Concept evaluation is performed with a call to the oracle. The time and space complexities T_c and S_c, respectively, are given by

T_c = O(m)
S_c = O(m)

3.3. Hierarchical Activation

Let C, U, D, and v be as before. Let C' be a finite set of m' concepts c'_j, j=1,2,...,m'. Let U'_j, j=1,2,...,m' be m' finite sets of p discrete values each, and let U' be the set composed of the sets U'_j, j=1,2,...,m'. A value u'_j ∈ U'_j represents the likelihood that the concept c'_j ∈ C' is present in a specific situation in the domain. Let D' be a finite set of n' data d'_i, i=1,2,...,n'. Let m' > m and n' ≥ n; C', U', and D' are supersets of C, U, and D, respectively. We redefine the task of classification as a five-tuple (C', U', D', v, f'_c), where C', U', and D' are as defined above, v is as defined earlier, and f'_c is a function that takes D' and v as inputs, and outputs u'_j ∈ U'_j, j=1,2,...,m'. Notice that since C', U', and D' are supersets of C, U, and D, respectively, f'_c entails f_c.

We now describe another implementation of concept activation, hierarchical activation. In hierarchical activation the m' concepts in C' are organized in a concept hierarchy such that the m leaf concepts in the hierarchy correspond to the m concepts in C. The value of m' depends on the branching factor, b, of the hierarchy; m' is directly proportional to m. The number of levels, l, is given by l ≈ log_b(m').

Knowledge is organized in the concept hierarchy, and distributed among the m' conceptual structures. The control of problem solving is top-down. Starting with the concept at the root of the hierarchy, each concept when activated is evaluated with a call to the oracle; if the match succeeds then the concept is established as present in the situation, and its subconcepts are activated, else the concept is rejected as present in the situation, and its subconcepts are rejected as well. This may result in the pruning of the tree. The space complexity S'_c is given by

S'_c = O(m') = O(m)

In the worst case, each concept in the hierarchy may be activated. The time complexity in the worst case, T'_c1, is given by

T'_c1 = O(m') = O(m)

In the best case only l concepts may be activated for evaluation. The time complexity for the best case, T'_c2, is given by

T'_c2 = O(log_b(m')) = O(log_b(m))

4. Cognitive Issues in Classificatory Reasoning

In our framework for classificatory reasoning, the data that describe a situation in a given domain are allowed to take on only qualitative values, and the likelihood that a concept in the domain is appropriate for the situation is expressed as a discrete value. However, the use of numerical values and continuous functions might yield more precise results in some domains. Nevertheless, we use only discrete values and functions because intuitively they appear cognitively more plausible.
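Before turning to the cognitive issues, the two activation strategies of Section 3 can be compared in a few lines of code. The toy oracle and taxonomy below are assumptions for illustration; the point is only the control structure and the oracle-call counts.

```python
# Hedged sketch of direct vs. hierarchical activation (Sections 3.2-3.3).

class Concept:
    def __init__(self, name, children=()):
        self.name = name
        self.children = children

def make_oracle(present, counter):
    # toy concept-evaluation oracle; `counter` tallies the time complexity
    def oracle(c):
        counter[0] += 1
        return c.name in present
    return oracle

def direct(leaves, oracle):
    return [c.name for c in leaves if oracle(c)]      # always O(m) calls

def hierarchical(node, oracle, out=None):
    out = [] if out is None else out
    if oracle(node):                  # established: activate subconcepts
        out.append(node.name)
        for ch in node.children:
            hierarchical(ch, oracle, out)
    return out                        # rejected: whole subtree is pruned

# Taxonomy with b = 3 and m = 9 leaves; only one branch is present.
leaves = [Concept(f"c{i}") for i in range(9)]
mids = [Concept(f"M{j}", leaves[3 * j:3 * j + 3]) for j in range(3)]
root = Concept("root", mids)
present = {"root", "M1", "c4"}

calls = [0]
print("direct:", direct(leaves, make_oracle(present, calls)), calls[0])
calls = [0]
print("hierarchical:", hierarchical(root, make_oracle(present, calls)), calls[0])
```

On this example direct activation makes 9 oracle calls while hierarchical activation makes 7, and the gap grows with m when pruning succeeds, in line with the O(log_b(m)) best case.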
For the construction of knowledge-using systems in domains in which data appear in numerical form, the data values may be converted to qualitative form by preprocessing. Further, in our framework, uncertainty in the data values and in the likelihood values for concepts is handled locally rather than through a global uncertainty calculus. In performing the task of classification with use of hierarchical activation, for instance, a high likelihood value for the appropriateness of a concept for a given situation is interpreted as presence of the concept in the situation, and the uncertainty is not propagated to the subconcepts. Again, the use of a global uncertainty calculus might yield more precise results in some domains; nevertheless, we handle uncertainty locally because intuitively that appears cognitively more plausible.

It appears obvious that the computational efficiency of performing a task is a precondition for the cognitive plausibility of the implementation of the strategy used to perform the task. We comment below on the cognitive plausibility of structured matching and hierarchical activation.

4.1. Structured Matching

Concept evaluation in general is computationally most efficiently performed with use of structured matching. However, in structured matching we had assumed that it was possible to form disjoint subsets of a set of features at some level in the feature hierarchy, such that each such subset may be used to abstract the value of a feature at the next higher level. This assumption is valid only in problem domains that Simon has called nearly decomposable [Simon, 1981]. If such a decomposition were not possible then the feature hierarchy would be tangled. A tangled feature hierarchy may be untangled by including the same feature(s) in different feature groups.

For nearly decomposable domains the efficiency of performing concept evaluation using structured matching is due to two interrelated reasons. Firstly, a small number of interrelated features are grouped, evaluated, and abstracted together at each level of abstraction in the feature hierarchy. This grouping together of a small number of interrelated features is analogous to the phenomenon of "chunking" in cognitive psychology. Secondly, an upper bound is imposed on the number of features allowed in a group. The imposition of such an upper bound on the number of features is reminiscent of the notion of "short-term memory" in cognitive psychology.

4.2. Hierarchical Activation

For performing the classification task in general, the use of hierarchical activation in the worst case is as efficient as, and in the best case is more efficient than, the use of direct activation. However, in describing hierarchical activation we had implicitly assumed that building a non-trivial untangled concept hierarchy with m leaf concepts was indeed possible. Again, this assumption may be valid only in nearly decomposable domains. Branches in the concept hierarchy that are tangled at some concept c may be untangled by including a copy of c in each tangled branch.

The computational efficiency of classification with use of hierarchical activation is due to the organization of knowledge in a hierarchy, which allows for pruning of the tree. A concept is activated for evaluation only if its parent concept has been established as being present in the given situation. Thus, a concept in general is evaluated in the context of its ancestor concepts. This use of context is cognitively appealing.
Furthermore, if only an incomplete description of a situation were available, then direct concept activation might not lead to the establishment of any concept in the taxonomy. However, for the same incomplete description of the situation, hierarchical activation might lead to the establishment of concepts at a level higher than the leaf level. This too is cognitively appealing.

5. Conclusions

We have described different implementations of the strategies of concept matching and concept activation in terms of the organization of knowledge and control of information processing that they use. We have shown that structured matching and hierarchical activation are computationally the most efficient implementations for performing the tasks of concept evaluation and classification, respectively. Further, we have suggested that structured matching and hierarchical activation may be cognitively the most plausible implementations as well.

Hierarchical activation and structured matching are computationally the most efficient implementations because they use organizations of knowledge that are most appropriate for the tasks of classification and concept evaluation, respectively. Organization of knowledge specific to its use is a central issue in knowledge-using reasoning in general. It is the organization of knowledge specific to its use from which the computational power to perform a task emerges. For each primitive type of knowledge-using reasoning there exists an organization of knowledge that is appropriate for it. Chandrasekaran has identified classification and concept evaluation as primitive types of knowledge-using reasoning [Chandrasekaran, 1986]. High-level knowledge representation languages for classification using hierarchical activation [Bylander and Mittal, 1986], and for concept evaluation using structured matching [Johnson and Josephson, 1986], have been developed.

References

[Berliner and Ackley, 1982] Hans Berliner and David Ackley. "The QBKG System: Generating Explanations from a Non-Discrete Knowledge Representation". In Proceedings AAAI-82, pages 213-216, 1982.

[Bylander and Johnson, 1987] Tom Bylander and Todd Johnson. "Structured Matching". Technical Report, Laboratory for Artificial Intelligence Research, Department of Computer and Information Science, The Ohio State University, February 1987.

[Bylander and Mittal, 1986] Tom Bylander and Sanjay Mittal. "CSRL: A Language for Classificatory Problem-Solving and Uncertainty Handling". AI Magazine, 7(3):66-77, August 1986.

[Chandrasekaran, 1986] B. Chandrasekaran. "Generic Tasks in Knowledge-Based Reasoning: High-Level Building Blocks for Expert System Design". IEEE Expert, 1(3):23-30, 1986.

[Clancey, 1985] William Clancey. "Heuristic Classification". Artificial Intelligence, 27(3):289-350, 1985.

[Gomez and Chandrasekaran, 1984] Fernando Gomez and B. Chandrasekaran. "Knowledge Organization and Distribution for Medical Diagnosis". In Readings in Medical Artificial Intelligence: The First Decade, Chapter 13, pages 320-339, William Clancey and Edward Shortliffe, editors, Addison-Wesley, Reading, Massachusetts, 1984.

[Johnson and Josephson, 1986] Todd Johnson and John Josephson. "HYPER: The Hypothesis Matcher Tool". Technical Report, Laboratory for Artificial Intelligence Research, Department of Computer and Information Science, The Ohio State University, April 1986.

[Samuel, 1967] Arthur Samuel. "Some Studies in Machine Learning Using the Game of Checkers II. Recent Progress".
IBM Journal of Research and Development, 11(6):601-617, November 1967.

[Simon, 1981] Herbert Simon. The Sciences of the Artificial, Second edition, MIT Press, Cambridge, Massachusetts, 1981.

Acknowledgments

We are deeply grateful to Tom Bylander and Dean Allemang for their many contributions to this paper. This research was supported by research grants from the Air Force Office of Scientific Research (AFOSR-86-719026) and the Defense Advanced Research Projects Agency, RADC (F30602-85-C-0010).
All I Know: An Abridged Report¹

Hector J. Levesque²
Dept. of Computer Science, University of Toronto
Toronto, Canada M5S 1A4

Abstract

Current approaches to formalizing non-monotonic reasoning using logics of belief require new metalogical properties over sets of sentences to be defined. This research attempts to show how some of these patterns of reasoning can be captured using only the classical notions of logic (satisfiability, validity, implication). This is done by extending a logic of belief so that it is possible to say that only a certain proposition (or finite set of them) is believed. This research also extends previous approaches to handle quantifiers and equality, provides a semantic account of certain types of non-monotonicity, and, through a simple proof theory, allows formal derivations to be generated.

I. Introduction

A great deal of attention has been devoted recently to formalisms dealing with various aspects of non-monotonic reasoning [Reiter, to appear, 1988]. Broadly speaking, these can be divided into two camps: those, like the logics of [McDermott and Doyle, 1980] and [Reiter, 1980], which are consistency-based, and those, deriving from [McCarthy, 1980] and [McCarthy, 1984], which are based on minimal models. In the former case, non-monotonic assumptions are made on the basis of certain hypotheses being consistent with a current theory; in the latter case, non-monotonic assumptions are made on the basis of their being true in all minimal (or otherwise preferred) models of a current theory. For better or for worse, the latter approach seems to be winning, in part, no doubt, because it can be given a compelling model-theoretic account, in addition to its more proof-theoretic formulation.

However, one development that may begin shifting the balance towards consistency-based approaches is the application of logics of knowledge and belief [Halpern and Moses, 1985] and [McArthur, to appear, 1987].³ Although these have been used in non-monotonic contexts for some time (see [Levesque, 1981], [Konolige, 1982], and [Halpern and Moses, 1984]), only recently have clear and precise connections been established between these logics and the non-monotonic ones [Moore, 1983] and [Konolige, 1987].

1 This research was made possible in part by a grant from the Natural Sciences and Engineering Research Council of Canada. Thanks also to Gerhard Lakemeyer, Ray Reiter, Jim des Rivieres, and Bart Selman for proofreading.
2 Fellow of The Canadian Institute for Advanced Research.
3 For the purposes of this paper, the distinction between knowledge and belief is irrelevant, and the two terms will be used interchangeably.

What do logics of belief have to do with consistency-based non-monotonicity? The idea, roughly, is this: A "current theory" is no more than a set of beliefs. If these beliefs are closed under logical consequence, then a hypothesis is consistent with a current theory precisely when its negation is not believed. So under this account, non-monotonic assumptions are made based on failing to believe certain other propositions. For example, one might be willing to believe that any bird that is not believed to be flightless can fly. Or perhaps this belief is restricted to certain birds, like those that are currently known.
Either way, without claiming that this is the same thing as believing that "Birds generally fly" or anything like that, it does appear that under the right circumstances, the belief leads to the same assumptions as the consistency-based approaches. Moreover, the expectation here is that the model-theoretic accounts of belief deriving from the (reasonably well established) logics of belief can then be used to semantically rationalize these consistency-based systems.

II. Only knowing

This is not to say that logics of belief can be used as is to account for non-monotonic reasoning. To see why not, consider how one might explain, using belief, why the ever-popular Tweety flies. Assume we take as premises that

1. Tweety is a bird.
2. If a bird can be consistently believed to fly, it flies.

There surely is something missing before we are entitled to write down our favourite non-monotonic conclusion:

3. Tweety flies.

At the very least, we would have to know that our second premise applied to Tweety:

1.5 It is consistent with my beliefs that Tweety flies.

But what justifies this assertion? Clearly not (1) by itself. Rather, it is the fact that, except for (2), (1) is by itself. That is, the understood (non-monotonic) assumption is that there are no other relevant beliefs about Tweety:⁴

1.2 This is all I know (about Tweety).

Now (1.5) does seem to follow from (1) and (1.2), so that we are indeed justified in concluding that Tweety flies from (1), (1.2), and (2).

The problem here is that although logics of belief allow us to express and reason with (1), (1.5), (2) and (3), assumption (1.2) cannot be expressed. The approach to this issue taken by Moore and Konolige is to not even try to express it, but instead to characterize (outside the logic itself) sets of beliefs where (1.2) intuitively might be said to hold, and then to examine the properties of such sets, called stable expansions.⁵ As expected, (1) and (2) have a single stable expansion, and it does indeed contain (3).

However, what is lost by this use of logics of belief is precisely what might have been expected to be gained, namely a precise model-theoretic account of consistency-based non-monotonicity. The concept of a stable expansion (and clearly the key one in the non-monotonic aspect of the inference) is not defined in terms of the semantics of belief, but is a new metalogical property of certain sets of sentences. Because of this, the derivation from only knowing (1) and (2) to knowing (3) must be carried out completely outside the logic, as in McDermott and Doyle's logic or in Reiter's (in their case with appropriate metalogical arguments about fixed points or extensions).

In this paper, we present research that attempts to remedy this situation by augmenting a logic of belief so that propositions similar to (1.2) can be expressed directly within the language. There will be two modal operators, B and O, where Bα is read (as usual) as "α is believed" and Oα is read as "α is all that is believed," or perhaps, "only α is believed." It turns out that this latter concept can be given fairly intuitive truth conditions that are remarkably similar to those for belief. We will then establish correspondences to Moore's stable expansions, generalizing them in the process to the quantificational case. The existence of (sometimes multiple) stable expansions will emerge within the logic as valid sentences.
Finally, we will exhibit a reasonably standard (though not recursive) proof theory for the logic (that is, with axioms and rules of inference) and show, perhaps for the very first time, a formal derivation of the belief that Tweety flies.

It should be noted that this approach to logic uses it as a specification tool to describe a reasoner rather than as a calculus to be used by one. Thus, there is no notion of an agent "having" a theory in this language, except as stated explicitly using a B operator. While the patterns of reasoning to be described may be non-monotonic, the logic itself is perfectly monotonic [Israel, 1980]. Because of space limitations, the formal presentation of the logic below will be somewhat terse, and most proofs will be deferred to [Levesque, in preparation].

4 Although obviously important, we do not attempt here to deal with relevance, or which beliefs are about what.
5 To a first approximation, these can be thought of as the fixed-points of McDermott and Doyle's logic, or the extensions of Reiter's.

III. The language and semantics

The language we consider is called L, and its propositional part is built up in the usual way from propositional letters and the logical connectives ¬, ∧ (the others will also be used freely as syntactic abbreviations), and two special unary connectives B and O. For the quantificational part, we include in addition an infinite stock of predicate symbols of every arity, an infinite collection of (individual) variables, an existential quantifier, and a special two-place equality symbol. For simplicity, we omit function and constant symbols. However, we include a countably infinite set of standard names (called parameters in [Levesque, 1984a]) that are considered (like the equality symbol) to be logical symbols. Sentences are formed in the obvious way; in particular, there is no restriction on the relative scope of quantifiers and modal operators. The objective sentences are those without any B or O operators; the subjective sentences are those where all non-logical symbols occur within the scope of a B or O. Sentences without O operators are called basic. We will use α and β to range over sentences, σ to range over the subjective sentences only, and φ and ψ to range over the objective sentences only. Finally, α_n^x is used to name the formula consisting of α with all occurrences of the free variable x replaced by the standard name n.

Before presenting the semantics of L, a few comments are in order. First, we will be interested in characterizing a system with full logical capabilities and perfect introspection. In other words, beliefs will be closed under logical consequence, anything believed will be known to be believed, and anything not believed will be known not to be believed. This means our notion of belief will satisfy at least the postulates of the modal system weak S5 (see [Halpern and Moses, 1985] or [McArthur, to appear, 1987] for why). However, it will be convenient to give a non-standard semantic account of L that avoids explicit use of possible worlds. Instead we will take a coarser-grained approach and deal with the truth and falsity of sentences directly. A possible world, then, is modelled by any function w from sentences to {0,1} satisfying certain constraints having to do with the interpretation of the logical symbols. We will call such functions valuations.

As to the constraints themselves (presented below), there is nothing new about the interpretation of conjunction and negation.
The interpretation of equality sentences is based on the convention that standard names are taken to designate distinctly and exhaustively (something one would certainly not want for ordinary constant symbols). This exhaustiveness property also means that quantification can be understood substitutionally. This substitutional interpretation imposes no real restrictions on what sets of sentences will be satisfiable. For example, it will certainly be possible to believe ∃xα without believing any of its substitution instances and, as will become clear below, a distinction will remain between B∃xα and ∃xBα. Thus, the first four constraints on a function w from the sentences of L to {0,1} are that for every α and β,

1. w(α ∧ β) = min[w(α), w(β)],
2. w(¬α) = 1 - w(α),
3. w(n_i = n_j) = 1 iff n_i and n_j are the same standard name,
4. w(∃xα) = 1 iff for some n, w(α_n^x) = 1.

We will call any function satisfying these constraints a first-order or f.o. valuation. Note that these valuations treat sentences of the form Bα or Oα as atomic sentences.

Turning now to the belief operator, B, the by now standard way to give its interpretation is in terms of an accessibility relation over worlds: Bα is considered true at some world w iff α is true at every w' that is accessible from w. But what are these accessible worlds? In our case, there are two considerations: (1), an accessible world must make all the beliefs in the original world come out true; and (2), the accessibility relation must be an equivalence relation. So we begin by defining, for any f.o. valuation w, R(w) to be the set of all f.o. valuations w' such that for every basic α, if w(Bα) = 1, then w'(α) = 1. To get an equivalence relation, we must also ensure that the same subjective sentences are true in every accessible world. We say that w ≈ w' iff for every (subjective) σ, w(σ) = w'(σ). Intuitively then, the accessible worlds from w are those elements w' of R(w) such that w ≈ w'. Using these definitions, we can now state a constraint on the interpretation of the B operator: for every α,⁶

5. w(Bα) = 1 iff for every w' ≈ w, w' ∈ R(w) implies w'(α) = 1.

6 This constraint applies to non-basic sentences, although we have yet to find a need for talking about believing (or only believing) sentences with O operators.

We will call any function w satisfying the first five constraints an autoepistemic or a.e. valuation. Not every f.o. valuation is an a.e. valuation (e.g., one that assigns different values to Bα and B¬¬α). However, every valuation that is accessible from an a.e. valuation is itself one.

Finally, with regards to the O operator, the idea is this: Beliefs are those sentences that are true in all accessible worlds. So to come to believe a new objective sentence means to reduce the set of accessible worlds, keeping only those where the new belief is true. Thus, the more known (in objective terms anyway), the smaller the set of accessible worlds, and vice-versa. Now to say that α is all that is known is to say that as little as possible is known compatible with believing α. Thus, the set of accessible worlds is as large as possible consistent with believing α, since the larger the set, the less world knowledge represented. Specifically, any valuation that satisfies the same subjective sentences and also satisfies α should be accessible. This leads to our final constraint: for every α,
6. w(Oα) = 1 iff for every w' ≈ w, w' ∈ R(w) iff w'(α) = 1.⁷

7 Note that this condition differs from the one for belief in exactly one place: an "if" becomes an "iff".

Any function w satisfying all six constraints is called a logical valuation. Note once again that not every a.e. valuation is a logical valuation, and that the accessibility relation takes logical valuations to only logical valuations.

For each type of valuation, we say that a set of sentences is satisfiable iff some valuation of that type assigns 1 to all its members. A set of sentences implies a sentence iff the set together with the negation of the sentence is not satisfiable. Finally, a sentence is valid iff it is implied by the empty set. We will usually leave out the "logical" qualifier, except to distinguish a logical valuation (or validity etc.) from the other types.

It is easy to see that for objective sentences without equality or standard names, f.o. satisfiability (and thus, f.o. implication and f.o. validity) coincide with their classical definitions. Not so obvious (by a long shot), is this:

Theorem 1 A set of basic sentences is a.e. satisfiable iff there is a weak S5 Kripke structure and a world within it where all the sentences are true.

Thus a.e. satisfiability is the same as weak S5 satisfiability. This theorem justifies our lack of explicit possible worlds and ensures that, for example, standard axiomatizations of weak S5 characterize precisely a.e. validity for basic sentences (and we will present one such below).

IV. Stable sets and expansions

But our primary interest is the notion of only knowing. To justify our interpretation of O, we will relate it to the concept of stable expansions. Before doing so, it is useful to consider the properties of the sets of sentences that can be simultaneously believed. We will call a set of basic sentences a belief set if there is a logical valuation for which these sentences are precisely the ones believed. In other words, Γ is a belief set iff for some logical valuation w, Γ = {β | β is basic and w(Bβ) = 1}. One important property we can show is that this definition of belief set is the correct quantificational generalization of what Moore calls [Moore, 1983] (following Stalnaker) a stable theory:

Theorem 2 Restricting our attention to basic sentences,⁸ a set of sentences Γ is a belief set iff Γ is stable, that is, satisfies the following conditions:
1. If Γ f.o. implies α, then α ∈ Γ.⁹
2. If α ∈ Γ, then Bα ∈ Γ.
3. If α ∉ Γ, then ¬Bα ∈ Γ.

8 This theorem can be strengthened to handle arbitrary sentences (given a generalized notion of belief set) by extending the first condition below to closure under full logical implication.
9 Moore required Γ to be closed under tautological consequence, since he only dealt with a propositional language.

For the propositional version of the language, this theorem was proved as Proposition 3 of [Halpern and Moses, 1984] (and apparently independently by R. Moore, M. Fitting, and J. van Benthem). Unfortunately, a new proof was needed because their proof fails for a quantificational language, as it depends on the following:

Proposition 1 [Halpern and Moses, 1984] Stable sets (in the non-quantificational sublanguage) are uniquely determined by their objective subsets.

With quantifiers, however, the situation is much more complicated:

Theorem 3 Stable sets (in the quantified language) are not uniquely determined by their objective subsets (and thus neither are belief sets).
This theorem is proved by showing that there is a difference between believing {φ(n1), φ(n3), φ(n5), ...} on the one hand, and believing {φ(n1), φ(n3), φ(n5), ..., ∃x(φ(x) ∧ ¬Bφ(x))} on the other, even though both sets involve exactly the same objective sentences. In the latter case, there is the additional information that there is a φ apart from the known ones, information that simply cannot be expressed in objective terms.¹⁰ Thus, it is possible to agree on all the objective sentences without yet agreeing on all sentences. The main result here is the following:

Theorem 4 Restricting our attention to basic sentences only,¹¹ for any logical valuation w, w(Oα) = 1 iff the belief set of w is a stable expansion of α, that is, the belief set Γ satisfies the fixed-point equation: Γ is the set of f.o. implications of {α} ∪ {Bβ | β ∈ Γ} ∪ {¬Bβ | β ∉ Γ}.

So only knowing a sentence means that what is known is a stable expansion of that sentence (or, more intuitively, what is known is derivable from that sentence using logic and introspection alone). This theorem provides for the first time a semantic account (closely related to that of possible worlds) for the notion of a stable expansion, which Moore used to rationally reconstruct the non-monotonic logic of [McDermott and Doyle, 1980]. In a subsequent paper [Moore, 1984], Moore provided a possible-world semantics for his autoepistemic logic, but not for the non-monotonic part concerned with stable expansions. In addition, we have generalized the notion of a stable expansion to deal with a quantificational language with equality.

10 It could be expressed if we allowed infinite disjunctions ranging over any set of standard names.
11 Again this restriction can be removed using logical implication in the definition.

V. Proof theory

The fact that the semantic characterization of Oα uses an "iff" where Bα uses an "if" suggests that it might be worthwhile to look at another operator that uses the "only if" condition alone. The proof theory we are about to present is most conveniently expressed using a new modal operator N for this only-if condition:

w(Nα) = 1 iff for every w' ≈ w, w'(α) = 0 implies w' ∈ R(w).

Oα can now be defined as the conjunction of Bα and N¬α. Taking Bα as saying "at least α is believed to be true," Nα can be read as "at most α is believed to be false," from which Oα is read as "exactly α is believed."

The remarkable fact about the N operator is that it behaves exactly like a belief operator, but with respect to the complement R̄ of the R relation:

w(Nα) = 1 iff for every w' ≈ w, w' ∈ R̄(w) implies w'(α) = 1.

This allows us to produce a proof theory for L that is very similar to what would be done for two separate believers. The difference is that (1) the two "agents" are mutually introspective (i.e. know about each other's beliefs and non-beliefs), and (2) every world is an element of R or R̄. To handle (1), we include not only the usual introspection axioms like (Nα ⊃ NNα), but cross-axioms like (Nα ⊃ BNα). To handle (2), we simply stipulate that every falsifiable objective sentence that is true at every member of R̄ must be false at some member of R. Overall then, the proof theory is formed by adjoining to any standard objective basis the following axioms:

1. the remaining axioms for weak S5, for both B and N:
(a) Lφ, where φ is any f.o. valid objective sentence,
(b) L(α ⊃ β) ⊃ (Lα ⊃ Lβ),
(c) ∀xLα ⊃ L∀xα,
(d) (σ ⊃ Lσ), where σ is subjective,
where L is either B or N;
2. Nφ ⊃ ¬Bφ, where φ is any objective sentence that is falsifiable;¹²
3. Oα ≡ (Bα ∧ N¬α), for any α.

The notion of a theorem is defined in the usual way (note that no new rules of inference are introduced). The first result about this proof system is:

Theorem 5 (Soundness) Every theorem is valid.

The proof is by induction on the length of the derivation: the axioms are all clearly valid and the objective rules of inference obviously preserve validity. However, the more substantial result about this simple axiomatization is that for the propositional case anyway, it is also complete:

Theorem 6 (Propositional completeness) If α is in the propositional subset, then it is a theorem iff it is valid.¹³

What this shows us is that with a minimum of extra machinery over and above the (modal) axioms necessary for logics of belief, we can account for the semantics of L.

12 Note that this set is not r.e. for the full quantificational objective language. Unfortunately, this is the price that must be paid for consistency-based reasoning. In its defense, however, the axiom only requires non-valid objective sentences, a relatively well-understood and manageable set.
13 I believe the axiomatization is also complete for the full language, but I have yet to find a proof. My propositional proof fails for the general case in a subtle and interesting way. See [Levesque, in preparation] for details.

VI. Some applications

What is this logic good for? One application is the formal specification of a Knowledge Representation service: given a certain KB, what are the sentences that are believed? To a first approximation, it is the logical implications of KB. However, if β is some sentence that is not believed, then an introspective system also believes ¬Bβ (i.e., it realizes that it does not believe β). The problem is that KB does not imply ¬Bβ, nor does BKB imply B¬Bβ. So logical implication is not enough. In [Levesque, 1984a], this was handled by moving outside the logic and defining a special ASK operation. But given the O operator, we can stay within the logic: OKB does imply B¬Bβ. In general, the beliefs of an introspective system will be those sentences α such that (OKB ⊃ Bα) is a valid sentence of L.

However, the main application of this logic is to give semantic and/or proof-theoretic arguments involving non-monotonic reasoning. Consider the above example involving Tweety. First, we represent the default as ∀x[Bird(x) ∧ ¬B¬Fly(x) ⊃ Fly(x)].¹⁴ Now believing this and that Tweety is a bird certainly does not imply believing that Tweety flies. But we can show that if this is all that is believed, then the belief that Tweety flies does follow:

Theorem 7 Let β = ∀x(Bird(x) ∧ ¬B¬Fly(x) ⊃ Fly(x)). Then O[Bird(tweety) ∧ β] ⊃ BFly(tweety) is a theorem.

Proof: We present a formal derivation using natural deduction. The numbers refer to the above proof theory.

a. O[Bird(tweety) ∧ β]  Assumption.
b. B[Bird(tweety) ∧ β]  From a using (3).
c. BFly(tweety) ∨ B¬Fly(tweety)  From b using (1).
d. N¬[Bird(tweety) ∧ β]  From a using (3).
e. N[Bird(tweety) ⊃ ∃x¬Fly(x)]  From d using (1).
f. ¬B[Bird(tweety) ⊃ ∃x¬Fly(x)]  From e using (2).
g. ¬B¬Fly(tweety)  From f using (1).
h. BFly(tweety)  From c and g, by classical logic.

Discharging the assumption gives the required result. □

A propositional version of this argument can be made in terms of Moore's stable expansions (that is, that there is a single expansion, and it contains the desired conclusion).
14 Other versions are possible, such as one where Bird is within the scope of a B. Also, in what follows, we will be using tweety and chilly as standard names.

The significance of this derivation is that the argument only depends on the validity (or in this case, theoremhood) of a certain sentence of L, and so can be carried out completely within the language itself in conventional logical terms. The only unusual step in the derivation is from e to f, where we infer, on the basis of something being all that is believed, that a certain other sentence is not believed. This step depends on the fact that Bird(tweety) ⊃ ∃x¬Fly(x) is not f.o. valid. Indeed, if not flying were implied by being a bird, this proof would fail (as it should), and B¬Fly(tweety) would be the (correct) conclusion.

This analysis also suggests what happens if we know in addition that Chilly is a bird that does not fly. The problem is that the step from e to f no longer works, since the enlarged knowledge base now implies the existence of a flightless bird. What happens, however, is that since B¬Fly(chilly) is true, so is NB¬Fly(chilly) by (1). The new version of step e now uses this to conclude that N[KB ⊃ ∃x((x ≠ chilly) ∧ ¬Fly(x))], where KB has the facts on Tweety and Chilly. Once again the argument to N is not f.o. valid, and so the derivation goes through as before, ending with the belief that Tweety flies. Note that this conclusion depends (quite appropriately) on the fact that Chilly and Tweety are believed to be distinct, a logical property of our standard names.

Although there is no really compelling reason to do so, we can define a non-monotonic logic easily enough using O. For a finite set of sentences Γ, define ⊢n by

Γ ⊢n α iff (Oγ ⊃ Bα) is valid,

where γ is the conjunction of the elements of Γ. Then, in the previous example, we have that Bird(tweety), β ⊢n Fly(tweety), but Bird(tweety), β, ¬Fly(tweety) ⊬n Fly(tweety), so this logic would be truly non-monotonic.

VII. Determinate sentences

The correspondence with stable expansions accounts for many of the properties of this logic. For example, the usual situation with multiple expansions also arises here. In our case, this is reflected in the language itself, with interesting consequences. Consider a typical sentence with two expansions, (¬Bφ ⊃ ψ) ∧ (¬Bψ ⊃ φ). What happens here is that the sentence

O[(¬Bφ ⊃ ψ) ∧ (¬Bψ ⊃ φ)] ≡ (Oφ ∨ Oψ),

which names the two expansions directly, ends up being valid.¹⁵ Thus, it is possible to only know the sentence in two distinct ways. The logic also specifies what is common to both, in that O[(¬Bφ ⊃ ψ) ∧ (¬Bψ ⊃ φ)] logically implies, for example, B(φ ∨ ψ).

15 Similarly, the validity of ¬O[¬Bφ ⊃ φ] tells us that (¬Bφ ⊃ φ) has no stable expansions.
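For the propositional fragment, these claims about expansions can be checked mechanically by guess-and-verify: guess which atoms are believed, close the premise under that guess, and keep the guesses that reproduce themselves. The sketch below is an illustrative simplification, not the logic L: premises are encoded as a function from a guessed belief set to the objective atoms it sanctions, and only atoms-only defaults are handled.

```python
# Hedged sketch: enumerate stable expansions of a propositional default
# theory whose B-operators apply only to atoms (a toy encoding, not L).
from itertools import chain, combinations

def expansions(atoms, consequences):
    """Yield every belief-set guess that is a fixed point: the atoms
    derivable under the guess are exactly the guessed atoms."""
    candidates = chain.from_iterable(
        combinations(atoms, r) for r in range(len(atoms) + 1))
    for guess in map(frozenset, candidates):
        if frozenset(consequences(guess)) & frozenset(atoms) == guess:
            yield set(guess)

# (~B phi > psi) & (~B psi > phi): the two-expansion example above.
def two_defaults(believed):
    out = set()
    if "phi" not in believed:
        out.add("psi")
    if "psi" not in believed:
        out.add("phi")
    return out

print(list(expansions(("phi", "psi"), two_defaults)))  # [{'phi'}, {'psi'}]

# (~B phi > phi): no fixed point, hence no stable expansion (footnote 15).
print(list(expansions(("phi",),
                      lambda b: set() if "phi" in b else {"phi"})))  # []
```

The two fixed points found for the first premise mirror the valid equivalence O[(¬Bφ ⊃ ψ) ∧ (¬Bψ ⊃ φ)] ≡ (Oφ ∨ Oψ) above, and the empty result for the second mirrors the validity of ¬O[¬Bφ ⊃ φ].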
One important property of this logic is that it is al- ways possible to represent knowledge in objective terms. Although believing does not reduce to believing objective sentences (Theorem 3), only believing does: Theorem 9 For every determinate CY, there tive sentence 4 such that (Oa! E 04) is valid. is an o bjec- Thus, to the extent that an epistemic state can be repre- sented at all, it can be represented in objective terms. In other words, whatever defaults might be used (or what- ever other uses of non-objective sentences), if there is a unique end result, it can be described without reference to the modal operators. This theorem offers perhaps some reassurance to those who have been suspicious about these operators all along. This research attempts to show that non-trivial non- monotonic behaviour can be formalized using only the clas- sical notions of logic. This is done by extending a logic of belief to include a second modality that can be given a reasonably natural semantic and proof-theoretic account. As for future research, there are the following topics: formalizing what it means to say that a is all that is known about something; developing the concepts for a logically limited notion of belief [Levesque, 1984b]; and the missing quantificational completeness proof. Finally, Konolige’s account of default logic [Konolige, 19871 depends on a cer- tain restricted kind of stable expansion, and it remains to be seen how this will fit into the current framework. [Halpern and Moses, 19841 J. Halpern and Y. Moses. To- wards a theory of knowledge and ignorance: prelimi- nary report. In The Non-Monotonic Reasoning Work- shop, pages 125-143, New Paltz, NY, 1984. [Halpern and Moses, 19851 J. Halpern and Y. Moses. A guide to the modal logics of knowledge and belief: a preliminary draft. In IJCAI-85, pages 480490, Los Angeles, CA, August 1985. [Israel, 19801 D. I srael. What’s wrong with non-monotonic logic? In AAAI-80, pages 99-101, Stanford, CA, 1980. [Konolige, 19821 K. K onolige. Circumscriptive ignorance. In AAAI-82, pages 202-204, Pittsburgh, PA, August 1982. [Konolige, 19871 K. Konolige. On the Relation Between Default Theories and Autoepistemic Logic. Technical Report, AI Center, SRI International, Palo Alto, CA, 1987. [Levesque, 19811 H. Levesque. The interaction with in- complete knowledge bases: a formal treatment. In IJCAI-81, pages 240-245, Vancouver, B.C., August 1981. [Levesque, 1984a] H. Levesque. Foundations of a func- tional approach to knowledge representation. Arti- ficial Intelligence, 23(2):155-212, 1984. [Levesque, 1984b] H. Levesque. A logic of implicit and explicit belief. In AAAI-84, pages 198-202, Austin, TX, 1984. [Levesque, in preparation] H. Levesque. All I Know: A Study in Autoepistemic Logic. Technical Report, Dept. of Computer Science, University of Toronto, Toronto, Canada, in preparation. [McArthur, to appear, 19871 G. McArthur. Reasoning about knowledge and belief: a review. Computational Intelligence, to appear, 1987. [McCarthy, 19801 J. McCarthy. Circumscription - a form of non-monotonic reasoning. ArtificiaZ InteZZigence, 13( 1,2):27-39, 1980. [McCarthy, 19841 J. McCarthy. Applications of circum- scription to formalizing commonsense knowledge. In The Non-Monotonic Reasoning Workshop, pages 295- 324, New Paltz, NY, 1984. [McDermott and Doyle, 19801 D. McDermott and J. Doyle. Non-monotonic logic I. Artificial Intelligence, 13( 1,2):41-72, 1980. [Moore, 19831 R. Moore. Semantical considerations on nonmonotonic logic. 
In IJCAI-83, pages 272-279, Karlsruhe, West Germany, 1983.

[Moore, 1984] R. Moore. Possible-world semantics for autoepistemic logic. In The Non-Monotonic Reasoning Workshop, pages 344-354, New Paltz, NY, 1984.

[Reiter, 1980] R. Reiter. A logic for default reasoning. Artificial Intelligence, 13(1,2):81-132, 1980.

[Reiter, to appear, 1988] R. Reiter. Nonmonotonic reasoning. Annual Reviews, to appear, 1988.
Algorithm Synthesis through Problem Reformulation

Michael R. Lowry
Stanford Artificial Intelligence Laboratory, Box 3350, Stanford CA 94305
and Kestrel Institute, 1801 Page Mill Road, Palo Alto CA 94304

Abstract

AI has been successful in producing expert systems for diagnosis, qualitative simulation, configuration and tutoring, e.g. classification problem solving. It has been less successful in producing expert systems that design artifacts, including computer programs. Deductive synthesis of a design from first principles is combinatorially explosive, yet libraries of design schemas do not have sufficient flexibility for application to novel problems.

This paper proposes that the major factor in applying design knowledge is reformulating a problem in terms of the parameters of generic designs. This paper shows how to represent knowledge of generic designs as parameterized theories. This facilitates problem reformulation, making it a well defined search for appropriate parameter instantiations.

The representation of design knowledge with parameterized theories is illustrated with generic local search algorithms. The utility of parameterized theories is shown by deriving the simplex algorithm for linear optimization from specification.

I. Introduction

This paper¹ presents a theory of design and problem solving based upon problem reformulation. The key idea is to reformulate a specific problem into an instantiation of the parameters of a generic problem solving method. Its companion IJCAI-87 paper [Lowry, 1987a] describes problem reformulation through abstraction by incorporating important problem constraints. Together they describe the methods that are being implemented in the STRATA automatic programming system.

A parameterized theory is a set of symbols which form a language and a set of axioms which constrain the symbols of the language. Some or all of these symbols are parameters which are instantiated by mapping them to terms in another language. A mapping is valid if the axioms are valid when the terms are substituted for the parameters. Parameterized theories were originally developed as part of a rigorous foundation for abstract data types. They have subsequently been extended to abstract modules and specification languages [Goguen and Burstall, 1985] [Goguen and Meseguer, 1982].

The advantage of using a formal framework is the unification and generalization of previous work, the identification of the key search problems, and a declarative representation. The advantage of using parameterized theories over schemas or skeletal plans is that they can express design knowledge without commitment to any implementation, and can be readily combined, composed, extended, and specialized without destructive interference. A calculus for combining theories can be found in [Goguen and Burstall, 1985]. [Lowry, 1987b] gives an example of combining the theory of local search and the theory of GPS to yield selection sort.

The pioneering work of Amarel [Amarel, 1968] 20 years ago showed the potential power of reformulation. Starting about 1980, a number of people began investigating methods for searching the space of logically equivalent problem reformulations.

1 This work was done at Stanford University under DARPA contract N00039-84-C-0211, and at the Kestrel Institute under ONR contract N00014-84-C-0473.
Most methods involved finding problem reformulations targeted to a particular problem solving schema such as Divide and Conquer [Smith, 1985], Heuristic Search [Mostow, 1983], Depth First Backward Chaining [Subramanian, 1986], and [Riddle, 1986]. The work presented in this paper generalizes the work cited above by representing problem solving schemas as parameterized theories. This paper also presents more general domain independent methods of searching for good parameter instantiations, which is the critical bottleneck for this kind of problem reformulation.

The next section of this paper shows how to represent design knowledge as a refinement hierarchy of parameterized theories. The third section gives a brief overview of the simplex algorithm, and the fourth section shows how to design artifacts by choosing, refining, and instantiating parameterized theories. The derivation of the simplex algorithm is used as an example.

II. Parameterized Theories

The diagram below illustrates a hierarchical representation space for algorithm design knowledge based upon parameterized theories. The case shown is generic design knowledge for optimization problems, which will later be used in deriving the simplex algorithm. Each additional level is a refinement of the design knowledge of the previous level. Each level is represented as a parameterized theory which is applied to a particular problem by instantiating the parameters such that the instantiated axioms are provably correct. Each additional level is a specialization (more parameterized axioms) and possibly an extension (more parameters) of the previous level. Thus this hierarchical representation naturally supports top-down refinement of an evolving design without being overly committed to an implementation language.

[Figure not reproducible in this text: a refinement hierarchy for optimization, whose recoverable labels include the optimization domain, alternative algorithm structures such as steepest ascent and simple hill climbing, and performance notions such as convergence and order of convergence.]

The parameterized theories which will be used in deriving the simplex algorithm are given below. They correspond to the left hand side of the diagram above, i.e. a generic optimization problem, the applicability conditions of local optimums being global optimums, a local search algorithm, and finally a performance guaranteed to be no worse than the size of the domain if no looping occurs. Each theory begins with parameters for sorts, relations, and functions, which are followed by a set of axioms. Each successive layer is represented by the additional parameters and axioms which are added to its parent theory.

Global Input/Output Behavior: Optimization Problems
  Domain D
  States S
  Begin, End : -> S
  Value : S -> D
  CostRelation : D × D
Axioms:
  Value(Begin) ∈ D
  ∀x ∈ S  x = End iff ∀y ∈ D CostRelation(Value(x), y)
  ∀x, y ∈ D  CostRelation(x, y) ∨ CostRelation(y, x)
  ∀x, y, z ∈ D  CostRelation(x, y) ∧ CostRelation(y, z) -> CostRelation(x, z)

The global input/output behavior consists of the sort, relation, and function parameters which are used to specify the generic problem. In this example the generic problem is optimization over a domain D. The axioms specify the constraints between the parameters, i.e. the CostRelation is a total order up to equivalence, that is, all the domain elements are comparable. The Value function is a map from a state to an element of the domain. The Value function is usually implemented as a program variable, but other implementations are possible. An advantage of parameterized theories is the flexibility of having no a priori commitment to a particular implementation. For an optimization problem, the Value in the begin state is some element of the domain, and the Value in the end state is an optimal element of the domain. Optimality is determined by CostRelation.

Applicability Conditions: Local Optimums ≡ Global Optimums
  Neighbor : D × D
Axioms:
  ∀x, y ∈ D  TransitiveClosure(Neighbor)(x, y)
  ∀x ∈ D  {∀y Neighbor(x, y) -> CostRelation(x, y)} iff {∀y ∈ D CostRelation(x, y)}

An applicability condition specifies the additional problem structure which is exploited by a general problem solving method. There can be many problem solving methods which exploit the same problem structure; for example, both steepest ascent and simple hill climbing exploit the equivalence of local optimums and global optimums. Instantiating the applicability conditions before committing to a particular control and data flow structure is a stepwise refinement reformulation strategy which is readily supported with parameterized theory representations.
The applicability conditions in this example specify locality in terms of a Neighbor parameter. There are two constraints on this neighborhood parameter. First, when the neighbor relation is viewed as a graph, all domain elements are connected. This ensures that an optimal value is reachable from any initial value. The second constraint is that if a domain element is optimal over its neighborhood, then it is also globally optimal.

Algorithm Structure: Local Search
  Next : S -> S
Axioms:
  ∀x ∈ S  Neighbor(Value(x), Value(Next(x)))
  ∀x ∈ S  CostRelation(Value(Next(x)), Value(x))

The algorithm structure for local search introduces the Next parameter, which maps states to states. Next specifies large-grained state transitions, usually corresponding to the outer loop of a program. The axioms given here specify that the Value in the Next state is a better neighbor of the current Value.

Performance Structure: No Looping
Axioms:
  Size(Domain) ≥ End - Begin
  ∀x, y ∈ S  x ≠ y ⊃ Value(x) ≠ Value(y)

The final layer of the representation of local search design is that of performance. The theory given here is a weak upper bound on the number of state transitions. It states that the number of high-level state transitions from the beginning state to the end state is bounded by the cardinality of the optimization domain if no looping occurs.
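Read operationally, the three theories above determine a program skeleton once Neighbor, CostRelation, and an initial Value are supplied. The sketch below is one such reading under assumed names; it is not STRATA's REFINE output, only an illustration of the theories' content.

```python
# Hedged sketch: the local search algorithm structure as code.  The END
# axiom of the global theory supplies the termination test, and the two
# local search axioms constrain each move made by Next.

def local_search(start, neighbors, better):
    """Iterate Next until no neighbor improves; by the applicability
    condition, that local optimum is also a global optimum."""
    value, transitions = start, 0
    while True:
        improving = [y for y in neighbors(value) if better(y, value)]
        if not improving:                 # x = End <-> Value(x) optimal
            return value, transitions
        value = improving[0]              # Neighbor + CostRelation axioms
        transitions += 1                  # <= |D| when values never repeat

# Toy instantiation: minimize |x - 7| on the integers 0..20.
best, steps = local_search(
    0,
    lambda x: [y for y in (x - 1, x + 1) if 0 <= y <= 20],
    lambda a, b: abs(a - 7) < abs(b - 7))
print(best, steps)   # -> 7 7
```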
The insight of the simplex algorithm is that the output will include a vertex of this polyhedron. The skeleton of the simplex algorithm is local search between adjacent vertices until a local optimum is reached. Because of convexity, a local optimum is guaranteed to be a global optimum. The standard form of a linear optimization problem is to minimize c - z such that Ax = b and x; 2 0. The input is a row vector c, a column vector b, and an m x n matrix A. The output is a row vector x. In the standard form, a vertex is represented by m linearly independent columns of the matrix A, where m is the number of rows and n is the number of columns, n being strictly greater than m. Thus there are m choose n possible representations of vertices(a vertex might have multiple representations). Adjacent vertices share m - 1 columns. The co-ordinates of a vertex can be explicitly determined by solving the m x m submatrix of column vectors for b using gaussian elimination. A vertex has m non-zero co-ordinates. The significant design choices in deriving the simplex algorithm are first the parameterized theory refinements which lead to the choice of a local search algorithm and more importantly the parameter instantiations: Instantiation of the domain of optimization to be just the vertices, which makes the search space finite. Instantiation of the neighbor relation to be vertices which share m- 1 columns, thus minimizing the search at each step. Specifying a total order on subsets of m columns to avoid looping. The CostRelation only gives a partial order. Instantiating the first phase of the algorithm, which yields a valid starting point for the optimization. Algorithm This section discusses the use of parameterized theories in designing an algorithm. The full derivation can be found in [Lowry, 1987b], this overview focuses on the methods used in instantiating the parameters (step 3 of the basic method). Heuristics for instantiating the paTameteTs are themselves represented as parameterized theories and map- pings between parameterized theories. The basic design method is: 1. Choose a parameterized theory, or refinement. 2. Propagate constraints. 3. Generate problem specific instantiations of free pa- rameters which satisfy the propagated constraints. 4. Iterate until the Next parameter is fully constrained, i.e. the algorithm is complete. Constraints are accumulated on the sequence of state changes, represented by the Next parameter. These con- straints are then transformed to a set of state transforma- tion rules, which are then compiled by the REFINETM compiler into lisp code. The input to STRATA is the prob- lem definition, domain knowledge such as theorems of lin- ear algebra, and a library of design knowledge expressed as parameterized theorems. The output of STRATA is the set of constraints on the Next parameter. Input: Broblern Definition for Linear Optimization VaZuel(Begin) = (A, b, c) Value2(END) = xoUt Axout = b,xfut 2 0 Vx E {x 1 Ax = b A xi 2 0) c - xoUt 5 c - x Partial Output: Constraints on the Next parameter (These constraints are derived by propagating the neighbor instantiation to the local search refinement, as explained later.) 
IV. Designing an Algorithm

This section discusses the use of parameterized theories in designing an algorithm. The full derivation can be found in [Lowry, 1987b]; this overview focuses on the methods used in instantiating the parameters (step 3 of the basic method). Heuristics for instantiating the parameters are themselves represented as parameterized theories and mappings between parameterized theories. The basic design method is:

1. Choose a parameterized theory, or refinement.
2. Propagate constraints.
3. Generate problem specific instantiations of free parameters which satisfy the propagated constraints.
4. Iterate until the Next parameter is fully constrained, i.e. the algorithm is complete.

Constraints are accumulated on the sequence of state changes, represented by the Next parameter. These constraints are then transformed to a set of state transformation rules, which are then compiled by the REFINE™ compiler into lisp code. The input to STRATA is the problem definition, domain knowledge such as theorems of linear algebra, and a library of design knowledge expressed as parameterized theories. The output of STRATA is the set of constraints on the Next parameter.

Input: Problem Definition for Linear Optimization
  Value1(Begin) = (A, b, c)
  Value2(End) = x_out
  A·x_out = b, (x_out)_i ≥ 0
  ∀x ∈ {x | Ax = b ∧ x_i ≥ 0}  c·x_out ≤ c·x

Partial Output: Constraints on the Next parameter (these constraints are derived by propagating the neighbor instantiation to the local search refinement, as explained later):
  ∀s ∈ States  s = End iff {∀v ∈ vertices  Adjacent(Value2(s), v) -> c·Value2(s) ≤ c·v}
  ∀s ∈ States  Adjacent(Value2(s), Value2(Next(s)))
  ∀s ∈ States  c·Value2(Next(s)) ≤ c·Value2(s)

The first axiom states that if the current value of x_out is locally optimal, then the algorithm should terminate. The following two axioms state that the next value of x_out should be a better neighbor of the current value of x_out.

The first step in the derivation is to instantiate a generic input/output behavior to the linear optimization problem. The generic optimization problem is partially instantiated with the following representation map:

  D ↦ {x | Ax = b ∧ x_i ≥ 0}, ab.² poly
  CostRelation ↦ (λ(a, b) c·a ≤ c·b), ab. Lambda
  Value ↦ Value2
  Value(End) ↦ x_out
  Value(Begin) UNINSTANTIATED

The instantiated axioms for generic optimization are provably true in the problem domain theory of linear algebra. The uninstantiated parameter is constrained as follows; it is the postcondition for the first phase of the simplex algorithm:

  Value2(Begin) ∈ poly

Heuristic knowledge for instantiating the parameters can be encoded in parametric form. This knowledge expresses additional constraints on a parameter and/or specifies how to instantiate a parameter in terms of other parameters. The additional constraints serve to focus the generation of problem specific instantiations of free parameters in step 3. Specifying the instantiation of a parameter in terms of other parameters which are syntactically closer to the domain representation can reduce the 'reformulation distance' that needs to be spanned by equivalence preserving transformations and general purpose theorem proving methods.

As an example, one heuristic for instantiating the DOMAIN of optimization is to find a predicate which restricts the domain to a subset which includes at least one optimal solution:

  ∃x P(x) ∧ ∀y ∈ D CostRelation(x, y)
  D' ↦ {x ∈ D | P(x)}

This heuristic can be invoked as a demon when instantiating the domain parameter of an optimization problem. This heuristic essentially encodes a parameterized proof that restricting the domain of optimization yields a valid algorithm. Given the representation map derived above, and the following theorem found in textbooks on linear programming, this heuristic derives an instantiation for D' which restricts the domain of optimization to vertices, i.e. vectors with only m non-zero co-ordinates:

  Thm: ∃x ∈ poly  size{i | x_i ≠ 0} = m ∧ {∀y ∈ poly Lambda(x, y)}³
  D' ↦ {x ∈ poly | size{i | x_i ≠ 0} = m}, ab. vertices

2 ab. abbreviates abbreviated.
3 In this derivation it is assumed that there exists a bounded optimal solution.

The next step (step 1 of the basic method) in the derivation is to choose the applicability conditions and instantiate the parameters with appropriate domain functions and relations. A heuristic which is activated when instantiating the Neighbor parameter of local search yields small neighborhoods, so that the local search of each neighborhood is efficient. This heuristic defines the neighbor relation in terms of the parameterized theory of a distance metric, in particular the minimal distance such that local optimums are global optimums:

  Neighbor(x, y) iff Dist(x, y) < K
  K ↦ Minimize({U | ∀x ∈ D {∀y ∈ D CostRelation(x, y)} iff {∀y Dist(x, y) < U -> CostRelation(x, y)}})
  Dist(x, x) = 0
  x ≠ y ⊃ Dist(x, y) > 0
  Dist(x, y) = Dist(y, x)
  Dist(x, z) ≤ Dist(x, y) + Dist(y, z)
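On a small finite domain, the Minimize{U | ...} condition of this heuristic can be evaluated by brute force, as in the hedged sketch below. The names are assumptions; nothing here is STRATA's actual search procedure.

```python
# Brute-force search for the minimal K of the neighborhood heuristic:
# the smallest U such that being optimal within distance U of oneself
# already implies being globally optimal.  Toy finite domain.

def minimal_k(domain, dist, better_or_equal):
    def locally_opt(x, u):
        return all(better_or_equal(x, y) for y in domain if dist(x, y) < u)
    def globally_opt(x):
        return all(better_or_equal(x, y) for y in domain)
    for u in range(1, len(domain) + 2):    # candidate bounds, smallest first
        if all(locally_opt(x, u) == globally_opt(x) for x in domain):
            return u
    return None

# Integers 0..9 under the usual metric, minimizing (x - 6)**2: the cost
# is unimodal, so K = 2 suffices (it checks both neighbors at distance 1).
dom = range(10)
print(minimal_k(dom, lambda x, y: abs(x - y),
                lambda x, y: (x - 6) ** 2 <= (y - 6) ** 2))   # -> 2
```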
Dist(x, z) ≤ Dist(x, y) + Dist(y, z)

To instantiate this heuristic, STRATA first generates a distance metric on D (the domain of optimization), attempts to find a minimal K, and if successful instantiates the Neighbor parameter. After searching various possibilities, a successful instantiation is found that uses a composite distance function, i.e. Dist(x, z) ↦ G(H(x, z)). G is instantiated first to the primitive function SetSize, and some distance metric axioms are back-propagated to constraints on the function H (H has arity: set of non-zero co-ordinates × set of non-zero co-ordinates → set of non-zero co-ordinates):

∀x ∈ vertices  H(x, x) = ∅
∀x, y ∈ vertices  x ≠ y ⇒ H(x, y) ≠ ∅
∀x, y ∈ vertices  H(x, y) = H(y, x)

H is instantiated to SymmetricSetDifference, the Dist parameter is instantiated, and the triangle inequality is verified:

Dist(x, y) ↦ SetSize(SymmetricSetDifference(x, y))

A similar derivation also works in deriving local search algorithms for Minimal Spanning Trees, and approximation algorithms for the Traveling Salesman Problem [Papadimitriou and Steiglitz, 1982]. This suggests that a fruitful line of research is to apply Explanation Based Generalization [Winston et al., 1983] to derive new parameterized theories as heuristics for instantiating parameterized designs. For this particular example, the generalization would instantiate the distance metric to SetSize(SymmetricSetDifference(x, y)) when the domain of optimization can be formulated as subsets of another set:

IF D ↦ {s | s ⊆ E} THEN Dist(x, y) ↦ SetSize(SymmetricSetDifference(x, y))

The instantiated Dist parameter is then used to instantiate the Neighbor relation by finding the minimal K such that local-global optimality is satisfied:

Neighbor(x, y) ↦ SetSize(SymmetricSetDifference(x, y)) ≤ 2

This instantiation states that two vertices are neighbors if they differ by two non-zero co-ordinates, i.e. they share m − 1 column vectors. This instantiation is abbreviated Adjacent. When this instantiation is propagated to the local search parameterized theory refinement, it yields the constraints on the Next parameter given at the beginning of this section.
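The derived instantiations are easy to state operationally. The short sketch below is an editorial illustration, not from the paper: it computes Dist as the size of the symmetric set difference of the supports of two vertices and tests the Adjacent relation; the helper names are assumptions.

```python
def support(x, eps=1e-9):
    """Set of indices of the non-zero co-ordinates of a vertex."""
    return {i for i, xi in enumerate(x) if abs(xi) > eps}

def dist(x, y):
    """Dist(x, y) = SetSize(SymmetricSetDifference(support(x), support(y)))."""
    return len(support(x) ^ support(y))

def adjacent(x, y):
    """Neighbors differ in exactly two non-zero co-ordinates,
    i.e. they share m - 1 basic columns (and are distinct)."""
    return 0 < dist(x, y) <= 2

# The metric axioms back-propagated onto H hold by construction:
# dist(x, x) == 0, dist is symmetric, and the triangle inequality
# follows from |P ^ R| <= |P ^ Q| + |Q ^ R| for sets P, Q, R.
```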
The rest of the derivation uses the same techniques of choosing an incremental parameterized theory to refine the evolving design, propagating constraints, and then instantiating free parameters. The search for parameter instantiations uses the same methods described above, including constraint propagation and heuristics expressed as parameterized theories and representation maps.

This section has shown how design can be factored into a classification problem of choosing a parameterized theory and a reformulation problem of finding appropriate domain terms to instantiate the parameters of generic designs. A major contribution is the demonstration of how domain-independent inference techniques, domain-independent heuristics, and domain knowledge expressed as theorems can be combined to focus the search for composite terms to instantiate the parameters of parameterized theories.

V. Summary

This paper has presented a representation for design knowledge based upon parameterized theories, which factors the design problem into a classification problem (choose a generic design strategy) and a reformulation problem (reformulate the problem into the parameters of the generic design). The reformulation problem is combinatorially explosive and poorly understood. Previous work has usually used either ad hoc domain-specific techniques or brute-force generate-and-test combined with theorem proving for verification. In contrast, parameterized theories naturally support techniques such as constraint propagation, solving for an unknown parameter in terms of known parameters and mutual constraints, and other inference methods. They also provide a convenient matrix for expressing heuristic knowledge for choosing good instantiations of a parameter.

Parameterized theories offer significant advantages over skeletal plans and program schemas. These advantages are due to being able to combine parameterized theories with a flexible and semantically well-defined calculus [Burstall and Goguen, 1977]. In my research, this flexibility is used to express design knowledge as a refinement hierarchy of parameterized theories ranging over a generic problem, applicability conditions, algorithm structure, and finally performance calculations. Each additional layer is a specialization (additional axioms) and/or an extension (additional parameters) of the previous parameterized theory.

VI. Acknowledgements

This research benefited from discussions with Professor Thomas Binford, Dr. Yinyu Ye and Irvin Lustig (Stanford-EES, Operations Research), Dr. Joseph Goguen (SRI-Theory of Abstract Data Types), Dr. George Stolfi (Stanford-Computational Geometry), and Dr. Douglas Smith and Dr. Cordell Green (Kestrel Institute-knowledge based automatic programming). This paper benefited from the comments of the referees and the editing help of Raul Duran, Laura Jones, and Patricia Riddle.

References

[Amarel, 1968] Saul Amarel. On representations of problems of reasoning about actions. Machine Intelligence 3, 1968.
[Burstall and Goguen, 1977] Rod M. Burstall and Joseph Goguen. Putting theories together to make specifications. In IJCAI-5, pages 1045-1058, 1977.
[Goguen and Meseguer, 1982] Joseph Goguen and Jose Meseguer. Universal realization, persistent interconnection and implementation of abstract modules. In ICALP, Springer Verlag, 1982.
[Goguen and Burstall, 1985] Joseph A. Goguen and Rod M. Burstall. Institutions: Abstract Model Theory for Computer Science. Technical Report CSLI-85-30, CSLI, 1985.
[Lowry, 1987a] Michael R. Lowry. The abstraction/implementation model of problem reformulation. In IJCAI-87, August 1987.
[Lowry, 1987b] Michael R. Lowry. Algorithm Synthesis through Problem Reformulation. PhD thesis, Stanford University, 1987.
[Mostow, 1983] Jack Mostow. Machine transformation of advice into a heuristic search procedure. In Machine Learning, An Artificial Intelligence Approach, chapter 12, Tioga Press, 1983.
[Papadimitriou and Steiglitz, 1982] Christos H. Papadimitriou and Kenneth Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall, 1982.
[Riddle, 1986] Patricia Riddle. An overview of problem reduction: a shift of representation. In Workshop on Knowledge Compilation, pages 91-112, 1986.
[Smith, 1985] Douglas R. Smith. Top-down synthesis of divide-and-conquer algorithms. Artificial Intelligence, 27(1), September 1985.
[Subramanian, 1986] Devika Subramanian. Reformulation. In Workshop on Knowledge Compilation, pages 119-121, 1986.
[Winston et al., 1983] Patrick Winston, Thomas Binford, Boris Katz, and Michael R. Lowry. Learning physical descriptions from functional definitions, examples, and precedents. In AAAI-83, August 1983.
Curing Anomalous Extensions

Paul Morris
IntelliCorp
1975 El Camino Real West
Mountain View, CA 94040

Abstract

In a recent paper, Hanks and McDermott presented a simple problem in temporal reasoning which showed that a seemingly natural representation of a frame axiom in nonmonotonic logic can give rise to an anomalous extension, i.e., one which is counter-intuitive in that it does not appear to be supported by the known facts. An alternative, less formal approach to nonmonotonic reasoning uses the mechanism of a truth maintenance system (TMS). Surprisingly, when reformulated in terms of a TMS, the anomalous extension noted by Hanks and McDermott disappears. We analyze the reasons for this. First it is seen that anomalous extensions are not limited to temporal reasoning, but can occur in simple non-temporal default reasoning as well. In these cases also, the natural TMS representation avoids the problem. Exploring further, it is observed that the form of the TMS justifications resembles that of nonnormal default rules. Nonnormal rules have already been proposed as a means of avoiding anomalous extensions in some non-temporal reasoning situations. It appears that, suitably formulated, they can exclude the anomalous extension in the Hanks-McDermott case also, although the representation does not adjust smoothly to fresh information, as does the TMS. Some variant of nonnormal default appears to be required to provide a correct semantic basis for truth maintenance systems.¹

I. Introduction

One of the central requirements for an effective temporal reasoning system is a reasonable solution of the frame problem. The frame problem is that of specifying the effects of actions in a way that allows efficient determination of the properties that hold in subsequent states. In particular, a representation is needed that allows exploitation of the fact that in many situations of interest most properties are unchanged by a given action. The development of nonmonotonic and default logics has been seen as promising a way of achieving such a representation within a well-understood declarative framework. Unfortunately, a foundational difficulty in this approach has recently been uncovered by Hanks and McDermott [Hanks and McDermott, 1986], who present an example of temporal reasoning where the natural default logic representation is shown to be inadequate for deriving some intuitively sound conclusions. Shoham [Shoham, 1986] proposes a solution to this difficulty which uses a default reasoning mechanism specific to temporal reasoning. This seems to suggest that the problem is an artifact of temporal reasoning. To counter this view, and show that the problem is a wider one for nonmonotonic logic, we will present an example drawn from non-temporal default reasoning that reproduces the difficulty.

¹This research was supported in part by the Defense Advanced Research Projects Agency under contract No. F30602 85 C 0065. The views expressed are those of the author and do not necessarily represent the position or policy of DARPA.

An alternative approach to default reasoning that has seen considerable use in practice, but has undergone relatively little formal study, involves the mechanism of a truth maintenance system (TMS). We will see that, surprisingly, the difficulty noted for nonmonotonic logic disappears when the example is reformulated in terms of a truth maintenance system. Moreover, the TMS revises its beliefs appropriately in response to fresh information.
An examination of the TMS representation suggests the use of nonnormal default rules in the Hanks-McDermott example. Nonnormal rules, suitably formulated, can indeed exclude the anomalous extension. However, in contrast to the TMS, the nonnormal rule representation does not respond smoothly to changes in belief. The difference arises because in cases where the TMS produces a contradiction, provoking backtracking, the default rule representation can result in no extensions. A small change is suggested to the semantics of default rules to make them more closely approximate the behavior of TMS justifications. With the suggested modification, the use of nonnormal default rules opens up the possibility of having inconsistent extensions, even though the underlying monotonic theory is consistent. Rather than place the burden of excluding inconsistencies on the default logic mechanism, one might regard applications of reductio ad absurdum reasoning to resolve inconsistencies as an extra-logical operation that modifies the existing axiom set. In support of this view, we present an example from the area of planning which suggests that such reasoning needs to make distinctions at a level beyond ordinary logic.

II. The Hanks-McDermott Anomaly

Common approaches to nonmonotonic logic use default inference rules [Reiter, 1980] or circumscription [McCarthy, 1980] to extend a set of beliefs with as many default assumptions (and their deductive consequences) as can be consistently added. The resulting larger set of beliefs is called an extension. Note, however, that the relative consistency of defaults may depend upon the order in which they are added, giving rise to multiple competing extensions. One resolution of this (suggested by Hanks and McDermott) is to regard a statement as being a nonmonotonic "theorem" if it holds in every extension.

Hanks and McDermott present an example of temporal reasoning (the "shooting example") that gives rise to multiple extensions. However, the example is such that only one of these corresponds to our intuition. A second extension corresponds to a possible interpretation of the events, but one that intuitively is not supported by the known facts. This means that some conclusions which are intuitively valid can not be derived as theorems.

In the shooting example, a sharpshooter loads his gun and lies in wait for a victim. When the victim appears, the gunman shoots. Assuming the gun is loaded at the time the shot occurs, the victim dies. Thus, we have a state S1 at which the gun is loaded, followed by a waiting period until another state S2 where the victim is alive, followed by a shooting action resulting in a third state S3. The question is whether the victim is alive or dead at S3. In the anomalous extension the gun mysteriously becomes unloaded during the waiting period, so that the victim survives.
Following Hanks and McDermott, we use the notation T(f, s) to indicate fact f is true in state s. For each such proposition and action e, we have a frame axiom of the form

T(f, s) ∧ ¬AB(f, e, s) ⊃ T(f, RESULT(e, s))

where RESULT(e, s) represents the state resulting from applying action e to state s, and ¬AB(f, e, s) represents the assertion that f is unaffected by e in state s.

We can summarize the shooting example as providing the axioms

1. T(ALIVE, S2)
2. T(ALIVE, S2) ∧ ¬AB(ALIVE, SHOOT, S2) ⊃ T(ALIVE, S3)
3. T(LOADED, S2) ⊃ AB(ALIVE, SHOOT, S2)
4. T(LOADED, S2) ⊃ T(DEAD, S3)
5. T(LOADED, S1)
6. T(LOADED, S1) ∧ ¬AB(LOADED, WAIT, S1) ⊃ T(LOADED, S2)

and the default rules

:M ¬AB(LOADED, WAIT, S1) / ¬AB(LOADED, WAIT, S1)
:M ¬AB(ALIVE, SHOOT, S2) / ¬AB(ALIVE, SHOOT, S2)

As Hanks and McDermott point out, with the above axioms, each default rule defeats the other. This means that there is an extension where the second default is applied, but not the first, leading to the conclusion T(ALIVE, S3), which intuitively does not appear to be supported by the facts presented.

It is not obvious from this example where the difficulty lies. At first sight it appears conceivable that temporal reasoning possesses special problems related to the flow of time and causality. Thus, Shoham proposes that chronologically earlier defaults should be added first when constructing an extension. We argue here that to pin the blame on temporal reasoning is to misdiagnose the disease; the real difficulty is independent of temporal reasoning. Indeed, other authors [Reiter and Criscuolo, 1983; McCarthy, 1986] (see also [Etherington, 1987]) have previously observed anomalous extensions arising in non-temporal reasoning. To further clarify the situation, we present a natural example from non-temporal default reasoning that reproduces the difficulty and is structurally very similar to the shooting example. Consider the following statements.

- Animals normally can not fly.
- Winged animals are exceptions to this.
- All birds are animals.
- Birds normally have wings.

Now suppose Tweety is a bird. Given the above statements, one would intuitively conclude that Tweety has wings. One would NOT conclude that Tweety is unable to fly, although Tweety is obviously also an animal. However, just as in the Hanks-McDermott shooting example, a seemingly natural representation of this example in nonmonotonic logic produces two extensions. In one of these, Tweety is a normal bird that has wings and may be able to fly. This matches our intuition. In the other extension, Tweety is a normal animal, but an abnormal bird. Thus, he is wingless, and unable to fly. This is an anomalous extension.

To formalize the example, we make the assignments

A = "Tweety is an animal"
B = "Tweety is a bird"
W = "Tweety has wings"
G = "Tweety can not fly (is Grounded)"
abA = "Tweety is an abnormal animal with respect to not flying"
abB = "Tweety is an abnormal bird with respect to having wings"

The negations of the last two statements correspond to defaults. Continuing the formalization, we have the implications

1. A ∧ ¬abA ⊃ G
2. W ⊃ abA
3. B ⊃ A
4. B ∧ ¬abB ⊃ W

We are also assuming B as an axiom. Following the analogy with the shooting example, we have the default inference rules

:M ¬abA / ¬abA    and    :M ¬abB / ¬abB

The first of these can be applied to construct an extension that includes ¬abA. However, we can then use implication 2 to deduce ¬W. This allows us to conclude abB by 4, which prevents the second default rule from being applied. Thus, the extension does not include ¬abB. This extension is counter-intuitive. It is easy to verify that the intuitive extension can also be obtained. This shares with the shooting example the characteristic that by assuming a normality, we could deduce an abnormality, and thereby arrive at an anomalous extension. In order to isolate the problem further, we will try to remove extraneous features from the examples, and boil them down to their essential elements.
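The two-extension behavior can be checked mechanically. The following brute-force sketch is an editorial illustration (the paper gives no code): it encodes implications 1-4 plus the axiom B over the six propositions and tests which combinations of applied defaults remain consistent, and what each resulting theory entails.

```python
from itertools import product

ATOMS = ["A", "B", "W", "G", "abA", "abB"]

AXIOMS = [
    lambda v: (not (v["A"] and not v["abA"])) or v["G"],   # 1. A & ~abA -> G
    lambda v: (not v["W"]) or v["abA"],                    # 2. W -> abA
    lambda v: (not v["B"]) or v["A"],                      # 3. B -> A
    lambda v: (not (v["B"] and not v["abB"])) or v["W"],   # 4. B & ~abB -> W
    lambda v: v["B"],                                      # axiom B
]

def models(theory):
    """All truth assignments satisfying every formula (classical semantics)."""
    for bits in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(f(v) for f in theory):
            yield v

def consistent(theory):
    return any(True for _ in models(theory))

def entails(theory, atom):
    return all(v[atom] for v in models(theory))

# Try applying each normal default (:M ~abX / ~abX) alone and together.
for applied in [("abA",), ("abB",), ("abA", "abB")]:
    theory = AXIOMS + [(lambda v, a=a: not v[a]) for a in applied]
    if consistent(theory):
        print("assume not", applied, "-> G entailed?", entails(theory, "G"))
# assume not ('abA',)  -> G entailed? True   (the anomalous extension)
# assume not ('abB',)  -> G entailed? False  (the intuitive extension)
# assuming both is inconsistent, so neither extension contains both defaults
```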
In 438 Knowledge Representation the shooting example, we could simplify the implications by omitting undisputed facts such as T(LOADED, Sl). This gives US +B(LOADED, WAIT, Sl) 1 T(LOADED, 92) T(LOADED, S2) 13 AB(ALNE, SHOOT, S2) T(LOADED, S2) 3 T(DEAD, S3) -AB(ALIW3, SHOOT, 52) 1 T(ALNE, S3) The intermediate fact T(LOADED, S2) is significant only in connecting the two abnormality facts. Eliminating this “middle- man” gives a further simplification to -AB(LOADED, WAIT, Sl) 1 AB(ALNE, SHOOT, S2) -vAB(LOADED, WAIT, Sl) 1 qDEm, S3) -AB(ALNE, SHOOT, S2) 1 T(ALIK!Z, 53) Applying a similar process of simplification to the bird example gives TabB 1 abA TabA 3 G The pattern in both cases is that we have two defaults A and B such that A 2 -B. Logically, this is equivalent to B 1 -A, so that the crucial information would appear to be symmetric in A and B. Nevertheless, in the examples seen, the intuitively correct extension includes A but not B. It might be noted that the anomalous extension in the bird example could be ruled out by including B 3 abA as an additional axiom. However, this fails to accurately capture the knowledge that it is wings that make flight possible for birds. In particular, if we subsequently learned that Tweety had his wings torn off in an accident, we would have no way to revert to the default for animals and conclude that Tweety is unable to fly. . Truth Maintenance Truth maintenance systems were introduced by Doyle [Doyle, 19791 and have been refined and extended by many workers since then. The TMS model we use in this paper is essentially that of Doyle. Truth maintenance systems have often been regarded as performing a kind of resource bounded inference rather than true inference in that beliefs are propagated based on the current status of other beliefs rather than their ultimate status. As a practical matter, however, a TMS is usually left alone until it reaches a quiescent state. Such a state corresponds to a fixed point, just as in default logic. In a truth maintenance system, the notion of default does not arise directly. Instead, one is allowed to use nonmonotonic justifications. These may be regarded as inferences which may partly depend upon a state .of ignorance with respect to certain facts. For example, if one’s car is left parked, and there is no reason to think it is moved, one expects it to be there when one returns. We will write out(A) to represent ignorance of fact A. Then the car example could be expressed by the justification P A out(M) + R where P denotes the car is parked, M that it is moved, and R that it will still be there upon return. We call P an IN justifier of the justification. We say M is an OUT justifier.ll One may view an 2 This notation for one which segregates nonmonotonic justifications differs somewhat from the standard the IN and OUT justifiers, and writes them separately. OUT justifier as providing a kind of built-in default. In terms of a TMS, the example would be An unconditional premise, such as B in this example, by a justification with an empty set of justifiers. 1. AA out(abA) -+ G 2. W --+ abA 3.B 3 A 4. B~out(abB) -+ W most natural coding of the bird is represented In a TMS the notion of a labelling plays a role similar to that of an extension in nonmonotonic logic. One might define a valid labelling as an assignment of IN/OUT status to each of the propositions in such a way that each of the justifications is satisfied. 
For example, if the labelling specified A as IN and abA as OUT, then to satisfy the first justification, G would have to be IN. If the labelling specified B as IN and W as OUT, then to satisfy the fourth justification, abB would have to be IN. With this definition there would be two valid labellings corresponding to the two extensions discussed earlier. In particular, there would be an anomalous labelling having abB labelled as IN.

However, the label propagation mechanisms employed in truth maintenance systems generally have the property that only well-founded labellings are obtained. A labelling is well-founded if it is valid in the sense above and, in addition, every proposition labelled IN is well-justified, that is, it is the conclusion of some justification whose OUT justifiers are all labelled OUT, and whose IN justifiers are all themselves well-justified. Since abB is not the conclusion of any justification, there is no well-founded labelling that labels it as IN. Thus, for a truth maintenance system, there is a single accessible extension in this example, namely the one corresponding to our intuition.

It is interesting to note that if we subsequently learn that W is false, the contradiction-handling machinery of a truth maintenance system (in what might be considered an application of reductio ad absurdum) will install a new "backward" justification for abB. This causes the second extension to become accessible since it now corresponds to a well-founded labelling. This is exactly what we expect intuitively: if we learn that due to an unfortunate accident poor Tweety is wingless, we indeed want to revert to the default for animals, and conclude he is unable to fly.

The natural TMS representation of the implications in the shooting example is

T(ALIVE, S2) ∧ out(AB(ALIVE, SHOOT, S2)) → T(ALIVE, S3)
T(LOADED, S2) → AB(ALIVE, SHOOT, S2)
T(LOADED, S2) → T(DEAD, S3)
T(LOADED, S1) ∧ out(AB(LOADED, WAIT, S1)) → T(LOADED, S2)

which similarly excludes the anomalous labelling, since AB(LOADED, WAIT, S1) is not the conclusion of any justification. Notice that if we subsequently learn T(ALIVE, S3), a contradiction is produced, causing backtracking. The only possible culprit is out(AB(LOADED, WAIT, S1)), so the TMS installs a justification for AB(LOADED, WAIT, S1), which causes a shift to the second extension. Again, this satisfies our intuitive expectations: if we learn the victim survives, the only possibility given the statement of the problem is that the gun became unloaded.

In both examples, applying the process of simplification considered earlier produces a pattern of the form out(A) → B and out(B) → C. In general, with a pattern of this form where there are no cycles, there is a single well-founded labelling. Moreover, a TMS will arrive at that labelling irrespective of the order in which the justifications are added.
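The well-foundedness condition can be made concrete with a small brute-force sketch (an editorial illustration, not Doyle's algorithm): enumerate candidate IN-sets, keep those that satisfy every justification, and then check that every IN proposition is grounded in the premises through justifications whose OUT justifiers are OUT.

```python
from itertools import chain, combinations

def well_founded_labellings(justs, props, premises):
    """Brute-force enumeration of valid, well-founded IN-sets.
    justs: nonmonotonic justifications as (ins, outs, conclusion) triples;
    premises: propositions with empty-justifier justifications."""
    results = []
    candidates = chain.from_iterable(combinations(props, r)
                                     for r in range(len(props) + 1))
    for cand in map(set, candidates):
        if not premises <= cand:
            continue
        # valid: every enabled justification's conclusion is IN
        if any(set(i) <= cand and not (set(o) & cand) and c not in cand
               for i, o, c in justs):
            continue
        # well-founded: every IN node reachable from the premises via
        # justifications whose OUT justifiers are OUT in the candidate
        supported, grew = set(premises), True
        while grew:
            grew = False
            for i, o, c in justs:
                if c not in supported and set(i) <= supported \
                        and not (set(o) & cand):
                    supported.add(c); grew = True
        if supported == cand:
            results.append(cand)
    return results

BIRD = [
    (("A",), ("abA",), "G"),   # 1. A /\ out(abA) -> G
    (("W",), (),       "abA"), # 2. W -> abA
    (("B",), (),       "A"),   # 3. B -> A
    (("B",), ("abB",), "W"),   # 4. B /\ out(abB) -> W
]
print(well_founded_labellings(BIRD, {"A", "B", "W", "G", "abA", "abB"}, {"B"}))
# [{'B', 'A', 'W', 'abA'}] -- the valid labelling with abB IN is rejected,
# because abB is not the conclusion of any justification
```

Run on the bird justifications with premise B, only the intuitive labelling survives, matching the argument above.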
IV. Nonnormal Defaults

The question arises: what property of a TMS enables it to escape the Hanks-McDermott anomaly? It might appear at first sight that default logic is unable to capture the well-foundedness requirement, or that the limited nature of the inference performed by the TMS is responsible. The first possibility can be ruled out because the minimality requirement for extensions in default logic is there to ensure well-foundedness. The unidirectional nature of TMS inference does play a role. However, we will see that it is possible to exclude the anomaly by changing the default logic representation in a way suggested by the TMS formulation.

If we examine the behavior of an OUT justifier B in a TMS justification A ∧ out(B) → C, we see that it is satisfied when B is not IN. For a quiescent state of the TMS, this means B is not derivable. In a logic system the non-derivability of B would be equivalent to ¬B being consistent with the other facts. This suggests that we regard the entire TMS justification as a default inference rule of the form

A : M¬B / C

Taking this approach in the bird example, we replace justifications 1 and 4 by the default rules

A : M¬abA / G    and    B : M¬abB / W

respectively. Now observe that application of the second default rule (together with justification 2) defeats the first default rule, but not vice versa, so we end up with a single extension.

In the shooting example, the frame axioms, instead of being implications, become default rules of the general form

T(f, s) : M¬AB(f, e, s) / T(f, RESULT(e, s))

It may be verified that this formulation eliminates the anomalous extension.

We see in both examples the use of so-called nonnormal default rules. Hanks and McDermott did not discuss nonnormal defaults in their paper. Reiter and Criscuolo [Reiter and Criscuolo, 1983] (also [Etherington, 1987]) suggest the use of a special kind of nonnormal default, called a seminormal default, to exclude anomalous extensions. Seminormal defaults are obtained by modifying normal defaults to anticipate conditions which would render them inappropriate. Thus, in the bird example one might use the seminormal rules

A : M(G ∧ ¬abA) / G    and    B : M(W ∧ ¬abB) / W

In the shooting example, however, using seminormal rules also produces the anomalous extension. Thus, even seminormal rules appear insufficient to properly constrain interactions between defaults in all cases.

Consider now what happens in the nonnormal formulation of the bird example if we subsequently learn that W is false. One might expect an inconsistent extension containing both W and ¬W. Instead, the rule B : M¬abB / W defeats itself, since from W and ¬W, using ordinary (classical) deduction, one can derive abB. Indeed, it has been proved [Reiter, 1980] that a default logic extension is inconsistent if and only if the underlying monotonic theory is itself inconsistent.

In view of the difficulty in responding to new information, it might be preferable if Default Logic somehow emulated a TMS and allowed the possibility of inconsistent extensions. We make an informal suggestion here as to how this might be achieved. Suppose the interpretation of MB in the default rule

A : MB / C

is changed from the usual "it is consistent to assume B" to "it is consistent to assume B or B is provable." Now an application of a default rule that results in an inconsistency will not automatically undercut itself, so the possibility arises of having an inconsistent fixed point.

V. Isolated Defaults

It seems a little drastic to represent all justifications as default rules. Such rules are unidirectional, and we would like to preserve as much as possible of the bidirectional nature of implicational inference (e.g., if ¬A ⊃ B then ¬B ⊃ A). Intuitively, the entire default content of a justification resides in the OUT justifiers. In the car example considered earlier, there appears to be an underlying assumption that the car will not be moved which is separable from the other parts of the justification. Indeed, one of the attractive aspects of normal defaults was that they could be isolated in this way. We now consider whether a similar isolation can be achieved for nonnormal defaults.

Consider the expression "out(X)." One way of viewing this is that it represents a proposition in its own right, distinct from X although related to it. To achieve behavior resembling that of a truth maintenance system, we could add

: M¬X / out(X)

as a default logic rule.
With this separate specification of the OUT justifiers, we can represent justifications as ordinary implications. Looking once again at the bird example, we can show that with this formulation there is no extension containing out(abA). Suppose that there is such an extension. Since it is NOT the case that out(abA) → ¬abA, there is no direct conflict between out(abA) and ¬abB. Thus, the default rule for out(abB) can be applied to conclude out(abB). But this allows us to deduce abA, which prevents the default rule for out(abA) from being used. Thus, the extension does not contain out(abA) after all. Observe that there is no difficulty with the extension containing out(abB).

One cautionary note is in order with respect to the suggested default logic formalization of out(X). It has been customary in truth maintenance systems to represent the adoption of A as an assumption by introducing out(¬A) → A as a justification. If we have a similar justification out(¬B) → B for B, and also have A → ¬B, then the default logic formalization once again has multiple extensions. In this case the TMS has only a single well-founded labelling because, from the TMS point of view, the inference A → ¬B is unidirectional. Assuming the single extension is what is intended, it appears unfortunate to rely on unidirectionality with respect to a monotonic justification like A → ¬B to achieve it in the TMS. This would mean that monotonic justifications also would have to be represented as inference rules, rather than implications, in the default logic formalization. To avoid this difficulty, we suggest using the justifications

out(~A) → A
out(~B) → B
A → ¬B

to represent the intended situation, where ~X is a dummy proposition distinct from X or ¬X (one might read ~X as "X is defeated"). Now there is a single extension in the suggested default logic formalization.

With the out(~D) → D representation of defaults, it is possible to derive inconsistencies. For example, if we have defaults A and B defined analogously to D, and if A → ¬B, then both justifications are operative, so that a contradiction is derived. We argue that inconsistency in this kind of situation is preferable to having multiple extensions.
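To illustrate (again editorially, reusing the `well_founded_labellings` sketch from Section III, with ¬B reified as a separate node), the out(~D) → D encoding with dummy defeat propositions yields a single labelling in which the conflict between the defaults surfaces as an explicit inconsistency rather than as two competing extensions:

```python
# Defaults A and B with A -> ~B, encoded as out(nA) -> A and out(nB) -> B,
# where nA ("A is defeated") and nB are dummy propositions that only the
# contradiction handler is allowed to justify.
JUSTS = [
    ((), ("nA",), "A"),        # out(nA) -> A
    ((), ("nB",), "B"),        # out(nB) -> B
    (("A",), (), "not_B"),     # A -> not B, with the negation reified
]

for lab in well_founded_labellings(JUSTS,
                                   {"A", "B", "not_B", "nA", "nB"}, set()):
    print(lab, "INCONSISTENT" if {"B", "not_B"} <= lab else "")
# one labelling, {A, B, not_B}: both defaults fire and B clashes with not_B;
# the handler would then justify nA or nB and re-propagate
```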
Since monotonic inferences can give rise to inconsistencies, it seems reasonable to allow nonmonotonic inferences the same privilege. The inconsistency can be removed by supplying a new justification such as A → ~B or B → ~A. We argue that the choice between these, or other resolutions, is best left to an extra-logical contradiction-handling procedure. Indeed, we will see that in any case contradiction-handling needs to be sensitive to extra-logical issues.

It is worth remarking that the well-known result [Charniak et al., 1979] for truth maintenance systems, that the absence of odd loops guarantees the existence of a well-founded labelling, appears related to the coherence theorem of Etherington [Etherington, 1987]. A companion result is that the absence of ALL nonmonotonic loops (i.e., odd loops and non-zero even loops) guarantees a unique (though possibly inconsistent, as we have seen) well-founded labelling. This raises an interesting possibility. If we follow the approach described above for representing defaults by justifications of the form out(~D) → D, and if only the contradiction-handler is allowed to produce justifications for the ~D propositions, then ordinary conflicts between defaults will only cause inconsistencies, not multiple (well-founded) labellings. Moreover, if the contradiction-handler is careful to avoid creating cycles in the nonmonotonic support structure, uniqueness of the labelling can be maintained. The guarantee of a unique extension/labelling would seem to be a nice property for a nonmonotonic reasoning theory.

VI. Contradiction Handling

Truth maintenance systems have been used to support a more efficient search process in problem solving. In this approach, the choices available in the search are represented as assumptions. An inconsistent set of choices gives rise to a contradiction, causing dependency-directed backtracking [Stallman and Sussman, 1977].

Planning applications combine temporal reasoning with problem-solving search. If full use is to be made of a TMS in such an endeavor, then some assumptions will represent choices while others may represent default hypotheses about the environment (or even default persistences arising from the frame axioms). Choices are generally ruled out if they conflict with our desires. Hypotheses are revised if they conflict with observation. When the two are mixed, trouble can result.

Suppose, for example, our old friend Tweety is incarcerated in a bird cage, and we are considering opening the cage door. Under normal conditions, we can deduce that Tweety might fly away. Let us suppose further that we do not wish Tweety to fly away. This conflict between an expectation and a desire would ordinarily lead to dependency-directed backtracking. However, if assumptions are represented uniformly, then a TMS could just as easily revise a default hypothesis about Tweety (say, that he has wings), as revise the choice of opening the birdcage. Thus, the system might postulate Tweety is wingless solely to avoid the disagreeable conclusion that he might fly away. That would be wishful thinking!

On the other hand, suppose we actually do want Tweety to fly away and open the birdcage for that purpose. In this case, if we wait patiently but observe no flight, we might be justified in concluding that Tweety is abnormal in some way that is preventing the flight. In other words, a conflict between an expectation and a desire leads one to reconsider choice of action, while a conflict between an expectation and an observation should lead one to reconsider one's beliefs. The TMS contradiction-handling machinery will need to make such distinctions when employing reductio ad absurdum reasoning. This suggests that contradiction-handling needs to be treated as an extra-logical operation, rather than being built in to the logical formalism.

VII. Conclusions

Hanks and McDermott pointed out a difficulty in the default logic formalization of temporal reasoning: the existence of anomalous extensions. Closer examination shows the difficulty is not peculiar to temporal reasoning, but occurs in a wide range of default reasoning tasks. In these cases, the natural representation of the problem in a truth maintenance system appears to clear up the difficulty. An inspection of the TMS representation suggests that nonnormal default rules are required to approximate its behavior. Further investigation indicates that nonnormal defaults (or some equivalent) are crucial in avoiding anomalous extensions.

We have also seen that nonnormal defaults can be isolated and represented by simple nonnormal default rules, or (using a TMS) nonmonotonic justifications of a simple form. An approach is suggested where conflicts between defaults cause inconsistencies rather than multiple extensions. In this approach the responsibility for resolving inconsistencies is shifted to an external contradiction-handler. In support of the view that contradiction-handling should be regarded as an extra-logical operation, a new difficulty has been noted concerning the use of reductio ad absurdum reasoning in applications which combine default reasoning with problem solving. It appears that such reasoning needs to make distinctions -- between choices and hypotheses, and desires and observations -- which are at a level beyond ordinary logic.

Acknowledgements

I am grateful to Richard Fikes, Johan de Kleer and, particularly, Bob Nado for conversations which helped to clarify many of the ideas presented in this paper. The referees made helpful comments for improving the presentation. Discussions at the Frame Problem Workshop prompted my consideration of the differences between the TMS and Default Logic in responding to new information.

References

[Charniak et al., 1979] Charniak, E., Riesbeck, C., and D. McDermott. Artificial Intelligence Programming. L. E. Erlbaum, Baltimore, 1979.
[Doyle, 1979] Doyle, J. A Truth Maintenance System. Artificial Intelligence 12(3), 1979.
[Etherington, 1987] Etherington, D.W. Formalizing Nonmonotonic Reasoning Systems. Artificial Intelligence 31:41-85, 1987.
[Hanks and McDermott, 1986] Hanks, S., and D. McDermott. Default Reasoning, Nonmonotonic Logics, and the Frame Problem. In Proceedings AAAI-86. Philadelphia, 1986.
[McCarthy, 1980] McCarthy, John. Circumscription - A Form of Non-Monotonic Reasoning. Artificial Intelligence 13:27-39, 1980.
[McCarthy, 1986] McCarthy, John. Applications of Circumscription to Formalizing Common-Sense Knowledge. Artificial Intelligence 28:89-116, 1986.
[Reiter, 1980] Reiter, Raymond. A Logic for Default Reasoning. Artificial Intelligence 13:81-132, 1980.
[Reiter and Criscuolo, 1983] Reiter, R. and G. Criscuolo. Some Representational Issues in Default Reasoning. Int. J. Comput. Math. 9:1-13, 1983.
[Shoham, 1986] Shoham, Y. Chronological Ignorance: Time, Nonmonotonicity, Necessity and Causal Theories. In Proceedings AAAI-86. Philadelphia, 1986.
[Stallman and Sussman, 1977] Stallman, R.M., and G.J. Sussman. Forward Reasoning And Dependency-directed Backtracking In A System For Computer-Aided Circuit Analysis. Artificial Intelligence 9:135-196, 1977.
Semantically Sound Inheritance

Robert Nado and Richard Fikes
IntelliCorp
1975 El Camino Real West
Mountain View, California 94040-2216

Abstract

Most frame languages either are glaringly deficient in their treatment of default information or do not represent it at all. This paper presents a formal description of a frame language that provides semantically sound facilities for representing default information and an efficient serial algorithm for inheriting default information down class-subclass and class-member hierarchies constructed in that language. We present the inheritance algorithm in two forms. In the first form, the algorithm provides justifications to a TMS, which then manages the inherited information. In the second form, the algorithm performs its own, special-purpose truth maintenance and therefore is useable in a system that does not include a general-purpose TMS.¹

I. Introduction

The common-sense reasoning required in many knowledge system applications relies heavily on the ability to use general information that is subject to exceptions: what has been called prototypic or default information. Although frame-based representation languages have become increasingly popular for expressing the domain-specific information on which the functionality of knowledge systems is based [Fikes and Kehler, 1985], most such languages either are glaringly deficient in their treatment of default information (as argued, for example, in [Brachman, 1985] and [Touretzky, 1984]) or do not represent it at all (e.g., KL-ONE [Brachman and Schmolze, 1985] and KRYPTON [Brachman et al., 1983]). Thus, an important step in the advancement of knowledge system technology is the development of a frame language that provides semantically sound facilities for representing and efficiently processing default information. This paper presents a formal description of such a frame language (based on the frame language in the KEE™ system²) and an efficient serial algorithm for inheriting default information down class-subclass and class-member hierarchies constructed in that language. The language has been implemented at IntelliCorp in a system called OPUS.

¹This research was supported in part by the Defense Advanced Research Projects Agency (DARPA) under contract No. F30602 85 C 0065. The views and conclusions reported here are those of the authors and should not be construed as representing the official position or policy of DARPA or the U.S. government.
²KEEworlds, KEE, and Knowledge Engineering Environment are trademarks of IntelliCorp.

[Figure 1: A Problem with the "Shortest Path" Ordering]

As observed by Touretzky [Touretzky, 1986], the "shortest-path" ordering of defaults used by most inheritance systems (e.g., FRL [Roberts and Goldstein, 1977] and NETL [Fahlman, 1979]) does not always successfully provide the desired preference of more specific defaults over less specific defaults. Problems arise in some cases of multiple inheritance, where nodes are allowed to have more than one parent link. An example, adapted from Touretzky, is depicted in Figure 1. The typical inheritance algorithm correctly prefers White over Grey as a default color for a royal elephant, because the default from RoyalElephants has a "shorter path" than the default from Elephants. However, in the situation shown in the figure, Clyde has a redundant class membership link to Elephants. Clyde, then, inherits both the default White from RoyalElephants and the default Grey from Elephants along paths of equal length. Thus, shortest-path algorithms are not sufficient to correctly handle this situation.³

³The same problem is obtained if equal numbers of intermediate subclasses are added along the two paths from Clyde to Elephants.
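The failure is easy to reproduce. The sketch below is an editorial illustration with a hypothetical encoding of Figure 1: it measures the shortest path from Clyde to each default's origin class, and the redundant membership link creates a tie that a shortest-path rule cannot break.

```python
from collections import deque

# Class graph for Figure 1: Clyde is a member of RoyalElephants and,
# redundantly, of Elephants; RoyalElephants is a subclass of Elephants.
PARENTS = {
    "Clyde": ["RoyalElephants", "Elephants"],   # redundant direct link
    "RoyalElephants": ["Elephants"],
    "Elephants": [],
}
DEFAULTS = {"RoyalElephants": "White", "Elephants": "Grey"}

def path_length(node, target):
    """Breadth-first distance along parent links."""
    queue, seen = deque([(node, 0)]), {node}
    while queue:
        n, d = queue.popleft()
        if n == target:
            return d
        for p in PARENTS[n]:
            if p not in seen:
                seen.add(p); queue.append((p, d + 1))
    return None

for origin, color in DEFAULTS.items():
    print(color, "at distance", path_length("Clyde", origin))
# White at distance 1, Grey at distance 1: shortest path cannot decide
```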
This, and other shortcomings of existing algorithms, are overcome in the OPUS algorithm presented here.

An additional motivation for this work is to enable "truth maintenance" (or "reason maintenance", as it is sometimes called) capabilities to be incorporated into frame-based representation systems. Truth maintenance algorithms provide an automatic means of managing derived results as changes are made in a model [Doyle, 1979]. In addition, a truth maintenance system (TMS) can be used as the basis for a context mechanism that enables a frame system to model and compare multiple hypothetical situations (as was done, for example, in the KEEworlds™ facility [Morris and Nado, 1986]).

Inheritance mechanisms also add derived results to a model. They typically provide an efficient special-purpose form of truth maintenance for those results in that they remove information they have derived when a change occurs in the form or content of the hierarchies on which those derivations are based. If a general-purpose TMS has been incorporated into a frame system, then the TMS can be used to maintain the inherited information, thereby significantly reducing the complexity of the inheritance mechanism. However, such a reduction can be obtained only if the derivations performed during inheritance are expressible in the logical formalism supported by the TMS. The inheritance algorithm in the current KEE system (and in other similar systems) is unsuitable for providing such justifications because it depends on arbitrary LISP procedures to perform its deductions and allows those procedures to use information whose semantic interpretation is unclear, such as the order in which inheritance links are stored. The OPUS inheritance algorithm we present here performs sound deductions describable to a TMS in the form of nonmonotonic justifications whose justifiers are propositions expressible in the frame language. OPUS, therefore, in combination with the KEEworlds system, performs context-relative inheritance.

After presenting the formal description of the frame language, we present the OPUS inheritance algorithm in two forms. In the first form, the algorithm provides justifications to a TMS, which then manages the inherited information. In the second form, the algorithm performs its own truth maintenance and therefore is useable in a system that does not include a TMS.

II. A Frame Language with Defaults and Exceptions

1. Frames

A frame represents an entity in the domain of discourse. Formally, a frame corresponds to a logical constant. A frame includes a collection of own slots that describe binary relationships considered to hold between the entity represented by the frame and other entities in the domain. A frame's collection of own slots necessarily includes MemberOf, which represents the standard set (i.e., class) membership predicate from set theory.

2. Class Frames

A class frame is a frame that represents a collection (i.e., class) of entities in the domain of discourse. Such a class is itself considered to be an entity in the domain of discourse.
Thus, a class frame has associated with it a collection of own slots describing the binary relationships that the class has with other entities. Those own slots include Subclass, SubclassOf, Member, and MemberOf, which represent the standard subset and set membership predicates from set theory. These slots provide the "links" over which inheritance is done. In addition, a class frame has associated with it a collection of prototype slots that describe binary relationships considered to hold between each member of the class represented by the frame and other entities in the domain.

3. Own Slots

An own slot has associated with it a collection of values, each of which represents an entity in the domain of discourse. Formally, an own slot named S has associated with it a binary predicate, which for convenience we will also call S. An own slot S in a frame F having value V corresponds to the assertion S(F, V).

4. Prototype Slots

A prototype slot has associated with it a collection of necessary values, each of which represents an entity in the domain of discourse. Formally, a prototype slot S has associated with it a binary predicate NecS. A prototype slot S in a class frame C having necessary value V corresponds to the assertion NecS(C, V). Predicate NecS is related to predicate S by the following definition (here and in the rest of the paper, free variables are implicitly universally quantified):

NecS(C, V) ≡ ∀x [MemberOf(x, C) ⊃ S(x, V)]

The following theorem follows from this definition and the set theory definition of SubclassOf in terms of MemberOf:

NecS(C, V) ∧ SubclassOf(x, C) ⊃ NecS(x, V)

That is, necessary values of a prototype slot at a class frame representing a class C are also necessary values of the prototype slot at all class frames representing subsets of C. The OPUS inheritance algorithm performs the deductions implied by the definition of NecS and by the theorem by propagating necessary values of prototype slots to all subclasses and class members.

The OPUS frame language without defaults can be characterized as expressing statements of the form S(x, y) and NecS(x, y) for arbitrary first order binary predicates S. The language does not recurse in that it does not represent predicates of the form NecNecS.
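The deductions licensed by the definition of NecS and the theorem above are simple closure computations. The following sketch is an editorial illustration (the function and data layout are assumptions, not OPUS code): it propagates necessary values to all subclasses and derives own-slot values at class members.

```python
def push_necessary(necs, subclass_of, member_of):
    """Close NecS under: NecS(C,V) & SubclassOf(X,C) -> NecS(X,V),
    then derive own-slot values: NecS(C,V) & MemberOf(F,C) -> S(F,V)."""
    changed = True
    while changed:                      # fixpoint over the subclass links
        changed = False
        for (c, slot, v) in list(necs):
            for x, supers in subclass_of.items():
                if c in supers and (x, slot, v) not in necs:
                    necs.add((x, slot, v)); changed = True
    own = {(f, slot, v)
           for (c, slot, v) in necs
           for f, classes in member_of.items() if c in classes}
    return necs, own

necs = {("Elephants", "Species", "elephant")}
subclass_of = {"RoyalElephants": {"Elephants"}}
member_of = {"Clyde": {"RoyalElephants"}}
print(push_necessary(necs, subclass_of, member_of))
# RoyalElephants inherits the necessary value; Clyde gets Species = elephant
```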
B. Adding Defaults and Exceptions

Our goal was to augment the frame language described above to enable class frames to include prototypical descriptions of class members. That is, we wanted to enable prototype slots to have default values that would be inherited to class members as assumed values for the corresponding own slots unless blocked by exceptions. We began by attempting to directly implement the formalism for defaults with exceptions in inheritance networks described by Etherington [Etherington, 1987].

Etherington's formalism is stated entirely in terms of unary class membership predicates. That is, he treats each class C as a unary predicate, C(x), that is true when x is a member of C. He defines a "Membership" link between an object a and a class C to mean a belongs to class C (i.e., C(a)). The OPUS MemberOf own slot corresponds to the membership link. He defines a "Strict IS-A" link between class C1 and class C2 to mean C1's are always C2's (i.e., ∀x [C1(x) ⊃ C2(x)]). The OPUS SubclassOf own slot corresponds to the strict IS-A link. Own slots are treated in Etherington's formalism by considering each slot-value pair (S, V) to be a unary predicate, SV(x), corresponding to the class of all objects having value V for own slot S (e.g., the class of objects having color grey). Given that formalism for own slots, a necessary value V of a prototype slot S in a class frame C is a strict IS-A link between C and SV.

Etherington represents default information in his inheritance networks by "Default IS-A" and "Exception" links. A default IS-A link from class C1 to class C2 means "Normally, C1's are C2's", and is expressed formally by the default logic inference rule:

C1(x) : C2(x) / C2(x)

The interpretation of this rule is: if C1(x) (called the prerequisite) is known, and C2(x) (called the justification where it appears above the line) is consistent with what is known, then C2(x) (called the consequent where it appears below the line) may be concluded.

An exception link has a class at its tail and a default IS-A link at its head. An exception link from class C1 to a default IS-A link from C2 to C3 means "C1's are exceptions to C2's being C3's" (e.g., "Royal elephants are exceptions to elephants being grey"). Etherington provides no independent semantics for an exception link. Instead, he defines it formally as a modification to the default rule corresponding to the link being blocked. However, Doyle has suggested (as reported by Touretzky [Touretzky, 1986]) that if the justification of the default rule corresponding to a default IS-A link contains an additional unary predicate unique to that default, then an exception link blocking that default can be defined to correspond to an assertion of the negation of that predicate for each member of the class at the tail of the link. Following that suggestion, a default IS-A link from class C1 to class C2 would correspond to the default rule:

C1(x) : C2(x) ∧ ¬ExceptionToC1C2(x) / C2(x)

and an exception link from C3 to the default IS-A link from C1 to C2 would correspond to the implication:

∀x [C3(x) ⊃ ExceptionToC1C2(x)]

To add Etherington's default IS-A and exception links to the OPUS frame language, three additional statement forms are used for each binary predicate S: ProExcS, DefS, and SubDefS.

1. ProExcS

ProExcS(C, V, OC) means there is an own exception at each member x of C blocking the inheritance of default value V from class OC to own slot S in x. ProExcS is defined as follows:

ProExcS(C, V, OC) ≡ ∀x [MemberOf(x, C) ⊃ OwnExcS(x, V, OC)]

As was the case for predicate NecS, the definition of ProExcS implies that prototype exceptions are inherited to subclasses. That is:

ProExcS(C, V, OC) ∧ SubclassOf(x, C) ⊃ ProExcS(x, V, OC)

An assertion of the form ProExcS(C, V, OC) corresponds in Etherington's formalism to an exception link from C to a default IS-A link from OC to SV. OwnExcS statements are inferred from ProExcS statements and serve, following Doyle's suggestion, to block default rules at appropriate class members.

2. DefS

DefS(C, V) means that for each member x of C, if it is consistent to assume both that V is a value of own slot S in x and that no own exception at x blocks the inheritance of V for S from C, then it can be inferred that V is a value of own slot S in x. For a given binary predicate S, DefS is defined as follows:

DefS(C, V) ≡ [MemberOf(x, C) : S(x, V) ∧ ¬OwnExcS(x, V, C) / S(x, V)]

DefS(C, V) corresponds in Etherington's formalism to a default IS-A link from C to SV. Defaults asserted at a class as DefS statements are used to infer SubDefS statements at the class and are inherited to all subclasses as SubDefS statements.
That is, the frame language is designed so that the prototype slots at any given class frame C have all the necessary and default values to be inherited by members of C that have been asserted at C or at any of C’s superclasses. For example, the class frame AfricanElephants inherits from class frame Elephants the default value grey for the color prototype slot. Etherington has nothing in his formalism corresponding to that functionality. For a given binary predicate S, SubDefS statements are inferred from DefS statements by the following axiom and default rule: DefS(C, V) 3 SubDefS(C, V, C) SubDefS(C,V,OC)ASubclassOf(C,OC) : yProExcS(C,V,OC) SubDefS(C,VOC) Etherington’s link types and the statement forms we have introduced thus far for OPUS allow exceptions to be stated for specific values from specific origin classes. In practice, however, there is a need to assert collections of exception links. For example, one typically wants to state for a given slot in a given class frame (say the color slot in RoyalElephants) that any default value from any superclass is to be blocked and replaced by a given default value. Such assertions would be second order statements in Etherington’s formalism. We can express them in the OPUS formalism as first order quantified statements as follows: Vu OwnExcS(0, v, OC) Voc OwnExcS(0, V, oc) Vv,oc OwnExcS(0, v, oc) Nado and Fikes 445 Vu ProExcS(C, v, OC) Voc [SubclassOf(C, OC) 3 ProExcS(C, V, oc)] Vu,oc [SubclassOf(C, oc) 3 ProExcS(C, 21, oc)] The quantification of the origin class that is supported for prototype exceptions is only to superclasses of the class to whose members the exception applies. The restriction to superclasses is meant to implement the intuition that defaults at subclasses override defaults at superclasses. For example, a default color for royal elephants overrides a default color for elephants. Thus, we subclasses and class members unless blocked by exceptions. In this section we describe the algorithm in two forms, one assuming the availability of a TMS to maintain the derived results and the other not. In both cases we describe the information associated with each slot in the implementation and the operations performed by the algorithm. A. What’s In A Slot? Each own slot in a frame has associated with it sets of vallles and do not want a quantified prototype exception to block defaults from sibling classes and subclasses, but only from superclasses. (Although, note that the unquantified form of ProExcS blocks defaults from any given class, including sibling classes and subclasses. The useful in that it ability allows to block defaults from siblings may one to express a precedence ordering even though their be of own exceptions. Own exceptions are ordered pa.irs of the form (<value spec>, <origin class spec>), where <value spec> is either a value or the reserved symbol *, and <origin class spec> is either a class or the reserved symbol *. The * symbol matches any or value and thereby ow 11 to quantified defaults between classes subclass-superclass relationship is unknown.) As observed by Touretzky [Touretzky, 19841, the natural partial ordering of defaults in inheritance systems defined by the hierarchical structure of the inheritance graph resolves many ambiguities in an intuitive way. Touretzky introduces an “inferential distance” measure that expresses the desired natural origin class exceptions. corresponds Each prototype slot in a class frame has associated with it sets of necessary values, default values, and prototype exceptions. 
Default values are ordered pairs of the form (<value>, <origin class>) and prototype exceptions are ordered pairs of the form (<value spec>, <origin class spec>). The * symbol in prototype exceptions matches any value or any origin class that is a superclass and thereby corresponds to the desired forms of ordering of defaults and uses that measure to filter out extensions that violate the ordering. In OPUS, that effect is obtained by the explicit quantification of exceptions over superclasses. In Touretzky’s formalism, an exception always blocks a specific default value from all superclasses. Thus, unlike in OPUS, he cannot block all values from superclasses nor can he block values from a given superclass. In summary, for any first order binary predicate S, the OPUS frame language represents statements of the following form (with their Etherington link equivalents where applicable): SC01 v) 0 > -Member- > SV NecS( C, V) C> -IS.A- > SV DefS(C, s/? C>-Def.IS.A->SV SubDefS(C, V, OC) OwnExcS(0, V, OC) Vu OwnExcS(0, v, OC) Voc OwnExcS(0, V, oc) Vu, oc OwnExcS(0, w, oc) ProExcS(C,V,OC) C>-Exe->(OC>-Def.IS.A->SV) Vu ProExcS(C, v, OC) Vv [SubclassOf(C, oc) I) ProExcS(C, V, oc)] Vu, oc [SubclassOf(C, oc) 3 ProExcS(C, 2), oc)] The system does not recurse in that it does not represent NecNecS, DefNecS, etc. Consider how this formalism would be used to express the situation shown in Figure 1. DefColor statements would be used at Elephants and RoyalElephants to express the two defaults, and a quantified prototype exception statement would be used at RoyalElephants to block the inheritance of default colors from all superclasses, as follows: quantified prototype exceptions. B. Inheritance with a TMS In order to perform inheritance using a TMS, each value or exception that is considered for a slot has an assertion (TMS node) associated with it. The assertion’s formula (TMS datum) is as described in Section 2 for the different types of values and exceptions. A value or exception is added to a slot by giving its corresponding assertion a suitable justification, either a primitive justification or a justification recording some deduction external to the inheritance system. A given slot has a particular value or exception just in case the TMS assigns a positive belief status to its corresponding assertion. Demons are associated with each slot that are triggered by the TMS when an assertion concerning the slot is believed for the first time. A demon for a particular value or exception type is responsible for determining which inheritance justifications involving the newly believed assertion should be added to the TMS. Necessary values of prototype slots are inherited to class members as values of own slots via justifications of the following form: NecS(C, v) A MemberOf(Memb, C) 4 S(Memb, v) Necessary values of prototype slots are inherited to subclasses via justifications of the following form: NecS(C, V) A SubclassOf(Csub, C) - NecS(Csub, I’) Prototype exceptions are inherited from classes to class members via justifications of the following form: DefColor(Elephants,Grey) ProExcS(C, V, OC) A MemberOf(Memb, C) DefColor(RoyalElephants,White) + OwnExcS(iWemb, V, OC) Vu, oc[SubclassOf(RoyalElephants, oc) Prototype exceptions are inherited from classes to subclasses 1 ProExcColor(RoyalElephants, v, oc)] via justifications of the following form: . 
B. Inheritance with a TMS

In order to perform inheritance using a TMS, each value or exception that is considered for a slot has an assertion (TMS node) associated with it. The assertion's formula (TMS datum) is as described in Section 2 for the different types of values and exceptions. A value or exception is added to a slot by giving its corresponding assertion a suitable justification, either a primitive justification or a justification recording some deduction external to the inheritance system. A given slot has a particular value or exception just in case the TMS assigns a positive belief status to its corresponding assertion. Demons are associated with each slot that are triggered by the TMS when an assertion concerning the slot is believed for the first time. A demon for a particular value or exception type is responsible for determining which inheritance justifications involving the newly believed assertion should be added to the TMS.

Necessary values of prototype slots are inherited to class members as values of own slots via justifications of the following form:

NecS(C, V) ∧ MemberOf(Memb, C) → S(Memb, V)

Necessary values of prototype slots are inherited to subclasses via justifications of the following form:

NecS(C, V) ∧ SubclassOf(Csub, C) → NecS(Csub, V)

Prototype exceptions are inherited from classes to class members via justifications of the following form:

ProExcS(C, V, OC) ∧ MemberOf(Memb, C) → OwnExcS(Memb, V, OC)

Prototype exceptions are inherited from classes to subclasses via justifications of the following form:

ProExcS(C, V, OC) ∧ SubclassOf(Csub, C) → ProExcS(Csub, V, OC)

Default values of prototype slots are inherited to class members as values of own slots via nonmonotonic justifications of the following form:

SubDefS(C, V, OC) ∧ MemberOf(Memb, C) ∧ OUT[OwnExcS(Memb, V, OC)] → S(Memb, V)

Note that there is no OUT justifier for ¬S(Memb, V) in these justifications as the formal definition of default values requires. Such a justifier is not needed since statements of the form ¬S(Memb, V) cannot be expressed in the frame language and are therefore necessarily out.

Default values of prototype slots are inherited to subclasses via nonmonotonic justifications of the following form:

SubDefS(C, V, OC) ∧ SubclassOf(Csub, C) ∧ OUT[ProExcS(Csub, V, OC)] → SubDefS(Csub, V, OC)

As before, these justifications do not need to have an OUT justifier for ¬SubDefS(Csub, V, OC) because statements of the form ¬SubDefS(Csub, V, OC) cannot be expressed in the frame language and are therefore necessarily out.

Quantified own exceptions are used to generate instantiated own exceptions as needed to block the inheritance of default values that match the quantified form. The instantiated exceptions are produced via justifications of the following forms:

OwnExcS(F, *, OC) → OwnExcS(F, V, OC)
OwnExcS(F, V, *) → OwnExcS(F, V, OC)
OwnExcS(F, *, *) → OwnExcS(F, V, OC)

Quantified prototype exceptions are not inherited. Instead, they are used to generate instantiated prototype exceptions as needed to block the inheritance of default values that match the quantified form. The instantiated exceptions are produced via justifications of the following forms:

ProExcS(C, *, OC) ∧ SubclassOf(C, Csuper) ∧ SubDefS(Csuper, V, OC) → ProExcS(C, V, OC)
ProExcS(C, V, *) ∧ SubclassOf(C, Csuper) ∧ SubDefS(Csuper, V, OC) → ProExcS(C, V, OC)
ProExcS(C, *, *) ∧ SubclassOf(C, Csuper) ∧ SubDefS(Csuper, V, OC) → ProExcS(C, V, OC)

1. Example

Consider the statements that would be asserted and derived by this inheritance mechanism for the example from Figure 1. The inheritance of color Grey from Elephants to RoyalElephants would be done via the following justification:

SubDefColor(Elephants, Grey, Elephants) ∧ SubclassOf(RoyalElephants, Elephants) ∧ OUT[ProExcColor(RoyalElephants, Grey, Elephants)] → SubDefColor(RoyalElephants, Grey, Elephants)

The inheritance of color Grey from Elephants to Clyde would be done via the following justification:

SubDefColor(Elephants, Grey, Elephants) ∧ MemberOf(Clyde, Elephants) ∧ OUT[OwnExcColor(Clyde, Grey, Elephants)] → Color(Clyde, Grey)

The generation of the instantiated prototype exception for Grey at RoyalElephants would be done via the following justification:

ProExcColor(RoyalElephants, *, *) ∧ SubclassOf(RoyalElephants, Elephants) ∧ SubDefColor(Elephants, Grey, Elephants) → ProExcColor(RoyalElephants, Grey, Elephants)

The instantiated prototype exception for Grey at RoyalElephants prevents inheritance of Grey as a default to RoyalElephants. Thus, no justification is generated for inheriting Grey from RoyalElephants to Clyde. Inheritance of the instantiated prototype exception for Grey at RoyalElephants to Clyde would be done via the following justification:

ProExcColor(RoyalElephants, Grey, Elephants) ∧ MemberOf(Clyde, RoyalElephants) → OwnExcColor(Clyde, Grey, Elephants)

That inherited exception would block the inheritance of Grey to Clyde. The inheritance of color White from RoyalElephants to Clyde would be done via the following justification:

SubDefColor(RoyalElephants, White, RoyalElephants) ∧ MemberOf(Clyde, RoyalElephants) ∧ OUT[OwnExcColor(Clyde, White, RoyalElephants)] → Color(Clyde, White)

Since there is no exception at Clyde blocking the inheritance of White from RoyalElephants, White will become the color of Clyde.
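The nonmonotonic justifications above can be read operationally as follows: a justification supports its consequent while all of its IN antecedents are believed and all of its OUT antecedents are absent. A minimal sketch of that reading (illustrative Python with hypothetical names, not the authors' TMS):

    # Illustrative sketch only: evaluating one nonmonotonic justification.
    def justification_holds(believed, in_list, out_list):
        return (all(a in believed for a in in_list) and
                all(a not in believed for a in out_list))

    believed = {
        ("SubDefColor", "Elephants", "Grey", "Elephants"),
        ("MemberOf", "Clyde", "Elephants"),
    }
    ins = [("SubDefColor", "Elephants", "Grey", "Elephants"),
           ("MemberOf", "Clyde", "Elephants")]
    outs = [("OwnExcColor", "Clyde", "Grey", "Elephants")]
    # Default inheritance of Grey to Clyde holds while no own exception
    # for (Grey, Elephants) is believed at Clyde ...
    print(justification_holds(believed, ins, outs))   # True
    # ... and lapses as soon as the instantiated exception is believed.
    believed.add(("OwnExcColor", "Clyde", "Grey", "Elephants"))
    print(justification_holds(believed, ins, outs))   # False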
C. Inheritance Without a TMS

The above inheritance scheme relies on a TMS to remove inherited values when the assertions on which the inheritance was based are removed. For example, if the default color for elephants is removed, then the TMS will also remove Clyde's color if it was in the model only because of the default. Inheritance without the services of a TMS is considerably more complex, since the inheritance machinery must, in effect, provide a truth maintenance capability for inherited values. In order to provide for the removal of inherited values, the OPUS inheritance machinery requires each slot to have both a local and a resultant set of values and exceptions. The local sets are used only by the inheritance algorithm and contain those values or exceptions that are either asserted or are determined by some means other than inheritance. Resultant sets contain all the values and exceptions, including the local ones and those derived by inheritance.

When a value or exception is to be added to (or removed from) a slot, it is added to (or removed from) the appropriate local set and the inheritance machinery recomputes the affected resultant sets for that slot. When the values of the MemberOf (or SubclassOf) own slot of a frame are modified, the inheritance machinery recomputes the resultant sets of each own slot (or prototype slot) of the frame. When a resultant set of a prototype slot is modified, affected resultant sets of all its descendants in the inheritance graph are recomputed. In the paragraphs below, we describe how each type of resultant set is computed. References in the descriptions to values and exceptions are to the resultant sets unless explicitly indicated otherwise.

The set of resultant necessary values for a prototype slot S in a class frame C is the union of the local set of necessary values for S in C and, for each Csuper that is a value of the own slot SubclassOf in C, the set of necessary values for prototype slot S in Csuper.

The set of resultant default values for a prototype slot S in a class frame C consists of the local default values for S in C and, for each Csuper that is a value of the own slot SubclassOf in C, the default values for prototype slot S in Csuper that do not match an exception for S in C.

The set of resultant values for an own slot S at a frame F consists of the local values for S at F and, for each C that is a value of the own slot MemberOf in F, the necessary values for prototype slot S in C and the default values for prototype slot S in C that do not match an own exception for S in F.

The set of resultant exceptions for an own slot S in a frame F is the union of the local set of exceptions for S in F and, for each C that is a value of the own slot MemberOf in F, the set of exceptions for prototype slot S in C.

The set of resultant exceptions for a prototype slot S in a class frame C consists of the local instantiated exceptions for S in C; for each Csuper that is a value of the own slot SubclassOf in C, the exceptions for prototype slot S in Csuper; and each pair (V, OC) that matches a local quantified exception for S in C and is a default value for prototype slot S in some Csuper that is a value of the own slot SubclassOf in C. Note that quantified exceptions remain in the local set and are not inherited. Quantified exceptions produce instantiated exceptions as needed to block defaults that would otherwise be inherited.

1. Example

Figure 2 shows the local and resultant values and exceptions produced by the inheritance algorithm for our elephants example. The default (Grey, Elephants) at Elephants and the quantified exception (*, *) at RoyalElephants would cause an instantiated exception (Grey, Elephants) to be generated at RoyalElephants. That instantiated exception would be inherited to Clyde. The exception at Clyde would block inheritance of the (Grey, Elephants) default from Elephants. The default (White, RoyalElephants) at RoyalElephants would be inherited to Clyde as Clyde's color.

Figure 2: Inheritance without a TMS (local and resultant defaults, exceptions, and values for the Color slot at Elephants, RoyalElephants, and Clyde)
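The recomputation just described can be sketched roughly as follows (illustrative Python with hypothetical classes, not the KEE code; for brevity, quantified exceptions are applied directly as matchers rather than instantiated, and exceptions are simply pooled up the SubclassOf chain):

    # Illustrative sketch only: resultant defaults without a TMS.
    STAR = "*"

    def matches(exc, value, origin):
        value_spec, origin_spec = exc
        return value_spec in (STAR, value) and origin_spec in (STAR, origin)

    class ClassFrame:
        def __init__(self, name, supers=()):
            self.name, self.supers = name, list(supers)
            self.local_defaults = set()     # (value, origin_class) pairs
            self.local_exceptions = set()   # (value_spec, origin_spec) pairs

        def resultant_exceptions(self):
            excs = set(self.local_exceptions)
            for s in self.supers:
                excs |= s.resultant_exceptions()
            return excs

        def resultant_defaults(self):
            # Local defaults plus unblocked defaults from superclasses.
            result = set(self.local_defaults)
            blocked = self.resultant_exceptions()
            for s in self.supers:
                for (v, oc) in s.resultant_defaults():
                    if not any(matches(e, v, oc) for e in blocked):
                        result.add((v, oc))
            return result

    elephants = ClassFrame("Elephants")
    elephants.local_defaults.add(("Grey", "Elephants"))
    royal = ClassFrame("RoyalElephants", supers=[elephants])
    royal.local_defaults.add(("White", "RoyalElephants"))
    royal.local_exceptions.add((STAR, STAR))
    print(royal.resultant_defaults())   # {('White', 'RoyalElephants')}

This reproduces the outcome of the example: the quantified exception at RoyalElephants blocks (Grey, Elephants), leaving (White, RoyalElephants) to be inherited by Clyde.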
IV. Conclusion

We have presented a formal description of a frame language that makes a clear distinction between necessary and default values of prototype slots. The formalization is based on previous work by Etherington, but extends his formalism to more closely match the structure of frame languages and to allow more convenient overriding of defaults at superclasses by defaults at subclasses. We have presented two distinct methods for implementing the inferences warranted by the formal description of the frame language. The first makes use of nonmonotonic justifications in a TMS to record inferences corresponding to default inheritance. This method is suitable for situations in which a TMS is needed in order to maintain conclusions derived from non-inheritance inferences or to implement context-relative inheritance. The second method, in effect, implements a more efficient, special purpose truth maintenance algorithm in order to maintain the validity of inherited values. It is appropriate for situations in which a general purpose TMS is not needed.

A topic of current investigation is how to combine the two methods into a single system in which the special-purpose algorithm is used whenever possible. In many applications, general knowledge about the relationships among classes of objects in the domain and default values of prototype slots is entered directly by the domain expert and does not vary during the course of problem solving. The membership of individuals in classes and the values of own slots are more likely to be inferred during problem solving and to vary with hypothetical context. These considerations suggest that the special purpose algorithm can be used for maintaining inherited values in the upper regions of a taxonomy, with the TMS method being used as appropriate in the lower, more problem-dependent regions.

References

[Brachman, 1985] Brachman, R.J. "I Lied about the Trees" Or, Defaults and Definitions in Knowledge Representation. AI Magazine 6(3):80-93, 1985.

[Brachman et al., 1983] Brachman, R.J., Fikes, R.E., and H.J. Levesque. KRYPTON: A Functional Approach to Knowledge Representation. Computer 16(10):67-74, 1983.

[Brachman and Schmolze, 1985] Brachman, R.J., and J.G. Schmolze. An Overview of the KL-ONE Knowledge Representation System. Cognitive Science 9(2):171-216, 1985.

[Doyle, 1979] Doyle, J. A Truth Maintenance System. Artificial Intelligence 12(3), 1979.

[Etherington, 1987] Etherington, D.W. Formalizing Nonmonotonic Reasoning Systems. Artificial Intelligence 31:41-85, 1987.

[Fahlman, 1979] Fahlman, S.E. NETL: A System for Representing and Using Real-World Knowledge. MIT Press, Cambridge, Massachusetts, 1979.

[Fikes and Kehler, 1985] Fikes, R. and T. Kehler. The Role of Frame-Based Representation in Reasoning. Communications of the ACM 28(9):904-920, 1985.

[Morris and Nado, 1986] Morris, P.H., and R.A. Nado. Representing Actions with an Assumption-Based Truth Maintenance System. In Proceedings AAAI-86, Philadelphia, 1986.

[Roberts and Goldstein, 1977] Roberts, R.B. and I.P. Goldstein. The FRL Manual. MIT AI Memo 409, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, September 1977.

[Touretzky, 1984] Touretzky, D.W. Implicit Ordering of Defaults in Inheritance Systems. In Proceedings AAAI-84, pages 322-325, Austin, Texas, 1984.

[Touretzky, 1986] Touretzky, D.W. The Mathematics of Inheritance Systems. Morgan Kaufmann, Los Altos, California, 1986.
A Strategy for Implementing Assimilation in Knowledge Bases

Jane Terry Nutter
Department of Computer Science
Virginia Polytechnic Institute and State University
Blacksburg, Virginia 24060

Abstract

Assimilation is a process by which a knowledge base restructures itself to improve the organization of and access to information in the base. This paper presents a strategy for implementing assimilation in propositional knowledge bases which distinguish between the axioms of the system's knowledge (called the context) and the derived consequences of those axioms (called the belief space). The strategy in question takes advantage of housekeeping phases in which the system discards accumulated clutter to discover useful patterns of access on the basis of which the context can be reorganized. Unused axioms are replaced by their more useful consequences; derivable generalizations that shorten common inference paths are added to the belief space.

I. Introduction

Systems that use propositional knowledge bases must choose how to organize those bases: what information to make explicit and what information implicit, how to structure the explicit information, how much information to store and how much to infer as needed, and so on. Up to now, these decisions have rested with system designers. Fundamental choices have been made before implementation, in selecting a particular set of propositions to begin with and in decisions such as whether to retain results of inferences (final results, intermediate, or both). Changing these decisions usually involves manual intervention. To change what is explicitly included, the designers go in by hand and take out some propositions, put in others, and so on. To change what information is automatically added and retained, more extensive and costly alterations must be made.

How systems organize their knowledge obviously affects their performance. But the nature of a given domain does not as a rule dictate a single best organization of its information. On the contrary, what organization is best for a given system depends on the circumstances under which the system is to be used, and on the interests and desires of its users. These circumstances, interests, and desires vary from one system to another in a single domain, and even over time for a single system. As a consequence, getting the decisions right at design time requires extensive customizing, if indeed it is possible at all.

The alternative is to get the systems to reorganize their knowledge bases themselves. This course has several evident advantages. It minimizes the impact of errors in initial decisions, it lets systems adapt to the environments in which they are used, and it provides a way for systems to optimize their knowledge bases relative to the inferences they are actually called on to make. The approach reported here hypothesizes that this function can usefully be combined with automated "forgetting": at regular intervals, the system examines the portions of its belief space which have not been accessed recently, discards some of the information as useless, and reorganizes the rest to reflect the way the information has proved useful. This combined process of restructuring and controlled forgetting is called assimilation; this paper describes a strategy for assimilating information automatically which contributes toward the above goals at the same time that it lets systems "housekeep" to remove information clutter that is not proving useful.
In the process, the system forms useful generalizations on the basis of the patterns of use and discovery of information.

The strategy proposed here can be adapted to a variety of system architectures, but the discussion will be in terms of a knowledge base implemented in SNeBR [Martins, 1983a] [Martins, 1983b] [Martins and Shapiro, 1983] [Martins and Shapiro, 1984] [Martins and Shapiro, 1986a] [Martins and Shapiro, 1986b] [Martins and Shapiro, 1986c], a semantic network architecture with a relevance-logic style belief revision system, implemented on SNePS [Shapiro, 1979] [Shapiro and Rapaport, 1986] and augmented with a monotonic logic for reasoning with default-style generalizations [Nutter, 1983a] [Nutter, 1983b]. This section describes the aspects of that architecture which contribute significantly to the discussion of assimilation.

Propositions are represented as fragments of network, with logical relations (connectives, quantifiers, etc.) represented by reserved arcs. Since quantification is over nodes, which may represent individuals, properties, propositions, or any other objects of thought, the logic in question is higher order. In addition, the mapping between SNePS representations and propositions in standard higher-order predicate logic is not always one-one. For instance, SNePS relations can take a set (as opposed to a tuple) of arguments. Hence for a symmetric relation R, SNePS can express in a single atomic proposition node the same information as two first order atomic propositions R(a,b) and R(b,a); and the SNePS representation eliminates axioms of symmetry altogether. However, for the purposes here, the SNePS representation can be treated as equivalent to a representation in standard logic (first or higher order), augmented by a capacity for representing default generalizations.

The default reasoning system introduces a logical operator p, which takes a proposition or formula and marks it as uncertain (p can be read roughly as "presumably"). For the purposes here, the only significant rule about p is that p's are "inherited" through inferences. That is, suppose that a proposition φ can be inferred from a set of propositions Ψ = {ψ1, ..., ψn}, and suppose that for 1 ≤ i ≤ n, ψi′ = ψi or ψi′ = pψi. Then from Ψ′ = {ψ1′, ..., ψn′} the system can infer pφ. For a discussion of the default logic, see [Nutter, 1983a].

The belief revision system introduces the concepts of contexts and belief spaces. A context is a set of propositions taken as hypotheses which form the deductive basis of a belief space, and which may be thought of as the axiom set of a potential agent's beliefs. The belief space associated with a given context contains all propositions which have been deduced from that context. The belief space does not automatically include all propositions entailed by the context, since agents are taken as knowing (or believing) only those propositions actually inferred (and those trivially subsumed by them). As propositions are deduced from hypotheses in a context and other members of the associated belief space, those propositions not trivially subsumed by propositions already present are added to the belief space, along with other propositions proved along the way. Propositions may belong to several belief spaces, i.e., the belief spaces of different contexts can and frequently will overlap.
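As a rough data-model sketch of these ideas (illustrative Python with hypothetical names, not SNeBR itself, and anticipating the sets-of-support introduced just below): a derived proposition belongs to the belief space of a context exactly when some deduction of it used only hypotheses in that context.

    # Illustrative sketch only.
    from dataclasses import dataclass, field

    @dataclass
    class Proposition:
        text: str
        supports: list = field(default_factory=list)   # frozensets of hypotheses

        def in_belief_space_of(self, context):
            return any(s <= context for s in self.supports)

    bird_rule = "all x (Bird(x) => Has-feathers(x))"
    ostrich_rule = "all x (Ostrich(x) => Bird(x))"
    oscar = "Ostrich(Oscar)"
    cc = frozenset({bird_rule, ostrich_rule, oscar})
    feathered = Proposition("Has-feathers(Oscar)",
                            [frozenset({bird_rule, ostrich_rule, oscar})])
    print(feathered.in_belief_space_of(cc))                  # True
    print(feathered.in_belief_space_of(frozenset({oscar})))  # False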
In addition to modeling the system's beliefs and beliefs of other agents, contexts provide settings for hypothetical reasoning, and so on. At any given time, there is a distinguished context called the current context (CC), which contains the hypotheses of the system's belief space. This is the context we will be most concerned with here.

Propositions in a belief space have associated deductive histories (one for each deduction which had the proposition as its conclusion). Each deductive history includes a record of the proposition's set-of-support, which is the list of hypotheses actually used in deriving the conclusion in that particular deduction (where a previously proved proposition A is used, the set-of-support includes A's set-of-support, not A itself). Belief spaces can be identified using sets-of-support: a proposition is in the belief space of a context C provided that it has at least one deductive history whose set-of-support is a subset of C.

The assimilation strategy proposed here, then, is targeted on systems whose knowledge base can be construed as having the structure given in Figure 1. The effect of assimilation will be to alter both the contents and the structure of the CC and its associated belief space.

Figure 1. Structure of the Knowledge Base

III. Assimilation and Forgetting

The essential idea behind this approach to assimilation is to make use of internal housekeeping cycles for discarding "clutter". The architecture described above saves not only all results of deductions (answers to top-level user questions, for instance), but also all intermediate results not trivially subsumed by known propositions. For instance, say that the CC contains representations for the propositions ∀x(Bird(x) ⊃ Has-feathers(x)), ∀x(Ostrich(x) ⊃ Bird(x)) and Ostrich(Oscar), and suppose that the system is asked whether Oscar has feathers. Upon completing the deduction, the system will add representations of Has-feathers(Oscar) and Bird(Oscar). For questions that require realistic amounts of inference, these intermediate results can increase the size of the knowledge base significantly. The system could simply choose not to add the intermediate results, but then it will have to repeat the same chains of reasoning, sometimes quite long ones. In other words, the question whether to retain these intermediate results represents a classic time-space trade-off, in this case a choice between reinventing the wheel and remembering everything the system has ever known, however trivial. The ideal answer would be to retain only those results which the system will actually want and find useful in the future. The key idea here is that these may be easier to identify in retrospect than in advance, and identifying them in retrospect can be worthwhile.

Suppose that the system time-stamps every proposition in the knowledge base every time that the proposition is accessed. ("Accessed" here means actually used, not just involved in a set-of-support manipulation, for instance.) Suppose also that a latency time period t is chosen to reflect "recent use". Then at regular intervals, the system can scan for propositions which have not been accessed in the period from now - t to now. One simple rule would be the following: if these propositions do not belong to the CC, then they constitute clutter and can be forgotten.

But not all clutter is useless. For example, suppose that an element of the CC has not been accessed in the latency time.
It may be that instead of using that proposition in making deductions, the system is using other members of the belief space which in fact shorten deductive chains. Those other members of the belief space may have been proved using the original hypothesis; it cannot simply be dropped without curtailing the belief space and losing information that is actively in use. But it may be possible to replace that hypothesis by one or more of the propositions in the belief space. That is, the fact that an hypothesis has not been used in a while may show that the current axiomatization is less efficient than an available alternative. This alternative provides a restructuring of the belief space to adapt to the system's environment.

Even unaccessed propositions in the belief space but outside the CC may not be simple clutter. For instance, suppose that the system has unaccessed propositions of the form f(a) for different instances of a. If the system does not contain ∀x f(x), and if that is true, maybe it should add it. That is, instances of concrete propositions of the same form may reflect frequently followed paths of inference which can be shortened.

So to optimize the balance of time wasted on repeated inferences versus space wasted on useless results, the system performs controlled forgetting, checking first that no crucial information will be lost. But before simply forgetting any single item, the system reflects on it to see whether it indicates a useful restructuring or other adaptation of the belief space. It is this reflection and consequent altering and restructuring which constitute assimilation.

IV. The Assimilation Strategy

Select a latency time t to represent the longest period for which a proposition may be unused without being considered for assimilation or forgetting. On each assimilation cycle, the system scans through the knowledge base to form the set Φ = {φ1, ..., φn | for 1 ≤ i ≤ n, φi has not been accessed for at least t}; propositions in Φ are called stale. Because the strategy is sensitive to the order in which the members of Φ are considered, they are ordered with the most recently accessed last. That is, for i < j, time-of-last-access(φi) ≤ time-of-last-access(φj).

For each φi, there are three possibilities: φi is in the CC, φi is not in the CC but is in the system's belief space, or φi is in neither the CC nor the system's belief space. We take these up in turn.

A. φi is in the CC

Either φi can be proved from CC - {φi} or it cannot. If it can, then it can be dropped without loss, since any inferences which can be made using it can also be made without it, and apparently they are. Before dropping φi, however, the system must find any propositions in the belief space in whose set-of-support φi occurs and replace it in those sets-of-support by the set-of-support of the proof of φi from CC - {φi}. (In SNeBR, finding these propositions is simply checking for a link, and does not require scanning the knowledge base. Implementations on other systems may involve a higher search cost.) The system can then forget φi (i.e., delete it from the knowledge base).

This is the provision which makes the order in which stale propositions are considered matter. Suppose that φi and φj both belong to CC. Then it may be that φi can be inferred from CC - {φi} and that φj can be proved from CC - {φj}, but φi cannot be proved from CC - {φi, φj} and neither can φj. In this case, only one can be forgotten. Since the one considered first will be the one forgotten, the one that has gone unused longer should be considered first.
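As a concrete illustration of this first subcase, here is a small sketch (illustrative Python, all names hypothetical; `provable` stands in for whatever theorem prover the host system supplies, returning the support set of a proof or None) of dropping a stale hypothesis that is derivable from the rest of the current context:

    # Illustrative sketch only: case A, first subcase.
    def try_drop_redundant_hypothesis(phi, cc, belief_space, provable):
        rest = cc - {phi}
        proof_support = provable(phi, rest)   # None, or hypotheses used
        if proof_support is None:
            return False                      # phi carries real information
        # Splice phi out of every set-of-support that mentions it.
        for prop in belief_space:
            prop.supports = [
                (s - {phi}) | proof_support if phi in s else s
                for s in prop.supports
            ]
        cc.discard(phi)                       # forget phi
        return True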
In the second and more interesting case, φi cannot be derived from CC - {φi}. Either φi occurs in one or more sets-of-support, or it does not. If it doesn't, then it seems reasonable to forget it: if the system has never needed this piece of information before, it is unlikely to do so later. (Hypotheses which are anticipated to be needed rarely but crucial when they are can be flagged to prevent deletion.) A slightly more liberal strategy would drop φi provided that it occurs only in the sets-of-support of stale propositions; this effect can be obtained by ordering Φ so that all members of the system's belief space that are not in the CC come before any that are.

If φi cannot be proved from CC - {φi} but does occur in sets-of-support, it may still be inferrable from the belief space. Since everything in the belief space that is not in CC can be inferred from it, the only beliefs not in CC that we have to consider are those with φi in their set-of-support. Let B(φi) = {ψ | ψ is in the system's belief space, ψ ∉ Φ, and φi is in the set-of-support of ψ}, and consider CC′ = CC ∪ B(φi) - {φi}. Suppose that φi can be proved from CC′. This means that CC′ contains the information of φi, in a form that the system finds more useful: it is actually accessing the members of B(φi), but not φi itself.

Suppose there is only one proof of φi from CC′. Let B′(φi) be the subset of B(φi) appearing in the set-of-support of that proof. Then add B′(φi) to CC, replace φi by B′(φi) in all sets-of-support, and forget φi. This restructures the CC to adapt to the actual pattern of use.

Suppose on the other hand that there are several proofs of φi from CC′ and that they have different associated sets B′(φi). Then the system must choose which propositions in its belief space to elevate to the CC. The system could choose the smallest set. This corresponds to the principle that axiom sets should be kept as small as possible, and so favors space over time. Alternatively, it could choose the set whose elements collectively have the largest number of recent references. This is the more interesting option, because the more adaptive: it adds the propositions most used and hence presumably most useful. It is also possible to combine these, to select a relatively small set with relatively high usefulness.

B. φi is in the system's belief space but not in the CC

Before forgetting stale members of the system's belief space, the system checks whether they suggest useful generalizations. Suppose that φi contains at least one individual constant. Check the belief space for other propositions φi′ which differ from φi only in individual constants. (In the case of SNeBR, this check costs relatively little because the propositions in question share structure with φi at all relational constants. In other specific architectures, it may be more costly.) If other such φi′ exist, form the proposition φi* by replacing all individual constants that the φi′ do not hold in common by variables and then quantifying universally over those variables. If φi* is already in the system's belief space, do nothing at this point. If it is not, try to deduce it, and if successful, add it to the belief space. If φi* cannot be deduced, try to deduce pφi*.
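The generalization step just described can be sketched roughly as follows (illustrative Python with hypothetical names; propositions are simplified to predicate/argument tuples, which glosses over SNePS's network representation):

    # Illustrative sketch only: form phi* by generalizing the argument
    # positions on which same-form propositions disagree.
    from itertools import count

    def generalize(phi, belief_space):
        pred, args = phi[0], phi[1:]
        same_form = [q for q in belief_space
                     if q[0] == pred and len(q) == len(phi) and q != phi]
        if not same_form:
            return None
        fresh = (f"?x{i}" for i in count(1))
        gen = tuple(
            a if all(q[k + 1] == a for q in same_form) else next(fresh)
            for k, a in enumerate(args)
        )
        return (pred,) + gen    # read as universally quantified over the ?x's

    beliefs = {("Feathered", "Oscar"), ("Feathered", "Olga"), ("Bird", "Oscar")}
    print(generalize(("Feathered", "Oscar"), beliefs))   # ('Feathered', '?x1')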
Next, check Φ for other propositions which have at least one individual constant in common with φi. For each such φj, consider the pair {φi, φj} and look for pairs {φi′, φj′} where φi′ corresponds to φi as above, φj′ corresponds to φj, and φi′ and φj′ share the corresponding constant. Let φij = φi ⊃ φj, φji = φj ⊃ φi, φipj = φi ⊃ pφj, and φjpi = φj ⊃ pφi, and form φij*, φji*, φipj*, and φjpi* as before. These represent hypothesized "if-then" universal rules (and corresponding default generalizations) which might shorten inference paths. Try to deduce each of these from the CC; any which can be deduced should be added to the belief space. Finally, forget φi and any of the φi′ which are also in Φ and not in the CC.

C. φi is in neither the CC nor the system's belief space

In this case, φi is important to the system only if it is in a belief space which the system "cares about". For φi to have entered the knowledge base without belonging to the system's belief space, one of several things must have been true. Either it was once in the belief space but left it because at least one of the hypotheses in the set-of-support for φi was dropped from the CC (not forgotten, but found to be false), or φi was deduced in the course of a deduction involving hypothetical reasoning, or φi was deduced in the course of reasoning about some other agent's beliefs.

If φi was involved in hypothetical reasoning, and if the context relative to which φi was deduced (or hypothesized) often matters to the system, the system may want to protect that context by flagging it in such a way that the rules for assimilating information in the system's belief space also hold relative to that context's belief space. This amounts to saying that some hypothetical situations matter enough to the system that it is worth maintaining information about them the same way that it is maintained about (what the system regards as) the actual situation. Likewise, if φi belongs to the context or belief space of another agent about whom the system frequently must reason, the context for that agent may also be protected.

If φi is an hypothesis of an unprotected context, forgetting φi involves also forgetting propositions in whose set-of-support φi figures. The strategy therefore prohibits forgetting φi if it occurs in every set-of-support for one or more propositions not in Φ, since the existence of such non-stale propositions indicates that the context, although unprotected, is still active. In all other cases, if φi does not belong to the belief space of any protected context (including the CC), it can simply be forgotten.
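Putting the three cases together, one assimilation pass might be organized along these lines (again an illustrative Python sketch with hypothetical handlers and knowledge base attributes, not the SNeBR implementation):

    # Illustrative sketch only: one housekeeping pass over the knowledge base.
    import time

    def assimilation_pass(kb, t, handle_cc, handle_belief, handle_other):
        now = time.time()
        stale = [p for p in kb.propositions if now - p.last_access >= t]
        stale.sort(key=lambda p: p.last_access)   # least recently used first
        for phi in stale:
            if phi in kb.cc:
                handle_cc(phi, kb)            # case A: replace or drop
            elif phi in kb.system_belief_space():
                handle_belief(phi, kb)        # case B: generalize, then forget
            else:
                handle_other(phi, kb)         # case C: unprotected contexts
    # The more liberal variant described above orders belief-space members
    # ahead of CC members, e.g. key=lambda p: (p in kb.cc, p.last_access).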
V. Discussion

A. Information Loss

Most of the strategy outlined above is straightforward both in implications and in implementation. Potentially controversial decisions surround instances of forgetting in which information is actually lost. These arise whenever something is forgotten which cannot be deduced from its context, that is, when hypotheses of active contexts are dropped. This happens when a stale hypothesis cannot be derived from the rest of its context and has no "fresh" consequences.

In the case of the CC, the rationale for forgetting the hypothesis is that if it hasn't been needed yet, it probably won't be. How good this rationale is depends on how long the system has been around: in the early phases of system use, some areas may simply not have been got to yet. It follows that it may be reasonable to have a second latency time tc which is checked before a stale hypothesis in the CC is discarded. That is, to determine whether the hypothesis is stale, the same threshold is used as for any other proposition. But before discarding the hypothesis, the system checks whether the hypothesis has been latent longer than tc. The more conservative the system, the longer tc should be. Alternatively, protection can be taken as absolute for hypotheses: they can be replaced by other propositions so long as they remain deducible, but in no case can they be forgotten without being reconstructible.

For hypotheses of unprotected contexts, the rationale for forgetting the information is that it is the only way to get rid of outdated contexts. Since hypothetical contexts arise routinely in the course of reasoning with certain inference rules, it is desirable to be able to get rid of them later: contexts and intermediate conclusions which existed only in order to work through a single proof-by-cases constitute true clutter, and should be forgotten. The price for this is that if for any reason a context which should be protected isn't, information about it may be irretrievably lost.

B. Extensions of the Strategy

The strategy can be extended in several interesting ways. The most intriguing possibility occurs when φi belongs to CC, and when φi can almost be inferred from CC′, but not quite. That is, most of the information of φi has been captured by inferences already made, but there is some φi+ which is not there. What we would like the system to do is find simultaneously the weakest and the most interesting φi+ such that CC′ + φi+ entails φi. The system would then add φi+ and the appropriate B′(φi) to CC, making the correct alterations to sets-of-support and forgetting φi. The obvious problem here is to get the system to formulate φi+. Other options include finding more sophisticated patterns in stale instances than the simple implications described above, and considering when the system should take information as suggesting (default) generalizations even when it can't prove them.

C. Computational Cost of the Strategy

Despite the features of SNePS and SNeBR mentioned above which reduce search costs, assimilation passes are obviously very expensive. Worse, their cost rises rapidly as the number of stale propositions increases, since the amount of inference required grows rapidly. This might seem to suggest that passes should be run frequently (at least relative to t), to hold down the size of Φ. Unfortunately, the usefulness of each pass also increases in proportion to the size of Φ. This suggests that passes should be infrequent, ideally off-line or at very low use times, and that t should be large. The latter also follows from the desire not to discard potentially useful intermediate results too quickly. On the bright side, as the system adapts to its environment, the number of major changes should decrease, resulting in a natural reduction in assimilation cost.

VI. Conclusion

The strategy described here provides mechanisms for systems to assimilate information in response to the actual patterns in which they have been called on to use that information. It allows them to eliminate clutter while retaining useful intermediate deductive results, thus avoiding repeating inferences while lessening the cost in space.
More interesting, it also lets them restructure their belief spaces, promoting important derived principles to axiom status and demoting less useful axioms to belief status or forgetting them altogether. This lessens the need for system designers to anticipate the environment in which the system will be used (including the precise questions it will be asked) by letting systems adapt their axiom structures to their use environments, and within a single environment to changes in emphasis over time. The strategy is indifferent to the area of application, and while it was designed for a particular knowledge base architecture, it can be readily adapted to other architectures so long as they retain an essentially axiomatic structure (that is, so long as they have a distinction between context and belief space). It is thus a very general approach to self-reorganization in the particular area of information assimilation.

References

[Martins, 1983a] João P. Martins. Belief revision in MBR. In Proceedings of the 1983 Conference on Artificial Intelligence, Rochester, Michigan, 1983.

[Martins, 1983b] João P. Martins. Reasoning in Multiple Belief Spaces. Ph.D. Dissertation, Technical Report 203, Department of Computer Science, SUNY at Buffalo, May 1983.

[Martins and Shapiro, 1983] João P. Martins and Stuart C. Shapiro. Reasoning in multiple belief spaces. In Proceedings IJCAI-83, pages 370-373, Karlsruhe, Federal Republic of Germany, International Joint Committee for Artificial Intelligence, August 1983.

[Martins and Shapiro, 1984] João P. Martins and Stuart C. Shapiro. A model for belief revision. In Non-monotonic Reasoning Workshop, pages 241-294, New Paltz, New York, American Association for Artificial Intelligence, October 1984.

[Martins and Shapiro, 1986a] João P. Martins and Stuart C. Shapiro. Theoretical foundations for belief revision. In Joseph Y. Halpern, editor, Theoretical Aspects of Reasoning About Knowledge, pages 383-398, Morgan Kaufmann Publishers, Los Altos, California, 1986.

[Martins and Shapiro, 1986b] João P. Martins and Stuart C. Shapiro. Hypothetical reasoning. In Applications of Artificial Intelligence to Engineering Problems: Proceedings of the First International Conference, pages 1029-1042, Southampton, U.K., University of Southampton, 1986.

[Martins and Shapiro, 1986c] João P. Martins and Stuart C. Shapiro. Belief revision in SNePS. In Proceedings of the Sixth Canadian Conference on Artificial Intelligence, pages 230-234, Montreal, Quebec, Canadian Society for Computational Studies of Intelligence, May 1986.

[Nutter, 1983a] Jane Terry Nutter. Default reasoning using monotonic logic: a modest proposal. In Proceedings AAAI-83, pages 297-300, Washington, D.C., American Association for Artificial Intelligence, August 1983.

[Nutter, 1983b] Jane Terry Nutter. Default Reasoning in A.I. Systems. Technical Report 204, Department of Computer Science, SUNY at Buffalo, October 1983.

[Shapiro, 1979] Stuart C. Shapiro. The SNePS semantic network processing system. In Nicholas V. Findler, editor, Associative Networks: The Representation and Use of Knowledge by Computers, pages 179-203, Academic Press, New York, 1979.

[Shapiro and Rapaport, 1986] Stuart C. Shapiro and William J. Rapaport. SNePS considered as a fully intensional propositional semantic network. In Proceedings AAAI-86, pages 278-283, Philadelphia, Pennsylvania, American Association for Artificial Intelligence, August 1986.
Allard and William F. Kaemmerer(1)
Artificial Intelligence Department
Honeywell Corporate Systems Development Division
1000 Boone Avenue North
Golden Valley, Minnesota 55427

Abstract

We have developed and implemented a plan representation system which has been used as the knowledge representation for COOKER, a real-time process monitoring and operator advisory system for batch manufacturing processes. This representation (called "Goal/Subgoal" or "GSG") associates two hierarchies of subgoals with each goal: a sequence of subgoals which need to be satisfied to satisfy the superior goal, and a set of requisite subgoals which must remain satisfied throughout the process of satisfying the superior goal. By explicitly representing correct process operating behavior instead of the infinite space of problem behaviors, a broad range of process operation anomalies can be recognized and diagnosed in terms of a single, simple description of the system. In this paper we compare GSG to our first approach at representation, describe the GSG representation, show how goals are used to monitor processes, and describe some results of our installation of COOKER in a manufacturing plant.

I. Introduction

A representation of batch manufacturing processes has been developed, implemented, and installed in a factory as part of a system which uses a goal and subgoal representation to monitor the plant in real-time and provide the plant's operator with advice about its operation. This "Goal/Subgoal", or "GSG" representation uses two hierarchies of subgoals attached to each goal to represent both the sequence of subgoals which need to be satisfied in order to satisfy a superior goal (i.e., the "phases" of a process), and those subgoals which need to remain satisfied throughout the process of satisfying a superior goal (i.e., the "requisites" of a process). The hierarchy of sequenced subgoals is used to represent the "batch" nature of a manufacturing process, and the hierarchy of sets of requisite subgoals is used to represent the "continuous" nature of a process. This representation was adopted from representations in the planning literature.

The GSG approach has several advantages over a knowledge representation scheme we initially used in the project. The initial approach used a set of rules for recognizing phase transitions within the batch manufacturing process, expectations for conditions and events, a set of rules for recognizing problems within phases, a set of diagnostic rules describing problem/cause trees, and another set of rules describing fixes for verified problem/cause tree leaves. In this approach, the primary objects were problems. Conversely, GSG describes the space of behaviors in which the process is working correctly. By focusing explicitly on this space, the system can recognize behaviors falling outside of the process description as problem behaviors. This is an improvement over approaches which try to explicitly describe the infinite space of problem behaviors. While both approaches are capable of recognizing and diagnosing the same sets of problems, GSG promotes an iterative knowledge engineering approach which first results in a simple knowledge base that recognizes the presence of all problems which affect monitored variables, but diagnoses very few of these problems.

(1) This report describes work performed at Honeywell. Mr. Allard's current address is Gensym Corporation, 125 Cambridge Park Drive, Cambridge, MA 02140. Please address correspondence to Dr. Kaemmerer.
Further knowledge base work can then focus on the ability to diagnose a broader range of problem causes. On the other hand, a problem/cause tree approach promotes the generation of a forest of these trees. The resulting knowledge base permits diagnosis of all problems it contains, but it may never reach the point where it covers the full set of problems. The GSG representation also unifies all representation of the manufacturing process into one structure, eliminating the redundancy within the distinct rule sets of the problem/cause tree approach. Another advantage of GSG is that it helps split the representation of process information away from the methodology being used to utilize that information. Splitting these two was especially advantageous for us, since we were concurrently developing the knowledge base and the methodology for applying that knowledge.

The GSG representation is a part of COOKER, an implemented expert system which monitors a batch manufacturing process and provides real-time advice to the operator. COOKER has several functions: It provides process operators with a continuous identification of the current phase of the process. It assists the operator in avoiding and/or recovering from undesirable process conditions by detecting unexpected changes in process variables and informing the operator of them via a textual description. Then, it advises the operator of actions that should be taken to avoid or recover from the problem, indicates the degree of urgency of the advised actions, and provides the operator with a notification when the undesirable process conditions are corrected. On request, the system provides explanations of the rationale behind its suggestions. Finally, if COOKER's resources are insufficient to recommend a safe, appropriate response, it refers the operator to his or her shift coordinator.

Currently, COOKER runs on a Symbolics 3640 Lisp machine connected via an IBM AT microcomputer to a Honeywell TDC 2000 Process Control System and a programmable logic controller. COOKER has four main subsystems: the data frames, data gatherer, operator interface, and inference engine. The latter three subsystems run as concurrent processes. The data gatherer sends data requests to, and buffers replies from, the AT. The operator interface manages all windows, displaying advice and questions to the operator, and receiving replies. The inference engine handles unbuffering data from the data gatherer into data frames, receives replies from the operator, runs the monitoring and problem recognition mechanism on the goals, and runs any problem solving required by the goals.

The balance of this paper provides an overview of COOKER's capabilities, our initial Problem/Cause Tree approach to knowledge representation, the GSG slots and methods for process monitoring, and some conclusions about our system.

II. Initial Approach

A. Domain Features

Upon the initial investigation of the domain of real-time process control advisory systems, several different approaches to knowledge representation seemed attractive to us. Some of our initial explorations are documented in [Kaemmerer and Mawby, 1986]. The representation was built around the concepts of process phases and operator expectations. Since we were dealing with a batch process, there were several different phases of the process, each of which required very different operator actions.
Say, for example, we are representing a process for making baked beans. In one phase the operator would fill a pressure cooker with beans, and in the next phase, he or she would heat and pressurize the vessel to cooking levels. During each of those phases, operators have expectations about conditions which should hold over the process variables, and expectations about events which should occur within some time frame. If an operator’s expectation about some condition or event was not met, he or she would recognize that as a problem, or a precursor to a problem, and take action. An example of a condition expectation being violated could be the pressure in a pressure cooker rising to a level which could pop open one of its safety valves. An example of an event expectation being violated is the temperature of a cooker not rising above some threshold by a certain time during product heatup, showing that the process was behind schedule. We surmised that better and more experienced operators would have more expectations and better recognition of the status of those expectations than novice operators. We held that the following were necessary components for a real-time process monitoring system: phase tracking, condition expectation monitoring, and event expectation monitoring. B. Problem/Cause Trees Based on this analysis, we developed a knowledge representation which we called the Problem/Cause Tree approach. It included a set of phase transition rules and expectations as data objects within the system. In support of those mechanisms we had rules which would recognize when an expectation had been violated and would start a problem solving session. Other rules generated and confirmed or rejected possible causes for the problem, and rules associated with each problem/cause tree leaf generated operator advice. Using KEE, a commercial expert system development environment, as a rapid prototyping tool, we made an initial implementation of this system. As knowledge acquisition and encoding of the received information continued, several problems became apparent. The first was that much of the information we received had to be represented more than once in the knowledge base. This presented itself most notably in the cases of phase transition rules and problem/cause tree rules. Each phase was supposed to take a certain amount of time, and if that time limit was exceeded there was a problem. To represent and identify these problems we wrote event expectations which mirrored most of our phase transition rules, resulting in double representation of a large body of information. Also, since our problem solving method required explicit rejection of causes as possible problem culprits, we needed a positive and negative statement of each problem/cause rule, again resulting in a double representation. More redundancies occurred in the problem/cause trees, since it was difficult to use the information within one tree as branches in different problem trees which shared similar causes. A second problem was the extent to which our methodologies for handling information and doing problem solving were influencing the way our rules were written. We recognized that a change in our methodology would force us to rewrite most of our rules. A third problem surfaced as we tried to extend our coverage of the possible problems in the plant. 
Using the Problem/Cause Tree approach, it was not easy to see, by inspection of the knowledge base, whether or not a given problem type was completely handled, nor how thorough the coverage was across the range of possible problems. These problems with the Problem/Cause Tree approach led us to develop GSG as a method for representing process information.

III. The Goal/Subgoal Representation

The GSG representation is implemented as a defined flavor in Symbolics Zetalisp. In this object oriented programming scheme, state is retained in slots on each instance of a flavor and operations on instances are provided by methods. Various goal slots contain pointers to other goals, compiled functional objects which implement conditions associated with the goal, or strings describing the goal. Several methods have been defined on goals which implement condition checking, phase transitions, and problem solving. (See [Kaemmerer and Allard, 1987] for a description of the method for monitoring progress in problem solving.) The central mechanism for process monitoring is implemented in the method SATISFIEDP.

A plant process is represented by a lattice of goals and subgoals. Each goal represents a plan to be carried out and its subgoals are a decomposition of it into subplans. Goals may also have a set of subgoals which represent conditions which must remain satisfied during the attempt to accomplish the superior goal. The current phase of a process is represented via a goal's progress through its sequence subgoals. Each subgoal can have its own subgoals, and record its own progress through them.

A. Goal/Subgoal Slots

The following slots are used to build GSG objects; a rough sketch of the resulting goal object appears below.

Sequence: An ordered list of goals which represents the substeps involved in satisfying this goal. Before a goal can test its success-criterion or preventers to declare itself as satisfied, the goal must determine that each of the subgoals in its sequence list, in turn, have been satisfied. This slot fills the requirement for phase tracking.

Preventers: A set of goals which must be satisfied simultaneously to satisfy a parent goal, after the sequence goals are satisfied. If there is a success-criterion it will be tested instead of the preventer goals, but the preventers may still be present and can be used in problem solving, as they represent potential causes of failing to satisfy the goal, if they themselves aren't satisfied.

Success-criterion: A compiled condition which is tested after all sequence goals have been satisfied to see if this goal is now satisfied.

Requisites: A set of goals which represents conditions which are expected to hold throughout the attempt to satisfy this goal. These goals fill the requirement for condition expectations. If a requisite is not satisfied, then the parent goal has a problem. Requisites are checked only if the parent goal itself is not satisfied.

Problem-yet: A compiled condition which is tested if a goal is sent a SATISFIEDP message, and was found to be not satisfied. If this condition returns TRUE, then this goal has a problem. If it returns FALSE, then the lack of satisfaction of this goal is a normal event as we wait for some process to complete. This condition fills the requirement for event expectations.

Text: An English description of the problem that exists if this goal is not satisfied. It is used in status messages to the plant operator.

The subgoals which need to be satisfied to satisfy a superior goal are the sequence subgoals and the set of preventers. These represent the batch nature of a process.
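To make the slot structure concrete, here is a rough sketch of a GSG goal object, in Python rather than the Zetalisp flavor actually used; all identifiers are our own, hypothetical ones:

    # Illustrative sketch only: the paper's goals are Zetalisp flavor instances.
    from dataclasses import dataclass, field
    from typing import Callable, List, Optional

    @dataclass
    class Goal:
        text: str                                    # operator-readable problem text
        sequence: List["Goal"] = field(default_factory=list)    # phases, in order
        preventers: List["Goal"] = field(default_factory=list)  # simultaneous conditions
        requisites: List["Goal"] = field(default_factory=list)  # condition expectations
        success_criterion: Optional[Callable[[], bool]] = None  # absolute test
        problem_yet: Optional[Callable[[], bool]] = None        # event expectation
        # Bookkeeping used by the methods described below.
        seq_position: int = 0
        activated_at: Optional[float] = None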
The relationship between preventers and the success-criterion is as follows: When both are present, it is intended that the success-criterion should follow from a conjunction of the conditions represented by the preventer goals. Problem solving works by inspecting the set of subgoals of a problem goal which are blocking satisfaction of the superior goal. Thus, the presence of a success-criterion and a set of preventers provides the ability to encode both an absolute test for satisfaction of the superior goal and a set of diagnostic avenues to follow if there is a problem.

An example of a situation where this is useful is a goal for opening some valve A, which has interlocks on its controller requiring valves B and C to be closed, and D to be open. In a goal such as this, the success-criterion would check the limit switch which indicates if valve A is truly open, and preventers of this goal would be made with success criteria that check that valves B and C are closed, and D is open. With this representation, the goal for A will only satisfy if A actually is opened. If any of the valves B, C, or D are in the wrong position, and are preventing A from opening through interlocks, it can be found in the problem solving process by isolating any preventer goals which are not satisfied. Also, if there is a case where B, C, and D are all in their correct positions, yet valve A still does not open, GSG operates correctly by not allowing the goal for A to satisfy, as well as rejecting valves B, C, and D as possible causes of the problem.

B. Goal/Subgoal Methods

There are three methods associated with the goal flavor which perform the operations required to monitor processes represented by goal trees. These methods are ACTIVATE, SATISFIEDP, and DEACTIVATE. ACTIVATE and DEACTIVATE perform initialization and other bookkeeping functions for goals and their subgoals, and SATISFIEDP is used to check if a goal has become satisfied.

When a goal receives the ACTIVATE message, it stores the time at which it is being activated, sends the ACTIVATE message to all of its requisite goals, and sets its current position in the list of sequence subgoals to be the head of the list. If the sequence list is not empty, it also sends the ACTIVATE message to the goal at the head of that list. If there are no sequence goals, it sends the ACTIVATE message to all its preventer goals. When a goal receives the DEACTIVATE message, it sends DEACTIVATE to its requisite goals and to a current sequence goal, if any, which is trying to be satisfied. If there is no sequence left, DEACTIVATE is sent to any and all preventer goals. The SATISFIEDP method is described in detail below.

COOKER's inference engine co-process handles GSG objects in the following way. For every manufacturing line to be monitored there is a top level goal. The ACTIVATE message is sent to the top level goal when starting a batch. After the goal representing the process is activated, the inference engine enters a loop in which it unbuffers any data received from the AT into the data frames subsystem, sends each top level goal the SATISFIEDP message, spends time doing any problem solving required, and then waits if it has arrived at the end of the loop before the minimum top level loop time has elapsed. The wait state is entered so that the other processes running on the machine, such as the user interface, the data I/O process, and the garbage collector, will be able to get enough processing time. If the call to SATISFIEDP on the top level goal returns TRUE, then the goal is sent DEACTIVATE and ACTIVATE again to start a new batch.
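The monitoring loop just described might look roughly like this (illustrative Python with hypothetical helper and method names; the real loop is a Zetalisp co-process, and MIN_LOOP_TIME is an assumed parameter):

    # Illustrative sketch only: COOKER's top-level monitoring loop.
    import time

    MIN_LOOP_TIME = 1.0   # hypothetical minimum loop period, in seconds

    def monitor(top_goal, unbuffer_data, do_problem_solving):
        top_goal.activate()
        while True:
            started = time.time()
            unbuffer_data()                  # move gathered data into data frames
            if top_goal.satisfiedp():
                top_goal.deactivate()        # batch finished;
                top_goal.activate()          # start monitoring the next one
            do_problem_solving()             # work on any declared problems
            # Yield the remainder of the loop period to other processes.
            remaining = MIN_LOOP_TIME - (time.time() - started)
            if remaining > 0:
                time.sleep(remaining)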
If the call to SATISFIEDP on the top level goal returns TRUE, then the goal is sent DEACTIVATE and ACTIVATE again to start a new batch. The SATISFIEDP message is used to ask a goal if its conditions for success have been met, to allow that goal to advance itself through its phases, and to allow it to check any conditions it monitors, possibly declaring that it has a problem. There are three phases to the SATISFIEDP method: advancing, success checking, and condition checking.

1. Advancing

Upon receiving a SATISFIEDP message, a goal advances itself through any remaining sequence subgoals which have not yet been satisfied. If there are none left it goes directly to the success checking phase. If there are some left it advances by sending its next sequence goal a SATISFIEDP message. If the subgoal returns TRUE, the subgoal is sent a DEACTIVATE message, and the current sequence position is set to the next goal down the sequence list, or to NIL if there are no goals left. When there are no subgoals left on the sequence list, this goal proceeds to the success checking phase. If there is another goal on the list, it is sent an ACTIVATE message, and then the superior goal loops back to the top of the advance procedure again and sends the newly activated subgoal a SATISFIEDP message. If any sequence subgoal replies FALSE to a SATISFIEDP message, then the goal will not satisfy, and it enters the condition checking phase.

2. Success Checking

If a goal has succeeded in sequentially satisfying its sequence subgoals, or if there were none to start with, the goal enters the success checking phase in which it checks its success criterion or preventers to see if it can satisfy. In this phase, if a goal has a success-criterion condition, that condition is run and if it returns TRUE then (Hurrah!) the goal will immediately return TRUE in response to its SATISFIEDP message. If the condition returns FALSE, then the goal will not satisfy and it goes to the condition checking phase. If there is no success-criterion, then the goal will check its preventers. If there are some preventer subgoals, each is sent a SATISFIEDP message. If all return TRUE then this goal is satisfied and returns TRUE. However, if there are no preventers or if one of them returns FALSE, then this goal will not satisfy and will enter the condition checking phase.

3. Condition Checking

If a goal enters the condition checking phase it has already been determined that it will be returning FALSE to the SATISFIEDP message. In this phase it is checking that its expectations, in the form of a requisites list and a problem-yet criterion, are still being met. It begins by sending any and all of its requisite goals a SATISFIEDP message. If any of them return FALSE to the message, then this goal declares that it has a problem, since requisites represent condition expectations which should hold true throughout an attempt to satisfy this goal. Next, if the goal has a problem-yet condition, it tests that condition and if TRUE is returned, then this goal is declared to have a problem. If the problem-yet condition returns FALSE, then it is a normal, acceptable situation that this goal has not yet satisfied, and no problem will be declared. After the goal has or has not been declared a problem, the goal returns FALSE as a response to its SATISFIEDP message.

Summing up, to become satisfied a goal must first sequentially satisfy each of its sequence subgoals.
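The three phases can be collected into one Python sketch, continuing the illustrative Goal class above. The activation of preventers once the sequence completes is assumed bookkeeping that the prose elides; everything else follows the description directly.

```python
def satisfiedp(goal: "Goal") -> bool:
    # Phase 1: advancing -- work through remaining sequence subgoals in order.
    while goal.seq_pos < len(goal.sequence):
        sub = goal.sequence[goal.seq_pos]
        if not satisfiedp(sub):
            # A blocked sequence subgoal means this goal cannot satisfy yet.
            return check_conditions(goal)
        deactivate(sub)
        goal.seq_pos += 1
        if goal.seq_pos < len(goal.sequence):
            activate(goal.sequence[goal.seq_pos])   # loop back and ask it next
        else:
            for p in goal.preventers:
                activate(p)   # assumed bookkeeping once the sequence completes
    # Phase 2: success checking.
    if goal.success_criterion is not None:
        return True if goal.success_criterion() else check_conditions(goal)
    if goal.preventers and all(satisfiedp(p) for p in goal.preventers):
        return True
    return check_conditions(goal)

def check_conditions(goal: "Goal") -> bool:
    # Phase 3: the goal will return FALSE; check its expectations first.
    requisite_results = [satisfiedp(r) for r in goal.requisites]
    if not all(requisite_results):
        goal.has_problem = True          # a condition expectation was violated
    if goal.problem_yet is not None and goal.problem_yet():
        goal.has_problem = True          # e.g. the process is taking too long
    return False
```

A toy run over a fragment of the bean-cooking hierarchy worked through in the next section, with sensor readings replaced by invented stubs:

```python
load_beans = Goal("Load Beans", success_criterion=lambda: True,   # level switch off
                  text="Cooker has not filled with beans")
cook_beans = Goal("Cook Beans", success_criterion=lambda: False,  # not 2 hours yet
                  text="Beans are not yet cooked")
cook_a_batch = Goal("Cook A Batch", sequence=[load_beans, cook_beans],
                    problem_yet=lambda: False)                    # under 4 hours

activate(cook_a_batch)
print(satisfiedp(cook_a_batch))   # False: Load Beans passes, Cook Beans waits
print(cook_a_batch.has_problem)   # False: problem-yet says this is still normal
```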
Note that progress made in one response to SATISFIEDP persists to the next response. Next the goal must receive a TRUE response from a success-criterion condition, or if the goal has no success- criteriqn, it must have at least one preventer, and all its preventers must all be simultaneously satisfied. If a goal is not satisfied, then it will be declared to have a problem if any of its requisite subgoals is not satisfied, or if it has a problem-yet condition and that condition returns TRUE. Also note that a goal may return TRUE to one call to SATISFIEDP, and FALSE to the next without any intervening calls to DEACTIVATE and ACTIVATE. This feature is needed for goals which are used as requisites. A requisite may be satisfied, not satisfied, and then satisfied again during the course of satisfying its superior goal. C. An Example Figure 1 shows a Goal/Subgoal hierarchy which could be used to represent a baked bean cooking process. In the diagram, the very thick arrows represent sequence subgoal links, such as that between Cook A Batch and Load Beans; the thick arrows represent preventer subgoal links, such as that between Loader Locked and Relief Locked; and the thin arrows off the side of a goal box represent requisite subgoal links. Figure 1: Goal Hierarchy for cooking beans The following example illustrates the operation of the SATISFIEDP method. Suppose that the cooker has just finished being loaded with beans, its cap has been shut and pressurized, but it has not yet been heated up. The top level loop of COOKER’s inference engine has just finished unbuffering data from the data gatherer, and sends the SATISFIEDP message to Cook A Batch. Cook A Batch enters its advancing phase and sends SATISFIEDP to Load Beans. Load Beans goes into its advancing phase, finds that there are no sequence goals, and enters its success checking phase. This goal has a success-criteria which checks a level sensor switch in the cooker which turns off when the beans reach the right level. Load Beans checks its success criterion, the switch is off, the success criterion returns TRUE, and Load Beans immediately returns TRUE. Note that it does not go into its condition checking phase. Its Loader Locked requisite is probably already violated by now, but even if it is violated, it doesn’t matter since Load Beans is satisfied. Cook A Batch receives TRUE from Load Beans, sends DEACTIVATE to Load Beans, advances its sequence list, and sends ACTIVATE and SATISFIEDP to Cook Beans. Cook Beans finds it has no sequence list, goes to success checking, finds a success-criterion and calls it. The criterion finds that the beans have not yet been cooking for 2 hours, and returns FALSE. Cook Beans enters its condition checking phase and sends SATISFIEDP to Cap Locked, Pressure=225, and Temp=220, and all but Temp=220 return TRUE. Cook Beans responds by declaring itself a problem, and then returns FALSE to Cook A Batch. Cook A Batch leaves its advancing phase and enters its condition checking phase It sends SATISFIEDP to Steam Press=40, which returns TRUE. It checks its problem-yet slot which checks if more than 4 hours have passed since this cook was started, and it returns FALSE. So, Cook A Batch is not satisfied, but it is not a problem, and it returns FALSE to the top level loop. When problem solving, the system notifies the operator that there is a problem in Cook Beans, and that the cooker is not yet up to temperature. D. 
Discussion Using the information in goal slots and the operations provided by flavor methods, GSG provides the necessary abilities we identified for real-time process monitoring: phase tracking, condition expectation monitoring, and event expectation monitoring. The information needed for phase tracking is stored as a pointer to the current position in a goal’s sequence subgoals slot. This information is needed to represent progress through a batch process. The information needed to monitor condition expectations is stored as a set of subgoals in the requisites slot. We think of each phase of a batch process as a continuous process, and this information is needed to recognize problems in continuous processes. The informatmn needed to monitor event expectations is stored as a pointer to a condition in a goal’s problem yet slot. This information is used to recognize problems when progressing through a batch process. GSG’s positive statement of the desired behavior of a process makes it easy to recognize a broad range of problem situations. Furthermore, even if a GSG knowledge base is not complete enough to provide a diagnosis of the cause of a problem, it will nevertheless enable the expert system to recognize when a problem exists and alert the operator. This feature aids the quick initial representation of new manufacturing plants and processes for problem recognition, with the ability to incrementally add further diagnostic information at a later date. GSG’s hierarchy of goals stems from plan representations in the literature, such as the one in Chapman’s TWEAK [Chapman, 19851. In TWEAK, steps represent actions, and each step has associated with it a set of preconditions and postconditions. Plans are generated by starting with a goal, which is a desired condition. A temporal ordering on steps is then established such that the postconditions of a preceding step will assert propositions which satisfy the preconditions of the following step, and the step at the top of this hierarchy asserts a proposition which satisfies the initial goal. This hierarchy imposes a total order on steps within one temporal chain, but only a partial order across the full step set. Since the preconditions of the step which asserts the initial goal are achieved in the same way as the initial goal, these preconditions (or the steps themselves) are called subgoals of the initial goal. We have adopted this representation for GSG with some modifications. Preventer goals are most like the original sets of subgoals in plan representations. We have added subgoal sequences to somewhat collapse the deep hierarchy that results from temporal chains, and to allow a straightforward way to selectively activate only the goals which are currently being acted Allard and Kaemmerer 397 upon. The run-time information environment of our system does not need access to preconditions and postconditions of steps to generate step ordering information. Instead, it needs a set of co-conditions to monitor those preconditions which must remain satisfied until an action’s postconditions have been achieved. This interpretation of preconditions matches well with the condition expectations we identified in our initial domain explorations. These have become our requisite goals. Also, instead of asserting postconditions, our system needs to monitor the real process and recognize when the postconditions that an action was intended to produce have been accomplished. 
Postconditions have been replaced with the success-criterion condition, and a problem-yet condition has been added to monitor the system and ensure that this happens in a timely fashion. Thus, in GSG, event expectations have taken the form of expectations about goal satisfaction. Another approach, called Goal Tree/Success Tree modeling has recently been presented in [Modarres, et al., 19851 and [Kim and Modarres, 19861. It uses goal representations very similar to typical planning representation to encode information about goals for continuous processes and hierarchies of equipment combinations to provide real time advice to nuclear power plant operators. We believe that GSG is capable of representing any batch or continuous process which consistently follows a standard operating procedure. All batch manufacturing processes are analogous to continuous processes during the completion of individual phases, and all continuous processes have a batch component to them in start-up and shut-down operations. The requirement for a standard operating procedure must be imposed since this system has no planning capabilities of its own. We have considered adding a planning component to COOKER. Occasionally a problem will occur in the plant which undoes an affect which had been achieved by an earlier goal, and the plant needs to go through the process of re-satisfying that earlier goal. A planning component could be made which could schedule the earlier goal to be satisfied again. However, we have found that the plant engineers with whom we have worked have extensive standard operating procedures for handling these situations, and they do not need nor want our system to synthesize novel problem resolution strategies on the fly. We have also found that GSG does not represent diagnostic procedures in a clean way. Operators will occasionally violate their usual condition expectations in order to test a component, and GSG identifies these violations as problems. Also, we are looking at a different representation for requisites since, for example, it makes little sense to have a goal with a sequence as a requisite. At Honeywell we are continuing to explore GSG and other representations for manufacturing processes. IV. Conclusion The GSG system which we have described here is somewhat simpler than the current implementation. Since our initial design, we’ve added support for three-valued logic, lattice interconnections between goals, conditional goal activation, automatic goal synchronization with processes in progress, assumption fields, an incremental, dynamic problem solving mechanism, and we’ve defined a GSG Language which is translated into Zetalisp code through a goal compiler. Despite these further developments, the basic representation and methodology has remained constant. All process information is stored in goal objects which traverse their sequences of subgoals, and monitor their conditions. This approach has proven to be computationally efficient, taking an average of 477 milliseconds (elapsed time with the operator interface and data gatherer running) for top level loop SATISFIEDP processing across a set of goal lattices totaling approximately 800 goals. The GSG representation has worked well, allowing us to implement and quickly install an initial knowledge base which could recognize problems in all phases of the plant’s process, and then later add to that knowledge base to diagnose more problems and give more detailed advice about particular problems. 
We've been able to reuse many portions of our initial goal trees within subsequently developed branches, speeding up the process of encoding detail about further phases. The approach of positively representing what should happen within the process has allowed us to use a single representation to serve the two functions of process monitoring and problem diagnosis.

We'd like to recognize the rest of the COOKER project team for their contributions to this work. They are Emilio Bertolotti, Arch Butler, Paul Christopherson, Kim Hermanson, Anne M. Hossfeld, Ron Mawby, John Nomura, John Sederberg, and Alan Wolff. Thanks to Camille Bodley, Steve Harp, Anne M. Hossfeld, Ron Joy, Carol Kaemmerer, Kurt Krebsbach, John Nomura, Jim Richardson, and Alan Wolff for their comments on this report.

References

[Chapman, 1985] D. Chapman, Planning for Conjunctive Goals. Technical Report AI-TR-802, Artificial Intelligence Laboratory, Massachusetts Institute of Technology.

[Kaemmerer and Mawby, 1986] W. F. Kaemmerer and R. Mawby, Representing Knowledge About Expectations in a Real-time Expert Advisor for Process Control. In Proceedings ISA-86, pages 809-820, Houston, Texas, Instrument Society of America International Conference, October, 1986.

[Kaemmerer and Allard, 1987] W. F. Kaemmerer and J. R. Allard, An Automated Reasoning Technique for Providing Moment-by-Moment Advice Concerning the Operation of a Process. In Proceedings AAAI-87, Seattle, Washington, American Association for Artificial Intelligence, July, 1987.

[Kim and Modarres, 1986] S. Kim and M. Modarres, Application of Goal Tree-Success Tree Model as the Knowledge-Base of Operator Advisory Systems. Submitted for publication to Nuclear Engineering and Design Journal, October 1986. Correspondence to M. Modarres, Department of Chemical and Nuclear Engineering, University of Maryland, College Park, MD 20742.

[Modarres, et al., 1985] M. Modarres, M. L. Roush, and R. N. Hunt, Application of Goal Trees in Reliability Allocation for Systems and Components of Nuclear Power Plants. In Proceedings of the Twelfth International Reliability Availability Maintainability Conference for the Electric Power Industry, Baltimore, MD, April, 1985.
Partial Compilation of Strategic Knowledge1

Russ B. Altman and Bruce G. Buchanan
Knowledge Systems Laboratory
Stanford University

Abstract

Many system building efforts in artificial intelligence intentionally begin with expressively rich and flexible declarative structures for the control of problem solving, especially when the best problem solving strategies are not known. However, as experience with a system increases, it sometimes becomes desirable to compile declarative knowledge into procedures for purposes of efficiency. We present a paradigm for compilation which begins with declarative opportunism, moves to a phase of heuristic implementation of a partial plan and finally evolves into a fully elaborated procedure. We use the PROTEAN geometric constraint satisfaction system as an example. Using results from a purely declarative structure, we were able to compile strategic knowledge into a procedure for planning a solution. The problem solving behavior of the new system is reported.

Knowledge compilation offers an engineering solution to the problem of combining the flexibility of a declarative representation of knowledge with the efficiency of a more procedural representation. For applications in which knowledge is changing frequently, the benefits of declarative representations may outweigh considerations of efficiency (especially during development). For others, in which run-time efficiency is more important, the use of declarative representations for initial development must be followed by compilation of knowledge. As knowledge-based systems become larger, there is increasing use of separate meta-level knowledge structures (sometimes called strategic or control knowledge) to reduce the complexity and increase the understandability of these systems [Davis, 1980, Clancey, 1985, Hayes-Roth, 1985, Hewitt, 1972, McDermott, 1978]. In this paper we show the results of compiling parts of this strategic knowledge, represented declaratively, into a partial plan that instantiates major control decisions.

It is useful to have a rational method by which the transition from declarative to procedural forms of strategic knowledge can be made gracefully. In this paper, we argue:

1. That the separation between strategic knowledge and domain knowledge (as in the PROTEAN/BB1 blackboard system) is useful in the development of efficient problem solving strategies. We suggest a three stage paradigm with which this development can usefully be viewed.

2. That if strategic knowledge is represented declaratively and separated from domain problem solving knowledge, then compilation of strategic knowledge can be performed and integrated within domain problem solving knowledge.

3. That the compilation of parts (but not all) of the problem-solving knowledge yields plans in which flexibility is sacrificed for efficiency. These plans embody a set of decisions that may anticipate global problem solving strategy better than more locally focussed strategy knowledge.

1This work was funded in part by the following contracts and grants: NIH GM07365, DARPA N00039-83-C-0136, DARPA N00039-86-C-0033, NIH RR-00785, NASA-Ames NCC-2-274, Boeing Computer Services W271799, and a gift from Lockheed Corp. We would like to thank Alan Garvey, Craig Cornelius and Barbara Hayes-Roth for discussion of their experimental results. We also thank Jim Brinkley, Bruce Duncan, John Brugge and Oleg Jardetzky for collaborative research on PROTEAN.
We substantiate these claims with examples from the PROTEAN system for the determination of protein structure [Altman and Jardetzky, 1986, Brinkley et al., 1986, Hayes-Roth et al., 1986b]. PROTEAN is a geometric constraint satisfaction system described in section II. In one version of this system, Hayes-Roth and coworkers have shown that declarative control structures can be used to control reasoning about constraint satisfaction in spatial assembly problems [Hayes-Roth et al., 1986a]. We have compiled elements of this strategic knowledge and have been able to implement plans to guide problem solving without any modification of the basic domain problem solving actions. The plans prescribe partial sequences of actions. Portions of the problem solving for which there is no plan prescription are controlled opportunistically with heuristics.

A. Three Phases of Development for Procedures

The development of a computational procedure for the solution of difficult problems can be usefully divided into three phases. The first phase, which we call the opportunistic phase, is characterized by the use of architectural frameworks in which there is considerable freedom for a designer to experiment with different formulations of the search and different strategies for controlling it.

Having gained experience from work within the opportunistic phase of development, we can enter the partial plan phase. A partial plan provides an incomplete specification of the actions required for a solution. In this phase a designer draws upon the heuristics and experience gained during experimentation to increase efficiency with a more rigid problem solving control plan. These plans are not fully prescriptive for problem solving, however, and some of the declarative strategic knowledge may remain when the plan prescribes nothing.

Figure 1: Three suggested stages in the development of strategies. The transition from declared strategies to more procedurally defined ones involves the compilation of strategic knowledge.

The plans provide more direction and purpose to the problem solving than strategies with a shorter horizon. Thus, a plan imposes a particular detailed set of steps on problem solving and so reduces the amount of computational effort spent on choosing among different choices. Experimentation with a system in the partial plan stage may lead to refinement of the plan details. We believe, but have not shown, that compilation of strategy knowledge can be automated when the syntax of both the declarative strategy knowledge and the domain knowledge structures is known. When the plan becomes fully elaborated it can be called a heuristic procedure for solving the problem. The procedure phase requires little computational effort in choosing control alternatives: all decision points are predefined and the criteria for selection are predetermined. We have found that this computational saving comes at the cost of problem solving flexibility.

B. Compilation of Knowledge

In a changing, experimental setting it is useful to represent strategy heuristics with declarative data structures in order to provide an environment for experimentation with different strategies.
However, when there is evidence that certain strate- gies are superior to others, it may become desirable to incor- porate these strategies more directly,into the solution for effi- ciency. Compilation of declarative strategic knowledge is char- acterized by a move from a description of desirable actions to prescriptions for action. Systems which allow descrip- tive strategic statements are faced with the task of interpreting these statements and matching them with feasible actions in order to identify desirable actions [Hayes-Roth et al., 1986a]. Davis [Davis, 19801 f re erred to the interpretation of such meta- level strategic statements as content-directed invocation. How- ever, the cost of using generic control statement interpreters and action-matchers may not be warranted if there is a proce- dure which can identify the most desirable actions using spe- cific domain knowledge. Such a procedure bypasses the use of all-purpose interpreters and matchers, and thereby improves efficiency. We have manually constructed special-purpose, domain- dependent procedures for making strategic decisions in situa- tions for which criteria have become clear from experimenta- tion. These procedures eliminate the need for interpretation of control strategies and the overhead of matching these strate- gies with potential domain actions. They gain problem solving leverage by using knowledge of the application domain, and as such can be considered domain problem solving actions. The key step in compilation is creation of a partial plan, an abstract sketch of how to solve the whole problem which is stylized enough to allow straightforward translation into procedures. We do not assume that the plan is complete, however, and thus must abe able to solve problems with partially compiled, par- tially interpreted control strategies. Our procedures, therefore, produce a partial plan for the solution of the problem. Sec- tion II shows how the ideas apply to PROTEAN. Section III illustrates how this plan interacts with data driven control for a particular problem. A. The PROTEAN System PROTEAN is a system for determining the structure of protein molecules from experimental data. The system and motivations are described in detail in [Altman and Jardetzky, 1986, Brinkley et oI., 1986, Hayes-Roth et al., 1986131, but are summarized for present purposes. PROTEAN begins with a number of abstract or elementary objects (atoms or groups of atoms with fixed physical relationships) and constraints among the objects. A constraint typically specifies a range of distances between two points. PROTEAN makes “partial arrangements” of subsets of the objects in three dimensions, and then combines partial ar- rangements into a final solution space. In a single partial ar- rangement, a coordinate system is defined around a single ob- ject (called the anchor), and positions of other objects (ala- chorees) relative to the anchor are defined (see Figure 2). Ex- cept for the anchor, an object may have more than one “legal location” in which its positional constraints are satisfied. A “coherent instance” is a list of single locations for each object such that all constraints are satisfied. The set of all coherent instances represents the set of all structures of the protein that are consistent with the experimental constraints. PROTEAN has a basic set of actions that have been pro- cedurally defined within domain problem solving knowledge sources. 
They include: ANCHOR [B to A]: finds all locations for anchoree B rel- ative to anchor A (in a partial arrangement) which are consistent with the constraints between A an B. Figure 2 shows the accessible volume of two anchorees relative to a fixed anchor (HELIX-5). YOKE [B and C with respect to A]: reduces the list of locations of anchorees (B and C) in the space of an- chor A by pruning locations that are incompatible with the constraints between B and C. Figure 3 shows Helix-3 and Helix-l after YOKING. Their accessible volumes have been reduced by consideration of the constraints between them (cf. Figure 2). APPEND [C to B with respect to A]: finds all locations of object C relative to an anchoree B in the space of anchor A. This involves finding all locations of C relative to B and B relative to A and then producing the cross product to get all locations of C relative to A. Figure 3 shows Helix- 400 Knowledge Representation 2 positioned with an APPEND action by considering its constraints to Helix-3. CD CONSOLIDATE [ bj t o ec s with respect to A]: finds the set of locations (one from each object’s accessible volume) that constitute a “coherent instance.” Anchor Figure 2: PROTEAN’s basic problem solving action, AN- CHOR. Legal locations are shown as accessible volumes - around each anchoree. Figure 3: PROTEAN’s b asic problem solving actions, YOKE and APPEND. A partial arrangement can be considered a constraint sat- isfaction network in which each node is an object with a list of locations and each arc between nodes represents constraints on the relationships between pairs of locations taken from the nodes. We have shown elsewhere [Brinkley et al., 19861 that the anchor action corresponds to creating a constraint network that is node consistent in the terminology of Mackworth [Mack- - worth, 19771. Yoking corresponds to checking for consistency of the arcs. Consolidation is equivalent to a backtrack search for solutions to the constraint-network. Backtrack search is computationally prohibitive, and it can be made tractable by pruning the set of initial locations with anchor and yoke oper- ations. PROTEAN’s problem solving repertoire of four primi- tive actions has been implemented as a set of domain knowledge sources in the BBl blackboard environment [Hayes-Roth, 19851. Each action is represented as a domain Knowledge Source (KS) which is triggered when a relevant change is made to the prob- lem solving (“domain”) blackboard. A triggered KS is instan- tiated as a Knowledge Source Activation Record (KSAR) for each context in which the action becomes feasible, and is placed on the agenda. Domain Knowledge Sources are generally pro- cedural statements of how to perform calculations and make appropriate changes on the problem solving blackboard. A sep- arate control facility is then used to rate feasible actions and determine which actions should be performed. 0 Con&rolling The problem of arranging objects in three dimensional space under constraints is known as bin packing, and is NP-complete. PROTEAN is able to solve such a combinatorially explosive problem partly because it can make reasoned choices about the best objects and best actions on which to focus at each stage of problem solving. The strategic choices that must be made by PROTEAN include: 1. 2. 3. 4. 5. 6. 7. How many partial arrangements should be created? Which objects should be included in the partial arrange- ments? Which objects should be designated the anchors of the partial arrangements? 
Should an ANCHOR or APPEND actions be used to po- sition a particular object within an arrangement? In what order should YOKE actions be applied to most quickly reduce the size of the accessible volumes? When are two partial arrangements ready to be merged toget her? When should a partial arrangement be CONSOLIDATED (because pruning techiques have reaced a point of dimin- ishing returns)? In addition to our continued development of an explicit, declarative version of the strategy, there is a need for a system- atic compilation of the method. Thus, part of our research fo- cuses on the development of a straightforward procedural state- ment of how to make these choices, which in BBl are called “control problems.” The key characteristic of this implementation of PRO- TEAN in BBl is that there is a separation of the mechanism which generates feasible actions from that which selects actions for execution. When a control problem arises, the system can look to the agenda of feasible actions for a complete set of alter- natives, and choose among them. The process of compilation of strategic decisions reduces the frequency at which the complete agenda must be examined. The ACCORD language has been developed to define high level “control sentences” which declaratively and indirectly specify problem solving actions [Hayes-Roth, 19851. For example, in choosing the best anchor for a partial arrangement, one cqntrol sentence reads: ORIENT a PARTIAL-ARRANGEMENT about a LONG, RIGID, CONSTRAINING SECONDARY- STRUCTURE. Altman and Buchanan 401 This sentence is interpreted and matched with each task on the agenda (called a KSAR) in order to determine an overall rating for the task. The “action-type” of the KSAR is com- pared and scored relative to the action-type ORIENT, the potential anchor is checked and scored with respect to being a SECONDARY-STRUCTURE as well as being LONG, RIGID and CONSTRAINING. The definitions of these modifiers are stored in a knowledge base as quantitative rating functions. The KSAR action that best matches this declaration is chosen for execution. This control mechanism is flexible since it can handle a wide variety of problems, and be used for explaining its selec- tion [Schulman and Hayes-Roth, 19871. It also allows different modifiers to be easily tested and results to be compared [Gar- vey et al., 19871. I n short, it is convenient for experimenting to find specific rules for solving classes of problems. It is ex- pensive, however, since each potential anchor must be rated with respect to a number of different modifiers. In addition, it takes a best-first approach to control, and assumes that op- timal decisions locally will produce good global performance. Each control sentence is meant to choose the next step in the solution. A control sentence can not decide that a sequence of steps should be pursued, but can only select a single step. Therefore, this approach ties the program down to an extremely “deliberate” control process. A second method of control that we have implemented for PROTEAN incorporates experimental results with control sentences and imposes more structure on the problem solving sequence. The results in [Garvey et al., 19871 showed that cer- tain modifiers in the control sentence were more important than others, and usually led to better performance. For instance, in the case of selecting an anchor, the number and distribution of constraints to other objects is the most important variable (as captured by the modifier CONSTRAINING). 
We com- piled this rule into a new knowledge source which procedurally defined the criteria for choosing an anchor by examining prop- erties of the initial constraint network. We used similar results from our own studies to compile a procedure for deciding which objects should be introduced into a partial arrangement with the ANCHOR action versus the APPEND action. We added this information to the new knowledge source and were left with a domain problem solving KS which chooses the best global anchor, the best anchorees to introduce into the global partial arrangement as well as other “secondary” partial arrangements with which to define local geometries. This KS, therefore, produces a partial plan for solution of the problem (shown in Figure 4). As a result of compil- ing PROTEAN’s declarative control sentences, Helix-5 can be uniquely identified without further search to be the anchor. In addition, Helices 1,3,8,9 and 10 are designated as anchorees. Other objects are to be introduced into the space of Helix-5 with an APPEND action. In order to implement this plan, we also added a single control statement that favors KSARs men- tioned in the plan. This control statement simply checks to see if the KSAR appears in the plan or not, and replaces other control declarations (like the ORIENT control sentence shown previously) that require interpretation and matching. Our plan allows us to remove control sentences that address control is- sues 1,2,3 and 4 as listed in section II-B. Three points should be emphasized: 1. Our method for compilation has two steps: Helix-3 Helix-8 Mix-9 Helix-lo othsr-i -Bnchoree~ Figure 4: A graphical depiction of the partial plan for solution of T4 Lysozyme. (a) Manual Static Compilation Reformulate declarative strategic knowledge (de- rived from experimentation) into domain proce- dures for solving a problem. These procedures are contained in new domain knowledge sources and specify the compiled criteria for choosing objects and actions during problem solving. Replace all reformulated declarative strategic knowledge with a single strategic statement that chooses domain actions mentioned in the plan whenever they become executable. (b) Dynamic Compilation in Context Execute the procedure in the context of a problem statement in order to actually select objects and actions that con- stitute a partial plan for solution. The instantiated plan is used to guide control decision making. 2. By compiling strategic knowledge, we have decided to make some control decisions in advance. The “compiled” decisions are based on evaluation of the static properties of the objects in the problem and the domain problem solv- ing actions. They should not depend critically on dynamic properties of the problem. If unanticipated problems occur in implementation of the plan, this decision may prove to be extremely expensive. It is therefore important to have confidence in the declarative strategic sentences that are compiled. In our compilation of knowledge we have not altered any of the other knowledge sources for problem solving; we have just added one domain KS for planning, and a control declaration that requires KSARs that im- plement the plan to be executed before others. Thus, the compilation step is modular, relatively non-destructive to system integrity, and decreases the number of declarative sentences that must be interpreted and matched. 3. Having an overview of the global solution strategy also of- fers opportunities which are not available without a plan. 
For example, the plan produced by our procedure imme- diately suggests subtasks for parallel execution. Each of the secondary partial arrangements of Figure 4 represents an independent constraint network that can be brought to equilibrium in isolation from the others. When the plan is produced and instructions for follow- ing the plan are added, the nature of control changes signifi- cantly. The issue of choosing an anchor, for example, is not a 402 Knowledge Representation significant control issue any longer: it has been moved into the procedural detail of a problem solving knowledge source. How- ever, the exact order in which to perform YOKE operations still remains unresolved. Thus, the selection of feasible yoking actions has been left in a declarative, opportunistic framework. The plan thus leaves significant details (i.e., the order of yokes) unresolved until run-time interpretation of control knowledge sources, while fixing some details in the compiled steps. When all such strategy knowledge has been compiled into domain knowledge, then the strategy becomes procedurally defined by the sequence of these domain knowledge sources and there is little flexibility for testing alternative strategies. In order to illustrate the behavior of PROTEAN using a solu- tion plan, we present the results of the method when applied to the protein phage T4 Lysozyme. PROTEAN processes its input to define 37 superatoms into which the protein can be divided (as suggested by experimental data). In addition, PROTEAN creates a constraint set for each pair of objects between which there are distance constraints. Not all objects have constraints with other objects, so there is a total of 119 constraint sets (out of the total possible (372 - 37)/2 = 666). A graphical depiction of the constraint matrix is shown in Figure 5. bC000000 0 13 0 0 0 0 0 0 0 0 0000000000 0800000000000000 8 8 0 0 0 0 13 0 Q 0 0 0 0 a 0 0 T4 Lysozyme Constraint Network Figure 5: Matrix depiction of the constraint network in T4 Lysozyme. The objects occur in chemically linked sequence and are numbered 1 to 37, from left to right and top down. The ap- proximate strength of the constraint set, Cij, is indicated by the size of the spot at matrix position ij (or ji). The con- straint rows for key subunits are labelled. The matrix shows two large clusters of constraints in this system. There are gen- erally strong constraints between neighboring subunits, but few objects have strong constraints to distant objects. It is clear that the strategy decisions outlined in section II-B are not ob- vious, and require reasoning and analysis of the network. A trace of the problem solving behavior of PROTEAN is useful in understanding how strategic reasoning and domain ac- tions combine to produce useful problem solving behavior. For any given cycle, BBl may follow a compiled decision to follow the plan (Dl), may reason out a strategic decision about the best action (D2), or may perform the domain actions specified by Dl or D2 (A). CYCLE TYPE OF REASONING DECISIONS/ACTIONS ------_--------_------------------------------------------------------- 0 CONTROL 1 DOMAIN (A) (01) 2 CONTROL (Dl) 3-30 DOMAIN (A) 31 CONTROL (D2) 32-110 DOMAIN (A) 111 CONTROL (Dl) 112-200 DOMAIN (A) 201 CONTROL (02) 202-240 DOMAIN (A) 241 CONTROL (Dl) 241-250 DOMAIN (A) 251 CONTROL (02) 252-400 DOMAIN (A) Decides to Run the KS which examines the problem and produces a plan. Plan algorithm is run, HELIX5 is chosen as the anchor, anchorees. 
12 objects are designated and 14 objects are designated appendees (See Figure 4). Decides to implement the plan from cycle 1 by automatically favoring KSARs which directly implement pieces of the plan. Partial Arrangement 1 (PAl) is established and oriented around HELIXI. Anchorees are introduced into PA1 and ANCHORed to HELIXB. Decides to YOKE accessible volumes determined in previous cycles. NOTE: there is no plan specification for this, so it is done opportunistically with declarative control. YOKES are favored between objects that are LARGE, have STRONG constraint sets, and have GIG-RELATIVE-DIFFERENCE in the size of their location tables. They continue until the constraint network within the partial arrangement reaches equilibrium. Decides to establish the secondary anchor spaces. ORIENTed around the secondary anchors as specified by the plan, and ANCHOR the appendees as specified. Carries out plan for secondary partial arrangements by ANCHORing appendees to secondary anchors. Decides to YOKE objects in secondary PAS in order to reduce location table size. Opportunistically YOKES objects. Decides to APPEND appendees into main PA1 as specified by the plan. APPENDS appendees into main PAl. Decides to continue YOKING new location tables in PA1 with previously yoked location tables from cycles 32-110. YOKES opportunistically until network equilibrium is reached and all location tables are at minimum. At this point, backtrack search CONSOLIDATION can be performed. .--------------------------------------------------------------------- About half of the control decisions are compiled in this example, and about half the resulting domain actions follow directly from them, rather than by interpreting and matching high level predicates. The plan shown in Figure 4 is partial be- cause there are significant numbers of reasoning cycles in which it makes no prescription for action (cycles 32-110, 202-240, and 252-400), and “best first” strategies must be used. However, the structure imposed on problem solving by the initial plan is strong enough to provide a clear procedural outline. We can continue to use a purely declarative control structure for testing and improving the plan if weaknesses are discovered. Full descriptions of BBl, PROTEAN and our initial control strategies can be found in [Altman and Jardetzky, 1986, Brink- ley et al., 1986, Hayes-Roth, 1985, Hayes-Roth et al., 1986131. The theme of transformation from declarative to procedural specification arises in many artificial intelligence programming Altman and Buchanan 403 efforts. Our work stresses the usefulness of the partial plan as an intermediate step. The EMYCIN system contains a rule compiler that maps domain rules into a decision tree [van Melle, 19801. The deci- sion tree is a fully elaborated plan for solution of the problem, and as such corresponds to the final stage or our three-phase paradigm. EMYCIN has a static view of how to control ev- idence gathering (goal-driven, backward chaining). We argue that an intermediate step of compiling control knowledge into domain rules before production of such a decision tree provides a greater flexibility in the development of procedures, since the control strategies used need not be static. HERACLES is an example of another system which uses declarative representa- tions of strategies, and thus could benefit from an intermediate stage of control compilation [Clancey, 19851. 
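To make the contrast between the two control styles concrete, two hedged Python sketches follow. Neither is the authors' code: ACCORD's rating functions live in a knowledge base that the paper does not reproduce, so every modifier, weight, threshold, and object property below is invented for illustration. The first sketch caricatures the interpret-and-match style applied to the ORIENT control sentence quoted earlier, in which every feasible KSAR is scored against every modifier on every cycle.

```python
from typing import Callable, Dict

Modifier = Callable[[dict], float]   # maps a candidate object to a score in [0, 1]

def rate_ksar(ksar: dict, action_type: str,
              modifiers: Dict[str, Modifier]) -> float:
    """Score one feasible action against a declarative control sentence."""
    if ksar["action"] != action_type:          # e.g. ORIENT
        return 0.0
    obj = ksar["object"]                       # e.g. a candidate anchor
    # Each modifier in the sentence is matched and scored; this per-KSAR,
    # per-modifier work is the overhead the compilation step removes.
    return sum(m(obj) for m in modifiers.values()) / len(modifiers)

modifiers: Dict[str, Modifier] = {
    "LONG":         lambda o: min(o["length"] / 30.0, 1.0),
    "RIGID":        lambda o: 1.0 if o["kind"] == "helix" else 0.5,
    "CONSTRAINING": lambda o: min(o["n_constraints"] / 20.0, 1.0),
}

agenda = [
    {"action": "ORIENT", "object": {"name": "HELIX-5", "length": 18,
                                    "kind": "helix", "n_constraints": 24}},
    {"action": "ORIENT", "object": {"name": "HELIX-2", "length": 9,
                                    "kind": "helix", "n_constraints": 6}},
]
best = max(agenda, key=lambda k: rate_ksar(k, "ORIENT", modifiers))
print(best["object"]["name"])   # HELIX-5: the most constraining candidate wins
```

By contrast, the compiled planning knowledge source makes the same decision once, by direct computation over static properties of the initial constraint network, and a single remaining control declaration merely prefers tasks mentioned in the resulting partial plan. A sketch of that shape, again with invented names:

```python
from typing import Dict, Set, Tuple

def compile_partial_plan(constraints: Dict[Tuple[str, str], int],
                         strong: int = 5) -> dict:
    """constraints maps an object pair to the strength of its constraint set."""
    degree: Dict[str, int] = {}
    for (a, b), w in constraints.items():
        degree[a] = degree.get(a, 0) + w
        degree[b] = degree.get(b, 0) + w
    # CONSTRAINING dominated in the cited experiments, so the best-connected
    # object is chosen as the global anchor without any agenda rating.
    anchor = max(degree, key=lambda o: degree[o])
    anchorees: Set[str] = {o for (a, b), w in constraints.items()
                           if anchor in (a, b) and w >= strong
                           for o in (a, b) if o != anchor}
    appendees = set(degree) - anchorees - {anchor}   # introduced via APPEND later
    return {"anchor": anchor, "anchorees": anchorees, "appendees": appendees}

def prefer_planned(ksar: dict, plan: dict) -> int:
    """The one control declaration left: favor KSARs that implement the plan."""
    planned = plan["anchorees"] | {plan["anchor"]}
    return 1 if ksar["object"].get("name") in planned else 0
```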
Similarly, meta- knowledge used by systems such as PLANNER’s rule filters [Hewitt, 19721 or NASL’s choice rules [McDermott, 19781 can be compiled into domain rules to gain efficiency at the expense of flexibility. Skeletal planning was characterized by Friedland in the MOLGEN work [Friedland, 19791. Our work uses many of the ideas of heuristic application of a global problem solving strat- egy. Our plans are partial with respect to the complete sequence of problem solving, but are not generalized to higher level con- cepts (i.e., they are expressed in the the low level vocabulary of domain actions). In that respect they are similar to Schank’s scrippts, but are part&Z scripts [Schank and Abelson, 19751. v. Conchsions The PROTEAN system for geometric constraint satisfaction in the domain of protein structure provides an excellent forum in which to experiment with different strategies. Others have formulated declarative strategies, and we have described here a compilation of parts of these strategies into domain prob- lem solving actions. The compilation of strategic knowledge has lead to a partial plan which focuses problem solving and requires less control deliberation. The plan has been used to determine the structure of T4 Lysozyme, and provides a frame- work for expansion of the procedural element of strategic rea- soning in the future. Our method works well in domains in which nearly in- dependent strategic decisions can be identified prospectively. Context dependent decisions can be made opportunistically at run time since we perform only partial compilation. This mix- ture of opportunistic and planned problem solving is especially powerful in domains such as PROTEAN’s in which a plan is useful for solving subproblems but opportunism is required to recombine or conjoin the solutions to the subproblems. References [Altman and Jardetzky, 19861 R. Altman and 0. Jardetzky. New strategies for the determination of macromolecular structure in solution. J. Biochem, 100(6):1403-1423, De- cember 1986. [Brinkley et aZ., 19861 J. Brinkley, C. Cornelius, R. Altman, B. Hayes-Roth, 0. Lichtarge, B. Duncan, B. Buchanan, and Jardetzky 0. Application of Constraint Sat&faction Tech- niques to the Determination of Protein Tertiary Structure. Technical Report KSL 86-28, Knowledge Systems Labora- tory, Stanford University, March 1986. [Clancey, 19851 W.J. Clancey. Heracles: representing proce- dures as abstract metarules. 1985. To appear in ‘Com- puter Expert Systems’, M. J. Coombs and L. Bolt, eds. Springer-Verlag, in preparation. [Davis, 19801 R. Davis. Meta-rules: reasoning about control. Artificial Intelligence, 15:179-222, 1980. [Friedland, 19791 P. Friedland. Knowledge-based Hierarchical Planning in Molecular Genetics. PhD thesis, Computer Science Department, Stanford University, September 1979. Report CS-79-760. [Garvey et aZ., 19871 A. G arvey, C. Cornelius, and B. Hayes- Roth. Computational Costs versus Benefits of Control Reasoning. Technical Report KSL 87-11, Knowledge Sys- tems Laboratory, Stanford University, February 1987. To appear in ‘Proceedings of AAAI, 1987’. [Hayes-Roth, 19851 B. Hayes-Roth. A blackboard architecture for control. Artificial Intelligence, 26:251-321, 1985. [Hayes-Roth et al., 1986a] B. Hayes-Roth, A. Garvey, M.V. Johnson, and M. Hewett. A Layered Environment for Rea- soning about Action. Technical Report KSL 86-38, Stan- ford University, November 1986. [Hayes-Roth et aZ., 1986b] B. Hayes-Roth, B.G. Buchanan, 0. Lichtarge, M. Hewett, R. Altman, J. Brinkley, C. 
Cor- nelius, B. Duncan, and 0. Jardetzky. Protean: deriving protein structures from constraints. In Proceedings of the AAAI, pages 904-909, Morgan Kaufmann Publishers, Inc., 1986. [Hewitt, 19721 C. Hewitt. D escription and theoretical anaZy- sis using schemata of PLANNER, a language for proving theorems and manipulating models in a robot. Technical Report TR-258, AI Laboratory, M.I.T., 1972. [Mackworth, 19771 A.K. M ac k worth. Consistency in networks of relations. Artificial Intelligence, 8:99-118, 1977. [McDermott, 19781 D. McDermott. Planning and acting. Cog- nitive Science, 2:71-109, 1978. [Schank and Abelson, 19751 R. C. Schank and R. P. Abelson. Scripts, Plans, Goals, and Understanding. Lawrence Erl- baum Associates, Hillsdale, NJ, 1975. [Schulman and Hayes-Roth, 19871 R. Schulman and B. Hayes- Roth. ExAct: A Module for Explaining Actions. Technical Report KSL-87-8, Knowledge Systems Laboratory, Stan- ford University, February 1987. [van Melle, 19801 W. van Melle. A domain-independent system that aids in constructing knowledge-based consultation pro- grams. PhD thesis, Computer Science Department, Stan- ford University, June 1980. 404 Knowledge Representation
TREAT: A Better Match Algorithm for AI Production Systems

Daniel P. Miranker
Department of Computer Sciences
University of Texas at Austin
Austin, Texas 78712

Abstract

This paper presents the TREAT match algorithm for AI production systems. The TREAT algorithm introduces a new method of state saving in production system interpreters called conflict-set support. Also presented are the results of an empirical study comparing the performance of the TREAT match with the commonly assumed best algorithm for this problem, the RETE match. On five different OPS5 production system programs TREAT outperformed RETE, often by more than fifty percent. This supports an unsubstantiated conjecture made by McDermott, Newell and Moore, that the state saving mechanism employed in the RETE match, condition-element support, may not be worthwhile.

I. Introduction

Production systems are the basis of many expert systems [Brownston et al., 1985]. The growing use of expert systems is well known, as are their large computational requirements. Thus it is important to search for more efficient ways to execute production system programs.

In general, a production system is defined by a set of rules, or productions, that form the production memory together with a database of current assertions, called the working memory (WM). Each production has two parts, the left-hand side (LHS) and the right-hand side (RHS). The LHS contains a conjunction of pattern elements that are matched against the working memory. The RHS contains directives that update the working memory by adding or removing facts, and directives that affect external side effects, such as reading or writing an I/O channel. In operation, a production system interpreter repeatedly executes the following cycle of operations:

1. Match. For each rule, compare the LHS against the current WM. Each subset of WM elements satisfying a rule's LHS is called an instantiation. All instantiations are enumerated to form the conflict set.

2. Select. From the conflict set, choose a subset of instantiations according to some predefined criteria. In practice a single instantiation is selected from the conflict set on the basis of the recency of the matched data in the WM.

3. Act. Execute the actions in the RHS of the rules indicated by the selected instantiations.

In general much of the WM of a production system remains unchanged across production system cycles. Therefore it is worthwhile for the production system interpreter to incrementally compute the contents of the conflict set. The RETE match [Forgy, 1982], briefly outlined in section III, has often been assumed to be the best algorithm for this problem. However, the literature contains no comparative analysis of the RETE match with any other algorithm, and a conjecture made by McDermott, Newell and Moore [McDermott, Newell and Moore, 1978] suggests that the state saving mechanism employed in the RETE match, condition-element support, may not be worthwhile. Section II describes several methods for introducing state into a production system interpreter, including a new method incorporated into the TREAT algorithm called conflict-set support. Section IV describes the TREAT algorithm. Section V presents the results of an empirical study comparing the performance of RETE and TREAT for the execution of five different OPS5 programs. For all five programs TREAT required fewer comparisons to do variable binding than RETE. In two instances TREAT required fewer than half.
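As a taste of the relational reading of matching developed in section II below, here is a minimal Python sketch of "match as selection plus join" for a rule shaped like the Figure 1 example, (A <x>) (B <x> <y>) (C <y>). This is my own illustration, not the paper's code, and since the figure's working memory is OCR-garbled, the tuples below are merely plausible stand-ins.

```python
# WM elements as tuples whose first field is the element class.
wm = [("A", 1), ("B", 1, 2), ("B", 2, 3), ("C", 2), ("C", 3), ("D", 2)]

# Selection: the partial match of each condition element (class test only here).
alpha_a = [e for e in wm if e[0] == "A"]
alpha_b = [e for e in wm if e[0] == "B"]
alpha_c = [e for e in wm if e[0] == "C"]

# Join: consistent bindings of <x> between A and B, and of <y> between B and C.
conflict_set = [(a, b, c)
                for a in alpha_a
                for b in alpha_b if b[1] == a[1]     # <x> binds consistently
                for c in alpha_c if c[1] == b[2]]    # <y> binds consistently
print(conflict_set)   # [(('A', 1), ('B', 1, 2), ('C', 2))]
```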
Figure 1 illustrates an OPS5 rule and WM. In the LHS of the rule the capital letters represent con- stants, the characters in brackets, pattern variables. Though not illustrated in the example, condition el- ements may be negated. A. Relationd atabase Analogy A convenient way to describe the primitive operations of a production system algorithm is to make an anal- ogy to relational database terminology. If the WM elements of a production system are considered to be tuples of some universal relationship in a relational database, then it becomes apparent that the LHS of a rule in a production system is analogous to a query in a relational database language. The constants in a single-condition element may be 42 Al Architectures From: AAAI-87 Proceedings. Copyright ©1987, AAAI (www.aaai.org). All rights reserved. Rule: Initial Working Memory: (P example-rule (A < x >) (B<z><y>) (C < Y >> -- > ; no RHS actions) (A 1) (B 12) P 2 3) I: 23;) F 2) Figure 1: Example Rule System viewed as a relational selection over a database of WM. We say a WM element partially matches a con- dition element if it satisfies the select operators or the intra-condition element pattern contraints. Con- sistent bindings of pattern variables between distinct condition elements may be regarded as a database join operation on the relations formed by the selec- tions. The conflict set is the union of the query results of each of the rules in the system. e traducing State A difference between database systems and produc- tion systems is that database systems usually com- pute queries one at a time over a large database. In terms of the analogy, a production system continu- ously computes many queries, as many as there are rules, over a slowly changing, modest size database. To minimize recalculating comparisons on different production system cycles production systems algo- rithms retain state across cycles. McDermott, Newell and Moore [McDermott, Newell and Moore, 19781 have identified three types of knowledge or state in- formation that may be incorporated into a production system algorithm. A fourth type, conflict-set support is exploited by the TREAT algorithm. In detail these are: e Condition Membership: Associated with each condition element in the production system is a running count indicating the number of WM ele- ments partially matching the condition element. A match algorithm that uses condition member- ship may ignore those rules that are inactive. A rule is active when all of its positive condition elements are partially satisfied. a Memory Support: An indexing scheme indicates precisely which subset of WM partially matches each condition element. By analogy, memory support systems explicitly maintain a represen- .tation of the relations resulting from the select operations. Later this representation will be called an alpha-memory. Q Condition Relationship: Provides knowledge about the interaction of condition elements within a rule. By analogy this corresponds to explicitly maintaining the intermediate results of a multiway join. B Conflict Set Support: The conflict set is explic- itly retained across production system cycles. By doing so, it is possible to limit the search for new instantiations to those instantiations that contain newly asserted WM elements. cl. ermott et al.‘s conjecture McDermott, Newell and Moore conjected that the cost of maintaining the state required for condition relationship exceeds the cost of the comparisons that otherwise would have to be recomputed. 
C. McDermott et al.'s Conjecture

McDermott, Newell and Moore conjectured that the cost of maintaining the state required for condition relationship exceeds the cost of the comparisons that otherwise would have to be recomputed.

"It seems highly likely that for many production systems, the retesting cost will be less than the cost of maintaining the network of sufficient tests." [McDermott, Newell and Moore, 1978]

III. The RETE Match

The RETE match [Forgy, 1982] incorporates memory support and condition relationship. Until now, no work has been done to repudiate or confirm McDermott et al.'s conjecture. Despite that conjecture and a lack of any comparative studies of the RETE match with any other production system algorithm, the RETE match is commonly assumed to be the best algorithm for production system matching.

Briefly, the RETE algorithm compiles the LHSs of the production rules into a discrimination network in the form of an augmented dataflow network. (See Figure 2.) Database operators are used as the operators in the dataflow network. The top portion of the RETE network contains chains of tests that perform the select operations. Tokens passing through those chains partially match a particular condition element and are stored in alpha-memory nodes, thus forming the memory support part of the algorithm. Following the alpha-memories are two-input test nodes that test for consistent variable bindings between condition elements. By analogy, the two-input nodes incrementally compute the join of the memories on their input arcs. When a token enters a two-input node, it is compared against the tokens in the memory on the opposite arc. Paired tokens with consistent variable bindings are stored in beta-memories. Tokens that propagate from the last beta-memory in the network reflect changes to the conflict set. The reader is encouraged to see [Miranker, 1987b] for a more detailed explanation.

Figure 2: RETE Illustration

A. Tradeoffs

The advantages of RETE are that the large amount of stored state minimizes the number of times two WM elements will be repeatedly compared, and that similar rules compile to similar networks, allowing sharing of network structures. The primary disadvantage of RETE is that when a WM element is removed the stored state must be undone, often requiring the repetition of the precise sequence of operations that were performed upon its addition. Other disadvantages are that the size of the beta-memories may be combinatorially explosive; that sharing network structure is not advantageous in a parallel environment due to contention and/or communication costs; and that, to maintain consistent state in the network, RETE must perform extensive computation for rules that are inactive, thus not exploiting condition support.

The incentive to develop TREAT was created by the difficulties associated with using RETE on parallel computers [Stolfo and Miranker, 1984, Gupta, 1984]. In a sequential computer RETE tokens may be manipulated by simple memory accesses. In a parallel computer manipulating tokens can involve contention and costly communication steps.

IV. The TREAT Algorithm

A. Conflict Set Support

To exploit conflict set support two observations must be made. Assume for the moment that there are no negated condition elements in the production system. If the only action of a fired rule is to add a new WM element, then the conflict set remains the same except for the addition of new instantiations that contain the new WM element. In the example below, adding (A 2) results only in instantiations containing (A 2). The second observation is that if the only action of a fired rule is to delete a WM element, then no new rules will be instantiated. Some instantiations may become invalid. These will contain the removed WM element.

The essence of the TREAT algorithm is to exploit these observations. Additions to WM may be used as seeds to initiate a constrained search for new instantiations. Deletions are processed by examining the conflict set directly and removing any instantiation that contains a deleted WM element. (See Figure 4.)

B. Negated Condition Elements

Allowing negated condition elements slightly complicates the algorithm. The TREAT algorithm must consider four cases, the addition or deletion of WM elements that partially match both positive or negated condition elements. The cases concerning positive condition elements remain unchanged from the previous section. The handling of negated condition elements is described in the abstract algorithm and in detail in [Miranker, 1987b]. Briefly, in the other two cases when changed elements partially match negated condition elements the condition is temporarily considered to be positive and the change acts as a seed to create possible instantiations. If the change is an addition, instantiations are removed from the conflict set. If the change is a deletion, instantiations may be entered into the conflict set.

C. Detailed TREAT Algorithm

The TREAT algorithm exploits condition membership, memory support and conflict set support. All the condition elements in a production system are numbered. The number associated with a condition element is called the condition element number (CE-num). Information relevant to condition elements is stored in arrays indexed by CE-num. Alpha-memories similar to those used in RETE are used to form the memory support part of the algorithm, but rather than existing amorphously in a network they are formed explicitly as a vector, each entry containing an alpha-memory. The alpha-memories are broken into three partitions: old, new-delete and new-add.1 The old partition (old-mem) contains the partially matched elements that have already been processed. During the act phase, elements are not added to the old-mem but to the memories in the add and delete partitions (new-add-mem and new-del-mem). The calculation of the contents of the alpha-memory could be done by building the top portion of a RETE network. The implementation reported here used a hash function whose argument is the value of the first attribute in an OPS5 WM element.

1 In the implementations reported here, these are formed by three separate vectors. However, a vector of structures would probably have resulted in better paging characteristics.

To incorporate condition support, whenever an old-mem is updated a test is made to see if its size has become zero or nonzero. If the critical change is detected, the size of each of the old-mems for the rule is examined and the set of active rules is updated accordingly.

When an alpha-memory of an active rule is altered and the change corresponds to one of the three cases where a search for instantiations is required, then the search takes place among the changed (new) alpha-memory and the old-memories that correspond to the remaining condition elements in the rule. Figure 3 contains an abstract program for the TREAT algorithm.

1. Act: Set CHANGES to the WM updates required by the RHS.
2. For each WM change in CHANGES do;
   (a) For each condition element CEi do;
       If the partial match of the element against CEi is successful, then
       if the change is an addition to working memory
       then add the WM element to new-add-mem[CEi]
       else add the WM element to new-del-mem[CEi].
   end for;
   end for;
3. Match: Process deletes. For each nonempty new-del-mem do;
   (a) Set cur-ce = CE-num of the selected memory.
   (b) Set old-mem[cur-ce] = old-mem[cur-ce] - new-del-mem[cur-ce].
   (c) If size of old-mem[cur-ce] = 0 then update-rule-active.
   (d) Case: the CE corresponding to the new-del-mem is positive or negated.
       i. Positive: Search the conflict set for instantiations containing the
          deleted WM elements. If found, remove them.
       ii. Negated: If the affected rule is active, then perform a search for new
          instantiations by searching new-del-mem[cur-ce] and the old-mems that
          correspond to the remaining condition elements that are part of the
          affected rule. Check that the new instantiations are not invalidated by
          elements in old-mem[cur-ce].
   end for;
4. Match: Process adds. For each nonempty new-add-mem do;
   (a) Set cur-ce = CE-num of the selected memory.
   (b) Set old-size = the size of old-mem[cur-ce].
   (c) Set old-mem[cur-ce] = old-mem[cur-ce] + new-add-mem[cur-ce].
   (d) If old-size = 0 then update-rule-active.
   (e) If the rule is active, then perform a search for new instantiations by
       searching new-add-mem[cur-ce] and the old-mems that correspond to the
       remaining CEs that are part of the affected rule.
   (f) Case: the CE corresponding to the new-add-mem is positive or negated.
       i. Positive: Add these new instantiations to the conflict set.
       ii. Negated: Search the conflict set for each of the new instantiations
          and remove them if found.
   end for;

Figure 3: Abstract Algorithm Illustrating TREAT

D. Join Optimization

The join operation is commutative and associative. Thus when searching for consistent variable bindings the alpha-memories may be considered in any order. There are many multiway join optimizations [Ullman, 1982]. However, in OPS5 the small size of the alpha-memories and the very small number of WM changes per cycle (an average of 2.5) dictates that, for an optimization to be useful, it must be simple to compute and result in a deterministic ordering of the alpha-memories. Three orderings were studied: static ordering, where the alpha-memories were considered in the lexical order of condition elements; seed ordering, where the changed alpha-memory is considered first, since in OPS5 these changes are almost always small and considering them first will greatly constrain the search; and a third method, based on semi-join reductions, which was not successful and will not be detailed. Note that the use of join optimizations allows TREAT to be used effectively for other production system languages. If a system is temporally nonredundant the search for instantiations may still be performed in a different but still optimal order.

E. An Example Using TREAT

Figure 4 shows the initial state created by the TREAT algorithm as well as the activity during the addition and deletion of a WM element (A 2).

Figure 4: TREAT Illustration

The activities of TREAT and RETE in this case are identical except that TREAT does not maintain beta-memories. However, the beta-memories do not contribute constructively to the computation of the new instantiation. To be fair, note that for the add example, had the WM element partially matched the "C" branch of the network, RETE would have searched only a beta-memory while TREAT would have had to search both remaining alpha-memories. For a delete, the RETE match must recompute the tokens stored in the beta-memories and then delete them.
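The add and delete paths of Figures 3 and 4 can be sketched compactly. The following Python fragment is our own simplification (class and method names are invented, and only positive condition elements are handled): deletions are processed by updating the alpha-memories and scanning the conflict set directly, while additions seed a search over the remaining old memories. Unlike the full algorithm, it checks whole combinations for consistency instead of ordering the join around the seed.

from itertools import product

class TreatRule:
    """One rule's state: a vector of old alpha-memories plus the conflict set."""
    def __init__(self, lhs):
        self.lhs = lhs                      # positive condition elements only
        self.old_mem = [[] for _ in lhs]    # one alpha-memory per CE-num
        self.conflict_set = []              # instantiations: tuples of WM elements

    def _bind(self, ce, elem):
        """Partial match of one WM element against one condition element."""
        if len(ce) != len(elem):
            return None
        b = {}
        for p, v in zip(ce, elem):
            if isinstance(p, str) and p.startswith('<'):
                if b.setdefault(p, v) != v:
                    return None
            elif p != v:
                return None
        return b

    def _consistent(self, combo):
        """Do the bindings of all condition elements agree (the join test)?"""
        b = {}
        for ce, e in zip(self.lhs, combo):
            nb = self._bind(ce, e)
            if nb is None:
                return False
            for k, v in nb.items():
                if b.setdefault(k, v) != v:
                    return False
        return True

    def add(self, elem):
        seeds = [i for i, ce in enumerate(self.lhs)
                 if self._bind(ce, elem) is not None]
        for i in seeds:                     # new element fixed at CE i as the seed
            pools = [[elem] if j == i else mem
                     for j, mem in enumerate(self.old_mem)]
            for combo in product(*pools):   # search only the remaining old-mems
                if self._consistent(combo):
                    self.conflict_set.append(combo)
        for i in seeds:
            self.old_mem[i].append(elem)

    def delete(self, elem):
        for mem in self.old_mem:            # update the alpha-memories directly
            if elem in mem:
                mem.remove(elem)
        self.conflict_set = [inst for inst in self.conflict_set
                             if elem not in inst]   # nothing to "un-do"

rule = TreatRule([('A', '<x>'), ('B', '<x>', '<y>'), ('C', '<y>')])
for e in [('A', 1), ('B', 1, 2), ('B', 2, 3), ('C', 3), ('C', 2)]:
    rule.add(e)
rule.add(('A', 2))       # seeds one new instantiation: (A 2)(B 2 3)(C 3)
rule.delete(('A', 2))    # processed by scanning the conflict set, as in Figure 4
print(rule.conflict_set)  # [(('A', 1), ('B', 1, 2), ('C', 2))]

The sketch preserves the two observations of Section IV.A: an add can only ever create instantiations containing the new element, and a delete never creates any, so no beta-memory state is needed.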
TREAT outperforms RETE during deletions by directly updating the alpha-memories and the conflict set. The key issue is: does the number of extra comparisons performed by TREAT while searching for instantiations exceed the number of comparisons performed by RETE while processing deletions? The results of an empirical study of this question are presented in the next section.

V. TREAT vs. RETE

This section presents quantitative measurements of identical runs of OPS5 programs on several different OPS5 interpreters. The RETE-based OPS5 interpreter is the familiar one distributed by Forgy from Carnegie Mellon University. The TREAT-based OPS5 interpreters were written at Columbia University.

A. Synopsis of the Benchmarks

Five OPS5 programs representing a wide variety of characteristics were obtained from diverse sources. Some characteristics of these systems are summarized in Figure 5.

MAB: The familiar Monkeys and Bananas program [Brownston et al., 1985].

Waltz: A set of rules that perform Waltz constraint propagation [Winston, 1979].

Mapper: The Mapper is a program that will assist a tourist to navigate Manhattan's public transportation system. The Mapper has an extremely large WM. The maps for nearly the entire Manhattan bus and subway systems are stored as 1124 WM elements.

Mud: A system written at Carnegie Mellon University to analyze the castings from oil wells. It should also be noted that this is precisely the same system used by Gupta [Gupta, 1986] in his study of parallelism in OPS5.

Mesgen: A natural-language program written by Karen Kukich at the Univ. of Pennsylvania that takes Dow Jones figures and converts them into text describing the course of a trading day.

          Number     Number of    Average    Cycles in   Average
          of rules   conditions   WM size    test run    CS size
MAB          13          34          11          14         21
Mud         884        2134         241         972          -
Waltz        33         130          42          71        193
Mesgen      155         442          34         138        149
Mapper      237         771        1153          84        595

Figure 5: Summary of the Gross Characteristics of the Studied Systems

B. Counting Comparisons for Variable Binding

It has been reported that 90% of the execution time of a production system is spent in the match phase. Evidence indicates that in the RETE-OPS5 implementation the majority of the match time is spent in performing variable binding and in maintaining the beta-memory nodes [Gupta, 1986]. The critical difference between the algorithms is the method used to handle variable binding.

The graphs in Figure 6 show the number of comparisons required to do variable binding for each of the OPS5 programs for two variations of each algorithm. The bars are normalized to the number of comparisons required by execution of the standard RETE implementation. The dark portion of the bars indicates the number of comparisons required during the add cycles, the light portion the number for the delete cycles.

The RS bars represent the performance of the standard release of RETE-based OPS5. The RN bars indicate the performance of RETE without sharing. We see that sharing does not contribute significantly, if at all, to the variable binding phase of the RETE match. The TN bars represent the performance of TREAT without any optimizations. Search is performed in lexical order. Depending on the system, this version of the algorithm may perform better or worse than the RETE match. Thus, some run-time optimization is necessary. The TO bars represent the performance of TREAT using the seed-ordering heuristic.
Inspection of the graphs shows that TREAT with seed-ordering always performed better than RETE, even on a sequential computer. Except for the Mapper,2 with its very large WM, the algorithm requires roughly half of the comparisons required of the RETE match. Note that for each successful comparison performed by the RETE match there is the additional expense of maintaining a beta-memory.

2 There is evidence that with the introduction of hashing the performance of the Mapper would be closer to that of the other systems.

In many cases TREAT without any optimization outperforms the RETE match. With seed-ordering optimization, TREAT always outperforms RETE. In two instances TREAT required less than half of the comparisons to perform variable bindings than RETE. This does not consider the additional cost of maintaining the beta-memories. Since the algorithms are nearly identical in all other respects, it may be concluded that TREAT is a better production system algorithm in both time and space. Further, this study supports the conjecture made by McDermott, Newell and Moore that condition-element support may not be worthwhile.

References

[Brownston et al., 1985] L. Brownston, R. Farrell, E. Kant, and N. Martin. Programming Expert Systems in OPS5. Addison Wesley, Reading, Mass., 1985.

[Forgy, 1982] Charles L. Forgy. Rete: A Fast Algorithm for the Many Pattern/Many Object Pattern Matching Problem. Artificial Intelligence, 19:17-37, 1982.

[Gupta, 1984] Anoop Gupta. Implementing OPS5 Production Systems on DADO. In Proceedings International Conference on Parallel Processing, IEEE Computer Society Press, 1984.

[Gupta, 1986] Anoop Gupta. Parallelism in Production Systems. Ph.D. Thesis, Carnegie Mellon University, Department of Computer Science, 1986.

[Miranker, 1987a] Daniel P. Miranker. TREAT: A Better Match Algorithm for AI Production Systems; Long Version. Technical Report, Department of Computer Sciences, University of Texas at Austin, April 1987.

[Miranker, 1987b] Daniel P. Miranker. TREAT: A New and Efficient Match Algorithm for AI Production Systems. Ph.D. Thesis, Computer Science Dept., Columbia University. Available as Technical Report TR-87-03, Dept. of Computer Sciences, University of Texas at Austin, Jan. 1987.

[Stolfo and Miranker, 1984] Salvatore J. Stolfo and Daniel P. Miranker. DADO: A Parallel Processor for Expert Systems. In Proceedings International Conference on Parallel Processing. IEEE Computer Society Press, 1984.

[Ullman, 1982] J. D. Ullman. Principles of Database Systems. Computer Science Press, 1982.

[McDermott, Newell and Moore, 1978] J. McDermott, A. Newell and J. Moore. The Efficiency of Certain Production System Implementations. In Pattern-Directed Inference Systems. Academic Press, 1978.

[Winston, 1979] P. H. Winston. Artificial Intelligence. Addison Wesley, 1979.
Representing Databases in Frames

Ey-Chih Chow
Hewlett-Packard Laboratories
1501 Page Mill Road, Palo Alto, California 94304

Abstract

Three methods for representing data in a relational storage system with an in-core frame-based system are experimented with and reported upon. Tradeoffs among these three representational methods are sizes of databases, times for loading data, and performance of queries. Essentially, these methods differ in ways of capturing relationships among frames. The three different ways of capturing such relationships are via links (pointers), symbolic names (keys), or both. Results of the experiments shed light on efficient interfacing of databases with frame-based systems.

1 Introduction

Frame-based systems have become popular in building expert systems [Fikes and Kehler, 1985]. A current research topic along these lines is how to efficiently hook disk-based database systems together with in-core frame-based systems to extend the capabilities of both systems [Abarbnel and Williams, 1986]. As a step toward this outcome, this paper discusses the performance aspects of ways of representing data in relational storage systems using frames.

By allowing slots to be pointers to other frames and to be multiple-valued [Stefik, 1979], frames are able to capture relationships among objects (frames) effectively. However, because putting pointers like these on disk leads to too many disk accesses, relationships among objects (tuples) in relational database systems are expressed instead via keys or other symbolic identifiers [Chamberlin et al., 1981].

To investigate the tradeoffs between using pointers or keys with in-core databases, a conventional relational database benchmark, Fasttrack [Chow, 1986] and [Chow and Cate, 1986], was adopted to compare three in-core frame-based alternatives used to represent the benchmark database. Experiments involving this comparison were conducted in terms of HP-RL [Rosenberg, 1983] and [1986b], an in-house frame-based expert-system toolkit running at Hewlett-Packard Laboratories. Results show that representing databases via pointers can improve the performance of queries involving joins. However, sizes of databases with such pointers are larger than those with only keys or other symbolic names. Obviously, the larger the databases, the longer the loading time needed. Additional findings were that evaluating database queries using the very general query-handling mechanism in HP-RL, under the environment of NMODE on top of an HP-9000/300 UNIX1 workstation [1986a], suffered from low hit ratios. Furthermore, with the same environment, in evaluating complex database queries, i.e. queries involving large amounts of relational data and joins, it is easy to incur garbage collection. An integrated system, combining HP-RL and Iris [Fishman et al., 1987], a next-generation database system with underlying relational storage and processing being developed at Hewlett-Packard Laboratories, is then proposed and is being further investigated to take advantage of Iris' ability to efficiently handle data of disk-based databases.

Section 2 describes three different HP-RL database designs for Fasttrack. In Section 3, we discuss the performance tradeoffs of these three designs. Section 4 suggests a way to use both the Iris database management system and HP-RL to take advantage of their strengths. Finally, we give a brief summary of our observations in Section 5.

1 UNIX is a trademark of AT&T Bell Laboratories.

2 Fasttrack Database in HP-RL

The conventional database benchmark, Fasttrack, basically includes a simple sales-record system and some test queries. The system consists of dealers that may operate several outlets. Outlets have competitors, determined by matching zip code. Customers order products from dealers. A typical execution creates 500 dealers, 100 products, 1500 orders, and 1000 outlets, and contains information about 4500 competitors. The relational schema [Date, 1986] of this database is as follows:

dealers (dlrid, name, phone, misc, daddr)
outlets (dlrid, zip, oaddr)
orders (dlrid, prod, qty, date)
products (prod, price, desc)
competitors (zip, compid, cratio, prodtyp)

where dlrid, prod, and zip are primary keys of relations dealers, products, and outlets respectively. Note that, in the above schema, relationships among entities are represented via keys. For example, relationships between dealers and outlets are represented via dlrids in relations outlets and dealers respectively, where dlrid is a foreign (primary) key of relation outlets (dealers).

In terms of frames, the above database can be represented by three alternative schemata. Each of these three schemata has the same five classes of frames (or objects): dealers, orders, outlets, products, and competitors. However, schema #1 defines relationships among frames with pointers only. With this schema, a pair of instances of classes dealers and outlets is shown in Figures 1a and 1b.

(new-instance {dealers}
  :name D0
  :slots ((dlrid :v 1000)
          (name :v "eddie")
          (phone :v "4087268111")
          (misc :v "Misc Data")
          (daddr :v "19420 Homestead Ave.")
          (has :vs ({OU0}{OU1}{OU2}))
          (receive :vs ({OR0}{OR1}{OR2}))))

(a) An instance of dealers

(new-instance {outlets}
  :name OU0
  :slots ((oaddr :v "1180 Lochinvear Ave.")
          (zip-code :v 0)
          (owned-by :v {D0})
          (competed-for :vs ({C0}{C1601}{C3001}))))

(b) An instance of outlets

Figure 1: Instances of classes dealers and outlets in schema #1

In this figure, OUn, n = 0,1,2, are frames of outlets, ORn, n = 0,1,2, are frames of orders, D0 is a frame of dealers, and Cn, n = 0,1601,3001, are frames of competitors. Curly brackets around frame names denote pointers to the frames. Finally, the facet value (values) of a slot is denoted by v (vs). In this way, relationships between dealers and outlets, for example, are represented via pointers of slots has and owned-by in instances of classes dealers and outlets respectively. Note that slot has is multiple-valued.

Schema #2 defines relationships among frames with only keys, as in the above relational representation. With this schema, relationships between dealers and outlets are represented via keys of slots dlrid in both classes outlets and dealers, where both dlrids are single-valued. Schema #3 defines relationships among frames with redundant information, using both keys and pointers. This schema is used to investigate the possible performance improvement due to such redundant information. Note that the notion of keys adopted in schema #2 and schema #3 is borrowed from disk-based databases. Schema #1 is a more typical representation for a knowledge representation language such as HP-RL. The three schemata are shown in Appendix I.

Frames which use foreign keys to relate to other frames can be deemed to have some implicit pointers among these frames. Namely, foreign keys can be viewed as a kind of (slightly indirect) pointer. With this viewpoint, schema #2, i.e. frames with keys only, can be conceptually shown in Figure 2a, where pointers are implicit. Frames with pointers to other frames, on the other hand, can be viewed as extended semantic networks [Rich, 1983] whose nodes are frames themselves and whose arcs are pointers to other frames. Schema #1, i.e. frames with pointers to other frames, can therefore be shown in Figure 2b, where pointers are explicit. The meaning of pointers in Fig. 2b is that dealers have outlets and receive orders, orders are sent-to dealers and order-for products, outlets are owned-by dealers and are competed-for by competitors, and competitors compete-for outlets. Note that each pointer of a frame pointing to another frame can be represented as a slot of the former frame.

Figure 2: Pointers vs. keys in the Fasttrack database

3 Performance Tradeoffs

This section discusses performance tradeoffs among the above three schemata. The discussions are divided into three parts: performance of queries, database sizes, and loading.

3.1 Performance of Queries

We used six queries to test the above three schemata. In terms of SQL [Date, 1986], these six queries against the relational schema mentioned in Section 2 are as follows:

q1. select phone from dealers where dlrid = 1260
q2. select dlrid from outlets where zip = 1260
q3. select oaddr, zip from outlets where dlrid = 1370
q4. select cratio, compid from outlets, competitors
    where outlets.dlrid = 1150 and outlets.zip = competitors.zip
q5. select dealers.name, products.desc from dealers, orders, products
    where orders.qty between 10 and 20 and dealers.dlrid = orders.dlrid
    and products.prod = orders.prod
q6. select prod, price, desc from products

Queries q1, q2 and q3 include selections and projections. Queries q4 and q5 are join queries. The selectivity factor of the join in q4 is 0.07%. The selectivity factors of the two joins in q5 are 0.2% and 1% respectively. Query q6 is a query to retrieve all the data in relation products.

As in Prolog [Warren, 1981], the forms of queries in HP-RL affect their efficiency of retrieval tremendously. Optimum ways to express the above six test queries in HP-RL have been designed based on our inspection of the underlying data statistics and structures of the schemata. We avoid joins as much as possible in expressing queries. Optimum orderings of joins of queries are chosen according to the underlying data statistics and the nature of the query-evaluation algorithm in HP-RL. We also take advantage of possible pointers between objects to express joins. The three groups of translated HP-RL test queries for the corresponding three schemata are shown in Appendix II.
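The pointer vs. key trade-off for a join such as q4 can be sketched in a few lines. The following Python fragment is our own illustration (HP-RL itself is a Lisp-based frame language; the Frame class, the instance data and the dealer/outlet objects here are all invented): with keys the join must scan the competitors class matching on zip, while with pointers it simply follows slot values.

class Frame:
    def __init__(self, **slots):
        self.__dict__.update(slots)   # each keyword becomes a slot

# Schema #2 style: relationships via keys.
outlets_by_key = [Frame(dlrid=1150, zip_code=60, oaddr='...')]
competitors    = [Frame(zip=60, compid=7, cratio=0.5, prodtype='x'),
                  Frame(zip=61, compid=8, cratio=0.9, prodtype='y')]

def q4_keys():
    return [(c.cratio, c.compid)
            for o in outlets_by_key if o.dlrid == 1150
            for c in competitors if c.zip == o.zip_code]   # explicit join on zip

# Schema #1 style: relationships via pointers (slots hold the frames themselves).
c0 = Frame(compid=7, cratio=0.5, prodtype='x')
outlet = Frame(zip_code=60, oaddr='...', competed_for=[c0])
dealer = Frame(dlrid=1150, has=[outlet])

def q4_pointers():
    return [(c.cratio, c.compid)
            for o in dealer.has
            for c in o.competed_for]   # follow pointers; no scan of competitors

print(q4_keys(), q4_pointers())        # both yield [(0.5, 7)]

Both forms return the same answer; the pointer form avoids the search over all competitors at the cost of storing the links, which is exactly the size/time trade-off measured below.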
Measurements were based on running queries two consecutive times and were made for each query right after garbage collection. An interesting feature of response times for simple queries is that after garbage collection, much useful code and data for query evaluation is not really in core and needs to be paged in. This incurs a very low hit ratio. Therefore, the elapsed time for a query at the first measurement is affected by the size of the underlying database and is much longer than that at the second measurement. For example, the elapsed time to process q1 of schema #1 at the first measurement is a factor of 30 slower than at the second measurement. Figure 3 shows snapshots of usage of system resources in processing an HP-RL query. There is a big I/O peak the first time a query is processed and almost no I/O the second time.

Figure 3: CPU and disk usage in processing an HP-RL query (first vs. second measurement)

The hit ratio is not as low if a query is not posed right after garbage collection. Very often, response times of ad hoc queries are 2-3 times that of the corresponding second measurements mentioned above. The above phenomenon will be hidden when much garbage collection is involved in query evaluation, that is, when there are many joins and duplicate answers to a query. For example, processing q5 of schema #1 the second time takes even longer than the first time by a factor of 1.13. This is because the first measurement is made right after garbage collection, which makes this measurement have less garbage collection than the second one. The HP-RL command solve-all suffers from garbage collection in processing joins involving large amounts of data because it is likely to create many temporary frames in such processing. There is another relatively specialized command in HP-RL, fast-solve-all, which partly solves the above problem but does not relieve it entirely.

Table 1 shows response times for the six queries in the HP-RL schemata, listing both the first and second measurements. Performance of the queries in Iris is also attached, where the numbers are measured without an index.

       HP-RL schema #1   HP-RL schema #2   HP-RL schema #3   Iris
         1st     2nd       1st     2nd       1st     2nd
q1      2.44    0.08      4.04    0.08      6.96    0.08       9
q6     15.36   10.18     16.66   10.12     15.22   10.96      30

Table 1: Response times of test queries (sec)

       HP-RL schema #1   HP-RL schema #2   HP-RL schema #3
         1st     2nd       1st      2nd      1st     2nd
        7.24    3.16     458.92   498.36    8.12    3.30

Table 2: Response times of the 1st answer of q6 (sec)

Queries q3, q4, and q6 take longer in schema #2 than in schemata #1 and #3. This shows that defining objects to link through pointers is faster than naming keys of objects. Time savings of q5 in schemata #1 and #3 against schema #2 are only about 26%. This is because most of the time spent in processing this query is used to eliminate duplicate answers. In q6, by replacing solve-all with solve to measure times spent in getting the first answer, we find that this query in schema #1 or #3 is faster than in schema #2 by a factor of 60. This measurement is shown in Table 2.

Note that, unlike schema #1, in schema #3 q2 avoids a join because each instance of outlets has the key of its dealer. But q2 takes about the same time in schemata #1 and #3. Therefore, it is not necessary to keep redundant information the way schema #3 does.

Now we compare the performance of the HP-RL queries with that of the corresponding Iris queries. For a reasonably high hit ratio, HP-RL is superior to Iris in processing simple database queries such as q2 (HP-RL: at most 0.80s vs. Iris: with index 3.8s). For an exceptionally low hit ratio, however, Iris performs better than HP-RL in processing database queries (for q2, HP-RL: at least 5.74s vs. Iris: no index 4s). In addition, with pointers as in schema #1, HP-RL is able to handle simple relational joins, i.e., joins not involving too many duplicates and data, better than Iris does (for q4, HP-RL: 0.86s vs. Iris: with index 5s). Iris, however, is much better than HP-RL at handling complex relational joins, i.e., joins involving many duplicates as well as large amounts of data (for q6, HP-RL: about 600s vs.
Iris: no index 57s). Essentially, this is because HP-RL retrieval commands like solve-all and fast-solve-all are intended to handle much more general kinds of data than Iris does. Therefore, in the specific environment of databases, HP-RL suffers from garbage collection during relational joins and in eliminating duplicates.

3.2 Database Sizes

Sizes of the three HP-RL databases with the schemata mentioned in Section 2 depend on how the slot is implemented. For HP-RL default slot declarations, slots have three facets: daemon, value, and comment. In this case, sizes of the database are 2.73 Mbytes for schema #1, 2.31 Mbytes for schema #2, and 3.15 Mbytes for schema #3. However, for database-oriented frames like those of Fasttrack, only one facet, i.e. value, for each slot is enough. This can be achieved with an HP-RL command to override the default declaration. With this kind of slot implementation, sizes of the database are 1.57 Mbytes for schema #1, 1.31 Mbytes for schema #2, and 1.70 Mbytes for schema #3.

The database of schema #1 is bigger than that of schema #2 because, although keys of objects are not used to represent relationships among objects in schema #1, information associated with such keys still needs to be kept. For example, key zip-code of class outlets of schema #1 cannot be ignored without losing information. Schema #3 uses redundant information of both pointers and keys in representing relationships. Therefore, the associated database is the biggest of the three HP-RL databases.

3.3 Loading

Due to frequent garbage collection, loading times of HP-RL databases depend on sizes of free dynamic heap space where the Fasttrack database is stored. The size of the dynamic heap of the HP-RL configuration used in these measurements is 5.82 Mbytes, with 4.56 Mbytes free. In addition, to make the database smaller, each slot of each frame to be loaded in this measurement allows only one facet, i.e., value.

There is a problem in loading frames with pointers. As a frame is loaded with references to nonexistent frames, some partial frames, i.e., frames with only headers but no bodies, are created and echoed to the screen. Due to the expense of such echos, sequences that load objects with minimal numbers of echos require minimal time. The following two sequences of loading objects were tested:

sequence #1. products, orders, dealers, competitors, outlets.
sequence #2. products, orders, outlets, dealers, competitors.

Loading times of sequences #1 and #2 for schema #1 are shown in Table 3.

Table 3: Loading times of schema #1 (elapsed time)

Loading via sequence #2 is slower than sequence #1 for the following reason. During the loading of outlets with sequence #2, instances of competitors are nonexistent. Accordingly, partial frames are created and are echoed to the screen. Since instances of competitors are much more numerous than those of the other four classes, echos on loading via sequence #2 are more than those via sequence #1. In the following discussions, loading times of databases of schemata #1 and #3 were measured via loading sequence #1.

Among the three HP-RL databases, the loading times increase (about 104 : 131 : 157 mins for schema #2 : schema #1 : schema #3) roughly in linear proportion to the increases of the corresponding database sizes (1.31 : 1.57 : 1.70 Mbytes), with a ratio about 1.
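The partial-frame effect behind the sequence comparison can be mimicked with a toy Python sketch. This is entirely our own construction (HP-RL's loader is not shown; ref, load and the frame names are invented): a loader that encounters a pointer to a not-yet-loaded frame must create a header-only partial frame, and each such creation costs an echo.

frames = {}
echoes = 0

def ref(name):
    """Return the frame named `name`, creating a partial frame if absent."""
    global echoes
    if name not in frames:
        frames[name] = {'name': name}   # header only: a partial frame
        echoes += 1                     # each partial frame is echoed to the screen
    return frames[name]

def load(name, **slots):
    """Load a full frame, filling in a partial frame if one already exists."""
    frame = frames.setdefault(name, {'name': name})
    frame.update(slots)

# Sequence #2 loads outlets before competitors, so every pointer slot
# forces a partial frame:
load('OU0', competed_for=[ref('C0'), ref('C1601'), ref('C3001')])
print(echoes)   # 3 partial frames created (and echoed)

Reversing the order, i.e. loading the competitors first as in sequence #1, leaves ref with nothing to create, so no echoes occur; this is the mechanism behind the measured difference between the two sequences.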
In addition to the size of the database, another reason that the database of schema #2 had the shortest loading time is that no partial frames (and therefore no echos) are created in loading this database.

To summarize, of the three HP-RL schemata, schema #3 does not appear to have any advantage over schema #1. In contrast to schema #2, schema #1 trades good performance of queries for some memory space and extra loading time. Finally, Iris is able to handle complex joins better than HP-RL, although HP-RL is faster for simple queries.

4 Interfacing Databases with Frames

In this section, we propose a way to build high-performance knowledge base systems by combining Iris and HP-RL. Iris is designed to handle large amounts of data in wide ranges of applications. Expert systems built in HP-RL, however, deal with a limited domain in a particular session. To achieve high performance using combined HP-RL and Iris, for a particular application domain, we retrieve and transform related Iris data of relational form into frames and cache them in the HP-RL heap space during run time. Note that retrieval of data in a particular domain from Iris databases tends to involve complex joins. In addition, answers to queries from Iris often include duplicates. Such duplicates should be eliminated before caching. In the transformation of Iris data to frames, relationships among these frames should be represented by either keys, as in schema #2 mentioned in the previous sections, or pointers, as in schema #1. Considerations involved in selecting one of these two alternatives are memory sizes, loading time and performance of queries. For example, if free space of HP-RL is sufficient and cached objects are dynamically preloaded to HP-RL, then performance of queries is the only consideration. In this case, using pointers to link objects (frames) is an appropriate way to gain efficiency.

Interfacing Iris with HP-RL in the above way tends to make Iris handle complex joins and HP-RL handle simple queries. This achieves high performance. The above discussion, however, does not address the issues of update and insert as data are cached back to Iris. This still needs to be investigated.

5 Conclusions

This paper uses some experimental results to describe the tradeoffs among three representations of databases in frames. Representing relationships among frames via pointers will get better performance on queries than will representation via keys or symbolic names. However, pointers will make the corresponding databases larger than will keys or symbolic names. The same benchmarks are used to compare the frame-based system, HP-RL, with the Iris database system, a next-generation database system with underlying relational storage and processing. Results show that for simple database queries, at a reasonable hit ratio, HP-RL tends to be faster than Iris because data for Iris queries are on disk. Because of its sophisticated buffer management strategy and relatively specialized code in handling data of conventional databases, however, Iris is able to handle queries involving complex relational joins much better than HP-RL does.

Acknowledgments

The author is grateful to Dan Fishman, Steve Rosenberg, Henry Cate, Tom Ryan, Reed Letsinger, Bill Stanton, Pierre Huyn and Alan Shepherd for most inspiring discussions and comments on this work. Charles Hoch, Jim Davis, Wendell Fields, and Randy Splitter have provided excellent and responsive support of the experimental environments. The author also thanks the referees for suggesting several important improvements to the paper.

Appendix I. Fasttrack database in HP-RL

Schema #1

(define-class dealers ()
  :instance-slots
  ((dlrid :declare (single-valued) :type number)            ;key of dealers
   (name :declare (single-valued) :type string)
   (phone :declare (single-valued) :type string)
   (misc :declare (single-valued) :type string)
   (daddr :declare (single-valued) :type string)
   (has :declare (multiple-valued) :type {outlets})         ;pointers to outlets
   (receive :declare (multiple-valued) :type {orders})))    ;pointers to orders

(define-class outlets ()
  :instance-slots
  ((oaddr :declare (single-valued) :type string)
   (zip-code :declare (single-valued) :type number)         ;key of outlets
   (owned-by :declare (single-valued) :type {dealers})      ;pointer to dealers
   (competed-for :declare (multiple-valued)
                 :type {competitors})))                     ;pointers to competitors

(define-class orders ()
  :instance-slots
  ((qty :declare (single-valued) :type number)
   (date :declare (single-valued) :type string)
   (sent-to :declare (single-valued) :type {dealers})       ;pointer to dealers
   (order-for :declare (single-valued) :type {products})))  ;pointer to products

(define-class products ()
  :instance-slots
  ((prod# :declare (single-valued) :type number)            ;key of products
   (price :declare (single-valued) :type number)
   (desc :declare (single-valued) :type string)))

(define-class competitors ()
  :instance-slots
  ((compid :declare (single-valued) :type number)
   (cratio :declare (single-valued) :type number)
   (prodtype :declare (single-valued) :type string)
   (compete-for :declare (single-valued) :type {outlets}))) ;pointer to outlets

Schema #2

(define-class dealers ()
  :instance-slots
  ((dlrid :declare (single-valued) :type number)            ;key of dealers
   (name :declare (single-valued) :type string)
   (phone :declare (single-valued) :type string)
   (misc :declare (single-valued) :type string)
   (daddr :declare (single-valued) :type string)))

(define-class outlets ()
  :instance-slots
  ((dlrid :declare (single-valued) :type number)            ;key of dealers
   (zip-code :declare (single-valued) :type number)         ;key of outlets
   (oaddr :declare (single-valued) :type string)))

(define-class orders ()
  :instance-slots
  ((dlrid :declare (single-valued) :type number)            ;key of dealers
   (prod :declare (single-valued) :type number)             ;key of products
   (qty :declare (single-valued) :type number)
   (date :declare (single-valued) :type string)))

(define-class products ()
  :instance-slots
  ((prod# :declare (single-valued) :type number)            ;key of products
   (price :declare (single-valued) :type number)
   (desc :declare (single-valued) :type string)))

(define-class competitors ()
  :instance-slots
  ((zip :declare (single-valued) :type number)              ;key of outlets
   (compid :declare (single-valued) :type number)
   (cratio :declare (single-valued) :type number)
   (prodtype :declare (single-valued) :type string)))

Schema #3

Slots of classes in this schema are combinations of the slots of the corresponding classes in schemata #1 and #2.

Appendix II. Test queries in HP-RL

Schema #1

query 1
(solve-all '({D260} phone ?x) :returns '?x)

query 2
(solve-all '(and (?x zip-code 1260) (?x owned-by ?y) (?y dlrid ?z))
           :type '((?x {outlets}) (?y {dealers}))
           :returns '?z)

query 3
(solve-all '(and ({D370} has ?x) (?x oaddr ?y) (?x zip-code ?z))
           :type '((?x {outlets}))
           :returns '(?y ?z))

query 4
(solve-all '(and ({D150} has ?x) (?x competed-for ?y) (?y cratio ?z) (?y compid ?u))
           :type '((?x {outlets}) (?y {competitors}))
           :returns '(?z ?u))

query 5
(solve-all '(and (and (?y qty ?z) (>= ?z 10) (<= ?z 20))
                 (?y order-for ?w) (?w desc ?u) (?y sent-to ?x) (?x name ?v))
           :type '((?y {orders}) (?x {dealers}))
           :returns '(?v ?u))

query 6
(solve-all '(and (?x prod# ?y) (?x price ?z) (?x desc ?w))
           :type '((?x {products}))
           :returns '(?y ?z ?w))

Schema #2

query 1
(solve-all '({D260} phone ?x) :returns '?x)

query 2
(solve-all '(and (?x zip-code 1260) (?x dlrid ?y))
           :type '((?x {outlets}))
           :returns '?y)

query 3
(solve-all '(and (?x dlrid 1370) (?x oaddr ?y) (?x zip-code ?z))
           :type '((?x {outlets}))
           :returns '(?y ?z))

query 4
(solve-all '(and ({D150} dlrid ?w) (?x dlrid ?w) (?x zip-code ?y)
                 (?z zip ?y) (?z cratio ?u) (?z compid ?v))
           :type '((?x {outlets}) (?z {competitors}))
           :returns '(?u ?v))

query 5
(solve-all '(and (and (?x qty ?z) (>= ?z 10) (<= ?z 20))
                 (?x prod ?a) (?v prod# ?a) (?x dlrid ?p) (?u dlrid ?p)
                 (?u name ?o1) (?v desc ?v1))
           :type '((?x {orders}) (?v {products}) (?u {dealers}))
           :returns '(?o1 ?v1))

query 6
(solve-all '(and (?x prod# ?y) (?x price ?z) (?x desc ?w))
           :type '((?x {products}))
           :returns '(?y ?z ?w))

Schema #3

Queries 1, 3, 4, 5 and 6 in this schema are of the same forms as those in schema #1. Query 2 in this schema, on the other hand, is of the same form as that in schema #2.

References

[1986a] HP 9000 Series 300 NMODE User's Guide. Hewlett-Packard Company, 1986.

[1986b] HP-RL Reference Manual. Hewlett-Packard Laboratories, September 1986.

[Abarbnel and Williams, 1986] R. Abarbnel and M. Williams. A relational representation for knowledge bases. In First International Conference on Expert Database Systems, 1986.

[Chamberlin et al., 1981] D. Chamberlin et al. A history and evaluation of System R. Communications of the ACM, 24(10), October 1981.

[Chow, 1986] E. Chow. Iris and HPRL. Technical Report STL-TM-86-13, Hewlett-Packard Laboratories, October 1986.

[Chow and Cate, 1986] E. Chow and H. Cate. Performance Evaluation for IRIS Version 1.0. Technical Report STL-TM-86-13, Hewlett-Packard Laboratories, October 1986.

[Date, 1986] C. Date. An Introduction to Database Systems. Volume 1, Addison-Wesley, fourth edition, 1986.

[Fikes and Kehler, 1985] R. Fikes and T. Kehler. The role of frame-based representation in reasoning. Communications of the ACM, 28(9), September 1985.

[Fishman et al., 1987] D. Fishman et al. Iris: an object-oriented DBMS. ACM Transactions on Office Information Systems, 5(2), April 1987.

[Rich, 1983] E. Rich. Artificial Intelligence. McGraw-Hill, 1983.

[Rosenberg, 1983] S. Rosenberg. HPRL: a language for building expert systems. In Proc. IJCAI, 1983.

[Stefik, 1979] M. Stefik. An examination of a frame-structured representation system. In Proc. IJCAI, 1979.

[Warren, 1981] D. Warren. Efficient processing of interactive relational database queries expressed in logic. In Proc. of 7th Int. Conf. on VLDB, 1981.
Optimizing the Predictive Value of Diagnostic Decision Rules

Sholom M. Weiss*, Robert S. Galen**, and Prasad V. Tadepalli*
*Department of Computer Science, Rutgers University, New Brunswick, NJ 08904
**Department of Biochemistry, Cleveland Clinic Foundation, Cleveland, Ohio

Abstract

An approach to finding an optimal solution for an important diagnostic problem is described. Examples are taken from laboratory medicine, where the problem can be stated as finding the best combination of tests for making a diagnosis. These tests are typically numerical with unknown decision thresholds. Because of uncertainty in classification, the solution is described in terms of maximizing measures of decision rule performance on a data base of cases, for example maximizing positive predictive value subject to a constraint of a minimum sensitivity. The resultant rules are quite similar to classification production rules, and the procedures described should be valid for many knowledge acquisition and refinement tasks. The solution is found by a heuristic search procedure, and empirical results for several data bases and published studies are described.1

1. Introduction

Rule based systems have found increasing success in capturing experiential knowledge. While relatively simple in structure, these systems have proved useful because they capture knowledge in forms that are familiar and easy to explain. Many rule-based expert systems solve problems that fall into the general category of classification [Clancey, 1985, Weiss and Kulikowski, 1984] problems. Diagnostic decision making is a typical example. This type of problem has many characterizations, and formal solutions have been developed under various assumptions through statistical hypothesis testing, discriminant analysis, and pattern recognition [Duda and Hart, 1973]. Rule based systems can incorporate formal decision models, if the statistical information is available to build them, together with the pragmatic knowledge of when to invoke them. But in general this is not the case, and an expert system is resorted to precisely because one needs to start with the human expert's best guess at what constitute good decision rules.

The decision rules chosen by experts have to be easy to compute and explain. They therefore tend to involve relatively small chunks of information in their antecedent conditions, and tend to use easy-to-understand logical connectives (conjunction and disjunction), rather than the more difficult to interpret mathematical combining functions (such as linear combinations of finding values in linear discriminants).

While many expert systems have been built over the past decade, there has been little progress in relating their performance to more traditional decision-making approaches. This is tied in with the often cited weaknesses of the mathematics underlying many expert systems' inference schemes for handling uncertainty, and their related difficulties in automatic learning. The AI literature has recently had numerous discussions of the various approaches for combining rule scores in a valid probabilistic manner, e.g. Dempster-Shafer [Gordon and Shortliffe, 1985] or Bayesian approaches. The optimizing approach that we describe in this section has a different emphasis.

1 This research was supported in part by the Biomedical Research Technology Program, National Institutes of Health, Grant P41 RR02230.
Rather than worry about combining different scores, we pose the simpler question of finding the left hand side of a rule (with certain simplifying structural constraints) that has the best potential for yielding correct classification. Because the rules are simple forms of standard production rules for classification problems, they can be analyzed quite exactly.

While the knowledge engineering approach to building rule-based systems has been predominant in recent years, there continues to be a strong interest in search strategies that can potentially yield optimal solutions [Kumar and Kanal, 1983, Kumar, 1984, Pearl, 1984]. In limited contexts such solutions can be of use for certain aspects of knowledge acquisition, e.g. optimal decision trees [Martelli and Montanari, 1978]. In this paper, we show how a classical medical diagnostic problem, subject to some simplifying representational choices, can be solved in an optimal or near-optimal fashion. This should prove a particularly powerful tool for laboratory-based medicine, since it can help indicate what is the best set of tests to perform. Of general interest to the AI community is that the solution to this problem is an optimal decision rule that is posed as a logical set of clauses. While an optimal solution is stated in terms of statistical constraints, the identification of a solution to the problem is described as a heuristic search procedure.

II. Statement of the Problem

In this discussion, examples from laboratory medicine will be used. However, the solution is general and should be applicable to many areas outside medicine. Let us assume that we are developing a new diagnostic test whose measurement yields a numerical result in a continuous range. For a single test, the problem is to select a cutoff point, known formally as a referent value, that will lead to satisfactory decisions. For example, a physician may conclude that all patients having a result greater than a specific cutoff have the disease, while others do not. There are well-known measures to describe the performance of a test at a specific cutoff for a sample population. These measures are sensitivity, specificity, positive predictive value, negative predictive value, and efficiency [Galen and Gambino, 1975]. Thus, results at each cutoff can be described in terms of these measures. Using a specific cutoff, there are four possible outcomes for each test case in the sample.2 This is illustrated in Figure 1.

                        Test Positive      Test Negative
Hypothesis Positive     True Positives     False Negatives
Hypothesis Negative     False Positives    True Negatives

Figure 1: Possible Outcomes of Tests for Hypotheses

Figure 2 formally describes these measures of performance.

Sensitivity             TP / H+
Specificity             TN / H-
Predictive value (+)    TP / T+
Predictive value (-)    TN / T-
Efficiency              (TP + TN) / (H+ + H-)

Figure 2: Formal Measures of Diagnostic Performance

While all of these measures have their purpose, the one that is typically most valuable for rule-based systems is the positive predictive value. Positive predictive value measures how often a decision is correct when a test result is positive. Thus one may use a positive test that has high predictive value in rules that confirm a diagnosis, and apply different tests when the result is negative.
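The five measures of Figure 2 follow directly from the four outcome counts of Figure 1. The short Python sketch below is our own illustration (the function name and the example counts are invented, though the counts are loosely patterned on the appendicitis sample discussed later):

def measures(tp, fn, fp, tn):
    h_pos, h_neg = tp + fn, fp + tn        # hypothesis positive / negative cases
    t_pos, t_neg = tp + fp, fn + tn        # test positive / negative cases
    return {
        'sensitivity':          tp / h_pos,
        'specificity':          tn / h_neg,
        'predictive value (+)': tp / t_pos,
        'predictive value (-)': tn / t_neg,
        'efficiency':           (tp + tn) / (h_pos + h_neg),
    }

# e.g. a cutoff that classifies all 85 H+ cases as positive (100% sensitivity)
# while 10 of 21 H- cases test falsely positive: PPV = 85/95, about 89%.
print(measures(tp=85, fn=0, fp=10, tn=11))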
Many rule based systems may be thought of as collections of rules with very high positive predictive values.3

We illustrate these points by describing data taken from a published study on the assessment of 8 laboratory tests to confirm the diagnosis of acute appendicitis for patients admitted to an emergency room with a tentative diagnosis of acute appendicitis [Marchand, Van Lente, and Galen, 1983]. In the example of Figure 3, the white blood cell count (WBC) is used as the test.

Figure 3: Example of the 5 Measures of Performance for WBC > 10000

In summary, for a single test with a given cutoff and the application of an arithmetic operator,4 these five measures can be determined for a population. The problem of determining an optimal cutoff can be described as maximizing one of these measures subject to specific constraints on the other measures.5 Constraints are the minimum required values for sensitivity, specificity, predictive values, and efficiency.6 Finding the optimum cutoff for WBC can be posed in the form illustrated in Figure 4.

MAXIMIZING Predictive value (+) of WBC

The constraints are given below:
    Sensitivity           >= 100.00%
    Specificity           >=   0.00%
    Predictive value (-)  >=   0.00%
    Efficiency            >=   0.00%

Figure 4: Example of Problem Constraints for a Single Test

We note that this problem can be seen as a special restriction on a statistical decision-making or pattern recognition problem. Here the cutoff threshold is not given and instead must be determined. For known population statistics, the threshold for each of these measures that corresponds to the optimal likelihood ratio choice might be determined. However, in diagnostic testing, it is rare that population statistics for large numbers of combinations of tests can be established. Our form of analysis then answers questions of optimal diagnostic performance on an empirical basis, for a particular sample of cases.

Referent value analysis, or cutoff selection, is commonly done for single tests. We have developed procedures that allow for the possibility of choosing the set of constraints and maximizing the remaining measure not only for one or two, but for a larger number of tests.7 When more than one test is specified, combinations are formed by using logical AND or OR operators. We formulate the problem as finding the optimal combination of tests that will satisfy the given constraints for the data base. An additional constraint is added to the problem, in that the length of the expression is limited by a chosen threshold.8 In Figure 5, using the appendicitis data base, the problem is to find an optimal solution in the form of a logical expression whose length is no greater than 3 tests.9

MAXIMIZING Predictive value (+)

The constraints are given below:
    Sensitivity           >= 100.00%
    Specificity           >=   0.00%
    Predictive value (-)  >=   0.00%
    Efficiency            >=   0.00%
    Number of terms       <=   3

Figure 5: Example of Problem Constraints for 3 or Fewer Tests

At this point we note that the rules are just like many found in typical classification expert systems, since, like productions, they are described as logical combinations of findings that are not mutually exclusive.10 Thus, they have the intuitive appeal in explaining decisions because of their modular nature, while being supported empirically by their performance over the data base. In contrast to traditional machine learning [Mitchell, 1982, Quinlan, 1986, Michalski, Mozetic, Hong, and Lavrac, 1986], the objective here is to find the single best conjunctive or disjunctive rule of a fixed size. Starting with undetermined cutoffs for continuous variables, these rules classify under conditions of uncertainty, where two types of classification errors, false positives and false negatives, need not be considered of equal importance.

2 For purposes of this discussion, we are eliminating the possibility of unknowns.
3 This minimizes the effect of an unknown prevalence.
4 These operators are less than or greater than.
5 Sensitivity and specificity move continuously in opposite directions. For example, a 100% sensitivity cutoff with 0% specificity can always be found by classifying every sample as having the hypothesis. Predictive values have no such relationship and vary all over the place.
6 The interrelations among these performance parameters limit the possible patterns of constraints for any given set of data.
7 If two tests have the same value for the optimized measure, then its conjugate measure is used to decide which test is better. Sensitivity and specificity are treated as conjugates to one another and so are positive and negative predictive values. When optimizing efficiency, positive predictive value is chosen to be the next decisive function.
8 This sets a limit on the number of tests that may be used in the decision rule. Some tests may also be deliberately excluded from consideration and some tests may be designated as mandatory. This allows for further pruning of the search space.
9 As noted in Section V, the optimal solution is a disjunction of 2 tests.
10 An OR condition may encompass several conditions that are not mutually exclusive. The classification may have less than 100% diagnostic accuracy.

III. Complexity

In Section II, we described the problem as finding the best logical expression of a fixed length or less that covers a sample population. In this section, we consider the complexity of exhaustively generating and testing all possibilities. Except for relatively small populations or numbers of tests,11 the exhaustive approach is not computationally feasible. Equation 1 is the number of expressions having only ANDs; Equation 2 is for expressions having either ANDs or ORs. In these equations, n is the number of tests, k is the maximum number of tests in the expression, c is the number of constants (cutoff values) to be examined for each test, and $c^i$ is c raised to the ith power. While the number of distinct values that must be examined for each test may vary, we have used a fixed number, c, to simplify the notation and analysis. In Equation 2, expressions are generated in disjunctive normal form,12 and $B_i$ is the ith Bell number.13

$\sum_{i=1}^{k} \binom{n}{i} c^{i}$     (1)

$\sum_{i=1}^{k} \binom{n}{i} c^{i} B_{i}$     (2)

The most computationally expensive (exponential) component of Equation 2 is $c^i$. It is possible to devise exhaustive procedures that do not require the examination of every value of a test found in the data base. For each test, one may examine only those points that overlap in the H+ and H- populations. Moreover, only the smaller set of the two sets of points in the overlapping zone need be candidates for cutoffs.14 Even taking this into account, relatively small values of c will make the computation prohibitive. Because one may allow for the repetition of a test in an expression, the number of generated expressions may be substantially greater than Equation 2.15 For the appendicitis data base having a sample of 106 cases, we computed an average of 65 expressions/second on a VAX/785.16

IV. Optimizing Predictive Values

Because of the computational complexity of an exhaustive search, we have developed a heuristic search procedure for finding the optimal combination. In this section, we describe the procedure. While this procedure is not guaranteed to find an optimal solution, the expression found should almost always be near-optimal. In Section V, empirical evidence is provided to demonstrate that in numerous situations the optimal solution is found. In almost every real experimental situation,17 the logical expression found by the computer should be better than what a human experimenter could compose.

Before specifying the heuristic procedure, a few general comments can be made. In an exhaustive search approach, it is possible to specify a procedure that needs no additional memory. Logical expressions are generated and they are compared with the current best. The heuristic procedure is based on an alternative strategy. A relatively small table of the most promising expressions is kept. Combinations of expressions are used to generate longer expressions. The most promising longer expressions in turn are stored in the table and are used to generate even longer expressions. Thus memory is needed to store the most promising or useful expressions.

11 These are tests with relatively few potential cutoffs.
12 This normal form corresponds to that used by the heuristic procedure described in Section IV.
13 The Bell number is the number of ways a set of i elements can be split into a set of disjoint subsets. For i = 0, 1, 2, 3, $B_i$ = 1, 1, 2, 5 respectively [Andrews, 1976]. The Bell number is defined recursively as $B_{i+1} = \sum_{k=0}^{i} \binom{i}{k} B_k$.
14 Each test would have a distinct number of cutoffs that must be examined, ct. In the equations, instead of $c^i$, the products of ct for each generated expression must be summed.
15 For example, a>50 OR (a>30 AND b<20).
16 This is the average for length less than 4. Another data base mentioned in Section V has approximately 3000 cases, which increases the computations correspondingly.
17 These are situations where the experimenter does not know a priori the best rule and is analyzing new data.
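The core optimization can be seen in miniature for a single test. The following Python sketch is our own illustration (the function name and the data are invented): it scans candidate ">" cutoffs and returns the one that maximizes positive predictive value subject to a minimum-sensitivity constraint, exactly the form of Figure 4.

def best_cutoff(values, labels, min_sensitivity=1.0):
    """values: test results; labels: True for H+ cases. Uses the '>' operator."""
    pos = sum(labels)
    best = None
    for cut in sorted(set(values)):
        tp = sum(v > cut and l for v, l in zip(values, labels))
        fp = sum(v > cut and not l for v, l in zip(values, labels))
        sens = tp / pos
        if sens >= min_sensitivity and tp + fp > 0:
            ppv = tp / (tp + fp)
            if best is None or ppv > best[1]:
                best = (cut, ppv, sens)
    return best

vals   = [5, 7, 8, 9, 10, 11, 12, 14]
labels = [False, False, False, True, False, True, True, True]
print(best_cutoff(vals, labels))   # (8, 0.8, 1.0): "test > 8" has PPV 0.8 at 100% sensitivity

Scanning every distinct value is what Equations 1 and 2 assume; the heuristic procedure below instead prunes the candidate cutoffs first.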
I5 For the appendicitis data base having a sample of 106 cases, we cqmputed an average of 65 expressions/second on a VAX/785.16 Optimizing Predictive Values Because of the computational complexity of an exhaustive search, we have developed a heuristic search procedure for finding the optimal combination. In this section, we describe the procedure. While this procedure is not guaranteed to find an optimal solution, the expression found should almost always be near-optimal. In Section V, empirical evidence is provided to demonstrate that in numerous situations the optimal solution is found. In almost every real experimental situation,17 the logical expression found by the computer should be better than what a human experimenter could compose. Before specifying the heuristic procedure, a few general comments can be made. In an exhaustive search approach, it is possible to specify a procedure that needs no additional memory. Logical expressions are generated and they are compared with the current best. The heuristic procedure is based on an alternative strategy. A relatively small table of the most promising expressions is kept. Combinations of expressions are used to generate longer expressions. The most promising longer expressions in turn are stored in the table 13The Bell number is the number of ways a set of i elements can be split into a set of disjoint subsets. For i=O123 3 9 , , B,=1,1,2,5 respectively [Andrews. 19761. The Bell number is defined recursively as Bi+l=i (l) Bk 14Each test would have a a distinct number of cutoffs that must be examined, ct. In the equations, instead of ci, the products of ct for each generated expression must be summed. IsFor example, ~50 OR (a >30 AND b ~20). 19his is the averag e for length less than 4. Another data base mentioned in Section V has approximately 3000 cases, which increases the computations correspondingly. data and t2This normal form corresponds described in Section IV. to that used by the heuristic procedure t7These are situations where the experimenter does not know a priori the best rule. is analyzing new Weiss, Galen, and BadepaN 523 and are used to generate even longer expressions. Thus memory is needed to store the most promising or useful expressions. In Equation 2, the exponential component is the cl. Thus, if one can reduce the number of points in c, i.e. the number of cutoffs for a test, the possible combinations are greatly reduced. Figure 6 illustrates the key steps of the heuristic procedure. In Section IV.A, the approach taken to greatly reduce the number of cutoffs is discussed. I SELECT I 1 KEY CUTOFFS 1 1 FOR EACH TEST 1 CANDIDATES I OF LENGTH N OR LESS I Figure 6: Overview of Heuristic Procedure for Best Test Combination A. Selection of Cutoffs For each test in the data base, the mean is found for the cases satisfying the hypothesis (H+) and the cases not satisfying the hypothesis (H-). If the H+ has the greater mean, the ‘5” ooerator is used. If H+ has the smaller mean, the “c” oberator is used. l8 The next task is to select the test cutoffs. For a test, cutoffs that fall at interesting boundaries are selected. Interesting boundaries are those &here the predictive values (positive o? negative) are locally maximum. For example, if WBc>lOOOO has a positive predictive value of 97% and WBC>9900 and WBC>lOlOO each has a positive predictive value less than 97%, then 10000 is an interesting boundary for WBC. The procedure first determines the interesting boundaries on a coarse scale. 
B. Expression Generation

Logical expressions over all test variables, in all combinations, are generated in disjunctive normal form (for example, a AND (b OR c) must be written as (a AND b) OR (a AND c)). This avoids duplicating equivalent expressions, since AND and OR are symmetric. The expressions are stored in an expression table, and longer expressions are generated by combining shorter ones. As each new expression is generated, its test variables are instantiated with all combinations of cutoff values; the cutoffs were selected prior to expression generation. Figure 7 is a simple illustration of this process for 3 tests {a, b, c} and expressions of length 2 or less (the single tests and their pairwise AND and OR combinations; figure not reproduced). If b has interesting cutoffs at b>10 and b>20, and c has interesting cutoffs at c<30, c<40, and c<50, then the expression b AND c leads to the possibilities of Figure 8:

    b>10 AND c<30    b>10 AND c<40    b>10 AND c<50
    b>20 AND c<30    b>20 AND c<40    b>20 AND c<50

Figure 8: Example of an instantiated expression.

Because new, longer expressions are generated from shorter expressions stored in the table, expressions that have been pruned will never reappear in any longer expression. During instantiation, heuristics can be applied to prune the possibilities; these are discussed in Section IV.C below.
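Before turning to pruning, a toy rendering of generation and instantiation may help (the real procedure grows a table of promising subexpressions and prunes as it goes; this version simply enumerates pure conjunctions and pure disjunctions of distinct tests):

```python
from itertools import combinations, product

def instantiate(tests, cutoffs, max_len=2):
    """Enumerate candidate expressions of length <= max_len.  'cutoffs'
    maps a test name to its interesting (op, value) pairs produced by
    the cutoff-selection step."""
    for size in range(1, max_len + 1):
        for combo in combinations(tests, size):
            # every way of instantiating each chosen test with one cutoff
            for inst in product(*(cutoffs[t] for t in combo)):
                literals = [f"{t}{op}{v}" for t, (op, v) in zip(combo, inst)]
                if size == 1:
                    yield literals[0]
                else:
                    yield " AND ".join(literals)
                    yield " OR ".join(literals)

cutoffs = {"b": [(">", 10), (">", 20)], "c": [("<", 30), ("<", 40), ("<", 50)]}
for expr in instantiate(["b", "c"], cutoffs):
    print(expr)   # includes the six 'b AND c' instantiations of Figure 8
```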
C. Pruning

Although the heuristic cutoff analysis limits the search to the most interesting cutoffs, the search space may still remain relatively large, so the procedure employs several heuristics and some provably correct pruning rules. The first three rules are always correct; the others are heuristics that attempt to consider the most promising candidates for combination into new, longer rules.

1. If the sensitivity and specificity of an expression are both less than the constraints, that expression does not contribute to any useful rule.
2. If an expression has less specificity than required, then any expression formed by ORing it with another will also have less specificity than required.
3. If an expression cannot be extended to one that contains all the mandatory tests while satisfying the length constraint, it is immediately pruned.
4. If an expression has better positive and negative predictive values than another expression that differs from it only in its constants, the expression with the lower predictive values is ignored.
5. If there are rules shorter and better than a new candidate rule, compute the sum of their lengths; if this sum, including the length of the current rule, exceeds the maximum length possible for any rule, ignore the new rule. (In the current implementation the maximum rule length is fixed at 6. As expression length increases, the number of potential combinations grows greatly; the objective of this heuristic is to emphasize the most promising shorter rules, which will be combined into lengthier rules.)

After all interesting expressions have been generated, the best expression in the expression table is offered as the answer. (During expression generation, whenever a superior expression is found it is displayed; if no expression meeting the constraints is found, this is indicated when the search terminates. Depending on the table space allocated for intermediate expressions, the program may terminate on table overflow, though this is unlikely with relatively small expressions.) Because all promising expressions are stored, a program implementing this procedure can readily determine its next-best expression: if the constraints are made stricter, the expression table remains valid and the new best expression is immediately available.

V. Experimental Results

The heuristic procedure has been implemented in a computer program. Because of the underlying empirical nature of the problem, by examining hundreds of possibilities the program should be able to find better logical expressions than human experts when the samples are representative, particularly when the human experimenter is examining new tests or performing an original experiment. Because of the heuristic nature of the search, one naturally wonders about the optimality of the solutions, so several experiments were performed to test the program's ability to find optimal or near-optimal solutions.

Several years after the appendicitis data used in our examples were reported in the medical literature, we reanalyzed the data. The sample consisted of 106 patients and 8 diagnostic tests; because only 21 patients were normal, it was possible to construct an exhaustive procedure (c = 21 in Equation 2). In the original study, the experimenters were interested in maximizing positive predictive value subject to the constraint of 100% sensitivity, and they cited a logical expression consisting of the disjunction of 3 diagnostic tests (WBC>10500 OR MBAP>11% OR CRP>1.2) with a positive predictive value of 89%. Using the heuristic procedure, the following results can be reported:

- A superior logical expression composed of only 2 tests can be cited: WBC>8700 OR CRP>1.8, with a positive predictive value of 91%. The analysis takes 3 minutes of CPU time on a VAX 785.
- Using exhaustive search, which took 10 hours of CPU time on a VAX 785, the optimal expression of length 3 or less is identical to the one found by the heuristic procedure.

Additional experiments were performed using a large data base of approximately 3000 cases belonging to a knowledge-based system under development for a laboratory medicine application involving routine health screening. (Unlike the appendicitis population, the overwhelming majority of samples in this population are normal patients.) The data base consists almost exclusively of diagnostic laboratory tests. In several instances the knowledge base contains relatively short rules, of length 3 or less, that reach specific conclusions. When a rule is the sole rule for a conclusion, it has 100% sensitivity and 100% positive predictive value, and because it is an expert's rule we know it has strong scientific and experiential support. For experimental purposes we limited our task to finding the expert's rule by analyzing the case data base, a situation in which the optimal solution is known before the empirical analysis. For the five such rules we selected, we were able to match the expert's rule in every case. (In some instances a shorter rule was found that was a subset of the expert's rule, due to a relatively small number of cases in the H+ population.) The results of these experiments are encouraging.
While the optimal solution to the problem is clearly the goal, near-optimal solutions found in reasonable time are also extremely valuable. In many practical situations humans cannot solve this problem: while combination tests are often cited in the diagnostic medical literature, in almost all instances the logical expression is found on the basis of previous experience, intuition, and trial-and-error analysis. We believe the approach described in this paper offers an opportunity to analyze data and present results in an optimal or near-optimal fashion.

The examples cited here were drawn from realistic and important diagnostic medical applications. In future years we can expect laboratory medicine and diagnostic tests to assume an ever more important role in diagnostic decision-making. While this form of diagnostic performance analysis, i.e. the five measures of performance, is the standard in the medical diagnostic literature, there is nothing in it that is specific to medical data; because medical tests have a clear physiological basis, the expectation of continued performance on new populations is great.

We have presented our work as the optimal fitting of a logical expression to existing data, so we have not addressed the question of experimental design or validation of results for a specific application. Unless one derives very highly predictive rules, this form of data analysis is subject to inaccuracies arising from unrepresentative samples or prevalences (a point that is also valid for knowledge-based reasoning with uncertainty). As is done in pattern recognition applications, estimates of future performance can be made by train-and-test experiments or jackknifing [Efron, 1982].

In terms of knowledge base acquisition, this approach can prove valuable in acquiring new knowledge, refining existing knowledge [Wilkins and Buchanan, 1986; Ginsberg, Weiss, and Politakis, 1985], and verifying the correctness of old knowledge. Because a knowledge base of rules summarizes much more experiential knowledge than is usually covered by a data base of cases, in many instances this approach is best thought of as supplementary to the knowledge-engineering approach to knowledge acquisition in rule-based systems.

Acknowledgements

We thank Casimir Kulikowski for his critique of an early draft of this paper, and for his clarifications of the relationship of our work to statistical pattern recognition.
We acknowledge the programming support of Kevin Kern, who programmed and tested many of the procedures described in this paper.

References

Andrews, G. (1976). Encyclopedia of Mathematics and its Applications II: The Theory of Partitions. Reading, Mass.: Addison-Wesley.
Clancey, W. (1985). Heuristic Classification. Artificial Intelligence, 27, 289-350.
Duda, R., and Hart, P. (1973). Pattern Classification and Scene Analysis. New York: Wiley.
Efron, B. (1982). The Jackknife, the Bootstrap and Other Resampling Plans. Philadelphia, Pa.: SIAM.
Galen, R., and Gambino, S. (1975). Beyond Normality. New York: John Wiley and Sons.
Ginsberg, A., Weiss, S., and Politakis, P. (1985). SEEK2: A Generalized Approach to Automatic Knowledge Base Refinement. Proceedings IJCAI-85 (pp. 367-374). Los Angeles, Ca.
Gordon, J., and Shortliffe, E. (1985). A Method for Managing Evidential Reasoning in a Hierarchical Hypothesis Space. Artificial Intelligence, 26, 323-327.
Kumar, V. (1984). A General Bottom-up Procedure for Searching And/Or Graphs. Proceedings AAAI-84 (pp. 182-187). Austin, Texas.
Kumar, V., and Kanal, L. (1983). The Composite Decision Process: A Unifying Formulation for Heuristic Search, Dynamic Programming and Branch & Bound Procedures. Proceedings AAAI-83 (pp. 220-224). Washington, D.C.
Marchand, A., Van Lente, F., and Galen, R. (1983). The Assessment of Laboratory Tests in the Diagnosis of Acute Appendicitis. American Journal of Clinical Pathology, 80(3).
Martelli, A., and Montanari, U. (1978). Optimizing Decision Trees Through Heuristically Guided Search. Communications of the ACM, 21, 1025-1039.
Michalski, R., Mozetic, I., Hong, J., and Lavrac, N. (1986). The Multi-purpose Incremental Learning System AQ15 and its Testing Application to Three Medical Domains. Proceedings AAAI-86 (pp. 1041-1045). Philadelphia, Pa.
Mitchell, T. (1982). Generalization as Search. Artificial Intelligence, 18, 203-226.
Pearl, J. (1984). Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley.
Quinlan, J. (1986). The Effect of Noise on Concept Learning. In Michalski, R., Carbonell, J., and Mitchell, T. (Eds.), Machine Learning. Morgan Kaufmann.
Weiss, S., and Kulikowski, C. (1984). A Practical Guide to Designing Expert Systems. Totowa, New Jersey: Rowman and Allanheld.
Wilkins, D., and Buchanan, B. (1986). On Debugging Rule Sets When Reasoning Under Uncertainty. Proceedings AAAI-86 (pp. 448-454). Philadelphia, Pa.
Learning to Control a Dynamic Physical System

Margaret E. Connell and Paul E. Utgoff
Department of Computer and Information Science, University of Massachusetts, Amherst, MA 01003

Abstract

This paper presents an approach to learning to control a dynamic physical system. The approach has been implemented in a program named CART and applied to a simple physical system studied previously by several researchers. Experiments illustrate that a control method is learned in about 16 trials, an improvement over previous learning programs.

I. Introduction

One kind of human intelligence manifests itself in the ability to learn to control a physical system. Such systems include the person's own body, vehicles, machines, plants, and processes. This kind of problem is commonly called a control problem. This paper addresses the problem of building a computer program that learns to control a physical system to achieve a stated performance task. From a practical point of view, learning algorithms may be useful in the automatic construction of controllers [Fu, 1971]. From a research perspective, a control problem presents a unique challenge for learning methods. First, the dynamics of a physical system impose the constraint that successor states cannot be chosen arbitrarily, which means that anticipation and prediction of future states become critical. Second, training information is often delayed, making credit assignment for individual actions difficult.

The approach taken here is to investigate a control problem that has been studied previously by connectionists and control theorists. In general, one would like either to remove assumptions or to exchange them for simpler or cheaper ones. The primary goal of the work reported here is to remove a certain collection of starting assumptions adopted in previous work; this requires a different knowledge representation and a new action selection mechanism.

II. The Cart-Pole

As illustrated in Figure 1, the cart-pole balancing problem is: given a cart that travels left or right along a straight bounded track, with a pole that is hinged to the top of the cart and can swing left or right, keep the pole balanced. To keep the pole balanced means both that the pole does not fall beyond 12 degrees from straight up and that the cart does not exceed an end-of-track boundary. Only two control actions are available: push the cart left, or push the cart right, with a constant force. The learning problem is: given the cart-pole system, the ability to experiment with it, access to the state variables, and notification when the pole has fallen or the cart has reached an end of the track, determine a control method for balancing the pole indefinitely.

The cart-pole system is simulated to enable the CART program to construct experiments (Figure 1: the cart-pole system; diagram not reproduced). Four state variables represent the state of the dynamic system at any time step:

    x, the position of the cart on the track;
    x-dot, the velocity of the cart;
    theta, the angular position of the pole;
    theta-dot, the angular velocity of the pole.

For the simulator, the system is modelled by two second-order differential equations that accurately approximate the real physical system. These equations of motion and parameter values are given in [Barto et al, 1983].
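Since the paper treats the simulator as given, a minimal sketch may help make the setup concrete. It uses the standard frictionless form of the cart-pole equations of motion from [Barto et al, 1983] (the paper's simulator also models cart and pole friction, omitted here) and the parameter values listed in the next paragraph; the function and variable names are illustrative, not the paper's.

```python
import math

GRAV, M_CART, M_POLE = 9.8, 1.0, 0.1     # parameters from the next paragraph
L_HALF, FORCE, DT = 0.5, 10.0, 0.02      # half pole length, |push|, Euler step

def step(state, push_right):
    """One Euler step of the frictionless cart-pole equations of motion."""
    x, x_dot, th, th_dot = state
    f = FORCE if push_right else -FORCE
    total = M_CART + M_POLE
    tmp = (f + M_POLE * L_HALF * th_dot**2 * math.sin(th)) / total
    th_acc = (GRAV * math.sin(th) - math.cos(th) * tmp) / (
        L_HALF * (4.0 / 3.0 - M_POLE * math.cos(th)**2 / total))
    x_acc = tmp - M_POLE * L_HALF * th_acc * math.cos(th) / total
    return (x + DT * x_dot, x_dot + DT * x_acc,
            th + DT * th_dot, th_dot + DT * th_acc)

def failed(state):
    """The failure conditions given below: |x| > 2.4 m or |theta| > 12 deg."""
    x, _, th, _ = state
    return abs(x) > 2.4 or abs(th) > math.radians(12.0)
```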
The values of the parameters, also given in [Selfridge et al, 1985], are: cart mass 1.0 kg; pole mass 0.1 kg; pole length 1 meter; applied force +/-10 newtons, left or right. Two additional parameters are the coefficients of friction for the cart and for the pole. The equations are solved numerically by Euler's method with a time step of 0.02 sec. Failure occurs when |x| > 2.4 meters or when |theta| > 12 degrees.

CART treats the simulator as a black box: it does not use any knowledge embedded in the simulator, and it does not assume any interpretation of the cart-pole system's four state variables.

The principal challenge of the problem as a learning task is that the training information is very weak. Although the learning system stores the history of the states encountered during an experiment at balancing, it is only told when the pole has actually fallen (i.e., swung beyond 12 degrees from straight up) or when the cart has reached an end of the track. Due to the dynamics of the cart and pole, and the limits imposed by the performance task, the cart-pole system can be in a state from which no sequence of control forces will keep the pole from falling or keep the cart from reaching an end of the track; such a state is called "doomed". The learning problem does not assume the existence of a critic that immediately identifies good versus bad actions. Furthermore, even if one were able to characterize a state as doomed or not, it is not obvious how to avoid doomed states using the available control actions.

III. Related Work

The problem was investigated in 1964 by Widrow and Smith [Widrow & Smith, 1964]. It has been studied by Michie and Chambers [Michie & Chambers, 1968], and also by Anderson, Barto, Selfridge, and Sutton [Anderson, 1986; Barto et al, 1983; Selfridge et al, 1985]. In all of these cases, the learning problem has been to construct a program that learns to keep the pole balanced.

Michie and Chambers [Michie & Chambers, 1968] built a program named BOXES that learned to balance the pole. They mapped each of the four state variables of the cart-pole into a discrete value, depending on predefined ranges for each variable. The 5 ranges for the cart, 3 for the cart velocity, 5 for the pole, and 3 for the angular velocity produced a total of 225 distinct regions. For each action and region, the average time until failure was updated from the experience of each trial, and for a given region BOXES chooses the action with the higher average time until failure. The program required about 600 trials to learn to balance the pole for 72,000 time steps (each 0.05 sec). Michie and Chambers point out that the choice of ranges for the cart variables (the size of the boxes) is critical to the success of the BOXES program: a poor choice of ranges makes the system unable to learn to balance the pole. Hence, choosing ranges that permit learning to balance the pole is a necessary step for this approach; choosing these ranges requires experimentation or analysis and should therefore be considered part of the learning problem. Dividing the state space into regions is exactly the set of starting assumptions that were eliminated in the CART program.

Barto, Sutton, and Anderson [Barto et al, 1983] improved on the results of Michie and Chambers by designing two neuronlike adaptive elements that were used to solve the same balancing task.
They also employed a division of the state space into predefined distinct regions. The action with the higher probability of keeping the pole balanced was the one chosen in each region. The system was able to balance the pole over 60,000 time steps before completing 100 trials; on average, by the 75th trial the pole remained balanced over 8000 time steps (each 0.02 sec).

More recently, Anderson [Anderson, 1986] devised a connectionist system to learn to balance the pole. His system trains two predefined two-layer networks: one learns an evaluation function, and the other learns an action function over the state space. Learning occurs by successively adjusting the weights of both the evaluation and action networks. His system has the advantage that it is not necessary to provide well-chosen boxes ahead of time, but this is achieved at considerable cost in performance: his system takes an average of 10,000 trials to learn to balance the pole for approximately 7000 steps.

IV. The CART Program

This section presents the CART program, which learns to balance the cart-pole system indefinitely after about 16 trials. The program is explained in terms of a classic learning model [Smith et al, 1977], which consists of four components considered necessary for a learning system: a Problem Generator, a Performance Element, a Critic, and a Learning Element. The Problem Generator initializes a new experiment, called a trial. The Performance Element applies a left or right control force at each time step, attempting to balance the pole indefinitely. The Critic labels some of the states from the trial as desirable (value 1) or undesirable (value -1). Based on input from the Critic, the Learning Element updates its concept of the degree of desirability of a state. Because the Performance Element decides which control action to apply based on its estimate of whether an action will lead to a more desirable state, learning to estimate the degree of desirability of a state improves performance.

The CART program learns and employs the concept of the degree of desirability of a cart-pole state. A concept that represents degree of desirability is fundamentally different from one that represents only desirable or not desirable. The degree of desirability of a cart-pole state is represented by an explicit function of the 4 state variables of the cart-pole system. The function is modified by the Learning Element, as described below; for each trial at balancing, the function remains fixed. The degree of desirability of a cart-pole state is computed using Shepard's function [Barnhill, 1977; Schumaker, 1976], which interpolates from known desirable and undesirable states supplied previously by the Critic. Shepard's interpolation method was chosen because all interpolated values fall smoothly between the maximum and minimum observed values, making it well suited to the cart-pole problem. (A. Barto has pointed out that Shepard's method is a special case of the method of potential functions [Duda & Hart, 1973].)

Given n points z_i = (x_{1,i}, x_{2,i}, ..., x_{m,i}) in m-dimensional space, with known values f(z_i) = F_i for i = 1 ... n, Shepard's interpolating function is

    f(z) = sum_{i=1}^{n} w_i F_i / sum_{i=1}^{n} w_i,  with  w_i = prod_{j=1, j != i}^{n} d_j^p,

where d_j = sqrt((x_1 - x_{1,j})^2 + (x_2 - x_{2,j})^2 + ... + (x_m - x_{m,j})^2) is the distance from z to the known point z_j, and p > 0. Here p = 2 was used at all times, based on the recommendation of Schumaker [Schumaker, 1976].
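A direct transcription of this interpolation scheme, in the product form reconstructed above, looks as follows. (The paper's implementation updates a symbolic formula incrementally, as described later; this naive version recomputes everything and is merely illustrative.)

```python
import math

def shepard_value(z, points, labels, p=2):
    """Shepard interpolation in product form:
    f(z) = sum_i (prod_{j != i} d_j^p) * F_i / sum_i prod_{j != i} d_j^p.
    'points' are the Critic-labelled states; 'labels' their +1/-1 values."""
    dists = [math.dist(z, q) for q in points]
    if any(d == 0.0 for d in dists):          # z is an exact training point
        return labels[dists.index(0.0)]
    weights = []
    for i in range(len(points)):
        w = 1.0
        for j, d in enumerate(dists):
            if j != i:
                w *= d ** p                   # near points get large weight
        weights.append(w)
    return sum(w * f for w, f in zip(weights, labels)) / sum(weights)

# usage: desirability of a state given two labelled examples
pts = [(0.0, 0.0, 0.0, 0.0), (1.5, 1.0, 0.2, 1.0)]
print(shepard_value((0.2, 0.1, 0.02, 0.0), pts, [1, -1]))
```

Evaluating the function costs a distance to every stored point, which is why, as the paper notes later, the Critic must keep the training set small.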
Note that the known desirable and undesirable states are retained, that their values are preserved by the interpolating function, and that the degree of desirability of any state is determined solely by these known examples (states). It can be seen from the function that designated desirable and undesirable states near a given cart-pole state have greater influence on the function's value than those at a distance, because the weights w_i associated with the near states are greater.

A. Problem Generator

The task of the Problem Generator is to initialize the cart-pole system so that an experimental trial at balancing can be performed. The system is initialized with the cart near the center of the track and the pole nearly upright; these values are selected to vary a small random amount from exactly centered and exactly vertical. The initial cart velocity and pole angular velocity are set to 0. This initialization procedure places the cart-pole system in a state from which indefinite balancing is possible, a fact used by the Critic.

B. Performance Element

The task of the Performance Element is to choose a control action (push left or push right) at each time step so that the pole balances. The decision procedure selects an action expected to lead to a more desirable successor state. The dynamics of the system and the limited choice of control action impose the fundamental constraint that it is not possible to move to an arbitrary successor state. The action selection problem is further compounded because the Performance Element knows neither the dynamics of the system nor the effect of a control action. At every step the Performance Element decides whether to repeat the same action or to change to the other: if, by continuing with an action, it is estimated that the cart-pole system will move to a more desired state, the same action is repeated; otherwise, the other action is selected.

To facilitate the decision, two vectors are computed at each point: the gradient and the extended vector. The direction of the extended vector, defined by continuing from the state in the same direction the system is already travelling, is a useful estimate of the direction in which another application of the same action would take the system. The gradient of the interpolating function evaluating desirability is a 4-dimensional vector that points in the direction of maximum increase of the function at a point (state). Ideally, the action selection mechanism would choose the action causing the system to move to the state that most nearly lies in the direction of the gradient. Without the ability to predict successor states, it is necessary to use a decision strategy that is less than ideal: at each successive state, the angle between the gradient at the point and the extended vector is computed by taking the inner product of the two vectors. If the angle decreases (because direction and gradient are better aligned), the decision is to repeat the same action.

The algorithm is illustrated in Figure 2 (continuing the same action at state s3; diagram not reproduced), where the two control actions are labelled L and R. Since a3 < a2 (the angle between the extended vector at s3 and grad3 is less than the angle between the extended vector at s2 and grad2), the system is estimated to be moving in a desirable direction, and the choice of action at s3 will again be L. If a3 > a2, the continuation of the last action is estimated to lead away from desirability; as a result, the choice of action at s3 would change to R.
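A sketch of this decision rule follows. The paper differentiates the interpolating function symbolically; the finite-difference gradient here is a substitute assumption, and all names are illustrative.

```python
import math

def numeric_grad(f, z, eps=1e-4):
    """Finite-difference gradient of the desirability surface f at state z."""
    g = []
    for i in range(len(z)):
        zp, zm = list(z), list(z)
        zp[i] += eps
        zm[i] -= eps
        g.append((f(tuple(zp)) - f(tuple(zm))) / (2 * eps))
    return g

def angle(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv + 1e-12))))

def choose_action(prev_state, state, prev_angle, last_action, f):
    """Repeat the last action while the extended vector and the gradient
    become better aligned; otherwise switch to the other action."""
    extended = [s - p for s, p in zip(state, prev_state)]  # direction of travel
    a = angle(extended, numeric_grad(f, state))
    action = last_action if a < prev_angle else not last_action
    return action, a
```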
C. Critic

The Critic must supply information that makes it possible for the Learning Element to improve its ability to estimate the degree of desirability of all cart-pole states. This is done by labeling certain states in the trial as desirable (value 1) or undesirable (value -1). Choosing a particular cart-pole state and determining its label is done in three ways.

First, as described above, the cart-pole system is initialized to a state from which indefinite balancing is possible, so the first state in each trial could be labelled desirable. As a shortcut, the CART program initializes the learning process by labelling as a desirable point the state with the pole straight up, the cart centered, and 0 velocities; this is the prototypical start state.

Second, when the pole falls, an undesirable state has been reached. The Critic labels the state immediately preceding the failure as undesirable, unless the degree of desirability of that state is already less than -0.98.

Third, when the cart-pole system has balanced longer than 100 time steps, it is inferred that some of the states were desirable; this is based on the fact that a random sequence of control actions keeps the pole balanced for about 20 time steps. When a trial has ended and has lasted longer than 100 steps, the Critic searches the sequence of states for one to label desirable. The algorithm is: back up 50 steps from the failure point, then keep backing up until a point is found at which 3 or more of the cart-pole variables are decreasing in magnitude; label that point desirable. This algorithm is based on the assumption that a state occurring 50 time steps before failure is in a good position if the system is moving from it toward the prototypical start state (the point from which indefinite balancing is possible).

These numeric parameters (100, 50, 3) were determined empirically through experimentation with the system. An improvement would be for the learning system to do this experimentation and determine these parameters itself; experience suggests that they may be a function of system performance prior to learning (e.g., the first parameter could be 5 times the length of a random trial).

D. Learning Element

The task of the Learning Element is to improve its accuracy in estimating the desirability of a cart-pole state. The Critic provides specific training instances to the Learning Element: each is a 4-dimensional point in cart-pole state space that has been labelled desirable (1) or undesirable (-1). The Learning Element needs to generalize, so that it can estimate the desirability of points it has not seen as training instances. A function defined by Shepard's method requires a set of points from which to interpolate, so learning is quite simple: add a new point (training instance), along with its label (+1 or -1), to the list of all observed training instances.
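Pulling the Critic's last two labeling rules together, one possible rendering follows (the thresholds 100, 50, 3, and -0.98 are the paper's; everything else, including the history encoding, is an assumption):

```python
def decreasing_magnitude(prev, cur, k=3):
    """True when at least k of the 4 state variables shrank in magnitude."""
    return sum(abs(c) < abs(p) for p, c in zip(prev, cur)) >= k

def critic_labels(history, desirability, long_trial=100, backup=50):
    """Label states from one trial (a state sequence ending in failure);
    'desirability' is the current Shepard estimate."""
    labels = []
    # rule 2: the state just before failure is undesirable,
    # unless it is already rated below -0.98
    if desirability(history[-2]) >= -0.98:
        labels.append((history[-2], -1))
    # rule 3: on long trials, back up 50 steps, then keep backing up
    # until 3 or more variables are decreasing in magnitude
    if len(history) > long_trial:
        i = len(history) - 1 - backup
        while i > 0 and not decreasing_magnitude(history[i - 1], history[i]):
            i -= 1
        if i > 0:
            labels.append((history[i], +1))
    return labels
```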
Because the speed of Shepard's formula is a function of the number of training instances (evaluating the formula at any point requires computing the distance to every training instance), it is important that the Critic deliver a small number of well-chosen training instances to the Learning Element. For the cart-pole problem, not many observed states are necessary for developing a good interpolation function, and the Critic chooses the points well. A version of Shepard's method was implemented that updates the symbolic formula incrementally, which is more efficient than rederiving the formula with each new training instance. With each new point z_k = (x_{1,k}, x_{2,k}, ..., x_{m,k}), only one new explicit distance formula is derived, d_k = sqrt((x_1 - x_{1,k})^2 + (x_2 - x_{2,k})^2 + ... + (x_m - x_{m,k})^2); one more term is added to the product of the existing distances, and one new product of distances is evaluated.

It is helpful to view the desirability of a cart-pole state as the height of a surface in 5-dimensional space: the first four dimensions of a point on the surface designate the cart-pole state, and the fifth its degree of desirability. The surface is changed after each trial as new points are supplied to the Learning Element, which means that subsequent performance is also likely to change. As a result, different regions of the cart-pole state space are explored in successive trials, and new trials force the system to learn nonrepetitive and useful information. On the first trial the pole typically falls while the cart remains near the track's center, and the resulting surface slopes down from the center toward the undesirable point. In the next trial the choice of actions forces the cart-pole system to move away from failure, and the pole falls in the opposite direction. After a few trials the cart moves out from the center of the track, where the values on the surface are greater. When the cart reaches the track boundary in one direction, the surface slopes down toward that boundary, forcing the cart to move in the opposite direction during the subsequent trial. As the learned surface improves, balancing time increases.
Soon the number of steps exceeds 100 and a desirable point other than the center is determined. After 10 to 15 trials, the cart-pole system develops a pattern of behavior that repeats itself and is indicative of indefinite balancing.

V. Results

The CART program was run 14 times. In every case, it learned to balance the pole. Furthermore, the system behavior fell into a distinct pattern every time, suggesting that additional runs would not turn up anything new. As a sampling of the runs in Figure 3 shows, the CART system learned the control task in 16 or fewer trials, sometimes in as few as 9. Of the runs tried, 10 were halted at 5000 time steps and 3 were halted at 10000 time steps; the final run was halted at 70000 time steps, the equivalent of 25 minutes of balancing.

Figure 3: Number of steps per trial (a table of steps for trials 1 through 16 across 6 sample runs; early trials often fail within tens of steps, while later trials reach the 5000- or 10000-step halting limits; table not fully recoverable).

Figure 4 illustrates the cyclic pattern of behavior that developed. As the pole falls in one direction, the cart is pushed in the same direction to arrest the falling pole. This continues until the pole has sufficient velocity that it will necessarily start to move in the opposite direction; when this happens, the cart is pushed in the opposite direction to stop the falling pole again. This balancing activity also keeps the cart from creeping toward an end of the track. The pattern depicted in the figure shows activity along the right diagonal, which corresponds to the behavior described above. Balancing occurs when a push decreases the velocity of both the cart and the pole, which happens only when the cart and the pole are moving in opposite directions. An illustration using run 3 is shown in Figure 4 (a typical run plotted in two dimensions: x-axis, position of the cart in meters; theta-axis, position of the pole in radians; -n marks an undesirable state on the nth trial, +n a desirable state on the nth trial, and +0 the given central desirable point; arrows show the repeated pattern; points plot states at 50-step intervals).

Additional experiments showed that the system learns under different conditions; in every case described below, balancing occurred in fewer than 18 trials. The variations made in these experiments were the same as those made by Selfridge [Selfridge et al, 1985]: increasing the original mass of the pole by a factor of 10; reducing the mass and length of the pole to two-thirds of their original values; reducing the total length of the track from 4.8 meters to 2 meters; and applying unequal forces left (12 newtons) and right (8 newtons).

VI. Conclusions

The CART program demonstrates an algorithm for learning a control method to satisfy a particular performance task. An important objective was to build a program that does not depend on a predefined partition of a continuous state space into discrete regions. This was accomplished by representing the degree of desirability of a state by a continuous interpolating function of the state variables. This representation necessitated a new action selection mechanism that makes use of the current state and the learned concept of the degree of desirability of the system state.

More work is needed to explore the generality of the CART system. The system is general to the extent that it does not depend on an interpretation of the state variables; it simply learns to select control actions so that failure is avoided. The CART program does take advantage of the fact that the system is initialized to a state from which indefinite balancing is possible; it also takes advantage of the continuity of the cart-pole system and the smooth behavior of the function representing degree of desirability. An important characteristic of the learning problem is that there is no criticism at each time step; reliable criticism is available only when the pole falls. It was therefore necessary to construct a Critic able to classify cart-pole states as desirable or undesirable. Further work is needed to explore the extent to which the Critic algorithm depends on characteristics of the cart-pole problem.

Acknowledgements

Andy Barto generously provided his cart-pole simulator and was a valuable source for discussion and criticism. We thank Sharad Saxena, Chuck Anderson, and Peter Heitman for helpful discussions and comments. The reviewers made several useful suggestions.

References

[Anderson, 1986] Anderson, C. W. Learning and Problem Solving with Multilayer Connectionist Systems. Ph.D. dissertation, University of Massachusetts; COINS Technical Report 86-50, Amherst, MA, 1986.
[Barnhill, 1977] Barnhill, R. E. "Representation and Approximation of Surfaces." In Mathematical Software III. Academic Press, 1977.
[Barto et al, 1983] Barto, A. G., Sutton, R. S., and Anderson, C. W. "Neuronlike Adaptive Elements that can Solve Difficult Learning Control Problems." IEEE Transactions on Systems, Man and Cybernetics, 13(5), 1983.
[Duda & Hart, 1973] Duda, R., and Hart, P. Pattern Classification and Scene Analysis. Wiley, New York, 1973.
[Fu, 1971] Fu, K. S. Pattern Recognition and Machine Learning. Plenum Press, New York-London, 1971.
[Michie & Chambers, 1968] Michie, D., and Chambers, R. "BOXES: An Experiment in Adaptive Control." In E. Dale and D. Michie (Eds.), Machine Intelligence 2. Oliver and Boyd, Edinburgh, 1968.
[Schumaker, 1976] Schumaker, L. L. "Fitting Surfaces to Scattered Data." In Approximation Theory II. Academic Press, 1976.
[Selfridge et al, 1985] Selfridge, O. G., Sutton, R. S., and Barto, A. G. "Training and Tracking in Robotics." In Proceedings of the 9th International Joint Conference on Artificial Intelligence, Los Angeles, CA, 1985.
[Smith et al, 1977] Smith, R. G., Mitchell, T. M., Chestek, R., and Buchanan, B. G. "A Model for Learning Systems." In Proceedings of the 5th International Joint Conference on Artificial Intelligence, Cambridge, MA, 1977.
[Widrow & Smith, 1964] Widrow, B., and Smith, F. W. "Pattern-recognizing Control Systems." In J. Tou and R. Wilcox (Eds.), Computer and Information Sciences. Cleaver-Hume Press, 1964.
Improving Inference Through Conceptual Clustering

Douglas Fisher
Department of Information and Computer Science, University of California, Irvine, California 92717

Abstract

Conceptual clustering is an important way to summarize data in an understandable manner. However, the recency of the conceptual clustering paradigm has allowed little exploration of conceptual clustering as a means of improving performance. This paper presents COBWEB, a conceptual clustering system that organizes data so as to maximize inference abilities. It does this by capturing attribute inter-correlations at classification tree nodes and generating inferences as a by-product of classification. Results from the domains of soybean and thyroid disease diagnosis support the success of this approach.

Machine learning is concerned with improving performance through automated knowledge acquisition and refinement [Dietterich, 1982]. Learning filters and incorporates environmental observations into a knowledge base that is used to facilitate performance at some task. Assumptions about the environment, knowledge base, and performance task all have important ramifications for the design of a learning algorithm. This paper is concerned with conceptual clustering, a task of machine learning that has not traditionally been discussed in the larger context of intelligent processing.

Conceptual clustering systems [Michalski and Stepp, 1983; Fisher, 1985; Cheng and Fu, 1985] accept a number of object descriptions (events, observations, facts) and produce a classification scheme over the observed objects. Importantly, conceptual clustering methods do not require the guidance of a teacher to direct the formation of the classification (as with learning from examples), but use an evaluation function to discover classes with good conceptual descriptions. These evaluation functions generally favor classes exhibiting many differences between objects of different classes, and few differences between objects of the same class. As with other forms of learning, the context surrounding the conceptual clustering task can have important implications for the design of these systems.

Perhaps the most important contextual factor surrounding clustering is the performance task that benefits from conceptual clustering capabilities. While most systems do not explicitly address this task, exceptions do exist: in particular, Cheng and Fu [1985] and Fu and Buchanan [1985] have used clustering techniques to organize expert system knowledge. Generalizing on their use of conceptual clustering, classifications produced by conceptual clustering systems can be a basis for effective inference of unseen object properties. The generality of classification as a means of guiding inference is manifest in recent discussions of problem solving as classification [Clancey, 1984].

This paper describes the COBWEB system for conceptual clustering. COBWEB's design was motivated by both environmental and performance concerns; however, this paper is primarily concerned with performance issues, in particular with the utility of COBWEB classification trees for facilitating inference during classification. The following section motivates and develops the evaluation function used by COBWEB to guide class and concept formation. This measure, called category utility [Gluck and Corter, 1985], favors classes that maximize the amount of information that can be inferred from knowledge of class membership. Section 3 describes the COBWEB algorithm.
The remainder of the paper focuses on the utility of COBWEB-generated classification trees for inference, concentrating particularly on soybean disease diagnosis. (COBWEB is also distinguished from other systems in that it is incremental; issues surrounding COBWEB's performance as an incremental system are given in [Fisher, 1987].)

COBWEB uses a measure of concept quality called category utility [Gluck and Corter, 1985] to guide the formation of object classes and concepts. While our primary interest in category utility is that it favors classes that maximize inference ability, Gluck and Corter originally derived it as a means of predicting certain effects observed during human classification. These effects stem from a psychological construct called the basic level, which occurs in hierarchical classification schemes and seems to be where inference abilities are maximized.

Category utility rewards information-rich categories and is thus of generic value, but it can also be viewed (and more easily developed) as a tradeoff between intra-class object similarity and inter-class dissimilarity. Objects are described in terms of (nominal) attribute-value pairs (e.g., Color = red and Size = large). For an attribute-value pair A_i = V_ij and class C_k, intra-class similarity is measured by the conditional probability P(A_i = V_ij | C_k): the larger this probability, the greater the proportion of class members sharing the value, and thus the more predictable [Lebowitz, 1982] the value is of class members. Inter-class similarity is measured by P(C_k | A_i = V_ij): the larger this probability, the fewer the objects in contrasting classes that share this value, and thus the more predictive the value is of the class.

Attribute-value predictability and predictiveness are combined into a measure of partition quality. Specifically,

    sum_k sum_i sum_j P(A_i = V_ij) P(C_k | A_i = V_ij) P(A_i = V_ij | C_k)

is a tradeoff of predictability and predictiveness, summed over all classes (k), attributes (i), and values (j). The probability P(A_i = V_ij) weights the importance of individual values, in essence saying that it is more important to increase the class-conditioned predictability and predictiveness of frequently occurring values than of infrequently occurring ones.

This function can also be regarded as rewarding the inference potential of object class partitions. More precisely, note that for any i, j, and k, P(A_i = V_ij) P(C_k | A_i = V_ij) = P(C_k) P(A_i = V_ij | C_k) by Bayes rule, so by substitution the function above equals sum_k P(C_k) sum_i sum_j P(A_i = V_ij | C_k)^2, where sum_i sum_j P(A_i = V_ij | C_k)^2 is the expected number of attribute values that can be correctly guessed for an arbitrary member of class C_k. (This assumes a probability-matching guessing strategy, in which an attribute value is guessed with a probability equal to its probability of occurring, as opposed to a probability-maximizing strategy, which always guesses the most frequently occurring value; see [Fisher, 1987].)

Finally, Gluck and Corter define category utility as the increase in the expected number of attribute values that can be correctly guessed, given knowledge of a partition {C_1, ..., C_n}, over the expected number of correct guesses with no such knowledge (sum_i sum_j P(A_i = V_ij)^2). Formally,

    CU({C_1, ..., C_n}) = [ sum_{k=1}^{n} P(C_k) sum_i sum_j P(A_i = V_ij | C_k)^2  -  sum_i sum_j P(A_i = V_ij)^2 ] / n.

The denominator, n, is the number of categories in the partition; averaging over categories allows comparison of different-sized partitions.
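As a concreteness check, category utility as defined above can be computed directly from value counts. A small sketch follows (the dict-of-attributes object encoding is an assumption, not the paper's):

```python
from collections import Counter, defaultdict

def category_utility(partition):
    """Category utility of a partition: each class is a list of objects,
    and each object is a dict mapping attribute -> nominal value."""
    objects = [obj for cls in partition for obj in cls]
    n_classes, n_objects = len(partition), len(objects)

    def expected_correct(objs):
        # sum_i sum_j P(A_i = V_ij | set)^2
        counts = defaultdict(Counter)
        for obj in objs:
            for attr, val in obj.items():
                counts[attr][val] += 1
        return sum((c / len(objs)) ** 2
                   for attr in counts for c in counts[attr].values())

    base = expected_correct(objects)
    gain = sum(len(cls) / n_objects * expected_correct(cls)
               for cls in partition)
    return (gain - base) / n_classes

# two candidate partitions of four one-attribute objects
objs = [{"color": "red"}, {"color": "red"}, {"color": "blue"}, {"color": "blue"}]
print(category_utility([objs[:2], objs[2:]]))   # 0.25: values predictable within classes
print(category_utility([objs[:3], objs[3:]]))   # about 0.083 for the skewed split
```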
Category utility can be computed provided P(C_k) is known for each category of a partition, along with P(A_i = V_ij | C_k) for all attribute values. Such a category representation is termed a probabilistic concept [Smith and Medin, 1981]. The information on attribute-value distributions distinguishes probabilistic concepts from the logical (generally conjunctive) representations typically used in AI systems. Probabilistic representations subsume these types of logical representations, since there exists a simple mapping from probabilistic to logical representations. This increased generality comes at the cost of storing the probabilities, each of which can be computed from two integer counts, thus only increasing the proportionality constant of the storage requirements.

COBWEB incrementally incorporates objects into a classification hierarchy. Given an initially empty hierarchy, a hierarchical classification is formed over an incrementally presented series of objects, where each node is a probabilistic concept representing an object class (e.g., Birds: BodyCover = feathers (1.0) and Transport = fly (0.88) and ...). Incorporating an object is basically a process of classifying the object by descending the tree along an appropriate path, updating distributional information along the way, and performing one of several possible operators at each level.

A. Placing an Object in an Existing Class

Perhaps the most natural way of updating a partition of objects is simply to place a new object in an existing class. That is, after updating the distribution of attribute values at the root, the object may be incorporated into one of the root's children. To determine which child best hosts a new object, the object is tentatively placed in each child; the partition that results from adding the object to a given node is evaluated using category utility, and the node for which adding the object yields the best partition is the best existing host for the new object.

B. Creating a New Class

In addition to placing objects in existing classes, there is a way to create new classes. Specifically, the quality of the partition resulting from placing the object in the best existing host is compared to the partition resulting from creating a new singleton class containing the object. Class creation is performed if it yields a better partition (by category utility). This operator allows COBWEB to adjust the number of classes in a partition to fit the regularities of the environment; the number of classes is not bounded by a system parameter (as it is, e.g., in CLUSTER/2).
C. Merging and Splitting

While operators 1 and 2 are effective in many cases, by themselves they are very sensitive to initial input ordering. To guard against the effects of initially skewed data, COBWEB also includes two operators for node merging and splitting. The function of merging is to take two nodes of a level (of n nodes) and combine them, in the hope that the resulting partition (of n-1 nodes) is of better quality. Merging two nodes simply involves creating a new node and combining the attribute distributions of the nodes being merged; the two original nodes are made children of the newly created node. Although merging could be attempted on all possible node pairs every time an object is observed, such a strategy would be unnecessarily redundant and costly. Instead, when an object is incorporated, only a merge of the two best hosts (as indicated by category utility) is evaluated.

Besides node merging, node splitting may also serve to increase partition quality. A node of a partition (of n nodes) may be deleted and its children promoted, resulting in a partition of n+m-1 nodes, where the deleted node had m children. Splitting is considered only for the children of the best host among the existing categories.

COBWEB's control structure is summarized in Table 1:

Table 1: COBWEB control structure
FUNCTION COBWEB(Object, Root of tree)
1) Update the counts of the Root.
2) IF the Root is a leaf, THEN return the expanded leaf to accommodate the new object;
   ELSE find the child of the Root that best hosts Object and perform one of the following:
   2a) create a new class, if appropriate;
   2b) merge nodes, if appropriate, and call COBWEB(Object, Merged node);
   2c) split a node, if appropriate, and call COBWEB(Object, Root);
   2d) IF none of the above (2a, b, or c), THEN call COBWEB(Object, Best child).

As an object is incorporated, at most one operator is applied at each tree level. Compositions of these primitive operators can be viewed as transforming a single classification tree; Fisher [1987] adopts the view that COBWEB is hill climbing (without backtracking) through the space of possible classification trees. To maintain robustness, operators are not restricted to building the tree in a strictly top-down or bottom-up fashion: the inverse operators of merging and splitting allow COBWEB to move bidirectionally in this space, allowing an approximation of backtracking through operator application. This strategy keeps update cost small (B^2 log_B n, where B is the average branching factor of the tree and n is the number of previously classified objects) while maintaining learning robustness. Fisher [1987] addresses the strengths and weaknesses of this approach in more detail.
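Building on the category-utility computation above, a simplified, runnable rendering of the Table 1 loop follows. Only operators 2a (new class) and 2d (descend to best host) are implemented; merging (2b), splitting (2c), and the full leaf expansion are omitted for brevity, and all names are illustrative.

```python
from collections import Counter, defaultdict

class Node:
    """A probabilistic concept: attribute-value counts plus children."""
    def __init__(self):
        self.count, self.av, self.children = 0, defaultdict(Counter), []

    def update(self, obj):
        self.count += 1
        for a, v in obj.items():
            self.av[a][v] += 1

    def score(self):  # sum_i sum_j P(A_i = V_ij | C)^2
        return sum((c / self.count) ** 2
                   for a in self.av for c in self.av[a].values())

def partition_cu(children):
    n = float(sum(c.count for c in children))
    pooled = defaultdict(Counter)
    for c in children:
        for a in c.av:
            pooled[a].update(c.av[a])
    base = sum((k / n) ** 2 for a in pooled for k in pooled[a].values())
    gain = sum(c.count / n * c.score() for c in children)
    return (gain - base) / len(children)

def leaf_for(obj):
    node = Node(); node.update(obj); return node

def with_obj(child, obj):
    trial = Node(); trial.count = child.count + 1
    for a in child.av:
        trial.av[a] = Counter(child.av[a])
    for a, v in obj.items():
        trial.av[a][v] += 1
    return trial

def cobweb(obj, node):
    node.update(obj)
    if not node.children:                       # simplified leaf expansion
        node.children.append(leaf_for(obj))
        return
    options = []
    for i, c in enumerate(node.children):       # operator: place in host
        kids = node.children[:i] + [with_obj(c, obj)] + node.children[i + 1:]
        options.append((partition_cu(kids), "place", c))
    options.append((partition_cu(node.children + [leaf_for(obj)]), "new", None))
    _, op, host = max(options, key=lambda t: t[0])
    if op == "new":                             # operator 2a
        node.children.append(leaf_for(obj))
    else:                                       # operator 2d: descend
        cobweb(obj, host)

root = Node()
for o in [{"color": "red", "size": "big"}, {"color": "red", "size": "small"},
          {"color": "blue", "size": "big"}]:
    cobweb(o, root)
print(len(root.children))
```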
COBWEB forms classifications that tend to maximize the amount of information that can be inferred from category membership. This is a domain-independent heuristic whose efficacy depends on the assumption that important properties depend on regularities or "hidden causes" [Pearl, 1985; Cheng and Fu, 1985] in the environment, and that these regularities can be extracted and organized by a conceptual clustering system. The utility of classification trees for inference was tested in several domains, including a set of 47 soybean disease cases [Stepp, 1984]. Each case (object) was described along 35 attributes. Four soybean diseases were represented in the data: Diaporthe Stem Rot, Charcoal Rot, Rhizoctonia Root Rot, and Phytophthora Rot. These disease designations were also included in each object description, making a total of 36 attributes (e.g., Precipitation = low, Root-condition = rotted, ..., Diagnostic-condition = Charcoal Rot). (While Diagnostic condition was included in each object description, it was simply treated as another attribute, not as a teacher-imposed class designation as in learning from examples.)

An experiment was conducted in which soybean disease cases were incrementally presented to COBWEB in order to see whether the resultant classification could be used for effective disease diagnosis. After incorporating every 5th instance, the remaining unseen cases were classified (but not incorporated) with respect to the classification tree constructed up to that point. Test instances being classified contained no information regarding Diagnostic condition, but the value of this attribute was inferred as a by-product of classification. Specifically, classification terminated when the test object was matched against a leaf of the classification tree; this leaf represented the previously observed object that best matched the test object, and the diagnostic condition of the test object was guessed to be the corresponding condition of the leaf. The experiment was terminated after one half of the domain (of 47 cases) had been incorporated.

The graph of Figure 1 gives the results of the experiment. It shows that after 5 randomly selected instances, the classification could be used to correctly diagnose disease (over the remaining 42 unseen cases) 88% of the time; after 10 instances, 100% correct diagnosis was achieved and maintained. While these results seem impressive, they follow from the regularity of this domain. In fact, when COBWEB was run on the data with no information about Diagnostic condition at all, the four classes were "rediscovered" as nodes in the resultant tree. This indicates that Diagnostic condition participates in a network of attribute correlations: in organizing classes around this correlated network of attributes, classes corresponding to the various Diagnostic conditions are generated (Figure 2).
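The test procedure, classify to a leaf with the target attribute hidden and read the prediction off the leaf, might look as follows. This reuses the Node sketch above; the probability-match scoring in best_host is a stand-in assumption for the category-utility evaluation.

```python
def best_host(node, obj):
    """Choose the child whose stored distribution best matches obj."""
    def fit(child):
        return sum(child.av[a][v] / child.count
                   for a, v in obj.items() if a in child.av)
    return max(node.children, key=fit)

def predict_attribute(obj, root, target="Diagnostic-condition"):
    """Descend to a leaf with the target hidden; return the leaf's most
    frequent value for the target, as in the experiment above."""
    visible = {a: v for a, v in obj.items() if a != target}
    node = root
    while node.children:
        node = best_host(node, visible)
    return node.av[target].most_common(1)[0][0]

def accuracy(test_cases, root, target="Diagnostic-condition"):
    hits = sum(predict_attribute(obj, root, target) == obj[target]
               for obj in test_cases)
    return hits / len(test_cases)
```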
While averaged results are informative, the primary inter- est is determining a relationship between attribute inter- dependencies and the ability to correctly predict an at- tribute’s value. Dependence of an attribute, AM, on other attributes, Ai, is a given as a function of C~M[P(AM = VSkjx(Ai = J&i)’ - P(AM = VMjM)2], % of Correct Predictions 1001 90. 80. 70. 60. 50. 40. 30. 20. 10. cobweb -’ #5of Cl$lerve$5Cbj?$s 2’ Figure 3: Prediction over all attributes that is averaged over all attributes, A;, not equal to AM. This measures the average increase in the ability to guess a value of AM given one knows the value of a second at- tribute. If AM is independent of all other attributes, A;, then dependence is 0 since P(AM = VMj,lAi = xji) = P(AM = VMjM) for all Ai, and thus P(AM = VMjMIAi = Vrji)" - P(AM = Vnaj,)” = 0. In Figure 4 the advantage afforded by the COBWEB classification tree over the frequency-based method is shown as a function of attribute dependence. Each point repre- sents one of the 36 attributes used to describe soybean cases. The graph indicates a significant positive correla- tion between an attribute’s dependence on other attributes and the degree that COBWEB trees facilitate correct in- ference. For example, Diagnostic condition participates in dependencies with many other attributes and is also the most predictable attribute. ,% Increase in Correct Prediction 100 90 80 70 60 I 50 40 DiagnostiKcondition q m Root-condip ’ a 10 0 i : -1ol w 1 I 0 Attnbute Depe: q.05 cl 0.15 ence Figure 4: Prediction as function of attribute dependence 464 Machine learning & Knowledge Acquisition V. Concluding Remarks To summarize, the soybean data strongly suggests that CQBWEB captures the important inter-correlations be- tween attributes, and summarizes these correlations at clas- sification tree nodes. In doing so, COBWEB promotes in- ference of attributes in proportion to their participation in attribute inter-correlations. Similar results have been obtained using thyroid disease data [Fisher, 19871. Experimentation above assumed classification proceeded all the way to leaves before predicting a missing attribute value. Further studies have indicated however, that for the inductive task of predicting properties of previously unseen objects, classification need only proceed to about one eighth the depth of the tree to obtain comparable in- ference results. This behavior emerges as a result of us- ing intermediate node default values to determine when attribute value prediction is cost effective (cheap, but rea- sonably accurate). In COBWEB, default values occur at a level where the attribute approximates conditional inde- pendence from other attributes - in this case, knowing the value of other attributes will not aid in further classifica- tion, and prediction might as well occur at this level. In this light, COBWEB can be viewed as an incremental and satisficing version of a system by Pearl [1985]. Finally, this work casts conceptual clustering as a use- ful tool for problem-solving, by assigning it the generic, but well-defined performance task of inferring unknown attribute values. The future should find that as concep- tual clustering methods utilize more complex representa- tion languages [Stepp, 19841, so too can their behavior be interpreted as improving more sophisticated problem- solving tasks. Acknowledgements Discussions with Dennis Kibler suggested a performance task for conceptual clustering. 
Acknowledgements

Discussions with Dennis Kibler suggested a performance task for conceptual clustering. Thanks also go to Jeff Schlimmer, Pat Langley, Rogers Hall, and anonymous AAAI reviewers for numerous ideas and helpful comments. This work was supported by Hughes Aircraft Company.

References

[Cheeseman, 1985] P. Cheeseman. In defense of probability. Proceedings of the Ninth International Joint Conference on Artificial Intelligence (pp. 1002-1009). Los Angeles, CA: Morgan Kaufmann.

[Cheng and Fu, 1985] Y. Cheng & K.-S. Fu. Conceptual clustering in knowledge organization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 7, 592-598.

[Clancey, 1984] W. J. Clancey. Classification problem solving. Proceedings of the National Conference on Artificial Intelligence (pp. 49-55). Austin, TX: William Kaufmann, Inc.

[Dietterich, 1982] T. Dietterich. Learning and inductive inference (Chapter 14). In P. Cohen & E. Feigenbaum (Eds.), The Handbook of Artificial Intelligence. Los Altos, CA: William Kaufmann, Inc.

[Fisher, in press] D. Fisher. Knowledge acquisition via incremental conceptual clustering. Machine Learning.

[Fisher and Langley, 1985] D. Fisher & P. Langley. Approaches to conceptual clustering. Proceedings of the Ninth International Joint Conference on Artificial Intelligence (pp. 691-697). Los Angeles, CA: Morgan Kaufmann.

[Fu and Buchanan, 1985] L. Fu & B. Buchanan. Learning intermediate concepts in constructing a hierarchical knowledge base. Proceedings of the Ninth International Joint Conference on Artificial Intelligence (pp. 659-666). Los Angeles, CA: Morgan Kaufmann.

[Gluck and Corter, 1985] M. Gluck & J. Corter. Information, uncertainty, and the utility of categories. Proceedings of the Seventh Annual Conference of the Cognitive Science Society (pp. 283-287). Irvine, CA: Lawrence Erlbaum Associates.

[Lebowitz, 1982] M. Lebowitz. Correcting erroneous generalizations. Cognition and Brain Theory, 5, 367-381.

[Michalski and Stepp, 1983] R. Michalski & R. Stepp. Automated construction of classifications: Conceptual clustering versus numerical taxonomy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 5, 396-409.

[Pearl, 1985] J. Pearl. Learning hidden causes from empirical data. Proceedings of the Ninth International Joint Conference on Artificial Intelligence (pp. 567-572). Los Angeles, CA: Morgan Kaufmann.

[Smith and Medin, 1981] E. Smith & D. Medin. Categories and Concepts. Cambridge, MA: Harvard University Press.

[Stepp, 1984] R. Stepp. Conjunctive Conceptual Clustering: A Methodology and Experimentation (Technical Report UIUCDCS-R-84-1189). Doctoral dissertation, Urbana, IL: University of Illinois, Department of Computer Science.
David Haussler
Department of Computer Science, University of California, Santa Cruz, CA 95064 USA¹

We study the problem of learning conjunctive concepts from examples on structural domains like the blocks world. This class of concepts is formally defined, and it is shown that even for samples in which each example (positive or negative) is a two-object scene it is NP-complete to determine if there is any concept in this class that is consistent with the sample. We demonstrate how this result affects the feasibility of Mitchell's version space approach, and how it shows that it is unlikely that this class of concepts is polynomially learnable from random examples in the sense of Valiant. On the other hand, we show that this class is polynomially learnable if we allow a larger hypothesis space. This result holds for any fixed number of objects per scene, but the algorithm is not practical unless the number of objects per scene is very small. We also show that heuristic methods for learning from larger scenes are likely to give an accurate hypothesis if they produce a simple hypothesis consistent with a large enough random sample.

Introduction

Since the publication of Winston's results on learning blocks world concepts from examples (Winston, 1975), considerable effort has gone into improving and generalizing his learning algorithm, and into developing a more rigorous and general model of this and related AI learning problems (Vere, 1975; Hayes-Roth and McDermott, 1978; Knapman, 1978; Michalski, 1980, 1983; Dietterich and Michalski, 1983; Bundy et al., 1985; Sammut and Banerji, 1986; Kodratoff and Ganascia, 1986). Whereas much of the earlier learning work, especially that associated with the field of Pattern Recognition (see e.g. Duda and Hart, 1973), relied on an attribute-based domain in which each instance of a concept is characterized solely by a vector of values for a given set of attributes, this work uses a structural domain in which each instance is composed of many objects, and is characterized not only by the attributes of the individual objects it contains, but by the relationships among these objects. The classic example is Winston's arch concept, defined as any scene that contains three blocks, two having the attributes required of posts and a third having the attributes required of a lintel, with each of the posts supporting the lintel and the posts set apart from each other.

This concept can be formalized by inventing variables x and y for the posts and z for the lintel and giving an expression in the predicate calculus roughly of the form "there exist distinct x, y, z such that f_1 and f_2 and ... and f_s", where the f_i's are atomic formulae in the variables x, y and z that describe attributes of and relations between the objects represented by these variables. A concept of this type will be called an existential conjunctive concept. The notions of an instance space in a structural domain and the class of existential conjunctive concepts over this instance space are defined formally below.

Mitchell shows how this learning task (and related tasks) can be solved by maintaining only two subsets of the version space: the set S of the most specific hypotheses in the version space and the set G of the most general hypotheses. These sets are updated with each new example.

¹The author gratefully acknowledges the support of ONR grant N00014-86-K-0454.
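The version space machinery is easiest to see in the attribute-based special case. The sketch below, a minimal illustration under an assumed dict encoding of examples, implements the strategy discussed later in the introduction for variable-free pure conjunctive concepts (one object per example): the set S collapses to the single most specific conjunction covering the positive examples, which is then checked against the negatives.

```python
# "Compute only S" for pure conjunctive concepts in an attribute-based
# domain: intersect the positives' attribute-value pairs, then verify no
# negative example is covered by the result.
def most_specific_conjunction(positives):
    """Keep exactly the attribute-value pairs shared by all positives."""
    s = dict(positives[0])
    for ex in positives[1:]:
        s = {a: v for a, v in s.items() if ex.get(a) == v}
    return s

def consistent(positives, negatives):
    s = most_specific_conjunction(positives)
    covers = lambda ex: all(ex.get(a) == v for a, v in s.items())
    return None if any(covers(ex) for ex in negatives) else s

pos = [{"shape": "square", "size": "small", "shaded": "yes"},
       {"shape": "square", "size": "large", "shaded": "yes"}]
neg = [{"shape": "circle", "size": "small", "shaded": "yes"}]
print(consistent(pos, neg))  # -> {'shape': 'square', 'shaded': 'yes'}
```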
There are two computational problems associated with this method. The first is that in order to update the sets S and G we must have an efficient procedure for testing whether or not one hypothesis is more general than another, and whether or not a hypothesis contains a given instance. Indeed, the latter would seem to be a requirement for the existence of any practical learning method. Unfortunately, both of these problems are NP-complete if we allow arbitrarily many objects in scenes and arbitrarily many variables in existential conjunctive hypotheses (see Hayes-Roth and McDermott, 1978). This problem is avoided by fixing the maximum number of objects in a scene (and hence variables in a consistent concept) to a reasonably small number. For example, Mitchell uses two objects per scene in the running example of (Mitchell, 1982).

The second problem is that the size of the sets S and G can become unmanageably large. In (Haussler, 1986) it is shown that even using the hypothesis space of conjunctive concepts in an attribute-based domain (corresponding to existential conjunctive concepts on scenes with only one object), if the number of attributes is large then the size of G can grow exponentially in the number of examples. However, in this case S never contains more than one hypothesis (see Bundy et al., 1985), so the learning task described above can still be solved efficiently by computing only S (using the positive examples) and then checking to see if any negative example is contained in S in a second pass through the sample. We show that it is unlikely that such an efficient strategy exists for existential conjunctive concepts on domains with more than one object per scene. More precisely, even if we restrict ourselves to instance spaces like the one in Mitchell's paper in which (1) each scene has exactly two objects, (2) there are no binary relations defined between the objects and (3) each object has only two-valued (Boolean) attributes, then using the hypothesis space of existential conjunctive concepts and letting the number of attributes grow, not only can the size of both S and G grow exponentially in the number of examples, but it is unlikely that any efficient method (version space or not) exists for solving the learning task above, since the version space emptiness problem is NP-complete, i.e. it is NP-complete to determine if there is any existential conjunctive concept consistent with a given sample (Theorem 1).

The version space paradigm of learning from examples is a rather demanding one in that it aims at either exact identification of the target concept (by running the algorithm until the version space is either empty or reduced to one concept) or an exact description of the set of consistent hypotheses in the case that the number of examples is insufficient for exact identification. Another paradigm has recently been introduced by Valiant in which the goal of learning is merely to find a hypothesis that is a good approximation to the target concept in a probabilistic sense (Valiant, 1984; Valiant, 1985). Using the techniques of (Pitt and Valiant, 1986), we show (Theorem 2) that it is also unlikely that there is an efficient learning algorithm for existential conjunctive concepts using random examples in the sense defined by Valiant, even with the same restrictions imposed above (i.e.
2 objects per scene, no binary relations, Boolean attributes).

To balance these negative learning results, we also obtain some positive results. First, we show that for any fixed maximum number k of objects per scene, existential conjunctive concepts can be efficiently learned from random examples in the sense of Valiant if we use an extended hypothesis space, i.e. if we restrict the target concept to be existential conjunctive with at most k variables but allow the hypothesis to be chosen from a larger class of concepts (Theorem 3). Similar results are given for other types of concept classes in (Pitt and Valiant, 1986). The intuition behind this type of result is that sometimes by replacing a detailed and precise hypothesis space by a larger but more crudely organized one, our search for a consistent hypothesis may become easier. However, because our algorithm uses a brute force translation from a structural domain into an attribute-based domain (considering all possible bindings of objects to variables), it is not practical for k larger than 2 or 3.

In addition to being computationally expensive when there are many objects per scene, the algorithm used in Theorem 3 also requires more random examples to obtain a given level of confidence in the accuracy of the hypothesis produced than would a method that produced consistent existential conjunctive hypotheses. This is because a "shift" to a more weakly biased hypothesis space (Utgoff, 1986) also weakens the statistical leverage we have in establishing the accuracy of the hypothesis within a given confidence interval. We can avoid both of these problems by restricting ourselves to existential conjunctive hypotheses as before, but using heuristics to prune the search for a consistent hypothesis (Vere, 1975; Hayes-Roth and McDermott, 1978²; Michalski, 1980; Dietterich and Michalski, 1983). From our NP-completeness results, we do not expect that any efficient heuristic algorithms will always find a consistent hypothesis whenever there is one. However, we show that when a heuristic algorithm does find a simple hypothesis consistent with a large enough random sample, then this hypothesis will with high probability be a good approximation of the target concept in the sense of Valiant (Valiant, 1984), regardless of the method used to find it (Theorem 4). This theorem is established using the methodology of (Haussler, 1986), in which the bias of a hypothesis space is quantified by measuring its Vapnik-Chervonenkis dimension. Then, using a general probabilistic result (Vapnik and Chervonenkis, 1971; Blumer et al., 1986), this dimension is converted into the number of random examples required to guarantee that any consistent hypothesis is accurate with high probability.

²Here only positive examples are used and the object is to find a consistent concept meeting certain criteria.

Summary of Definitions

We define a set of attributes for which each object we consider has particular values. For example, we might have attributes shape, color and size, and a particular object (a small red square) might be characterized as having the value square for the attribute shape, red for color and 2 for size. The values an attribute can have are defined a priori, as is its value structure, which may be either tree-structured or linear (Michalski, 1983). In a tree-structured attribute the values are ordered hierarchically, as illustrated in Figure 1 for the attribute shape. The lowest or leaf values of this tree are the only observable values, i.e.
actual objects must have one of these values for the attribute shape. The other values are used only in logical formulae that represent concepts, as defined below. The values of a linear attribute are all directly observable and are linearly ordered, as in the attribute size, which may be defined, for example, to take only integer values between 1 and 5.

Figure 1: Value tree for the attribute shape (abstract values convex and non-convex below the root; leaf values include triangle, hexagon, square, proper-ellipse, circle, crescent and channel).

A scene that contains several objects is characterized not only by the attributes of its objects but by the relations between its objects. Here we will restrict ourselves to binary relations, but, for consistency with our treatment of attributes (henceforth viewed as unary relations), we will allow these binary relations to take on any of several values, with the same two types of possible value structures. To illustrate the flexibility of this model, we give a few examples of binary relations that might be used to characterize the spatial relationship between an ordered pair of objects in a two-dimensional scene. First, the relation distance-between may be defined as a linear binary relation in analogy with the attribute size, perhaps using the Euclidean distance between the centers of mass. In addition, the relative position in the x-y plane of two objects might be characterized similarly using two linear binary relations delta-x and delta-y, that give the difference in x coordinates and the difference in y coordinates of the centers of mass. Alternatively, a more qualitative binary relation to describe spatial relationship is given by the tree-structured relation rel_pos illustrated in Figure 2.

Figure 2: Value tree for the tree-structured binary relation rel_pos (root value any-rel-pos; values include side-by-side, above/below, overlapping, left-of, right-of, on-top-of, under, inside, contains, proper-overlap, and none-of-these).

The running example uses attributes size (linear: 1, 2, 3, 4, 5), shaded (yes, no, ?) and shape (tree-structured, see Figure 1), and binary relations distance-between (linear: touching, close, far) and rel_pos (tree-structured, see Figure 2).

Henceforth we will assume a fixed set R of relations consisting of n attributes A_1, ..., A_n and l binary relations B_1, ..., B_l. Under this assumption, a scene with k objects is represented as a complete directed graph on k nodes (i.e. there are two directed edges between every pair of nodes, one going each way), with each node representing an object in the scene and labeled by the n-tuple that gives the observed value of each attribute for that object, and a directed edge from a node representing obj_1 to a node representing obj_2 labeled with an l-tuple that gives the observed values of each binary relation on the ordered pair (obj_1, obj_2). This representation is illustrated in Figures 3a and 3b, where the triples in the nodes of Figure 3b give the values of the attributes size, shaded and shape, respectively, and the pairs on the edges the values of the relations rel_pos and distance-between, respectively.

By using variables to denote unknown objects, we can define the set of (elementary) atomic formulae (atoms) over R as in (Michalski, 1983). Atomic formulae are either unary or binary. A unary atom f(x), where x is a variable, has either the form (A(x) = v), where A is a tree-structured attribute in R and v is a value of A, or the form (v1 ≤ A(x) ≤ v2), where A is a linear attribute in R and v1, v2 are values of A such that v1 ≤ v2. In the former case the atom f(x) restricts the value of A for the object x to be in the set of observable values in the tree for A that lie in the subtree below v, including v itself if v is observable. In the latter case the value of A is restricted to be between v1 and v2, inclusive, with respect to the linear order on A. An object satisfies f(x) if its value for the attribute A complies with the restrictions in f(x). A binary atom f(x,y), where x and y are distinct variables, has either the form (B(x,y) = v), where B is a tree-structured binary relation in R and v is a value of B, or the form (v1 ≤ B(x,y) ≤ v2), where B is a linear binary relation in R and v1, v2 are values of B such that v1 ≤ v2. An ordered pair of objects (obj_1, obj_2) in a scene satisfies the atom f(x,y) if the binary relation B between these objects complies with the restrictions in f(x,y).

An existential conjunctive expression over R (see Figure 3c) is a formula φ of the form

∃ x_1, ..., x_r : f_1 and f_2 and ... and f_s,

where s ≥ 1 and each x_i, 1 ≤ i ≤ r, is a variable, and each f_i, 1 ≤ i ≤ s, is an atom over R involving either a single variable or an ordered pair of distinct variables as defined above. We have dropped the names of the variables appearing in the individual atoms to simplify the notation. The first part of this expression (up to the colon) may be read "there exist distinct objects x_1 up to x_r such that ...". Thus a scene satisfies φ if it contains r distinct objects obj_1, ..., obj_r such that for every i, 1 ≤ i ≤ s, if f_i = f_i(x_j) then obj_j satisfies f_i, and if f_i = f_i(x_j, x_k) then the ordered pair (obj_j, obj_k) satisfies f_i. Note that the scene may also contain objects other than these r objects.

Figure 3: (a) a scene; (b) its graph representation (numbers represent size); (c) an existential conjunctive expression, ∃ x, y : (shape(x) = circle) and (1 ≤ size(x) ≤ 3) and (shape(y) = convex) and (rel_pos(x,y) = inside) and (rel_pos(y,x) = contains); (d) its concept graph.

The set of all scenes over R that satisfy φ is called the concept represented by φ, and the class of all such sets (varying φ) is referred to as the class of existential conjunctive concepts. The expression φ defined above can also be represented as a complete directed graph on r nodes, similar to the way a scene is represented (see Figure 3d). In this case, each node represents a variable of φ and the labels of nodes and edges represent restrictions imposed by the atoms of φ. Thus to label the graph, in addition to tuples of observable values we will allow tuples that include abstract values for tree-structured relations and ranges of the form v1..v2, with v1 ≤ v2, for linear relations. (When v1 = v2 only a single value will be used.) When no atom is present for a given variable or pair of variables that involves a given relation, we use the root value of a tree-structured relation and the entire range of a linear relation. Such a graph is called a concept graph.

The graphical representation of existential conjunctive concepts is very useful for placing these concepts into a partial order from the most specific concepts to the most general concepts, as is used in the version space framework mentioned in the introduction. This partial order is just the set containment relation: a concept φ_1 is (the same as or) more general than another concept φ_2 if φ_2 ⊆ φ_1. However, since φ_1 and φ_2 are in general infinite sets, this is not a useful definition from a computational point of view.
To define this relation on concept graphs, let us first say that if l_1 and l_2 are tuples of restrictions labeling nodes or edges in two different graphs, then l_1 is stronger than l_2 if every component of l_1 represents a set of values that is contained in the set of values represented by the corresponding component of l_2. If G_1 and G_2 are the graphs of existential conjunctive concepts, then it is easily verified that G_1 is more general than G_2 if and only if there is a 1-1 mapping σ from the set of nodes of G_1 into the set of nodes of G_2 such that each node in G_2 in the range of σ is labeled with a stronger tuple of restrictions than the corresponding node in G_1, and each directed edge between two nodes in G_2 in the range of σ is labeled with a stronger tuple of restrictions than the corresponding edge in G_1. Furthermore, we have used the "single representation trick" (Cohen and Feigenbaum, 1982), representing both scenes and concepts with the same type of graph, and thus it is easily verified that we can also check if a concept is satisfied by a given scene by checking if the concept graph is more general than the graph corresponding to the scene. The two dashed lines between the nodes in Figure 3d and the corresponding nodes in Figure 3b illustrate a mapping that shows that the scene in Figure 3a is an instance of the concept in Figure 3c.

Summary of Theorems

Theorem 1. The problem of determining if there is an existential conjunctive concept consistent with a sequence of m examples over an instance space defined by n attributes (where m and n are variable) is NP-complete, even when there are no binary relations defined, each attribute is Boolean valued, and each example contains exactly two objects. □

One sidelight of the proof of the above theorem is that it actually shows that the problem in question is NP-complete even if, in addition to the restrictions listed in the statement of the theorem, we restrict ourselves to existential conjunctive concepts with expressions that have only one variable. This may appear contradictory at first, since such expressions are essentially equivalent to variable-free pure conjunctive expressions, e.g. as studied in (Haussler, 1986), for which there are many known learning algorithms. However, these algorithms work only in the attribute-based domain, where there is only one object in each example and hence no ambiguity regarding the mapping of attributes in the example to attributes in the hypothesis. The above result shows that as soon as we introduce even the minimal amount of ambiguity, i.e. by having two objects in each example instead of just one, the problem of finding a consistent hypothesis becomes substantially more difficult.

Another interesting sidelight of the above proof is that it indicates how to construct samples in which the size of the sets S and G of Mitchell's version space algorithm are exponential.

Corollary 1. The size of the sets S and G maintained in Mitchell's version space algorithm for existential conjunctive concepts can, in the worst case, be exponential in the number m of examples and the number n of attributes defined on objects in these examples, even when there are no binary relations defined, each attribute is Boolean valued, and each example contains exactly two objects. □

Theorem 1 and Corollary 1 may be taken as evidence that existential conjunctive concepts are perhaps inherently difficult to learn, even when only a few objects are involved. Following (Pitt and Valiant, 1986), we can formalize this tentative conclusion. In analogy with the class P of problems solvable in polynomial time by a deterministic algorithm, the class RP is defined as the class of problems that can be solved "probabilistically" in polynomial time by a deterministic algorithm that is also allowed to flip a fair coin to decide its next move (Gill, 1977). Here we say that the algorithm solves the problem probabilistically if, whenever there is no solution, it answers truthfully, saying that there is no solution, and whenever there is a solution, it finds one (or indicates that one exists) with probability at least 1 − δ, where δ can be made arbitrarily small. Rabin's probabilistic algorithm for testing if an integer is composite (i.e. not prime) is a classic example of such an algorithm (Rabin, 1976). RP is a subclass of NP, the class of problems solvable in polynomial time by a nondeterministic machine, and it is strongly believed that RP ≠ NP. The following shows that unless RP = NP, existential conjunctive concepts are not polynomially learnable from random examples in the sense first defined by Valiant in (Valiant, 1984) (see (Haussler, 1987) for a formal definition of Valiant's learning framework from an AI perspective).

Theorem 2. If existential conjunctive concepts are polynomially learnable from two-object random examples, then RP = NP. □

In other words, while we cannot prove that existential conjunctive concepts are not polynomially learnable from random examples in the sense of (Valiant, 1984), we can show that an efficient algorithm for learning existential conjunctive concepts from random examples would amount to a major breakthrough in complexity theory, similar to resolving the P versus NP question. Other results of this type, for different concept classes, are given in (Pitt and Valiant, 1986).

In contrast to this result, using other techniques from (Pitt and Valiant, 1986), we can show

Theorem 3. Existential conjunctive concepts are polynomially learnable from k-object random examples for any fixed k if we allow our learning algorithm to produce a hypothesis that is not existential conjunctive. □

The proof of this result involves transforming the problem of learning existential conjunctive concepts on an instance space with k objects per scene into the problem of learning k!-CNF concepts (Conjunctive Normal Form concepts with at most k! atoms per clause) in an attribute-based instance space. Since only a small fraction of such CNF concepts are needed to represent existential conjunctive concepts from the original instance space, this is actually a much larger hypothesis space. Techniques of (Valiant, 1984) or (Haussler, 1986) can be used to find k!-CNF concepts that, with high probability, approximate the existential conjunctive target concept to any desired accuracy. The drawback is that the time required for these techniques grows exponentially in k!, and hence the algorithm is not really practical for k larger than 2 or 3.

For larger k, the best available general learning algorithms are still the ones that use the hypothesis space of all existential conjunctive concepts, but employ heuristics to prune the search for a consistent hypothesis in this space, as mentioned in the introduction. As in the Valiant framework, let us assume that our sample is produced by drawing random examples of an unknown existential conjunctive target concept. The error of a hypothesis is defined as the probability that it will misclassify a randomly drawn example.
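The generality test between concept graphs defined earlier (and, via the single representation trick, the instance-satisfaction test) can be sketched by brute force. Flattening each tuple of restrictions into plain sets of admissible values is an assumption of the sketch, and the permutation search is exponential in the number of variables, in line with the hardness results above.

```python
# G1 is more general than G2 iff some 1-1 mapping of G1's nodes into G2's
# nodes sends every node and edge of G2 in the range of the mapping to a
# label that is stronger (a subset) than the corresponding label in G1.
# A scene satisfies a concept iff more_general(concept_graph, scene_graph).
from itertools import permutations

def more_general(g1, g2):
    """Each graph is (node_labels, edge_labels): node_labels is a list of
    sets of admissible values; edge_labels maps each ordered pair (i, j) of
    distinct node indices to a set of admissible values."""
    nodes1, edges1 = g1
    nodes2, edges2 = g2
    for image in permutations(range(len(nodes2)), len(nodes1)):
        ok = all(nodes2[image[i]] <= nodes1[i] for i in range(len(nodes1)))
        ok = ok and all(edges2[(image[i], image[j])] <= edges1[(i, j)]
                        for (i, j) in edges1)
        if ok:
            return True
    return False
```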
Feigenbaurn. Handbook defined by n relations (unary or binary). There is a sample size m of Artificial Intelligence Vol. III. William Kaufmann, 1982. that is (Dietterich and Michalski, 1983) T.G. Dietterich and R.S. Michalski. A 0 [ slog$&+r) log SlO~~~fr) , 1 comparative review of selected methods for learning from examples. In Machine learning: an artificial intelligence approach, Tioga Press, Palo Alto, CA, pages 41-$1,1983. such that for any target concept c, given m independent random (Duda and Hart, 1973) R. Duda and P. Hart. Pattern Clussification and examples of c, the probability that all consistent existential Scene Analysis. Wiley, 1973. conjunctive hypotheses with at most s atoms have error less than E is at least 1 - 6. Moteover, this holds independent of the choice of the (Gill, 1977) J. Gill. Probabilistic Turing machines. SIAeM J. Comput., 6 (4): 675-695, 1977. probability distribution on the instance space governing the genera- tion of examples. q Since s is a measure of simplicity for existential conjunctive hypotheses, this result essentially says that if the sample size is large enough, then all simple hypotheses that are poor approximations to the target concept will be explicitly contradicted by an example. Thus the remaining (i.e. consistent) simple hypotheses (if any) will all be good approximations to the target concept. Hence the simpli- city of the hypothesis produced by a heuristic learning algorithm can have a significant effect on the confidence we have in its accuracy, a form of Occam’s Razor (see also Pearl, 1978; Blumer et al., 1986). In (Haussler, 1986) similar results were obtained for pure con- junctive concepts in an attribute-based domain, with sample size of 0 [ slo?) log “lo;$rfz’ 1 (Haussler, 1986) D. Waussler. Quantifying the inductive bias in concept learning. In Proc. AAAI ‘86, pages 485489, Philadelphia, PA, 1986. (HaussPer, 1987) D. Haussler. Bias, Version Spaces and Valiant’s Learn- ing Framework. In Proc. 4th Int. Workshop on Machine Learning, Irvine, CA, June 1987, to appear. (Hayes-Roth and McDermott, 1978) F. Hayes-Roth and J. McDermott. An interference matching technique for inducing abstractions. ln Comm. ACM, 21(5): 401-410,1978. (Kodratoff and Ganascia, 1986) Y. Kodratoff and J. Ganascia. Improv- ing the generalization step in learning. In Machine Learning II,, pages 215- 244, R. Michalski, J. Carbonell and T. Mitchell, eds., Morgan Kaufmann, Los Altos, CA, 1986. (Knapman, 1978) J. Knapman. A critical review of Winston’s learning structural descriptions from examples. AISB Quarterly, 31: 319-320,1978. (Michalski, 1980) R.S. Michalski. Pattern Recognition as rule-guided inductive inference. IEEE PAMI, 2 (4): 349-361,198O. In fact these results are a special case of Theorem 4 with k = 1, corresponding to the case when each scene contains exactly one object, and hence the structural domain is reduced to an attribute- based domain. What is significant is that in structural domains, the sample size required grows only logarithmically as the number k of objects per scene is increased. The key problem that remains is finding the best heuristic. Theorem 2 sbows that it is unlikely that we will find a heuristic that is guaranteed to work on random examples. 
However, it still might be the case that by the addition of queries of the type discussed in (Angluin, 1986), a polynomial learning algorithm for existential con- junctive concepts could be found that always produces simple, con- sistent existential conjunctive hypotheses, and whose performance does not degrade badly with increasing k like the algorithm of Theorem 3 above. The work in (Sammut and Banerji, 1986) is a step in this direction, but as yet mere has been no careful performance analysis of the teclmiques used them. Acknowledgements. I would like to thank Les Valiant, Nick Little- stone, Manfred Warmutb and Pat Langley for helpful conversations concerning tbis material. (Michalski, 1983) W.S. Michalski. A theory and methodology of inductive learning. l[n Machine learning: QII artificial intelligence approach, pages 83-134. Tioga Press, Palo Alto, CA. 1983. (MitcbeU, 1982) T.M. Mitchell. Generalization as search. Art. Intell., 18: 203-226, 1982. (Pearl, 1978) J. Pearl. On the connection between the complexity and credibility of inferred models. Int. J. Gen. Sys., 4: 255-264, 1978. (pita and Valiant, 19&S) L. Pitt and L.G. Valiant, Computational Limita- tions on Learning fpom Examples. Technical Report TR-05-86, Aiken Computing Lab., Harvard University, 1986. (Rabin, 1976) MO. Rabin. Probabilistic Algorithms. In Algotithms and Complexity: New Directions wrd Recent Results, pages 21-39, J.F. Traub, Ed., Academic Press, New York, 1976. (Sammut and Banerji, 1986) C. Sammut and R. Baneji. Learning con- cepts by asking questions. In Machine Learning II. R. Michalski, J. Car- bone11 and T. Mitchell, eds., Morgan Kaufmann, Los Altos, CA, 1986. (Utgoff, 1986) P. Utgoff. Shift of Bias for inductive Concept Learning. In Muchine Learning II. R. Michalski, J. Carhonell and T. Mitchell, eds., Morgan Kaufmann, Los Altos, CA, 1986. (Valiant, 1984) LG. Valiant. A theory of the learnable. Covnm. ACM, 27 (11): 1134-1142,1984. References: (Valiant, 1985) L.G. Valiant. Learning disjunctions of conjunctions. In Proc. 9th IJCAI, vol. 1, pages 560-566, Los Angeles, CA, August 1985. (Ar@uin, 1986) D. Angluin. Types of queries for concept learning. Techn- ical Report YALEU/DCS/TR-479, Yale University, 1986. (Vapnik and Chervonenkii, 1971) V.N. Vapnik and A.Ya. Chervonenkis. On the uniform convergence of relative frequencies of events to their pro- (Blumer et aL, 1986) A. Blumer, A. Ehrenfeucht, D. Haussler and M. Warmuth. Classifying learnable geometric concepts with the Vapnik- Chervonenkis dimension. In 18th ACM Symp. Theor: Comp., Berk&y, CA, 1986. (Bundy et al., 1985) A. Bundy, B. Silver, and D. Plumrner. An analytical comparison of some rule-learning programs. Artif. Intel., 27: 137-181, bahilities. Th. Prob. and its Appl., 16 (2): 264280, 197 1. (Vere, 1975) S.A. Vere. Induction of concepts in the predicate calculus. In Proc. 4th IJCAI, pages 281-287, Tbilisi, USSR, 1975. (Winston, l975) P. Winston. Learning structural descriptions from exam- ples. In Thd! Psychology of Computer Vision. McGraw-Hill, New York, 1975. 1985. 470 Machine Learning & Knowledge Acquisition
M. Henrion and D. R. Cooley
Departments of Social and Decision Science, and Engineering and Public Policy, Carnegie Mellon University, Pittsburgh, PA 15213

Abstract

Decision analysis provides a set of techniques for structuring and encoding expert knowledge, comparable with knowledge engineering techniques for rule-based expert systems. In order to compare the expert systems and decision analysis approaches, each was applied to the same task, namely the diagnosis and treatment of root disorders in apple trees. This experiment illustrates a variety of theoretical and practical differences between them, including the semantics of the network representations (inference net vs. influence diagram or Bayes' belief net), approaches to modelling uncertainty and preferences, the relative effort required, and their attitudes to human reasoning under uncertainty: as the ideal to be emulated, or as unreliable and to be improved upon?

As schemes for representing uncertainty in AI proliferate and the debate about their various merits intensifies [Kanal & Lemmer, 1986; Gale, 1986], it is becoming increasingly important to understand their relative advantages and drawbacks. One major axis of contention has been between proponents of various heuristic, qualitative, and fuzzy logic schemes, who argue that these are more compatible with human mental representations and consequently more practical to build and explain [Buchanan & Shortliffe, 1984; Cohen, 1985; Zadeh, 1986], and advocates of probabilistic schemes, who emphasize the virtues of being based on a normative theory of decision making under uncertainty [Pearl, 1985; Cheeseman, 1985; Spiegelhalter, 1986]. The latter have argued the advantages of approaches that are coherent, i.e. strictly consistent with the axioms of probability, over the earlier approximate Bayesian schemes developed for Mycin and Prospector [Duda et al., 1976]. So far, comparisons have focused primarily on differences in theoretical assumptions [Bonissone, 1986; Horvitz, Heckerman & Langlotz, 1986; Henrion, 1987a], although there have been a few experimental studies which compare the performance of different uncertain inference schemes given knowledge formalized as a small rule-set [Tong & Shapiro, 1985; Wise & Henrion, 1986; Yadrick et al., 1986; Wise, 1986].¹

¹This work was supported by the National Science Foundation under grant ET-8603493 to Carnegie Mellon.

The informal experience of knowledge engineers and decision analysts alike suggests that choices about the structuring and encoding process that formalizes expert knowledge may have more impact on the final results than the numerical details of the uncertainty calculus employed. In the past, coherent probabilistic schemes have been criticized as intractable for significant practical applications, but recent developments appear to have improved their practicality for construction and computation. These include influence diagrams [Howard & Matheson, 1984] and Bayesian belief nets [Pearl, 1986]. These are graphical tools which facilitate the qualitative structuring of uncertain knowledge and provide a framework for the numerical encoding of probabilistic relations in a form guaranteed to be coherent. The term knowledge engineering seems as appropriate for describing the activity of the decision analyst in building a probabilistic decision model to represent uncertain beliefs and preferences as it is for the construction of an expert system.

The purpose of this paper is twofold: first, to illustrate the knowledge engineering process employing such a decision analytic approach, and second, to compare it with a rule-based expert system approach applied to the same problem. Since most readers will be more familiar with the latter approach, we shall provide greater detail on the former.

The task selected involved the diagnosis and treatment of root disorders of apple trees. We considered several causes of root damage, including water stress from waterlogged soil, cold stress from a severe winter, and the fungus phytophthora. These problems are of major commercial significance to orchardists, and often lead to damage and destruction of apple trees. Moderate cases of phytophthora can be controlled by applying a fungicide. Other treatments include tiling and draining the area to control water damage, and bridge-grafting. If the damage has progressed too far, reducing apple production permanently by more than about 25%, the most efficient solution may be to destroy the trees and replant. The consultant plant pathologist uses a wide variety of evidence about the tree, environmental conditions, observable symptoms, and laboratory tests to diagnose the cause of root damage, and so recommend treatments. Figure 1 lists some of the elements of the problem.

Diagnoses: Phytophthora, cold stress, water stress.
Treatments: Fungicide, tiling and drainage, uproot and replant, wait and see.
Examples of evidential variables (with values):
- Winter cold episodes without snow cover (yes, no)
- Soil texture (light, moderate, heavy)
- Wetland vegetation (yes, no)
- Phytophthora-resistant root stock (yes, no)
- Delineated root cankers (yes, no)
- Root tissue damage (none, little, moderate, severe)

Figure 1: Selected elements of the apple root problem

Two decision support systems were constructed to diagnose root disorders in apple trees and recommend treatment. One, which we will term the "ES model", was built by a knowledge engineer experienced in building knowledge-based expert systems. The other, which we will term the "DA model", was built by an experienced decision analyst. Both were developed on the basis of extensive interviews with a plant pathologist (D.R.C.), who has ten years' experience as a specialist in this area. The initial structuring, encoding and implementation phases of the knowledge engineering process were carried out over an intensive four-day period, during which the two knowledge engineers alternated in working with the expert. The full implementation, testing, and refinement of the systems were completed over a longer time frame.

The ES model was implemented in KEE (Intellicorp) as a standard inference network with data-directed control. Diagnostic relationships are represented as rules giving the degree of belief in intermediate hypotheses and disorders based on Boolean combinations of data (evidential variables). Additional rules provide support for various treatments based on the diagnoses and other evidence. The DA model employs an influence diagram to represent the expert's beliefs about how possible root disorders and treatments might affect the tree productivity and costs. It incorporates a Bayesian belief net to diagnose the disorders based on the available evidence.
The purpose of this paper is twofold: First to illustrate the knowledge engineering process employing such a decision analytic approach, and second, to compare it with a rule-based expert system approach applied to the same problem. Since most readers will be more familiar with the latter approach, we shall provide greater detail on the former. The task selected involved the diagnosis and treatment of root disorders of apple trees. We considered several causes of root damage, including water stress from waterlogged soil, cold stress from a severe winter, and the fungus, phytophthora. These problems are of major commercial significance to orchardists, and often lead to damage and destruction of apple trees. Moderate cases of phytophthora can be controlled by applying a fungicide. Other treatments include tiling and draining the area to control water damage, and bridge-grafting. If the damage has progressed too far, reducing apple production permanently by more than about 25%, the most efficient solution may be to destroy the trees and replant. The consultant plant pathologist uses a wide variety of evidence about the tree, environmental conditions, observable symptoms, and laboratory tests to diagnose Henrion and Cooley 471 From: AAAI-87 Proceedings. Copyright ©1987, AAAI (www.aaai.org). All rights reserved. the cause of root damage, and so recommend treatments. Figure 1 lists some of the elements of the problem. iagnoses: Phytophthora, cold stress, water stress. reatments: Fungicide, tiling and drainage, uproot and replant, wait and see. Exaiqdes of evidential wariables (with values): Winter cold episodes without snow cover (yes, no) e Soil texture (light, moderate, heavy) 8 Wetland vegetation (yes, no) 0 Phytophthera resistant root stock (yes, no) 0 Delineated root cankers (yes, no) 8 Root tissue damage (None, little, moderate, severe) Figure 1: Selected elements of the apple root problem Two decision support systems were constructed to diagnose root disorders in apple trees and recommend treatment. One, which we will term the “ES model”, was built by an knowledge engineer experienced in building knowledge-based expert systems. The other, which we will term the “DA model”, was built by an experienced decision analyst. Both were developed on the basis of extensive interviews with a plant pathologist (D.R.C.), who has ten years experience as a specialist in this area. The initial structuring, encoding and implementation phases of the knowledge engineering process were carried out over an intensive four-day period, during which the two knowledge engineers alternated in working with the expert. The full implementation, testing, and refinement of the systems were completed over a longer time frame. The ES model was implemented in KEE (Intellicorp) as a standard inference network with data-directed control. Diagnostic relationships are represented as rules giving the degree of belief in intermediate hypotheses and disorders based on Boolean combinations of data (evidential variables). Additional rules provide support for various treatments based on the diagnoses and other evidence. The DA model employs an influence diagram to represent the expert’s beliefs about how possible root disorders and treatments might affect the tree productivity and costs. It incorporates a Bayesian belief net to diagnose the disorders based on the available evidence. 
During the initial interview period, part of the influence diagram was used to construct a decision tree, which was implemented in Arborist (Texas Instruments) for preliminary analysis. Subsequently, the entire influence diagram including the diagnostic belief net was implemented using a combination of algorithms for propagating evidence through Bayes’ nets [Pearl, 1986; Henrion, 1987b]. The initial phase for both approaches was to identify the objects in the domain, that is, the root disorders, treatments, and evidential variables. Both knowledge engineers worked with the expert to draw directed graphs which represent qualitative evidential links between these elements. The first to be acquired was the influence diagram for the DA model. From this, and further discussion with the expert, an inference net was derived. These networks allow the decomposition of the expert’s domain knowledge into separable local relationships. The initial influence diagram had 30 nodes, and the inference net had 25, of which 20 were common to both. Figure 2 shows a fragment of both networks superimposed for comparison. Although they are topologically similar, there is a fundamental difference in the interpretation of the links. In the inference net, the direction of the links corresponds to the anticipated direction of inference, from evidence to disorders to treatments. In the influence diagram the direction of the links generally represents the believed direction of causal influence, for example abiotic stress increases susceptibility to phytophthora, and either of these can cause root tissue damage. Note that the influence diagram does not need to represent causal influences in full scientific detail (e.g. the physiology of how phytophthora produces root tissue damage) even when this is known, unless it seems likely to significantly improve the inference results. The influence diagram is also a way to express qualitative judgments about probabilistic independence: Two unlinked variables with a common cause (e.g. the phytopthora lab test and root tissue damage) are conditionally independent of each other and also of indirect antecedents (e.g. resistant root stock) given their immediate cause (e.g. phytophthora Figure 2: Fragment of influence diagram (solid arrows) with corresponding inference network (hatched arrows) infection). As Figure 2 shows, the links in the two representations may go in the same direction, such as where resistant root stock decreases susceptiblity to phytophthora infection and (therefore) provides evidence 472 Machine Learning & Knowledge Acquisition against phytophthora. More often they go in opposite directions, such as where observable symptoms are caused by a disease and therefore they are used as evidence for it. For example, phytophthora can cause delineated cankers, and so cankers are evidence for phytophthora. For both approaches, the directed links help in the subsequent encoding of the relationships. For the ES approach, arrows converging on a node indicate a potential diagnostic rule with the antecedent nodes to appear in the condition and the destination node to appear in the action. For the DA approach, uncertain influences are encoded as conditional distributions with the antecedent nodes as the conditioning variables. Psychological research suggests it is generally easier to assess the probabilities of effects conditional on their causes (e.g. 
symptoms given diseases) than vice versa [Kahneman, Slavic & Tversky, 19821, This ES model, like almost all rule-based expert systems, can only propagate evidence in the direction in which it is encoded, (no matter whether the inference is controlled by forward or backward chaining.) In contrast, in the DA model the influence diagram does not determine the direction of inference. By taking expectations over the conditions, this may be in the causal direction, or, by application of Bayes’ rule, it may be the opposite, diagnostic direction, according to requirements of the application. For example, it is possible to determine the current probability of an unexamined symptom, based on observations of other symptoms, before deciding whether it is likely to be worthwhile to examine it. the orchardist was assumed to be risk neutral. The DA model computes the expected net cost of each treatment and recommends the treatment which minimizes it. The ES model relies on heuristic rules for making inferences about what treatments to recommend based on the degrees of belief in the diagnoses without explicit consideration of costs. This approach seemed more natural for the expert, and was certainly much easier. Since the costs of treatment, tree replacement, and lost production for a given outcome do not vary greatly from one orchard to another, one can argue that such general rules may be widely applicable, somewhat analogous to the way it has been suggested that Mycin rules might be justified by decision analysis [Langlotz, Shortliffe, & Fagan, 19861. However, as we shall see, at least in this case, formal analysis raises doubts about the adequacy of such informal analysis. s In the ES model, the relationships in the inference network were encoded as sets of rules. The “degree of belief” in a hypothesis is one from the following ordered set of seven values: {confirmed, strongly-supported, supported, neutral, detracted, strongly-detracted, disconfirmed}. Each rule specifies the degree of belief in a conclusion based on combinations of its antecedents. For example, the rules in Figure 4 specify the degree of belief in phytophthora damage based on the values of six possible sources of evidence. These rules encode only those combinations of evidence thought by the expert to be important. Figure : Influence diagram showing relation between root problems, treatment and outcome costs. The decision is enclosed in a rectangle, and the criterion variable is enclosed in a diamond. ellin sts at-i references The DA approach developed an explicit quantitative model to estimate the costs and values of each combination of outcomes and treatments (Figure 3). Costs over multiple years were combined to obtain a discounted present value (a replacement tree takes 5 years to achieve full production). For the initial model, if phyto-resistance is low then supported if reduced-fine-root-hairs is yes then supported if reduced-fine-root-hairs is no then detracted if cold-stress is at least supported or water-stress is at least supported then supported if tissue-discoloration-below-soil is delineated-canker or tissue-discoloration-above-soil is delineated-canker then strongly-supported Figure 4: Example diagnostic rules from the ES model for level of belief in phytophthora infection Each uncertain influence is encoded as a probability distribution for the consequent, conditional on all its antecedents. 
To quantify each distribution, the expert was first asked for a verbal expression to get a rough idea, and then for explicit numbers. For example, evidence PI, wetland vegetation, was judged to be causally influenced by hypothe ’ judged it “quite probable” that wet site. The expert nd vegetation would be observed at a wet site, “impossible” if it was not a wet site. These judgments w re then quantified as the two conditional probabilities, 04 w = 0.7, p(wI-w) = 0. Sensitivity analysis was used to examine the importance of accuracy in such assessments. In the vast majority of cases, a very rough assessment is perfectly adequate. Henrion and Cooley 473 explicitly, and so requires more extensive testing and refinement. Fungicide Abiotic Phytoph. Damage Conditional treatment stress infection to tree probability None None None 1 0.2 None I Temporary 0.5 Permanent 0.3 Non- recov Replace 1 I None 0.25 Treat I Temporary 0.35 Permanent 0.35 Reolace 0.2 Figure 5: Partial decision tree for treatment decision The number of parameters of the conditional distribution increases exponentially with the number of conditioning variables, but qualitative knowledge can usually reduce the assessment effort drastically. For example, the damage to the tree has 4 levels, conditioned on the severity of the phytophthora infection and abiotic stress, each at 3 levels, and the fungicide treatment decision (yes or no), giving a potential of 4 x 3 x3x2 = 72 parameters to be judged. However, ,many Iare impossible, certain, or otherwise constrained by qualitative considerations, and there are actually only 12 different numbers requiring assessment. Figure 5 shows half of the decision tree whose terminal branches represent this conditional distribution. VU. Testing and Refinement In both approaches, initial implementations were tested to see if their conclusions were reasonable in the judgment of the expert, and the models were elaborated and tuned in the light of these tests. During the construction of the DA model, the use of conditional distributions to encode influences requires the expert to consider the impact of all possible combinations of evidence and decisions for each influence. The approach to encoding diagnostic rules for the ES model was much less demanding in its initial requirements of the expert. But it is consequently more likely to encounter combinations of events that had not been considered The two approaches differ fundamentally in their response to unexpected results. For the ES model, rules were modified or added to obtain results that agreed more closely with the original expectations of the expert, since the primary goal was to emulate his judgment. In the DA approach, after initial rechecking of the relevant model structure and assessed probabilities, the probabilistic reasoning leading to the surprising conclusions was explained to the expert. If this lead him to accept them, the system was left unmodified. This was clearly illustrated by the following example. Preliminary sensitivity analysis of the decision tree in Figure 5 showed that treating with fungicide had positive expected value, since the treatment is cheap (about $0.58 per tree) relative to the cost of replacing a tree (about $85). But the actual increment in expected value turned out to be very small, so small that it might often be outweighed by considerations not explicit in the model, such as environmental side-effects of the fungicide. 
This result was initially surprising to the expert, but examination of the model provided an explanation: The fungicide’s effectiveness in controlling phytophthora was judged to be modest, and the probability that a tree is curable, i.e. that an infection is both present and not already beyond recovery is quite low. On reflection, he found this explanation convincing, and the result likely to be of considerable practical interest. No single knowledge engineer or decision analyst can claim that their approach is completely representative of all practitioners of their respective crafts. Certain aspects of both the approaches used here are somewhat atypical. The qualitative representation of uncertainty used in the ES model is less common than heuristic numerical schemes, such as Certainty Factors. The particular techniques applied here for evaluating influence diagrams are recent and not yet in general use. The size of both models and the effort devoted to their construction were quite modest. Nevertheless, several important points of comparison which are of general applicability to the two approaches, are clearly illustrated by this experiment. In the initial structuring phase there are significant similarities between the approaches in the identification of the key elements, and the use of graphs to represent their interrelationships, but it is important to understand the fundamental differences in meaning between the inference network and influence diagram. Of course, research in expert systems has developed a rich array of techniques for knowledge represention and categorical reasoning that have not been a formal concern of decision analysis. It is specifically in the approaches to inference under uncertainty and decision making that the comparison is interesting. Here the main advantage of the ES approach is in the greater ease in initially encoding uncertain dependencies. This arises from the informal, heuristic nature of the language whether qualitative (as in this experiment) or quantitative, and the 474 Machine Learning & Knowledge Acquisition willingness to accept partial specifications, with inference rules conditioned on only the most salient combinations of evidence rather than the exhaustive combinations required for the DA model. The greater ease of encoding means that, for a given expenditure of effort, it is possible to build a system that deals with a larger number of sources of evidence, diagnoses and treatments than with the more rigorous DA approach. The downside of this more relaxed approach is that the ES model is likely to require more extensive testing, debugging, and tuning to ensure that it performs adequately, and can handle common and impoflant situations. Whether this additional effort will, in general, entirely cancel out its initial advantage is unclear from a single experiment and will depend on the criterion for adequacy. Many in the Al community have believed coherent probabilistic approaches to be essentially intractable for problems involving the explicit representation of large bodies of expert knowledge. Part of this belief may stem from the traditional decision tree representation used by decision analysts, which grows exponentially with the number of uncertain events and decision variables. However, as illustrated, the influence diagram and Bayes’ belief net provide tools for structuring and probabilistic inference which, if used judiciously, may have only linear complexity, albeit with a higher constant than the ES approach. 
A second common misgiving about the DA approach is the quantity of numerical judgments needed for assessing the probabilities. As we have seen, this may be greatly reduced by careful structuring and use of qualitative knowledge. Moreover, the vast majority of the numbers have small impact on the results, and so rough judgments will be adequate. Sensitivity analysis can help identify those few where significant assessment effort may be worthwhile. Many of the advantages of the DA approach arise from its clearer separation of domain knowledge, obtained from the expert, and its general methods for inference under uncertainty based on 5ayesian decision theory. The modelling of causal influences instead of inference rules provides an isotropic representation of domain knowledge with no preferred direction of inference [Henrion, 1987a]. Where the ES rules support inference only in the direction encoded, coherent Bayesian inference can perform causal, or diagnostic inference, as the occasion demands, operating on the same representation. The fact that causal models turn out to have advantages in representing uncertainty suggests an interesting relationship with other work on causal modelling for explanation and categorical diagnosis, for example in medical Al [Patil & Szolovits, 19811. The treatment of discrepancies between the conclusions of the model and the expert illustrates a basic difference in philosophy. In the ES approach the performance of the expert is considered the ‘“gold standard” which we seek to emulate. Discrepancies are therefore taken as a sign of a deficiency, and must be remedied by modifying or adding inference rules. The decision analyst also relies on the judgment of the expert, at least in those areas for which the expert has direct experience or knowledge. However, the decision analyst tends to be more impressed by the psychological findings on the limitations of human reasoning under uncertainty [Kahneman, Slavic & Tversky, 19821, and so is more skeptical about the expert’s inferences beyond his immediate experience. If they disagree with the inferences made by the model, then the decision analyst may well prefer the results of the model. Indeed the possibility that the formal model may improve on the intuitive inferences of the expert is a major motivation for constructing it. This was dramatically illustrated in the experiment by the low expected value of the fungicide treatment which the expert initially found counter-intuitive. After rechecking the assumptions and understanding the reasoning, he accepted its validity and modified his intuition. He found this an important insight likely to be of general value to other specialists in the area. The possibility of obtaining new results which go beyond current expert opinion requires a basis in some normative theory of decision making. Since the expert’s original belief about the worth of the fungicide treatment was consistent with current opinion, and not liable to obvious empirical contradiction, such an insight would be hard to obtain by informal means. R RUSi We have argued that the knowledge engineering effort required for a decision analytic approach is less than widely believed, and have demonstrated its feasibility for a significant application. However, although the influence diagram developed here is perhaps the largest yet reported, it remains two orders of magnitude smaller in scope than the largest expert systems reported (e.g. 
Conclusions

We have argued that the knowledge engineering effort required for a decision analytic approach is less than widely believed, and have demonstrated its feasibility for a significant application. However, although the influence diagram developed here is perhaps the largest yet reported, it remains two orders of magnitude smaller in scope than the largest expert systems reported (e.g. Internist/Caduceus), and it would be premature to make general claims about its capacity for major scale-up.

Although decision analysis has been practiced successfully for over fifteen years, software to support influence diagram structuring and evaluation is still in its infancy, though developing rapidly [Wiecha, 1986; Holtzman, 1985; Shachter, 1986; Henrion, 1987b]. The knowledge engineering effort for the decision analytic approach will no doubt be significantly reduced, although it is likely to remain somewhat greater than for rule-based expert systems. However, there are considerable advantages to methods based on normative theory. Such a basis facilitates a consistent integration of causal and diagnostic inference. Uncertain beliefs and preferences are clearly differentiated. It provides a cleaner separation between domain knowledge and inference methods, and so may improve on the fallible reasoning of the expert. Whether these advantages outweigh the extra effort involved depends on the problem domain and task, and will remain partly a matter of taste. In the future, the dilemma may be resolved by systems that integrate ideas and techniques from both approaches to provide a richer range of options combining the advantages of each.

References

Bonissone, P. P. (1986). Plausible Reasoning: Coping with Uncertainty in Expert Systems. In Encyclopedia of AI. John Wiley & Sons.

Buchanan, B. G. & Shortliffe, E. H. (1984). Rule-based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Addison-Wesley, Reading, Mass.

Cheeseman, P. (1985). In Defense of Probability. Proceedings of the 9th International Joint Conference on AI, Los Angeles, CA.

Cohen, P. R. (1985). Heuristic Reasoning about Uncertainty: An AI Approach. Pitman: Boston.

Duda, R. O., Hart, P. E., & Nilsson, N. J. (1976). Subjective Bayesian Methods for Rule-Based Inference Systems (SRI Technical Note 124). SRI International.

Gale, W. A. (ed.). (1986). AI and Statistics. Addison-Wesley: Reading, MA.

Henrion, M. (1987a). Uncertainty in Artificial Intelligence: Is probability epistemologically and heuristically adequate? In J. Mumpower (Ed.), Expert Systems and Expert Judgment. NATO ISI Series: Springer-Verlag.

Henrion, M. (1987b). Propagation of Uncertainty by Logic Sampling in Bayes' Networks. In L. N. Kanal & J. Lemmer (Eds.), Uncertainty in Artificial Intelligence. Amsterdam: North-Holland.

Holtzman, S. (1985). Intelligent Decision Systems. Doctoral dissertation, Engineering-Economic Systems, Stanford University.

Horvitz, E. J., Heckerman, D. E., & Langlotz, C. P. (1986). A framework for comparing alternative formalisms for plausible reasoning. Proceedings of AAAI-86, Philadelphia, PA. American Association for Artificial Intelligence.

Howard, R. A. & Matheson, J. E. (1984). Influence Diagrams. In R. A. Howard & J. E. Matheson (Eds.), The Principles and Applications of Decision Analysis, Vol. II. Strategic Decisions Group, Menlo Park, CA.

Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.

Kanal, L. N. & Lemmer, J. (eds.). (1986). Uncertainty in Artificial Intelligence. North-Holland: Amsterdam.

Langlotz, C. P., Shortliffe, E. H., & Fagan, L. M. (1986). Using decision theory to justify heuristics. Proceedings of AAAI-86. American Association for Artificial Intelligence.

Patil, R. S. & Szolovits, P. (1981). Causal understanding of patient illness in medical diagnosis. Proceedings of IJCAI-81, Vancouver, Canada. International Joint Committee for AI.

Pearl, J. (1985). How to do with probabilities what people say you can't (Tech. Rep. CSD-850031). Cognitive Systems Laboratory, Computer Science Department, University of California, Los Angeles, CA.

Pearl, J. (1986). Fusion, Propagation, and Structuring in Belief Networks. Artificial Intelligence, 29(3), 241-288.

Shachter, R. D. (1986). DAVID: Influence Diagram Processing System for the Macintosh. In Proceedings of the Second Workshop on Uncertainty in Artificial Intelligence. AAAI.

Spiegelhalter, D. J. (1986). A statistical view of uncertainty in expert systems. In W. Gale (Ed.), AI and Statistics. Addison-Wesley: Reading, MA.

Tong, R. M. & Shapiro, D. G. (1985). Experimental Investigations of Uncertainty in a Rule-based System for Information Retrieval. International Journal of Man-Machine Studies.

Wiecha, C. (1986). An empirical study of how visual programming aids in comprehending quantitative policy models. Doctoral dissertation, Carnegie Mellon University, Department of Engineering and Public Policy.

Wise, B. P. (1986). An experimental comparison of Uncertain Inference Systems. Doctoral dissertation, Carnegie Mellon University, The Robotics Institute and Department of Engineering and Public Policy.

Wise, B. P. & Henrion, M. (1986). A Framework for Comparing Uncertain Inference Systems to Probability. In L. N. Kanal & J. Lemmer (Eds.), Uncertainty in Artificial Intelligence. North-Holland: Amsterdam.

Yadrick, R. M., Vaughan, D. S., Perrin, B. M., Holden, P. D., & Kempf, K. G. (1986). Evaluation of Uncertain Inference Models I: PROSPECTOR. In Proceedings of the Second Workshop on Uncertainty in AI. AAAI.

Zadeh, L. (1986). Is probability theory sufficient for dealing with uncertainty in AI? A negative view. In L. N. Kanal & J. Lemmer (Eds.), Uncertainty in Artificial Intelligence. Amsterdam: North-Holland.
Smadar T. Kedar-Cabelli
Department of Computer Science
Rutgers University
New Brunswick, NJ 08903
ARPAnet: kedar-cabelli@rutgers.arpa

Abstract

Explanation-Based Generalization (EBG) has recently been a much-explored method of generalization. By utilizing domain knowledge, and knowledge of the concept being learned, EBG produces a valid generalization from a single example. Most EBG systems are currently provided with the concept being learned - or target concept - as a fixed input. A more robust generalization mechanism needs the ability to automatically formulate appropriate target concepts based on the purpose of the learning, since concepts learned for one purpose may not be appropriate for another. This paper introduces a technique and an implemented system that automatically formulate target concepts and their specialized definitions. In particular, the technique derives definitions of everyday artifacts (e.g. CUP) from information about the purpose for which agents intend to use them (e.g. to satisfy their thirst). Given two different purposes for which an agent might use a cup (e.g. as an ornament, versus to satisfy thirst), two different definitions can be derived.

I. Introduction

Explanation-Based Generalization (EBG) has recently been a much-explored method of generalization (e.g. [Mitchell et al., 1986], [DeJong and Mooney, 1986]). By utilizing domain knowledge, and knowledge of the concept being learned, this method produces a valid generalization from a single example. The key power of EBG derives from its ability to extract just those features relevant to concept membership, based on an explanation of how the example is a member of the concept being learned.

Most EBG systems are currently provided with the concept being learned - or target concept - as a fixed input. A more robust generalization mechanism needs the ability to automatically formulate appropriate target concepts based on the purpose of the learning, since concepts learned for one purpose may not be appropriate for another.

This paper introduces a technique, purposive concept formulation, and an implemented system, PurForm, that address the above limitation by using a specialized notion of purpose to automatically formulate target concepts and their definitions. In particular, the technique derives definitions of artifacts (e.g. CUP) from information about the purpose for which agents intend to use them (e.g. to satisfy their thirst). Given two different purposes for which an agent might use a cup (e.g. as an ornament, versus to satisfy thirst), two different definitions can be derived.

Consider the CUP scenario from [Mitchell et al., 1986], based on [Winston et al., 1983]: a structural definition of CUP is extracted by explaining how an example of CUP satisfies some pre-defined functional definition. Given the functional definition of CUP as a stable, liftable, open vessel, and given one example (say a green mug), the resulting structural definition states that CUPs are any light, flat-bottomed object with an upward-pointing concavity and a handle. Suppose instead that an agent wants to use the cup for the purpose of drinking hot liquids, or to use it as an ornament. Given explicit knowledge of purpose, the purposive concept formulation technique can automatically derive a definition appropriate for that purpose, rather than have it as a fixed input as in Winston's system.
A cup for the purpose of drinking hot liquids, then, need not only be liftable, stable, and an open vessel, but also needs to insulate heat.

In the balance of the paper, we discuss the technique and the implemented system. Section II presents an overview of the technique. Section III describes the system, PurForm, in terms of inputs, outputs, and processes of each of the modules. Section IV discusses related work. We conclude in Section V with limitations and some future research issues.

II. Overview of the Technique

A. What is Purpose?

Concepts often arise because of a need. An agent wants to achieve a goal, and needs to identify objects that would facilitate that goal. A description of an object whose properties enable an agent's goal becomes a useful concept to acquire. As a step toward automatically formulating target concepts, we examine a specialized class of concepts, those that describe everyday artifacts. Artifacts can be viewed as objects designed to enable agents' goals (chairs are to be seated on, pens are to write with, and so on). More precisely, using conventional AI planning terminology, the specialized notion of purpose of an artifact is to enable a plan of actions to achieve an agent's goal. The artifact will enable such a plan if it satisfies those preconditions of the plan in which it is involved. The purposive concept formulation technique uses standard algorithms of planning and goal regression to compute the weakest preconditions of a plan. The target concept is then formulated by isolating those preconditions of the plan that describe properties inherent to the artifact that make it useable in the plan. Given a different goal or plan, a different target concept would be formulated.

The purposive concept formulation technique consists of three steps. It first isolates a useful target concept to acquire. For example, if the agent's goal is to find something from which to drink hot tea, then an artifact that can be used to drink from becomes the target concept to acquire.

Next, given the target concept, its purpose (to enable the agent's goal) and a plan to achieve the goal, the purposive functional definition is formulated by collecting those properties inherent to the target concept that make it useable in the plan. For example, the plan for drinking hot tea might be to POUR the hot tea into a container, GRASP it with the hot tea in order to PICKUP, and finally DRINK the tea from it. The class of artifacts useable in the plan (HOT-CUPS) are those open containers that can contain hot liquid, and at the same time can be grasped and picked up by the agent, and can be emptied of the hot liquid.

In the third and final step, the purposive functional definition is reformulated, or operationalized [Mostow, 1983; Keller, 1987a], into a more useable form, with the aid of an example of a HOT-CUP. The definition will be more operational if the system can use it to more efficiently recognize members of the target concept. Using a blue ceramic mug as an example of a HOT-CUP, the resulting operational definition states that HOT-CUPS are artifacts that are light weight, have an open concavity that is cylindrical, non-porous, and made of ceramic material, and have a flat bottom and a handle.

B. Statement of the Problem

In order to more precisely define the purposive concept formulation problem, we introduce some terminology. A state description associates a state of the world with a list of facts, represented by ground atomic predicates. A goal formula is represented by a conjunction of atomic predicates that are desired to hold in some distinguished final state. An operator describes an action, and is represented in a STRIPS-like formalism with conjunctive formulae for precondition, add and delete lists. A plan is defined as a sequence of operators that transform an initial state into a state matching the goal formula.
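As a concrete, purely illustrative rendering of this formalism, a STRIPS-like operator can be encoded in Prolog as strips_op(Action, Preconditions, AddList, DeleteList). The particular conjuncts below are our guesses at the flavor of such a database, not PurForm's actual contents (we use strips_op rather than op to avoid a clash with Prolog's built-in op/3):

    % strips_op(Action, Preconditions, AddList, DeleteList)
    strips_op(grasp(Agent, Obj),
        [grasper_empty(Agent), ungrasped(Obj), can(grasp(Agent, Obj))],
        [grasped(Agent, Obj)],
        [grasper_empty(Agent), ungrasped(Obj)]).

    strips_op(pickup(Agent, Obj),
        [grasped(Agent, Obj), on(Obj, Surface), can(pickup(Agent, Obj))],
        [holding(Agent, Obj)],
        [on(Obj, Surface)]).

A plan is then just a list of such actions whose preconditions are satisfied in sequence, which is exactly the structure the goal regression step described below walks backwards through.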
A domain theory is a set of rules, ground atomic predicates, and operators, that represent the general axioms, facts, and actions of the domain, respectively. A concept is represented as a predicate over some universe of objects, and characterizes some subset of these objects. Each object is described by a collection of ground atomic predicates. A concept definition describes sufficient conditions for concept membership. An object that satisfies the concept definition is called an example of that concept. A generalization of an example is a concept definition that describes a set containing that example. A concept definition is functional when it refers to the requirements on the action for which examples of the concept are used. A concept definition is structural when it refers to physical properties that satisfy the functional requirements.

The purposive concept formulation problem, and the technique for solving it, are defined as follows:

Purposive Concept Formulation

Given:
- Goal
- Plan to achieve goal
- Domain theory

Determine:
- Target concept
- Purposive functional definition of target concept
- Operational definition of target concept

Technique:
1. Isolate the target concept
2. Formulate purposive functional definition of target concept
3. Operationalize the definition of target concept

III. The PurForm System

PurForm is the prototype system that implements the technique. In this section we describe the details of the system, including representation, inputs and outputs, and the process of each step. We illustrate each step with our case study of formulating the definition of HOT-CUP. PurForm was implemented in PROLOG.

PurForm is invoked after a problem solver has created a plan to achieve a given goal. The problem solver is a backward chaining planner (from [Nilsson, 1980], [Kowalski, 1979]). Given the goal formula ingested(robbie, hot_tea, X) and an initial state description in which robbie the robot is in the kitchen; mugs, bowls, glasses, et cetera, are on shelves in the kitchen; and a tea urn in the kitchen is filled with hot tea; the planner finds a plan in which mug1 is used to drink:

    [pour(robbie, hot_tea, tea_urn1, mug1),
     grasp(robbie, mug1),
     pickup(robbie, mug1),
     ingest(robbie, hot_tea, mug1)]

PurForm is given the goal, and a plan that achieves the goal, as input (along with a domain theory). It performs purposive concept formulation by isolating a target concept to acquire, formulating a purposive functional definition for it, and operationalizing the definition.

Step 1: Isolating the Target Concept. PurForm first isolates a useful target concept to acquire. Any of the arguments in the goal or plan are candidate target concepts. We have simplified this step by choosing a target concept that is initially unknown (i.e. a variable argument in the goal), yet is useful in enabling the goal. Given the goal formula ingested(robbie, hot_tea, X), X is the unknown argument, and becomes the target concept to acquire.
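Read literally, Step 1 is a one-clause operation. The following Prolog sketch (our rendering, with a hypothetical predicate name) picks out the variable argument of the goal as the target concept:

    :- use_module(library(lists)).

    % isolate_target(+Goal, -Target): Target is an unbound argument of
    % the goal formula, e.g. X in ingested(robbie, hot_tea, X).
    isolate_target(Goal, Target) :-
        Goal =.. [_Functor | Args],
        member(Target, Args),
        var(Target).

    % ?- isolate_target(ingested(robbie, hot_tea, X), T).
    % T = X.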
Step 2: Formulating the Purposive Concept Definition. Given the target concept, its purpose (to enable the agent's goal), and the plan, this step formulates the purposive functional definition by reasoning about the role of the target concept in enabling the plan to achieve the goal. In order to satisfy the plan, the artifact must satisfy the conjuncts in the preconditions of those plan actions in which it is involved. These conjuncts are collected together by regressing the goal through the plan, and then analyzing the role of the artifact in the regressed expression.

Given a goal formula and a plan as input, the goal regression algorithm [Nilsson, 1980] produces a description of all initial states such that applying the plan to any of these states produces a final state matching the goal formula. The resulting regressed expression consists of all those goal conjuncts and preconditions that the operators did not achieve, and therefore must be true even before the operator sequence is applied, that is, in the initial state. Given the goal, with generalized arguments ingested(Grasper_Robot, Hot_Drink, X), and the generalized plan (with constant arguments replaced by variables), the regressed expression is:

    open(From_Container),
    can(pour(Grasper_Robot, Hot_Drink, From_Container, X)),
    can(contain(From_Container, Hot_Drink)),
    empty(X),
    open(X),
    can(contain(X, Hot_Drink)),
    grasper_empty(Grasper_Robot),
    can(grasp(Grasper_Robot, X)),
    ungrasped(X),
    can(pickup(Grasper_Robot, X)),
    on(X, Surface),
    same_loc(Grasper_Robot, X, location),
    can(ingest(Grasper_Robot, Hot_Drink, X)).
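Goal regression itself can be sketched in a few clauses. This is our simplified rendering (it ignores delete-list conflicts and reuses the hypothetical strips_op/4 encoding from the earlier sketch), but it shows the core computation: strip off what the action achieves, and add what the action requires.

    :- use_module(library(lists)).

    % regress(+Goals, +Action, -Regressed): conjuncts that must hold
    % before Action so that Goals hold after it.
    regress(Goals, Action, Regressed) :-
        strips_op(Action, Pre, Add, _Del),
        subtract(Goals, Add, Remaining),   % goals the action achieves
        union(Pre, Remaining, Regressed).  % plus its preconditions

    % regress_plan(+Goals, +Plan, -Initial): regress through the whole
    % plan, last action first.
    regress_plan(Goals, [], Goals).
    regress_plan(Goals, Plan, Initial) :-
        append(Front, [Last], Plan),
        regress(Goals, Last, G1),
        regress_plan(G1, Front, Initial).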
Analyze-Role is at the heart of the purposive concept formulation technique. It formulates the purposive functional definition of the artifact by analyzing what role the artifact plays in the regressed expression. Two heuristics are used to choose the conjuncts from the regressed expression to formulate the definition. We want to formulate the definition of the artifact using only those conjuncts of the regressed expression that describe properties relevant to the artifact (and not the agent, say), and that are intrinsic properties of that artifact. A property is relevant to the artifact if it mentions the artifact. A property is intrinsic to an artifact if actions that manipulate that artifact do not easily create or destroy it. Intrinsic properties, then, are those properties that do not appear on the add or delete lists of any manipulation operator. For example, being graspable is considered intrinsic to the artifact since no manipulation operator in our database can transform an ungraspable artifact into a graspable one (e.g. build a handle). On the other hand, being at the same location as an agent is not intrinsic to an artifact since a 'MOVE-TO' manipulation operator can move the agent and artifact to the same location if they are not there already. This is similar to the notion of 'criticality' in ABSTRIPS [Sacerdoti, 1974].

Given the above regressed expression, the following conjuncts are not relevant to the artifact, since they do not mention X as one of their arguments:

    open(From_Container),
    can(contain(From_Container, Hot_Drink)),
    grasper_empty(Grasper_Robot).

The following conjuncts are not intrinsic, since they appear on the add or delete list of some manipulation operator in the database:

    empty(X) is on the add list of 'POUR',
    ungrasped(X) is on the add list of 'PUTDOWN',
    on(X, Surface) is on the add list of 'PUTDOWN',
    same_loc(Grasper_Robot, X, location) is on the add list of 'MOVE-TO'.

The remaining conjuncts form the purposive functional definition of HOT-CUP (with a new gensym'd concept name):

    concept22(X) <=
        can(pour(Grasper_Robot, Hot_Drink, From_Container, X)),
        open(X),
        can(contain(X, Hot_Drink)),
        can(grasp(Grasper_Robot, X)),
        can(pickup(Grasper_Robot, X)),
        can(ingest(Grasper_Robot, Hot_Drink, X)).
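The two Analyze-Role heuristics are easy to state as code. The sketch below is our reading of them, not PurForm's source; for simplicity it treats the artifact as a reified constant (say x) rather than a logic variable, and reuses the hypothetical strips_op/4 encoding:

    :- use_module(library(lists)).

    % mentions(+Term, +Artifact): Artifact occurs anywhere inside Term.
    mentions(Term, Artifact) :- Term == Artifact.
    mentions(Term, Artifact) :-
        compound(Term),
        Term =.. [_ | Args],
        member(Sub, Args),
        mentions(Sub, Artifact).

    relevant(Conjunct, Artifact) :- mentions(Conjunct, Artifact).

    % intrinsic(+Conjunct): on no add or delete list of any operator.
    intrinsic(Conjunct) :-
        \+ ( strips_op(_, _, Add, Del),
             ( member(Conjunct, Add) ; member(Conjunct, Del) ) ).

    % analyze_role(+Regressed, +Artifact, -Definition)
    analyze_role(Regressed, Artifact, Definition) :-
        findall(C,
                ( member(C, Regressed),
                  relevant(C, Artifact),
                  intrinsic(C) ),
                Definition).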
Step 3: Operationalizing the Definition. The purposive functional definition for the artifact has been formulated, yet it is not in a form that enables the system to efficiently recognize a particular example of such an artifact (it is non-operational). In our case study, recognition is assumed to be efficient if the concept definition is in observable, structural terms, and inefficient if it is in functional terms. ([Keller, 1987a] presents a more general operationality criterion.)

Given the purposive functional definition and domain theory as input, this final step operationalizes the definition of the target concept. The EBG algorithm performs this step [Mitchell et al., 1986]. EBG explains (proves) how a particular example is a member of the target concept, and generalizes to form an operational definition. The implementation of EBG, PROLOG-EBG [Kedar-Cabelli and McCarty, 1987], produces an explanation and generalization in one pass, by storing both a specific and a general trace of the PROLOG theorem prover as it proves that the purposive functional definition is satisfied by an example. The proof is generalized by retaining constraints only among the proof rules. The leaves of the generalized proof tree become the operational definition, and characterize all those examples that have a proof of concept membership of the same structure (the same proof rules, applied in the same order).

Given the target concept and its purposive functional definition as above, and given an example, mug1, represented by the following attributes:

    manufacturer(mug1, abc_co).
    serialnumber(mug1, 72118).
    color(mug1, blue).
    material(mug1, ceramic).
    weight(mug1, 6, oz).
    has_part(mug1, cylinder1).
    has_part(mug1, bottom1).
    has_part(mug1, handle1).
    ...

mug1 is shown to be an example of the HOT-CUP (concept22) by proving that it satisfies the purposive functional definition by certain structural characteristics (satisfied, in turn, by specific attributes). The domain theory contains axioms used to link attributes to the functional requirements they satisfy (e.g. 'graspable(X) <= has_handle(X)'). The essence of the proof is as follows: since the shape of mug1 is an open cylinder, it has an open concavity that allows hot tea to be poured into it. The ceramic material of mug1 provides a non-porous material that also insulates the heat, and the flat shape of its bottom makes it stable - all of which enable mug1 to contain the hot tea. Its handle and insulating material make it graspable. Its weight (6 oz.) makes it light, which enables it to be picked up by the agent. Finally, having an open concavity enables the ceramic mug to be emptied of the hot tea (i.e. enables the agent to drink the hot tea from the mug).

The proof is generalized, and the leaves of the generalized proof tree become the structural (operational) definition:

    concept22(X) <=
        type(B, flat_bottom), type(B, sealed_bottom), has_part(X, B),
        type(H, handle), has_part(X, H),
        weight(X, W, oz), less(W, 16).

The proof relies on certain attributes being true of Grasper_Robot and Hot_Drink as well. The definition only holds if these additional assumptions are satisfied. These are conjoined together and associated with the definition:

    assumptions(concept22(X),
        type(M, mouth), has_part(Grasper_Robot, M),
        type(A, arm), has_part(Grasper_Robot, A),
        type(Hot_Drink, liquid),
        temperature(Hot_Drink, hot)).

A. Discussion

To summarize, the novel contribution of PurForm lies in its use of standard algorithms of planning and goal regression to automatically formulate a target concept sensitive to a given plan and goal. Most EBG systems, on the other hand, are supplied with the target concept as a fixed input, independent of the purpose for which it is to be used.

Several design decisions in the representation and implementation have been made to facilitate target concept formulation. We chose to represent some of the conjuncts on the precondition/add/delete lists of the operators as functional preconditions of the form 'can(P)' (e.g. 'can(grasp(...))'). This level of abstraction is a deliberate design choice to enable the definition to be formulated in functional terms. The resulting purposive functional definition covers a broader class of objects since it can be represented by alternative structural definitions.

Another design decision was to generalize the constants to variables in the goal and the plan, using type hierarchy information. This is justified since the rules that apply to the specific constant also apply to all constants of a specific type. This design choice was made so that the resulting regressed expression would be more general. As a result, the definition derived from it is also more general. For example, since any rules that apply to 'robbie' also apply to any robot with a grasper, 'robbie' was generalized to 'grasper robot'.

IV. Related Work

Only a few other research efforts have focused on automatically formulating target concepts. A parallel research effort has been [Keller, 1987b]. Keller presents a scenario for automatically formulating the target concept USEFUL-OP (operators that are useful in leading to a solution), a fixed input in LEX2, a system that learns heuristics for symbolic integration. Keller's derivation of USEFUL-OP can be viewed as analogous to purposive concept formulation. Given a problem solver (SOLVER) that initially performs exhaustive forward search, the need to improve SOLVER's efficiency corresponds to our goal input. The plan corresponds to SOLVER's actions, which are described by a flow graph of program components. A desirable target concept is first isolated. It is a new filter component in SOLVER that would reduce the number of nodes expanded during search. A description of the filter is formulated by reasoning about its role in improving SOLVER's efficiency (by a process similar to goal regression and analyze-role). The filter should recognize operators that are useful in leading to a solution (USEFUL-OPs). Filtering just those operators would lead to a solution more often, and thus improve efficiency. To become an efficient recognizer, the filter description is operationalized into easily recognizable descriptions of classes of such useful operators.
Purposive concept formulation contrasts sharply with another system that formulates target concepts. SOAR [Laird et al., 1986] formulates concepts as a by-product of problem-solving, while our technique formulates concepts following problem-solving, as an intentional activity to improve problem-solving performance. Each time SOAR encounters and solves a subgoal, it formulates an implicit target concept: the general conditions under which it can reuse the solution to this subgoal. SOAR formulates and operationalizes target concepts at every impasse, without an explicit analysis of how they might be useful in improving performance.

Purposive concept formulation is related to EBG in that both are analytic techniques that reformulate knowledge from one level of description to another. Purposive concept formulation reformulates a purposive description, while EBG reformulates a functional description. Purposive concept formulation uses goal regression over a plan to produce a purposive functional definition. EBG uses generalization over a proof to produce a structural definition.

V. Limitations and Future Research

The purposive concept formulation technique requires additional research to become robust. For one, we need to experiment with case studies of learning concepts for alternative purposes (e.g. cup to be used as an ornament). In addition, future work includes extensions to handle other notions of purpose (such as purpose of the agent, purpose of the learner); augmentations to formulate other useful target concepts that appear in the plan (e.g. the class of agents, the class of liquids); and techniques to formulate the fixed inputs to EBG other than the target concept.

Acknowledgments

I would like to thank Tom Mitchell, my advisor, and Rich Keller, who strongly influenced this research. Thanks also go to Thorne McCarty, Jack Mostow, and Chris Tong for their thoughtful comments on the research and on drafts of this paper. In addition, Armand Prieditis, Sridhar Mahadevan, Prasad Tadepalli, Lou Steinberg, Mike Sims, Steven Minton, Kai Percher, Juergen Koenemann, and other Rutgers colleagues provided helpful suggestions and critiques. Jan Chomicki was helpful with Quintus PROLOG, and Chun Liew was helpful with LaTeX. This work is being supported by NSF under Grant Number DCR-83-51523-02.

References

[DeJong and Mooney, 1986] G. DeJong and R. Mooney. Explanation-based learning: an alternative view. Machine Learning, 1(2):145-176, 1986.

[Kedar-Cabelli and McCarty, 1987] S. T. Kedar-Cabelli and L. T. McCarty. Explanation-based generalization as resolution theorem proving. In Proceedings of the Fourth International Machine Learning Workshop, Morgan Kaufmann, University of California at Irvine, June 1987.

[Keller, 1987a] R. M. Keller. Defining operationality for explanation-based learning. In Proceedings AAAI-87, Seattle, WA, July 1987.

[Keller, 1987b] R. M. Keller. The role of explicit contextual knowledge in learning concepts to improve performance. PhD thesis, Department of Computer Science, Rutgers University, New Brunswick, NJ, 1987.

[Kowalski, 1979] R. Kowalski. Logic for Problem Solving. Elsevier North Holland, New York, NY, 1979.

[Laird et al., 1986] J. E. Laird, P. S. Rosenbloom, and A. Newell. Chunking in SOAR: the anatomy of a general learning mechanism. Machine Learning, 1(1):11-46, 1986.

[Mitchell et al., 1986] T. M. Mitchell, R. M. Keller, and S. T. Kedar-Cabelli. Explanation-based generalization: a unifying view. Machine Learning, 1(1):47-80, 1986.

[Mostow, 1983] D. J. Mostow. Machine transformation of advice into a heuristic search procedure. In Machine Learning: An Artificial Intelligence Approach, Vol. 1, Tioga, Palo Alto, CA, 1983.

[Nilsson, 1980] N. J. Nilsson. Principles of Artificial Intelligence. Tioga, Palo Alto, CA, 1980.

[Sacerdoti, 1974] E. D. Sacerdoti. Planning in a hierarchy of abstraction spaces. Artificial Intelligence, 5(2):115-135, 1974.

[Winston et al., 1983] P. H. Winston, T. O. Binford, B. Katz, and M. Lowry. Learning physical descriptions from functional definitions, examples, and precedents. In Proceedings AAAI-83, pages 433-439, Washington, DC, August 1983.
Defining Operationality for Explanation-Based Learning

Richard M. Keller
Rutgers University
Department of Computer Science
New Brunswick, New Jersey 08903
Internet: Keller@Rutgers.Edu

Abstract

Operationality is the key property that distinguishes the final description learned in an explanation-based system from the initial concept description input to the system. Yet most existing systems fail to define operationality with necessary precision. In particular, attempts to define operationality in terms of "efficient instance recognition" tacitly incorporate several unrealistic, simplifying assumptions about the learner's performance task and the type of performance improvement desired. Over time, these assumptions are likely to be violated, and the learning system's effectiveness will deteriorate. We survey how operationality is defined and assessed in several explanation-based systems, and then present a more comprehensive definition of operationality. We also describe an implemented system that incorporates our new definition and overcomes some of the limitations exhibited by current operationality assessment schemes.

I. Introduction

In recent years, the field of machine learning has experienced a surge of interest in a class of analytic concept learning methods called explanation-based methods [Mitchell et al., 1986]. In contrast to empirical learning methods, which perform a simple syntactic analysis of similarities and differences among large numbers of training instances, explanation-based methods perform an in-depth, knowledge-intensive analysis of a single training example - typically a positive instance. That analysis involves first explaining why the positive training instance is an example of the concept to be learned (the target concept), and then generalizing the explanation in a principled manner so it is valid for a larger class of instances than the original instance. Finally, a description of that larger class of instances is extracted from the generalized explanation structure. The description constitutes a generalization of the original instance.

A seeming paradox of the explanation-based paradigm is that in order to produce its final description of the target concept, the learning system must possess an initial description of that same concept. In fact, without "knowing" an initial description of the target concept, it would be impossible for the system to explain why the given training instance is an example of the target concept. So if an initial target concept description is a prerequisite for explanation-based methods (the description may be represented in the learning system explicitly, i.e. in terms of declarative structures, or implicitly, i.e. within the system's procedures [Keller, 1987b]), then why is learning necessary in the first place? What is wrong with the initial description? What is there to be learned? These questions are at the very heart of the "explanation-based paradox".

The way to untangle the paradox is to acknowledge that learning can involve knowledge transformation, as well as knowledge acquisition. Explanation-based methods do not acquire "new" knowledge, per se, but rather transform existing knowledge that is unusable or impracticable into a form that is usable [Keller, 1983, Dietterich, 1986]. In particular, although the initial target concept description given to an explanation-based system generally is correct (i.e., it covers the correct set of instances), the description is in non-operational form. Informally, this means the description cannot be used effectively by the learner to improve task performance.
There is a significant difference between having a concept description and being able to use it; the task of an explanation-based system is to narrow that gap by transforming, or operationalizing, the initial description.

As a concrete example for illustration, consider Winston et al's analogy system, which uses explanation-based methods to learn a description for CUP [Winston et al., 1983]. In this case, an initial CUP description is given to the system, expressed in functional terms: "a CUP is an open, stable, liftable vessel." This description is non-operational because it does not contain the necessary information to enable a vision system (the performance system) to improve its performance in recognizing CUPS. In order for the description to be useful in improving performance, the learning system must transform the functional description into a structural description, composed of primitives the vision system is designed to recognize: "a CUP is a light object with a handle, a flat bottom, and an upward-pointing concavity."
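In Prolog-style notation, the before and after descriptions might look as follows. This is our own hedged rendering of the CUP example, not Winston et al's actual encoding; all predicate names are illustrative:

    % Non-operational, functional form:
    cup(X) :- open_vessel(X), stable(X), liftable(X).

    % Operational, structural form, stated in primitives the vision
    % system can match directly against an image description:
    cup(X) :- light(X),
              has_part(X, handle),
              has_part(X, flat_bottom),
              has_part(X, upward_concavity).

Operationalization replaces the first clause with the second: the two denote the same concept, but only the second is usable by this particular performance system.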
Although operationality is the key property that distinguishes the final description learned in an explanation-based system from the initial target concept description, most existing systems fail to define operationality with necessary precision. Current attempts to define operationality tacitly incorporate several unrealistic, simplifying assumptions that may not hold throughout the course of learning. In particular, many explanation-based systems treat operationality as an independent, static, binary-valued property of concept descriptions. Actually, operationality is a dynamic, continuous-valued property of descriptions, dependent on the learner's performance task and the type of performance improvement desired. A more thorough understanding of operationality is necessary to construct sophisticated explanation-based systems that can function properly in dynamic, real-world environments.

In what follows, we define operationality more precisely and analyze how operationality is assessed in several explanation-based systems. Then we describe how the MetaLEX system [Keller, 1987b, Keller, 1987a] overcomes some of the limitations exhibited by current operationality assessment schemes, by incorporating our new definition of operationality.

II. Terminology

This section introduces some terminology and establishes a common framework to serve as a basis for our discussion of operationality. We begin by distinguishing between a concept and its description. A concept represents a subset of instances drawn from some universe. A concept is denoted intensionally by a concept description, which is a predicate over the universe of instances. Two concept descriptions are considered synonymous if they denote the same concept. Figure 1 illustrates these relationships. In the center, the figure depicts the space of all possible concepts that can be described by a given learning program. Each point in concept space represents a unique set of instances drawn from instance space, shown at right. At left, the figure depicts the space of all possible descriptions of the concepts in concept space. The description space is partitioned into operational and non-operational regions. Note that there is a one-to-many correspondence between a point in concept space and the points in concept description space. As illustrated, D1 and D2 are two synonymous descriptions, both describing concept C, which covers instances I1, I2, and I3. However D2 is considered operational, whereas D1 is not. So when there exist different ways to describe the same concept, operationality defines the criterion for preferring one description over another.

Figure 1: Description space, concept space, and instance space.

The role of operationality with respect to explanation-based learning is clarified by viewing explanation-based learning as a search through the concept description space. Suppose D1 in Figure 1 is the initial, non-operational description provided to an explanation-based system, and D2 is the final, operational description learned. Then we can view D1 as the starting node in a search, D2 as a solution node, explanation as the means for traversing the space, and operationality as the search termination criterion. We call the process of transforming D1 into D2 "concept operationalization" [Keller, 1987b, Keller, 1983].

Mitchell first described learning as a search process [Mitchell, 1982]. A crucial distinction between his formulation of the problem and ours is that Mitchell effectively equates a concept with its description, thereby masking the issue of operationality. Thus he effectively characterizes learning as a search through a concept space, not a concept description space. This characterization is insufficient for describing explanation-based learning. Explanation-based learning involves no searching through the concept space, because the initial description (D1) and the final description (D2) denote the same concept (C).

III. Operationality

The definition of operationality most commonly cited in describing explanation-based systems is the following:

    Current Operationality Defn.: A concept description is operational if it can be used efficiently to recognize instances of the concept it denotes.

Below we review how this definition is instantiated in several systems that use explanation-based methods. Note that operationality has not been defined explicitly in several of these systems, so the following definitions are based on our retrospective analysis and reconstruction.

- Winston et al's Function-Structure system [Winston et al., 1983]: In this system, the target concept is the set of drinking CUPS and the initial description is a functional description: "a CUP is any open, stable, liftable vessel." Instances, however, consist of physical descriptions of CUPS from the real world, expressed in terms of structural properties, such as "flat", "handle", "concave", etc. Therefore, an operational CUP description is defined in this system as a description stated solely in structural terms, so it can be easily matched against instances.

- LEX2 [Mitchell et al., 1986]: In LEX2, the target concept is the set of USEFUL problem solving moves to apply during search. The initial description of USEFUL given to LEX2 states that "USEFUL moves lead immediately or eventually to a solution." Instances consist of calculus problem solving moves generated while solving actual problems.
To facilitate matching against instances, an operational description of USEFUL is defined in LEX2 as a description expressed in terms of the calculus features used to describe instances (e.g., "sin", "3", "product", etc.), or in terms of features easily derivable from them (e.g., "trig-function", "integer", "polynomial", etc.).

- PRODIGY [Minton, 1986]: This system learns a variety of target concepts related to problem solving, including the USEFUL concept learned by LEX2. One of PRODIGY's problem solving domains is a machine-shop scheduling domain, in which raw materials are transformed into finished goods using operators like LATHE, CLAMP, POLISH, etc. PRODIGY, for example, can learn conditions under which applying these operators is UNSUCCESSFUL. Instances correspond to different states of the machine-shop environment in which the operators are applied. An operational description must be phrased in terms of directly observable features of the raw materials and the machine-shop equipment in the environment, such as "shape", "temperature", "idle", "busy". The direct observability requirement assures that UNSUCCESSFUL conditions can be recognized quickly and operator application can be avoided in a real-time environment.

- GENESIS [Mooney and DeJong, 1985]: In one of GENESIS's application domains, the target concept corresponds to WEALTH-ACQUISITION-SCENARIO, and an initial (although not explicitly stated) description of the target concept is that "a WEALTH-ACQUISITION-SCENARIO consists of any sequence of actions that culminate in an agent's acquisition of wealth." Instances consist of natural language text describing stories involving acquisition of wealth, such as stories involving inheritance, kidnapping, arson, etc. Unlike in the three systems described above, an operational description for GENESIS is not stated in terms of the low-level features present in the instances, but rather in terms of abstract schemata possessed by the system. A high-level description of WEALTH-ACQUISITION-SCENARIO facilitates "efficient instance recognition" because the story understanding component in GENESIS parses stories (i.e., instances) efficiently in top-down fashion.

- SOAR [Rosenbloom and Laird, 1986]: In SOAR, a form of explanation-based learning is an integral part of its chunking mechanism. Each time SOAR completes problem solving activity for a specific subgoal, the system attempts to construct a generalized production to achieve "similar" subgoals without resorting to problem solving. For our purposes, we can consider a specific subgoal (along with the current processing state) as a training instance, the class of "similar" subgoals as SOAR's target concept, and the generalized production's conditions as its operational description. An operational description of the target concept is one that can be used to efficiently recognize whether a "similar" subgoal is true in a given processing state. In other words, in SOAR an operational production consists solely of conditions that can be easily evaluated without problem solving, including conditions initially present in SOAR's working memory, and conditions that are evaluated using chunks acquired during problem solving.

The notion of "efficient instance recognition" employed in these systems is a suitable starting point for defining operationality, but is in several ways inadequate as a general-purpose definition. A basic problem is that the "efficient instance recognition" definition tacitly incorporates several restrictive assumptions about how the final concept description will be used to improve performance.

First, the definition assumes that the concept description will be used to "recognize instances". Although instance recognition represents one typical use of a concept description, there are other uses, including instance generation. For example, in the CUP domain, we might be interested either in recognizing cups (e.g., if we want to drink a beverage) or in generating cups (e.g., if we are designing new types of cups). An operational description for the purposes of generation is functional, rather than structural, because a larger number of novel cup designs can be generated from the abstract functional description.

Second, the "efficient instance recognition" definition used by most systems assumes that execution time is the proper measure of performance to use in evaluating operationality. However, there are other types of "efficiency" that may be just as appropriate or more appropriate for evaluating performance, including space efficiency. A description that is operational with respect to time efficiency may not be operational with respect to space efficiency. Furthermore, aside from efficiency, there are arbitrarily large numbers of other criteria that might be relevant to performance, including cost, elegance, simplicity, etc.

The way to eliminate these restrictive assumptions from the definition of operationality is to redefine it in terms of the performance system that uses the learned concept description, and in terms of the criteria for evaluating that system's performance. Table 1 gives our revised definition of operationality. There are two requirements on operationality in the revised definition: usability and utility. The usability requirement ensures that the description can be used by the performance system. This means that the description must be expressed in terms of capabilities possessed by the system, and in terms of data known or computable by the system. (The usability requirement corresponds to the original notion of operationality introduced in [Mostow, 1981].) The utility requirement takes usability one step further: the description must not only be usable, but also worth using. In particular, using the description must improve the behavior of the performance system, as defined by its performance objectives.

Table 1: Revised Operationality Definition

Given:
- A concept description
- A performance system that makes use of the description to improve performance
- Performance objectives specifying the type and extent of system improvement desired

Then: the concept description is considered operational if it satisfies the following two requirements:
1. usability: the description must be usable by the performance system
2. utility: when the description is used by the performance system, the system's performance must improve in accordance with the specified objectives.

As an example of how this revised operationality definition might be instantiated for existing explanation-based systems, consider once again Winston et al's CUP domain. In this domain, the performance system might consist of a mobile robot searching for cups in a room. The performance objectives for the robot might be to improve the speed with which it can recognize and retrieve cups.
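Stated as code, the revised definition is a two-stage test rather than a syntactic language check. The Prolog sketch below is our paraphrase of Table 1, and every predicate name in it is hypothetical:

    % operational(+Description, +PerfSystem, +Objectives)
    operational(Description, PerfSystem, Objectives) :-
        usable(Description, PerfSystem),          % expressible in the
                                                  % system's capabilities
        run_with(PerfSystem, Description, Perf),  % observe behavior
        meets(Perf, Objectives).                  % utility: objectives met

The important design point is that the performance system and the objectives are explicit arguments: change either one, and the set of descriptions judged operational changes with it.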
For a CUP description to be "usable" by the robot, it must be expressed in terms of object properties that can be detected by the robot's sensory systems. Those properties correspond to structural properties. For a CUP description to be "utile", as well as "usable", the robot must be able to easily evaluate the properties used in the description. Therefore, a structural property such as "specific-gravity", for instance, would not be permitted as part of an operational description.

With the revised definition, the notion of operationality adjusts to fit the learning situation. Continuing with the above example, suppose instead we are learning about cups in a design context. In this case, the performance system might consist of a design system containing a library of functional design primitives. The performance objective might be to increase the number of cup designs the system can generate. Now the revised operationality definition correctly pinpoints an operational description as one expressed in terms of the functional design primitives known by the system, instead of structural primitives.

IV. Assessing Operationality

In this section, we discuss how operationality is assessed in explanation-based systems. Thus we draw a distinction between how operationality is defined and how it is evaluated in practice. Conceptually, each explanation-based system contains an operationality assessment procedure, which evaluates a concept description and produces a measure of its operationality as output. Below, we describe three dimensions - variability, granularity, and certainty - which characterize the operationality measurements produced by an assessment procedure. A comparison of various systems' assessment procedures along these dimensions is given in Table 2. (The table includes the systems described in the previous section, as well as the MetaLEX system, which is described in the next section.)

- Variability: A dimension characterizing whether operationality assessment varies with time. Values: static or dynamic.

As learning progresses, a description that is initially non-operational may become operational, and vice versa, due to changes in the performance environment. An accurate assessment of operationality depends on when the assessment is made. For example, consider a performance system consisting of a mobile robot equipped with a black and white camera. For this system, any object descriptions which specify color attributes should be considered non-operational for recognition. However, if the camera were replaced with a color camera, these same descriptions should be considered operational for the updated system.

Some of the systems surveyed in the previous section perform a dynamic assessment of operationality, whereas others do not. Assessment in GENESIS and SOAR is dynamic because operationality is defined in terms of the existing set of schemata or chunks, respectively. As these systems acquire additional schemata or chunks, the set of descriptions considered operational is enlarged. Similarly, the set of operational descriptions in LEX2 is augmented when the STABB subsystem [Utgoff, 1986] adds a new term to the system's generalization language. Note, however, that the set of operational descriptions in Winston et al's system and in PRODIGY remains static throughout the course of learning, so these systems cannot automatically adjust to changes in the performance environment.

- Granularity: A dimension characterizing the assessment measure produced. Values: binary or continuous.
Most of the systems surveyed produce a binary assessment of operationality: either "operational" or "non-operational". However, continuous-valued assessment has distinct advantages over binary assessment because it allows the learning system to assess degrees of operationality. In situations where there exist several synonymous, operational descriptions of the target concept, a metric on operationality enables the system to learn the "best" (i.e., most effective) description. Additionally, continuous-valued assessment facilitates attempts to guide the search through concept description space by providing a measure of progress through the space.

PRODIGY is one system that features continuous-valued assessment. In other words, PRODIGY can assess how efficient a given description is for the purposes of recognition. The system bases this assessment on an a priori estimate of the matching costs associated with each "observable" feature in the machine-shop environment. Given two synonymous, operational descriptions of the target concept, PRODIGY evaluates which is more operational using the matching cost estimates.

- Certainty: A dimension characterizing confidence in the measurement produced by the operationality assessment procedure. Values: unguaranteed or guaranteed.

None of the systems surveyed can guarantee the accuracy of their assessment measurements, because none of the systems directly assess operationality. In other words, the systems do not actually test whether a given description is "efficient to use for recognizing instances". Instead, the systems base their assessments on whether the description being assessed is contained within a pre-defined language of operational descriptions, which is specified by the learning system's human designer. This language includes only terms that the designer decides can be evaluated efficiently by the performance system in order to accomplish instance recognition. In a sense, the language is a "compiled" form of the designer's knowledge about the performance system [Keller, 1987b]. For LEX2, that language includes descriptions expressed in terms of a variety of "syntactic" features of calculus problem solving states. For GENESIS, the language includes any description composed of schemata possessed by the system.

The problem with using a "compiled" language to assess operationality is that the original performance assumptions upon which the designer based the language may become invalid. In fact, this is inevitable as the performance system's capabilities change over time. The result is that the original language definition no longer correctly identifies operational descriptions.

Consider an example from SOAR for clarification. As discussed in the previous section, SOAR's operational language consists of descriptions involving production conditions that have already been chunked, and thus are presumed efficient to use in recognition. However, SOAR's problem solving behavior will likely deteriorate over time as more chunks are learned and matching costs increase. (This type of difficulty has been documented - with plans, rather than chunks - for STRIPS-type problem solvers [Minton, 1985].) Eventually, descriptions involving any arbitrary chunked condition will no longer be efficient to use. At this point, it may be necessary to redefine operationality so that only the "most efficient" chunks are permitted within operational descriptions.
But SOAR lacks the proper perspective over its own problem solving/learning behavior to identify that the operational language definition has become invalid over time. Moreover, SOAR cannot automatically "recompile" a new language definition, so changing the definition requires human intervention.

As is evident from Table 2, the MetaLEX program is in a different equivalence class with respect to these three dimensions. The next section discusses MetaLEX and its treatment of operationality.

V. MetaLEX

The MetaLEX program [Keller, 1987b, Keller, 1987a] is a successor to LEX2, designed to explore the concept operationalization paradigm introduced in [Keller, 1983]. MetaLEX approaches essentially the same learning task as LEX2, but solves it using a different technique. In particular, both MetaLEX and LEX2 learn a description of the set of USEFUL problem solving moves to execute during forward search. Both systems start with the same non-operational target concept description ("a USEFUL problem solving move leads eventually to a solution state"), and attempt to transform it into an operational description. Where the systems differ is in their methods for accomplishing the transformation.

LEX2 transforms the initial target concept description using explanation-based methods. In a sense, LEX2 conducts a bidirectional search of the concept description space, with the initial target concept description anchoring the search in the non-operational region and a training instance description anchoring the search in the operational region. The explanation is used as a means of traversing the search space. In contrast, MetaLEX searches out from the initial description in the direction of an operational description using a form of hill-climbing. We do not describe the hill-climbing algorithm or the operators used to search the space in this paper. Instead, we focus on how MetaLEX assesses the operationality of a concept description.

The definition of operationality used by MetaLEX is given in Table 3. Note that there are two objectives specified for the SOLVER performance system, one involving efficiency and the other involving effectiveness. The objectives require that an operational description improve SOLVER's problem solving efficiency while also maintaining its effectiveness. The efficiency and effectiveness performance measures establish a metric over the description space that guides the hill-climbing search.

Table 3: MetaLEX Operationality Definition

Given:
- Concept description: Description of class of USEFUL problem solving moves to expand during search
- Performance system: SOLVER, a forward search problem solving system
- Performance objectives: Improve SOLVER's run-time efficiency on a set of benchmark calculus problems by X%, while maintaining its effectiveness in solving those problems correctly

Then: the USEFUL description is considered operational if it satisfies the following two requirements:
1. usability: the description is usable (i.e., evaluable) by SOLVER, and
2. utility: using the description, SOLVER's efficiency achieves an X% improvement without a deterioration in effectiveness.

To assess operationality for a description of the USEFUL concept, MetaLEX inserts that description into SOLVER and observes SOLVER's behavior on a set of benchmark calculus problems. During SOLVER's execution, the USEFUL description guides expansion of problem solving moves: any move recognized as USEFUL is expanded, while other moves are pruned. SOLVER's efficiency and effectiveness are monitored during the problem solving session. If SOLVER's performance meets the established performance objectives, the USEFUL description is considered operational. If SOLVER's performance fails to attain the desired levels of efficiency or effectiveness, MetaLEX can evaluate how far from operational the description is, and can assess whether the operationalization search is headed in the right direction.
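MetaLEX's assessment is thus empirical: run the performance system and measure. A rough Prolog sketch of that loop follows; benchmark/1, solve_with/4 and the objective thresholds are all hypothetical stand-ins for MetaLEX's actual machinery:

    :- use_module(library(lists)).

    % assess(+UsefulDescr, -TotalCPU, -NumSolved): run SOLVER, guided by
    % the candidate USEFUL description, over every benchmark problem.
    assess(UsefulDescr, TotalCPU, NumSolved) :-
        findall(Time-Status,
                ( benchmark(Prob),
                  solve_with(Prob, UsefulDescr, Time, Status) ),
                Results),
        tally(Results, 0, TotalCPU, 0, NumSolved).

    tally([], T, T, N, N).
    tally([Time-Status|Rest], T0, T, N0, N) :-
        T1 is T0 + Time,
        ( Status == solved -> N1 is N0 + 1 ; N1 = N0 ),
        tally(Rest, T1, T, N1, N).

Because TotalCPU and NumSolved are numbers rather than a yes/no answer, the same run yields a continuous measure of progress for the hill-climbing search as well as the final operational/non-operational verdict.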
If SOLVER's performance meets the established performance objectives, the USEFUL description is considered operational. If SOLVER's performance fails to attain the desired levels of efficiency or effectiveness, MetaLEX can evaluate how far from operational the description is, and can assess whether the operationalization search is headed in the right direction.

In the terminology of the previous section, MetaLEX's operationality assessment method can be characterized as:

- "dynamic" - because operationality assessment depends on the current state of SOLVER and the current performance objectives;
- "continuous" - because assessment yields a measure of the degree of efficiency (in CPU seconds) and the degree of effectiveness (in number of benchmark problems solved); and
- "guaranteed" - because assessment is accomplished by directly executing SOLVER, and observing whether its performance meets the stated objectives.

However, MetaLEX's assessment method is also extremely expensive because the performance system must be tested each time operationality assessment is required. MetaLEX reduces these costs by estimating system performance whenever possible in lieu of executing a system test.

VI. Conclusion

Operationality is the key feature that distinguishes the output concept description from the input concept description in an explanation-based system. As such, operationality is at the heart of what it means for an explanation-based system (or more generally, for a knowledge transformation system) to "learn." Yet current methods for assessing operationality tacitly depend on simplifying performance assumptions that likely will be violated as learning progresses. The MetaLEX program circumvents the problems associated with violated assumptions by defining operationality directly in terms of the performance system. The handling of operationality in MetaLEX may provide insight into how to construct general purpose explanation-based learning systems - systems that are more sophisticated in their operationality assessment capabilities, and that function properly over time, and for tasks other than "efficient instance recognition."

Acknowledgments

Thanks go to my thesis advisor, Tom Mitchell, and to Jack Mostow, both of whom provided invaluable guidance in directing this research. Smadar Kedar-Cabelli provided encouragement and helpful comments on earlier drafts of this paper. Discussions with Steve Minton helped clarify the presentation of PRODIGY and SOAR. Chun Liew provided formatting assistance. Funding to support this research was provided by NSF grant #DCS83-51523 and DARPA contract #N00014-85-K-0116.

References

[Dietterich, 1986] T. G. Dietterich. Learning at the knowledge level. Machine Learning, 1(3):287-316, 1986.

[Keller, 1983] R. M. Keller. Learning by re-expressing concepts for efficient recognition. In Proceedings AAAI-83, pages 182-186, Washington, D.C., August 1983.

[Keller, 1987a] R. M. Keller. Concept learning in context. In Proceedings of the Fourth International Machine Learning Workshop, University of California, Irvine, June 1987.

[Keller, 1987b] R. M. Keller. The Role of Explicit Contextual Knowledge in Learning Concepts to Improve Performance. PhD thesis, Rutgers University, January 1987. Technical Report #ML-TR-7.

[Minton, 1985] S. Minton. Selectively generalizing plans for problem-solving. In Proceedings IJCAI-9, pages 596-599, Los Angeles, CA, August 1985.
[Minton, 1986] S. Minton. Improving the effectiveness of explanation-based learning. In Proceedings of the Workshop on Knowledge Compilation, Oregon State University, Corvallis, September 1986.

[Mitchell, 1982] T. M. Mitchell. Generalization as search. Artificial Intelligence, 18(2):203-226, March 1982.

[Mitchell et al., 1986] T. M. Mitchell, R. M. Keller, and S. T. Kedar-Cabelli. Explanation-based generalization: a unifying view. Machine Learning, 1(1), 1986.

[Mooney and DeJong, 1985] R. Mooney and G. DeJong. Learning schemata for natural language processing. In Proceedings IJCAI-9, pages 681-687, Los Angeles, CA, August 1985.

[Mostow, 1981] D. J. Mostow. Mechanical Transformation of Task Heuristics into Operational Procedures. PhD thesis, Computer Science Department, CMU, 1981.

[Rosenbloom and Laird, 1986] P. S. Rosenbloom and J. E. Laird. Mapping explanation-based generalization onto Soar. In Proceedings AAAI-86, AAAI, Philadelphia, PA, August 1986.

[Utgoff, 1986] P. E. Utgoff. Machine Learning of Inductive Bias. Kluwer Academic, Hingham, MA, 1986.

[Winston et al., 1983] P. H. Winston, T. O. Binford, B. Katz, and M. Lowry. Learning physical descriptions from functional definitions, examples, and precedents. In Proceedings AAAI-83, pages 433-439, Washington, D.C., August 1983.
A KNACK for Knowledge Acquisition

Georg Klinker, Casey Boyd, Serge Genetet, and John McDermott
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, Pennsylvania 15213

Abstract

KNACK is a knowledge acquisition tool that generates expert systems for evaluating designs of electromechanical systems. An important feature of KNACK is that it acquires knowledge from domain experts without presupposing knowledge engineering skills on their part. This is achieved by incorporating general knowledge about evaluation tasks in KNACK. Using that knowledge, KNACK builds a conceptual model of the domain through an interview process with the expert. KNACK expects the expert to communicate a portion of his knowledge as a sample report and divides the report into small fragments. It asks the expert for strategies of how to customize the fragments for different applications. KNACK generalizes the fragments and strategies, displays several instantiations of them, and the expert edits any of these that need it. The corrections motivate and guide KNACK in refining the knowledge base. Finally, KNACK examines the acquired knowledge for incompleteness and inconsistency. This process of abstraction and completion results in a knowledge base containing a large collection of generalized report fragments more broadly applicable than the sample report.1

1. Introduction

A key issue in developing any expert system is how to update its large and growing knowledge base. A commonly proposed solution is the construction and use of a knowledge acquisition tool, e.g., KAS [Reboh, 1981], TEIRESIAS [Davis, 1982], ETS [Boose, 1984], MORE [Kahn, 1985], SALT [Marcus, 1985], SEAR [van de Brug, 1986], MOLE [Eshelman, 1986], KNACK [Klinker, 1987]. Such a tool typically interacts with domain experts, organizes the knowledge it acquires, and generates an expert system. A knowledge acquisition tool also can be used to test and maintain the knowledge base of the program it generates. A critical feature of such a tool is that a domain expert can use it to update a knowledge base without having to know about the underlying AI technology. A large knowledge base can be kept maintainable by organizing it according to the different roles that knowledge plays [Chandrasekaran, 1983], [Clancey, 1983], [Neches, 1984]. Knowledge roles, the organizational units of the knowledge base, are made explicit by defining a problem solving method.

1 This research was sponsored by the Defense Nuclear Agency (DNA) and the Harry Diamond Laboratories (HDL) under contract DNA001-85-C-0027. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of DNA or HDL.

KNACK is a knowledge acquisition tool that assists an expert in creating expert systems that evaluate the designs of electromechanical systems. KNACK gains power by exploiting a domain model and its understanding of the assumed problem solving methods for gathering information and evaluating designs, and the different roles played by knowledge in those methods. This enables KNACK to provide the control knowledge and the implementation details needed in the target expert system. It also helps to minimize the amount of information the expert must provide to define a piece of knowledge for the expert system.

Section 2 describes the expert systems generated by KNACK. Section 3 summarizes the characteristics of KNACK. Sections 4 through 8 explicate the steps of KNACK's knowledge acquisition approach.
Each of the expert systems produced by KNACK is called a WRINGER. The domain of the WRINGERs we have generated so far is nuclear hardening. Nuclear hardening involves the use of specific engineering design practices to increase the resistance of an electromechanical system to the environmental effects generated by a nuclear detonation. Designers of electromechanical systems usually have little or no knowledge about the specialized analytical methods and engineering practices of the hardening domain. The purpose of a WRINGER is to assist a designer in developing a hardened system and in presenting this design, together with a preliminary system evaluation, in the form of a report.

A WRINGER first gathers the information necessary for the evaluation of an electromechanical system. To accomplish this goal, a WRINGER uses strategies (discussed in section 6) to elicit information from the designer or to infer it. Every collected item is a value instantiating a concept of the hardening domain for a particular application. As it progresses, the gathering of information is driven by previously elicited information. This is a data-driven approach that modifies a WRINGER's behavior according to the information specific to each electromechanical system it is applied to. The collected information is evaluated by the WRINGER for validity, consistency, completeness, and possible design flaws. If indications of design flaws are found, a WRINGER points them out to the designer together with suggestions for improving the system design. Finally, when the designer is satisfied with the design of the system, a WRINGER instantiates all report fragments relevant to the particular application with the acquired values and generates a report describing and evaluating the system design.

In the fall of 1986 a first version of KNACK, reported in [Klinker, 1987], was used to develop two expert systems called WRINGERs. Both are dedicated to evaluating electromechanical systems' resistance to nuclear environmental effects. The first WRINGER generates a PROGRAM PLAN - the primary, top management report covering all phases of a project. Starting with several well chosen sample reports, it took one person-week to create the PROGRAM PLAN writer with KNACK. The average PROGRAM PLAN is organized in 237 fragments and contains 2248 words, 7.5% of which are values instantiating concepts of the hardening domain. The second WRINGER produces a DESIGN PARAMETERS REPORT - containing a detailed description of the electromechanical system. The basis for the expert system was a single sample report and a series of interactions with our EMP expert. It took three person-weeks to create it with KNACK.

The present implementation of KNACK and the version used to generate the two WRINGERs assume that an expert can express knowledge in the form of a report. This implies that an expert knows what information is relevant to the task, how to evaluate this information, and how a designer presents this information. This assumption holds for a variety of evaluation tasks since, in general, someone who evaluates the work of others must have comprehensive and precise knowledge about that work.
The present implementation of KNACK refines the approach the previous version took to acquire knowledge. It combines existing AI techniques and uses them for knowledge acquisition. General knowledge about evaluating designs of electromechanical systems is incorporated into KNACK. In an initial interview process with the expert KNACK customizes that knowledge and builds a conceptual model describing the concepts and the vocabulary experts use in performing an evaluation task. KNACK also asks the expert for a sample report describing and evaluating some simple, but typical, electromechanical system. Once the sample report is typed in, KNACK develops expertise in evaluating the designs of electromechanical systems by integrating the specific sample report with the conceptual model in successive interactions with the expert. This is a process of abstraction (constants in the report fragments of the sample report or strategies are variabilized) and completion (signs of incompleteness cause elicitation of additional report fragments or strategies). This integration process generalizes the sample report, making it applicable to different electromechanical systems.

To demonstrate its understanding of the sample report, KNACK instantiates the generalized report with representatives of the concepts it detected for interactive review by the domain expert. The expert's feedback provides additional knowledge used by KNACK to correct its generalizations and refine the conceptual model. Once the expert accepts KNACK's understanding of the sample report, KNACK elicits knowledge of how to customize the generalized sample report for a particular application. The expert defines strategies that a WRINGER will use to acquire values instantiating the concepts detected in the generalized fragments. As with the sample report, the expert does this by providing sample strategies. Strategies can be questions, formulas, inferences, and other forms. KNACK generalizes the strategies and displays some example instantiations of them for review and correction by the expert. Finally, KNACK examines the resulting knowledge base for parts of the generalized report or strategies that indicate gaps or conflicts with the conceptual model. If a possible flaw is found, KNACK asks the expert to correct the report, the strategies, or the conceptual model.

The following detailed description of KNACK's knowledge acquisition approach is organized around an example of an actual KNACK case. It leads through the process of typing in a small part of a sample report, acquiring a partial conceptual model, generalizing the part of the sample report, defining strategies, and reviewing the acquired knowledge. In the interest of brevity, the excerpts used as examples are only a tiny fraction of a full KNACK case.

The sample report exemplifies what the expert intends the WRINGER to produce. It may be written specially for this purpose by a domain expert or group of experts, or selected from existing reports. Figure 4-1 illustrates a part of a sample report for the DESIGN PARAMETERS REPORT writer, evaluating the hardness of a specific electromechanical system to the EMP effect of a nuclear blast.2 The report is typed in to a file by any person familiar with text editors. KNACK divides the report into fragments corresponding to paragraphs. In the tiny example of Figure 4-1, this results in three report fragments.

1. 11.2.3. EMP Leakage through Windows

2. The Window is protected by a wire-mesh. The transfer inductance of the wire-mesh is 6.7e-10 Henries.

3. The Power Cable penetrates the S-280C enclosure and induces 0.4 Volts on the Window of this enclosure.

Figure 4-1: Part of a Sample Report

2 In this and following figures, the expert's input appears in bold italics; the implementation details (for rules) and the prompts (of KNACK) appear in lowercase and uppercase. Default responses, enclosed by brackets, are used when the user types only a carriage return.
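One plausible internal form for the three parsed fragments is a simple indexed store, sketched below; this flat representation is our own invention, since the paper does not show KNACK's actual data structures:

% fragment(Number, Kind, Text)
fragment(1, heading,   '11.2.3. EMP Leakage through Windows').
fragment(2, paragraph, 'The Window is protected by a wire-mesh. The transfer inductance of the wire-mesh is 6.7e-10 Henries.').
fragment(3, paragraph, 'The Power Cable penetrates the S-280C enclosure and induces 0.4 Volts on the Window of this enclosure.').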
The sample report describes a particular electromechanical system. To generalize the sample report, making it applicable to other electromechanical systems, KNACK needs a conceptual model of the domain. To acquire the model, KNACK conducts an interview with the expert. The interview is driven by KNACK's understanding of the evaluation task. KNACK views evaluation as partly analytic (i.e., determine whether a system will function in a given environment) and partly constructive (i.e., improve a system design so that it will function in a given environment). This understanding has the following basis:

- An electromechanical system performs a set of functions and comprises a set of interrelated components.
- An environment produces a set of conditions under which an electromechanical system must function, each of which may affect system components via a set of media.
- The effect of a condition on system components may be modified by some provisions, each of which can comprise provision components which, in turn, can be affected by a set of conditions via a set of media.

KNACK implements these principles as generic questions to elicit knowledge about the domain concepts representing system components, environments, conditions, media, and provisions. The following sample interaction defines some of the concepts needed to generalize the sample report in Figure 4-1. At this point in the interview, KNACK has already acquired part of the conceptual model.

How would you refer to possible provisions via which a SUBSYSTEM can meet the COUPLING condition produced by the EMP environment? enclosure, terminal protection device
List some examples for a NAME of an ENCLOSURE: S-280C, metal box
What are the terms describing the characteristics of an ENCLOSURE which affect its reaction to EMP? material, thickness, relative conductivity
How would you refer to the provision components of an ENCLOSURE which affect its reaction to EMP? apertures, seams
List some examples for a NAME of an APERTURE: window, cable entry panel
How would you refer to possible provisions via which a WINDOW of an ENCLOSURE can meet the COUPLING condition produced by the EMP environment? wire-mesh, optical coating
What are the terms describing the characteristics of a WIRE-MESH which affect its reaction to the COUPLING condition? transfer inductance

The expert's responses are added to KNACK's internal representation of the conceptual model, implemented as a semantic network. The nodes describe a taxonomy of concepts and concept properties used by domain experts to describe and evaluate electromechanical systems and their environments. The links encode structural and functional domain knowledge. Figure 4-2 shows part of the conceptual model corresponding to the above questioning session.

ENCLOSURE
  .NAME = S-280C, Metal Box
     |
  comprises
     v
APERTURE ------- meets -------> COUPLING
  .NAME = Window,                 .PEAK-VOLTAGE
          Cable Entry Panel           ^
                                      |
                                   produces
                                      |
                                     EMP

Figure 4-2: Part of a Conceptual Model
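The Figure 4-2 slice of the model can also be written down as a handful of facts, as in the sketch below. The paper describes the model as a semantic network, so this flat encoding (and every predicate name in it) is only an illustrative assumption:

% example names known for each concept
example_name(enclosure, 's-280c').
example_name(enclosure, 'metal box').
example_name(aperture,  window).
example_name(aperture,  'cable entry panel').

% characteristics attached to concepts
characteristic(coupling,  peak_voltage).
characteristic(provision, transfer_inductance).

% structural and functional links
comprises(enclosure, aperture).            % an enclosure comprises apertures
produces(emp, coupling).                   % the EMP environment produces coupling
meets_via(aperture, coupling, provision).  % an aperture meets coupling via a provision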
5. Generalizing the Sample Report

KNACK interacts with the domain expert to generalize the sample fragments through a process of abstraction. The report's basic structure is extracted and fragments are parsed to detect text strings that match the entries in the conceptual model. The technique employs simple heuristics to infer the concepts each fragment mentions, based on detection of keywords and representative names of concepts in the fragment, combined with knowledge of relations between candidate concepts.

In the first aspect of this process KNACK looks for keywords (e.g., chapter, section, subsection, heading, itemize, enumerate, bold, underline), instances of keywords (e.g., 2. for chapter, 2.3.2. for subsection, (1) for enumerate), and the form of the input (only a few words in a line separated from the remaining text by blank lines). From this analysis KNACK generates a skeletal report defining the form of the sample report. It includes the outline and special formats (e.g., table of contents, itemizations, enumerations, filled or unfilled environments) encoded as commands for a document formatting system.

In the second aspect of the generalization process KNACK converts fixed report text into generalizations representing the concepts detected in the fragment. Cues to locate and identify concepts in a report fragment are numbers representing the value of quantitative parameters and non-numeric symbols denoting tokens of known concepts in the conceptual model. The heuristics provide sufficient analytical power to acquire knowledge without turning to a sophisticated natural language interface. There are limitations though. The heuristics mistakenly identify some concepts and miss others. The errors are dealt with when the expert critiques instantiations of the generalized fragments as described in Section 7.

The generalization process results in a collection of generalized report fragments more broadly applicable than the sample report. A generalized report fragment describes a small possible piece of an actual report. It includes fixed text strings to be printed exactly as formulated by the expert, concepts to be instantiated by the WRINGER, knowledge about incorporating the gathered concept representatives into the report, and keywords specifying the type and form of the report fragment (e.g., simple paragraph, figure, table, and title). Generalizations are internal constructs for KNACK's use. Consonant with the research goal of reducing the knowledge engineering skills needed for knowledge acquisition, the expert sees only instantiated generalizations as demonstrated in section 7. The sample report fragments in Figure 4-1 yield the generalized report fragments shown in Figure 5-1. The angle brackets enclose concepts detected in a fragment.

SUBSECTION <ENVIRONMENT.NAME> Leakage through <APERTURE.NAME>

The <APERTURE.NAME> is protected by a <PROVISION.NAME>. The transfer inductance of the <PROVISION.NAME> is <PROVISION.TRANSFER-INDUCTANCE> Henries.

The <CABLE.NAME> penetrates the <ENCLOSURE.NAME> enclosure and induces ?<COUPLING.PEAK-VOLTAGE>? Volts on the <APERTURE.NAME> of this enclosure.

Figure 5-1: Sample of Generalized Report Fragments
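The matching heuristics of the second aspect might be phrased as the sketch below, which reuses the conceptual-model facts from the previous sketch; nearby_mentions/1 is an invented stand-in for KNACK's inspection of the text adjacent to a number:

nearby_mentions(transfer_inductance).    % e.g. the words "transfer inductance"

% a token that is a known example name identifies a <CONCEPT.NAME> slot
detect(Token, slot(Concept, name)) :-
    example_name(Concept, Token).

% a number is guessed to fill a numeric characteristic mentioned nearby
detect(Token, slot(Concept, Characteristic)) :-
    number(Token),
    nearby_mentions(Characteristic),
    characteristic(Concept, Characteristic).

% ?- detect(window, S).   yields S = slot(aperture, name)
% ?- detect(6.7e-10, S).  yields S = slot(provision, transfer_inductance)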
In fragment 1, EMP is inferred to be a NAME of an ENVIRONMENT due to a unique match with the conceptual model. In general, a number is inferred to be a representative of some numerical characteristic of a concept. If the text adjacent to a number refers to a known concept and characteristic, the number is replaced with the corresponding concept. In fragment 2, WIRE-MESH matches the NAME of PROVISION and "transfer inductance" was encountered in the fragment text. Although more than one concept has the characteristic TRANSFER-INDUCTANCE, 6.7e-10 is inferred from context to be the TRANSFER-INDUCTANCE of a PROVISION. When helpful clues are not present in adjacent text, KNACK simply guesses the concept from the ambiguous set of matches. Such guesses can be mistaken and KNACK indicates this when the instantiated generalization is displayed for review by the expert (demonstrated in section 7). Fragment 3 of Figure 5-1 contains the guess <COUPLING.PEAK-VOLTAGE>.

Generalized report fragments also include conditions which determine when to include each fragment in an actual WRINGER report. KNACK uses simple heuristics to create the conditions from the concepts in the fragments and the conceptual model. Each report fragment constitutes an OPS5 rule [Forgy, 1981]. Figure 5-2 shows an English translation of the rule for report fragment 2 in Figure 5-1.

If an ENVIRONMENT with NAME EMP is known,
and some COUPLING is known,
and an APERTURE with NAME other than CABLE ENTRY PANEL is known,
and a PROVISION with some NAME, and with some TRANSFER-INDUCTANCE is known,
and the ENVIRONMENT produces COUPLING,
and the APERTURE meets the COUPLING via the PROVISION,
then print:
The <APERTURE.NAME> is protected by a <PROVISION.NAME>. The transfer inductance of the <PROVISION.NAME> is <PROVISION.TRANSFER-INDUCTANCE> Henries.

Figure 5-2: Sample Report Fragment Rule
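The same rule can be transliterated into Prolog, as below. The actual WRINGER rules are OPS5 productions; the predicate names here simply mirror the English conditions and assume matching working-memory facts such as environment(emp), so this is a readability aid rather than WRINGER code:

fragment_rule_2 :-
    environment(emp),
    coupling(Coupling),
    aperture(Aperture), Aperture \== 'cable entry panel',
    provision(Provision),
    transfer_inductance(Provision, Inductance),
    produces(emp, Coupling),
    meets_via(Aperture, Coupling, Provision),
    format('The ~w is protected by a ~w.~n', [Aperture, Provision]),
    format('The transfer inductance of the ~w is ~w Henries.~n',
           [Provision, Inductance]).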
6. Defining Strategies

Concepts in the generalized fragments must be instantiated with values describing a particular system design when a WRINGER evaluates a design and writes its report. KNACK asks the expert to define strategies for a WRINGER to acquire or produce the instantiation values. Experts define strategies in the same way that report fragments are defined, by typing in samples. Each strategy describes a way to determine a representative of a concept and includes instructions about valid possible values. Relying on previously elicited information and other prior knowledge, KNACK defines the circumstances in which these methods can be applied. KNACK asks the expert to define at least one strategy for each concept in the report fragments. A strategy can acquire representatives by asking questions, interpreting a graphical design description, asking the designer to fill in the slots of a table or diagram, or asking the user to choose from the items in a menu. It can infer representatives by directly applying specific domain knowledge, computing numeric values using formulas, or referring to a database. Figure 6-1 demonstrates KNACK gathering the knowledge needed for a question strategy to instantiate the TRANSFER-INDUCTANCE characteristic of a WIRE-MESH PROVISION.

How can the TRANSFER-INDUCTANCE of a WIRE-MESH PROVISION be determined?
[constant, question, inference, table, menu, graphics, formula, database, postpone, quit] [QUESTION]:
question text................: What is the transfer inductance of the wire-mesh
possible answers............. [NUMBER]:
default answer............... [6.7e-10]:
unknown status of the answer. [NOT-MANDATORY]:

Figure 6-1: Defining a Question Strategy

KNACK parses the text of the question in an attempt to generalize it. It knows that WIRE-MESH is a representative of a NAME of a PROVISION. But a strategy must be discriminating enough to result in the instantiation of the right concept. KNACK uses heuristics to make the text of a question strategy more specific. Since the conceptual model states that an APERTURE meets a COUPLING condition via a PROVISION, KNACK extends the text of the generalized question to:

What is the transfer inductance of the <PROVISION.NAME> provision of the <APERTURE.NAME> aperture

The specialization of the question text is guessed by KNACK and can be wrong or unnecessary. Section 7 describes how KNACK displays the result of the generalization process and takes advantage of the expert's editing.

7. Understanding Changes

KNACK predicts and exemplifies the performance an expert can expect from the WRINGER he is working to create. It instantiates the concepts of the generalized fragments with known concept representatives taken from the conceptual model and displays several differently instantiated examples of each generalized report fragment. The expert edits any examples that make implausible statements about the domain. KNACK treats such events as incorrect use of the knowledge base and interprets the corrections as new knowledge to update the generalization and improve the conceptual model. For example if the expert indicates that values from the conceptual model combine too loosely, KNACK adds a constraint to the model, restricting possible combinations. A correction also can imply that an uncertain guess of KNACK's about the identity of a concept is wrong, leading to its retraction and the introduction of a new, initially less probable guess. Applying the new knowledge, the generalization is instantiated again and display of several examples gives the expert immediate feedback on the effects of the knowledge base modification.

KNACK extends the conceptual model whenever the editing adds variability between examples that it cannot parse. Extensions can be new concepts, new characteristics for known concepts, and restrictions on existing relations between representatives of two concepts. The model serves as a collection of examples suggesting guesses for KNACK as to the form of the extension. The following examples illustrate the editing process with some of the generalized report fragments of Figure 5-1.

The generalization of the first fragment in Figure 5-1 is a subsection heading. KNACK displays different instantiations of the <ENVIRONMENT.NAME> and <APERTURE.NAME> concepts detected in that fragment. The expert edits the examples by restricting them to the EMP ENVIRONMENT and to APERTURES other than CABLE ENTRY PANEL. The correction is used to refine KNACK's conceptual model.

11.2.3. EMP Leakage through Windows
11.2.3. Thermal Leakage through Windows
11.2.3. EMP Leakage through Cable Entry Panels
Corrections? [NONE]: point the mouse to EMP in example 1 and command that this value only be used, point the mouse to Cable Entry Panels in example 3 and command that this value never be used

Continuing this example, KNACK knows that ENVIRONMENTS produce COUPLING, and that no other relation links ENVIRONMENT to any other concept. KNACK extends the conceptual model in adding the restriction that ENVIRONMENTS other than EMP do not produce COUPLING. This extension of the conceptual model is internal to KNACK and does not require asking the expert for confirmation.
But when KNACK attempts to add another restriction, that a CABLE ENTRY PANEL APERTURE does not meet COUPLING via a PROVISION, it cannot decide with certainty which relation to restrict because more than one relation interrelates APERTURE with other concepts. KNACK guesses a restriction to one of the known relations involving APERTURE. It assumes that its guess is right, until a correction of an instantiation later in the interaction indicates the opposite. KNACK then revises its earlier decision and restricts another relation.

Since the generalized fragment represents a subsection heading and KNACK assumes that the topic within a subsection will not change, KNACK constrains the remaining fragments of the subsection to the EMP ENVIRONMENT and APERTURES different from CABLE ENTRY PANEL. For example, KNACK displays the following instantiations of the third generalized report fragment shown in Figure 5-1:

The Power Cable penetrates the S-280C enclosure and induces 0.4 Volts on the Window of this enclosure.
The Signal Cable penetrates the S-280C enclosure and induces 0.4 Volts on the Window of this enclosure.
The Power Cable penetrates the Metal Box enclosure and induces 0.4 Volts on the Window of this enclosure.
Corrections? [NONE]:
0.4 is assumed to be a PEAK VOLTAGE of a COUPLING. Correct? [YES]:

KNACK asks the expert for confirmation because it knows from its generalization, shown in Figure 5-1, that its guess for the concept representing the number "0.4" might be wrong.

8. Checking the Knowledge Base

KNACK's knowledge acquisition approach described in the preceding sections generalizes a specific sample report. This results in a knowledge base the generated WRINGER expert system can use to evaluate a range of electromechanical systems. However, the sample report covers only one simple system and inevitably lacks concepts necessary to evaluate a broad range of systems. For this reason, KNACK searches the knowledge base for report fragments or strategies that indicate gaps or conflicts with its conceptual model. This review of the knowledge base is most relevant at the end of the acquisition process, because an apparent gap found during the process might be filled in later in the process. When a conflict was detected or an indication of a gap was found, KNACK asks the expert to correct either the fragment, the strategy, or the conceptual model. In cases where the conceptual model is changed, KNACK reviews all fragments or strategies that use the changed concept or relation to propagate the change through the knowledge base automatically, making guesses when ambiguities arise. On the other hand, when the expert adds or changes report fragments or strategies, KNACK processes them through the integration of the conceptual model, display of examples, strategy definition, and checking. The remaining part of this section demonstrates some of the heuristics KNACK uses to identify incompleteness and inconsistency in its knowledge base.

A flaw is indicated if a concept or a representative for a concept was introduced into the model but never used. For example, the conceptual model contains the concept FUNCTION, which is not integrated with any report fragment. KNACK reminds the expert of that. The knowledge base might be incomplete if the conceptual model indicates a relation between two concepts, but every fragment containing one concept consistently contains the other one:

A SUBSYSTEM meets a COUPLING condition via an ENCLOSURE. No report fragment was defined dealing with SUBSYSTEMS without ENCLOSURES. Do you want to define one now? [YES]: no
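The two checks just illustrated - a concept that is modeled but never used, and a relation whose negative case is never exercised - can be phrased as queries over the knowledge base. In the sketch below, concept/1, related/2, and fragment_mentions/2 are invented predicate names standing in for KNACK's internal tables:

% a concept introduced into the model but mentioned by no fragment
unused_concept(Concept) :-
    concept(Concept),
    \+ fragment_mentions(_, Concept).

% two related concepts that only ever occur together: the case of the
% first without the second is never covered by any fragment
always_together(A, B) :-
    related(A, B),
    \+ ( fragment_mentions(Fragment, A),
         \+ fragment_mentions(Fragment, B) ).

% toy data on which both checks fire
concept(function).  concept(subsystem).  concept(enclosure).
related(subsystem, enclosure).
fragment_mentions(f7, subsystem).
fragment_mentions(f7, enclosure).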
Gaps exist whenever the expert inadvertently leaves out some concepts or representative values for them. For each concept figuring in relations with several others, KNACK asks for possible extensions to that set:

A COUPLING condition affects SUBSYSTEMS via a CABLE. Do you know any other media for a COUPLING condition to affect a SUBSYSTEM? [NO]: antenna

This introduces a new concept: ANTENNA. KNACK integrates new concepts into the model using the process described in section 4. KNACK then examines the generalized sample report to find fragments mentioning the ANTENNA concept. As the conceptual model previously did not include knowledge about ANTENNAS, any occurrences in the sample report fragments were treated as fixed text in the generalizations. KNACK now variabilizes the new concept in those fragments and displays instantiated examples. If there are no fragments mentioning the new concept, KNACK looks for related concepts in the conceptual model. It then integrates the new concept with fragments dealing with the related concept and displays instantiations for confirmation by the expert.

9. Conclusion

This paper has introduced the approach KNACK takes to acquire knowledge for evaluating designs of electromechanical systems. An important goal in this research is that domain experts interacting with KNACK do not need knowledge engineering skills. However, KNACK must generate the highly structured knowledge base of the WRINGER expert systems. To bridge this gap, KNACK takes advantage of some presupposed knowledge about evaluating electromechanical systems. The general knowledge is used to acquire a conceptual model of the domain during an initial questioning session. The conceptual model gives KNACK the leverage to generalize a sample report and sample strategies, and to display several instantiated generalizations. The expert's corrections of the instantiated generalizations provide additional knowledge with which KNACK extends the conceptual model. Finally, KNACK examines the resulting knowledge base to check for incompleteness and inconsistency.

Acknowledgments

We would be remiss if we did not mention our co-workers in this project. Don Kosy, Gilbert Caplain, Beatrice Paoli-Julliat, and David Dong are members of the group and made significant contributions. Tom Mitchell reviewed an earlier draft of this paper. Rodney Perala of Electro Magnetic Applications (EMA) served as our domain expert. We would also like to thank Alex Stewart (HDL) for his support.

References

Boose, J. Personal construct theory and the transfer of human expertise. In Proceedings of the National Conference on Artificial Intelligence. Austin, Texas, 1984.

Chandrasekaran, B. Towards a taxonomy of problem solving types. AI Magazine, 1983, 4(1), 9-17.

Clancey, W. The advantages of abstract control knowledge in expert system design. In Proceedings of the 3rd National Conference on Artificial Intelligence. Washington, D.C., 1983.

Davis, R. and D. Lenat. Knowledge-Based Systems in Artificial Intelligence. McGraw-Hill, 1982.

Eshelman, L. and J. McDermott. MOLE: a knowledge acquisition tool that uses its head. In Proceedings of the 5th National Conference on Artificial Intelligence. Philadelphia, PA, 1986.

Forgy, C.L. OPS5 user's manual (Tech. Rep.). Carnegie-Mellon University, Department of Computer Science, 1981.

Kahn, G., S. Nowlan and J. McDermott. MORE: an intelligent knowledge acquisition tool. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence. Los Angeles, California, 1985.

Klinker, G., J.
Bentolila, S. Genetet, M. Grimes, and J. McDermott. KNACK - Report-Driven Knowledge Acquisition. International Journal of Man-Machine Studies, to appear, 1987.

Marcus, S., J. McDermott and T. Wang. Knowledge acquisition for constructive systems. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence. Los Angeles, California, 1985.

Neches, R., W. Swartout, and J. Moore. Enhanced maintenance and explanation of expert systems through explicit models of their development. In Proceedings of the IEEE Workshop on Principles of Knowledge-based Systems. Denver, Colorado, 1984.

Reboh, R. Knowledge Engineering Techniques and Tools in the Prospector Environment (Technical Note 243). SRI International, Artificial Intelligence Center, 1981.

van de Brug, A., J. Bachant, J. McDermott. The Taming of R1. IEEE Expert, 1986, 1(3).
PROLEARN: Towards a Prolog Interpreter that Learns

Armand E. Prieditis and Jack Mostow
Department of Computer Science
Rutgers University, New Brunswick, NJ 08903

Abstract

An adaptive interpreter for a programming language adapts to particular applications by learning from execution experience. This paper describes PROLEARN, a prototype adaptive interpreter for a subset of Prolog. It uses two methods to speed up a given program: explanation-based generalization and partial evaluation. The generalization of computed results differentiates PROLEARN from programs that cache and reuse specific values. We illustrate PROLEARN on several simple programs and evaluate its capabilities and limitations. The effects of adding a learning component to Prolog can be summarized as follows: the more search and subroutine calling in the original query, the more speedup after learning; a learned subroutine may slow down queries that match its head but fail its body.

I. Introduction

How could an interpreter adapt to its execution environment by learning from execution experience and thereby customize itself toward particular applications? This paper describes PROLEARN, a prototype adaptive interpreter for a subset of Prolog.1 PROLEARN handles all of Prolog except for the cut symbol (used to control backtracking) and side-effects (primitives that cause input or output or change the database). PROLEARN learns new subroutines that represent justifiable generalizations of example executions. (We will use "subroutine" to mean a user-defined Prolog rule, as opposed to a primitive or a fact.)

Search reduction is an important way to increase the performance of search-based problem-solving systems [Mitchell et al., 1986, Mitchell et al., 1983, Minton, 1985, Langley, 1983, Laird et al., 1986b, Fikes et al., 1972, Mahadevan, 1985, Korf, 1985]. Because Prolog uses search as a basic mechanism and has built-in unification, it is a natural vehicle for developing adaptive interpreters.

PROLEARN combines explanation-based generalization (EBG) [Mitchell et al., 1986] with partial evaluation [Kahn and Carlsson, 1984, Kahn, 1984] to learn from execution experience and thereby reduce future search. As PROLEARN interprets each subroutine call, it uses EBG to compute the general class of queries solved by the same execution trace. Partial evaluation techniques are used to simplify the generalized execution trace for more efficient execution.

1 This work is supported by NSF under Grant Number DMC-8810507, and by the Rutgers Center for Computer Aids to Industrial Productivity, as well as by DARPA under Contract Number N00014-85-K-0116.

The rest of this paper is organized as follows. Section II describes EBG in PROLEARN and Section III describes partial evaluation in PROLEARN. PROLEARN is only a prototype for a practical adaptive interpreter; Section IV discusses some of its shortcomings and suggests possible improvements. Section V summarizes the empirical results. Section VI discusses related work. Finally, Section VII summarizes the research contributions of this work.

II. EBG in PROLEARN

As it executes a program, PROLEARN treats every subroutine call as a goal concept for EBG to operationalize in terms of primitives and facts. Using the specific execution trace as a template, it constructs a customized version of the subroutine, as follows.
PROLEARN records all calls to primitive subroutines and user-defined facts performed in the course of executing the subroutine, generalizes them to remove the dependence on the particular arguments passed to the subroutine, and conjoins the generalized calls. The resulting conjunction is used to define a new special case version of the original subroutine. To illustrate, consider the following database (from [Mitchell et al., 1986]):

on(box1, table1).
volume(box1, 10).
isa(box1, box).
isa(table1, endtable).
color(box1, red).
color(table1, blue).
density(box1, 10).

safe-to-stack(X,Y) :- lighter(X,Y) ; not(fragile(Y)).

lighter(X,Y) :-
    weight(X,W1),
    weight(Y,W2),
    W1 < W2.

weight(X,Y) :-
    volume(X,V),
    density(X,D),
    Y is V * D.
weight(X,500) :- isa(X,endtable).

Given the query safe-to-stack(box1,table1), PROLEARN learns the subroutine definition

safe-to-stack(X,Y) :-
    clause(volume(X,V),true),
    clause(density(X,D),true),
    W is V * D,
    W < 500,
    clause(isa(Y,endtable),true).

That is, object X is safe to stack on object Y if the product of X's volume and density is less than 500 and Y is an endtable. The learned subroutine uses clause to ensure that user-defined predicates like volume, density, and isa are evaluated solely by matching stored facts; otherwise the learned subroutine might invoke arbitrarily expensive subroutines for these predicates, thereby defeating its efficiency. PROLEARN also learns similar specialized versions of each subroutine invoked in the course of executing safe-to-stack (e.g., lighter).

EBG in PROLEARN amounts to subroutine unfolding with generalization. Unfolding a query into its primitive calls effectively caches the result of searching for the intermediate subroutines needed to answer the query. For example, when the query weight(table1,W2) is evaluated in the course of executing safe-to-stack(box1,table1), PROLEARN must determine which definition of weight to use. As it turns out, the first definition fails and it must use the second one instead, namely isa(table1,endtable). The learned subroutine, which is unfolded in terms of primitive calls like isa(Y,endtable), eliminates this search. While subroutine unfolding provides some speedup in procedural languages, in Prolog the speedup can be much greater because of this search reduction.

While the learned subroutine is a special case of the original definition, it is a generalization of the execution trace generated for the specific query. Therefore it will apply to queries other than the one that led to its creation, and as such should speed up interpretation for the entire class of queries. This class is restricted to queries whose execution would have followed the same execution path as the original query. Relaxing this restriction would require allowing learned subroutines to call user-defined subroutines, which would make learned subroutines more general but possibly less efficient.

PROLEARN relies on EBG to guarantee that this generalization is justifiable. An execution trace constitutes a proof of a query in the "theory" defined by the subroutines and facts in the program. EBG generalizes away only those details of the trace ignored by this "proof." The learned subroutine should therefore be correct for any query it matches, that is, produce the same behavior as the original program. (Section IV discusses the problem of preserving correctness of the original program.)
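The EBG step itself can be captured by a compact meta-interpreter in the spirit of [Kedar-Cabelli and McCarty, 1987]. The sketch below is not PROLEARN's actual code: it handles only conjunctive bodies, it keeps operational leaves directly rather than wrapping stored facts in clause/2 as PROLEARN does, and it uses SWI-Prolog's predicate_property/2 to recognize primitives:

% ebg(Goal, GenGoal, Body): prove Goal while building, in lockstep, the
% generalized operational Body for a fresh variant GenGoal of the goal
ebg(true, true, true) :- !.
ebg((A, B), (GA, GB), (BodyA, BodyB)) :- !,
    ebg(A, GA, BodyA),
    ebg(B, GB, BodyB).
ebg(A, GA, GA) :-               % operational leaf: execute it and keep the
    operational(A), !,          % generalized literal in the learned body
    call(A).
ebg(A, GA, Body) :-
    clause(GA, GenBody),        % unfold a program clause against the
                                % generalized goal ...
    copy_term((GA :- GenBody), (A :- ConcreteBody)),
    ebg(ConcreteBody, GenBody, Body).   % ... and prove a concrete copy

operational(A) :- predicate_property(A, built_in).
operational(A) :- clause(A, true).      % stored facts count as leaves too

% ?- Goal = lighter(box1, table1), copy_term(Goal, Gen),
%    ebg(Goal, Gen, Body), portray_clause((Gen :- Body)).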
III. Partial Evaluation in PROLEARN

EBG sometimes produces a conjunction of terms that can be simplified by exploiting constraints about the learned subroutine. This simplification is similar to partial evaluation without execution as described in [Kahn and Carlsson, 1984] and [Bloch, 1984].

The need for partial evaluation of the generalization becomes evident in the following example. The program below solves the Towers of Hanoi puzzle, where N ≥ 0 disks are moved from the From pole to the To pole via the Using pole. The program uses a standard recursive formulation of the solution: move N-1 disks from the From pole to the Using pole, move one disk from the From pole to the To pole, and finally move N-1 disks from the Using pole to the To pole. The resulting plan is bound to the variable Plan.

move(0, _, _, _, []).
move(N, From, To, Using, Plan) :-
    N > 0,
    M is N - 1,
    move(M, From, Using, To, Subplan1),
    move(M, Using, To, From, Subplan2),
    append(Subplan1, [[From,To]], Frontplan),
    append(Frontplan, Subplan2, Plan).

append([], L, L).
append([H|T], L, [H|U]) :- append(T, L, U).

Given the query move(3,left,right,center,Plan), Plan is bound to:

[[left,right], [left,center], [right,center], [left,right], [center,left], [center,right], [left,right]]

Since move contains recursive calls to move, PROLEARN learns three move subroutines. In particular it learns the following subroutine:

move(N, From, To, Using,
     [[From,To], [From,Using], [To,Using], [From,To],
      [Using,From], [Using,To], [From,To]]) :-
    N > 0, M is N-1,
    M > 0, L is M-1,
    L > 0, 0 is L-1,
    L > 0, 0 is L-1,
    M > 0, K is M-1,
    K > 0, 0 is K-1,
    K > 0, 0 is K-1.

The move definition learned by EBG represents a specialized subroutine for moving 3 disks from the From pole to the To pole via the Using pole. The newly learned move subroutines for two disks and one disk are similarly verbose. Worse still, future queries with move of N > 3 disks will match the learned rule and not fail until they reach the conjunct 0 is L-1.

The term 0 is K-1 (in the above conjunction) could be eliminated by binding K to 1 instead. In general, if two of the unknowns in an expression like X is Y - Z are known then the third can be deduced. PROLEARN uses such rules to simplify primitive subroutine calls. PROLEARN actually learns the above move as:

move(3, From, To, Using,
     [[From,To], [From,Using], [To,Using], [From,To],
      [Using,From], [Using,To], [From,To]]).

PROLEARN partially evaluates the following kinds of terms: terms like 3 < 5 (that are always true) are eliminated, terms like X == Y (a test on whether X and Y are the same variable) and X = Y (a test on whether X has the same value as Y) are eliminated by binding X to Y, and terms like 12 is X * 4 are eliminated by binding X to 3. Terms like 4 is Y/3 are simplified to conjunctions like ((Y > 11), (Y < 15)) to exploit PROLEARN's integer division. Early cutoff occurs when Y ≤ 11. After partial evaluation, the learned subroutine is added to the database. To ensure that it is tried before the less efficient original version from which it was derived, PROLEARN inserts it above existing subroutines.
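The arithmetic simplifications described above can be sketched as a small rewriter over residual bodies. peval/2 and peval_body/2 are our names and handle only the subtraction and comparison cases from the example; the paper does not show PROLEARN's actual implementation:

% when two of the three values in `V is A - B` are known, solve for the
% third and drop the goal
peval(V is A - B, true) :- number(A), number(B), !, V is A - B.
peval(V is A - B, true) :- number(V), number(A), !, B is A - V.
peval(V is A - B, true) :- number(V), number(B), !, A is V + B.
peval(A < B, true) :- number(A), number(B), A < B, !.   % always-true test
peval(A > B, true) :- number(A), number(B), A > B, !.
peval(X = Y, true) :- !, X = Y.                         % eliminate by binding
peval(G, G).                                            % leave anything else

peval_body((G, Gs), Residue) :- !,
    peval(G, G1),
    peval_body(Gs, Gs1),
    (   G1 == true  -> Residue = Gs1
    ;   Gs1 == true -> Residue = G1
    ;   Residue = (G1, Gs1)
    ).
peval_body(G, G1) :- peval(G, G1).

% ?- peval_body((M is 3 - 1, M > 0, 0 is L - 1, L > 0), Residue).
% binds M = 2 and L = 1, leaving Residue = true.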
IV. Limitations

PROLEARN exhibits a number of shortcomings common to other problem-solving architectures that learn.

A. The Search Bottleneck Problem

As PROLEARN learns more subroutines, search time gradually increases. As an example, consider member, a commonly-used Prolog subroutine that tests for membership of an item in a list:

member(X, [X|_]).
member(X, [_|T]) :- member(X,T).

Given the query member(a, [b,c,d,a]), PROLEARN learns the following three member definitions (since member is called recursively three times). They represent membership of an item at the fourth, third and second positions in an arbitrary length list.

member(X, [T1,T2,T3,X|_]).
member(X, [T1,T2,X|_]).
member(X, [T1,X|_]).

Given the query member(a, [b,c,d,e,a]), PROLEARN must search through all the learned member definitions to get to the general case for member. A query testing the membership of an object in the list of a hundred items may generate ninety-nine new member definitions - clogging up the interpreter if most of them are executed again.

A number of researchers have observed that uncontrolled learning of macro-operators can cause search bottlenecks [Minton, 1985, Iba, 1985, Fikes et al., 1972]. Minton's program used the frequency of use and the heuristic usefulness of macro-operators as a filter to control learning. Iba's program learned only those operator sequences leading from one state with peak heuristic value to another.

B. The Learning Worth Problem

Given the tradeoffs between storing and computing, between the cost of learning and the resulting speedup, and between speeding up some queries at the cost of slowing down others, what is worth learning and remembering? As an example of possibly worthless learning, consider the following database (after [Mahadevan, 1985]):

equiv(X,X).
equiv(¬¬X,Y) :- equiv(X,Y).
equiv(¬(X ∨ Y),M) :- equiv(¬X ∧ ¬Y,M).
equiv(¬(X ∧ Y),M) :- equiv(¬X ∨ ¬Y,M).
equiv(X ∧ Y,M ∧ N) :- equiv(X,M), equiv(Y,N).
equiv(X ∨ Y,M ∨ N) :- equiv(X,M), equiv(Y,N).

The subroutine equiv is used to test whether its two arguments are equivalent boolean expressions (with the standard logical operators of ¬, ∧ and ∨). Consider the following query:

equiv(¬¬(a ∨ b) ∧ ¬¬(c ∨ d), (a ∨ b) ∧ (c ∨ d))

Prolog interpreters typically index on the subroutine's name (equiv) and the first argument's principal functor (∧) to find relevant subroutines. Here the only match is the subroutine whose head is equiv(X ∧ Y, M ∧ N). In fact, there is only one relevant subroutine at each node of the entire search tree for this query, so its branching factor is 1. When branching factor is this low, there is no search to eliminate. Here, speedup derives only from elimination of subroutine calls in the learned rules, namely:

equiv(¬¬X ∧ ¬¬Y, X ∧ Y).
equiv(¬¬X, X).

Whether a query is worth learning from depends on its execution cost (search and subroutine calling) without learning, the cost of the learning process, and the distribution of subsequent queries. PROLEARN simply learns from every execution without considering these factors.

C. The Correctness Problem

Is the set of programs still correct after PROLEARN learns new subroutines? That is, does the new program preserve the user-intended behavior? Two problems, redundancy and over-generalization, make this question difficult to answer.

PROLEARN disallows subroutines with side-effects to avoid redundancy. Since every learned subroutine in PROLEARN represents another way to execute a particular query, the database of original subroutines already describes how to execute the query (perhaps in a less efficient manner). The danger lies in side-effects that may be repeated and thus change program behavior in a way that violates the intent of the original program.

PROLEARN also disallows the cut symbol (used to control backtracking and define negation-as-failure).
If PROLEARN allowed the cut symbol, learning new subroutines would, in general, be impossible without considering the context of other subroutines (e.g., cut and fail combinations that were executed during a particular query).

While applying PROLEARN to a program with side effects is not guaranteed to preserve its behavior, neither does it necessarily lead to disaster. Demanding that adaptive interpreters preserve program behavior is unnecessarily restrictive, since not all behavior changes are unacceptable [Mostow and Cohen, 1985, Cohen, 1986]. Further work is needed to characterize and expand the class of programs PROLEARN can interpret without changing program behavior in unacceptable ways.

PROLEARN is also theoretically subject to the same over-generalization problem as SOAR [Laird et al., 1986a], in which a learned rule masks a pre-existing special-case rule that didn't apply when the rule was learned. It appears possible (though unwieldy in Prolog) to overcome this problem by placing each learned subroutine immediately above the subroutine from which it was derived rather than at the top of the database.

V. Empirical Results

Example         EBG   EBG + partial   Avg. branching   User subroutine
                      evaluation      factor           calls
safe-to-stack   23    23              1.3              6
move            11    94              1                36
member          4     4               1                3
equiv           2     2               1                6

Table 1: Speedup Factors Over Original Query in Prolog

Table 1 lists the speedup factors in CPU time over the original query in Prolog for the examples presented above. For each example, the table also lists the average branching factor and number of calls to user-defined subroutines for the original query. The results can be summarized as follows:

- For EBG alone, the example with the largest average branching factor (safe-to-stack) produced the greatest speedup. The next greatest speedup resulted from eliminating the user subroutine calls (move). The two examples with branching factor 1 and few user subroutine calls (member and equiv) had the least speedup. Any speedup from EBG resulted from subroutine call elimination, not search reduction.
- The one example (move) where partial evaluation helped was sped up by a factor of 8.5 over EBG because costly arithmetic operations were eliminated.

Since each learned subroutine applies to a whole class of queries, the speedup results apply to the entire class. However, the results presented above are incomplete, since they do not show the slowdown for queries outside this class. Such slowdown occurs when a query matches a learned subroutine and then fails after executing some of the terms in its definition.

We measured speedup by comparing Prolog execution time before and after learning. This comparison measures how much speedup would be achieved if PROLEARN's techniques were efficiently implemented in the Prolog interpreter itself. PROLEARN is actually implemented as a simple but inefficient Prolog meta-interpreter. The extra level of interpretation imposes a considerable performance penalty, largely because it loses the efficiency of Prolog's indexing. Although learning speeds up PROLEARN's execution of the example queries by factors ranging from 4 to 32, this speedup is outweighed by the interpretation penalty, which renders PROLEARN 54 to 156 times slower than Prolog.

VI. Related Work

Like Soar [Laird et al., 1986b], PROLEARN is an incidental learning system because it learns as a side-effect of problem-solving.
While PROLEARN’s cached subroutines resemble Soar’s chunks, Soar assumes that multiple occur- rences of a constant are instances of the same variable, which can cause it to learn chunks that are more specific than necessary. PROLEARN uses EBG, which avoids this assumption. Also, Soar’s chunking mechanism does not use partid evaluation to simplify learned chunks. Unlike systems that cache specific values [Mostow and Cohen, 1985, Lenat et al., 19791, PROLEARN stores gen- eralized procedures. Both approaches pay for decreased execution time with increased space costs and lookup time. PROLEARN adapts programs to their execution en- vironments more dynamically than typical program opti- mizers [Aho et al., 1986, Kahn and Carlsson, 1984, Bloch, 19841. While some optimizers use data about the execu- tion environment [Cohen, 19861 or collect statistics about the execution frequency of different control paths in a pro- gram, they do not generalize as PROLEARN does. This paper has shown how an interpreter can adapt to its execution environment and thus customize itself to a par- ticular application. PROLEARN, an implemented proto- type of an adaptive Prolog interpreter, uses two methods to increase its performance: explanation-based generalization and partial evaluation. The generalization of computed re- sults differentiates PROLEARN from programs that cache and reuse specific values. The effects of adding a learning component to Prolog - can be summarized as follows: The more search and subroutine calls in the original query, the more speedup after learning: the indexing and backtracking caused by search are eliminated, as well as the overhead of subroutine calls. The same speedup applies to the class of queries that match the learned subroutine, not just the query from which the subroutine was learned. This class includes queries that would have followed the same execution path as the original query. e A learned subroutine may slow down queries that match its head but fail its body. Prieditis and Mostow 497 Ackmowlledgments Many thanks go to Tony Bonner, Sridhar Mahade- van, Prasad Tadepalli, Steve Minton, Tom Fawcett, and Smadar Kedar-Cabelli for their comments on this paper. Thanks also go to Chun Liew and Milind Deshpande for their help with I#!. [Aho et al., 1986] A. Aho, R. Sethi, and J. Ullman. Com- pilerx Principles, Techniques, and TOOL. Addison- Wesley, Reading, Mass., 1986. [Bloch, 19841 6. Bloch. S ource-to-Source Tranaformationa of Logic Programs. Technical Report CS84-22, Weiz- mann Institute of Science, November 1984. [Cohen, 19861 D. C o h en. Automatic compilation of logical specifications into efficient programs. In Proceedings A AAI-87, American Association for Artificial Intelli- gence, Seattle, WA, August 1986. [Fikes et al., 19721 R. Fikes, P. Hart, and N. J. Nilsson. Learning and executing generalized robot plans. Ar- tificial Intelligence, 3(4):251-288,1972. Also in Read- ing8 in Artificial Intelligence, Webber, B. L. and Nils- son, N. J., (Eds.). [Iba, 19851 G. Iba. Learning by discovering macros in problem-solving. In Proceedings IJCAI-9, Interna- tional Joint Conferences on Artificial Intelligence, Los Angeles, CA, August 1985. [Kahn, 19841 K. K a h n. Partial evaluation, programming methodology and artificial intelligence. AI Mugazine, 5(1):53-57, Spring 1984. [Kahn and Carlsson, 19841 K. Kahn and M. Carlsson. The compilation of prolog programs without the use of a prolog compiler. 
In Proceedings of the International Conference on Fifth Generation Computer Systems, ICOT, Tokyo, Japan, 1984.

[Kedar-Cabelli and McCarty, 1987] S. T. Kedar-Cabelli and L. T. McCarty. Explanation-based generalization as resolution theorem proving. In Proceedings of the Fourth International Machine Learning Workshop, Irvine, CA, June 1987.

[Korf, 1985] R. Korf. Learning to Solve Problems by Searching for Macro-Operators. Pitman, Marshfield, MA, 1985.

[Laird et al., 1986a] J. E. Laird, P. S. Rosenbloom, and A. Newell. Overgeneralization during knowledge compilation in Soar. In Workshop on Knowledge Compilation, Oregon State University, Corvallis, OR, September 1986.

[Laird et al., 1986b] J. E. Laird, P. S. Rosenbloom, and A. Newell. Soar: the architecture of a general learning mechanism. Machine Learning, 1(1):11-46, 1986.

[Langley, 1983] P. Langley. Learning effective search heuristics. In Proceedings IJCAI-8, International Joint Conferences on Artificial Intelligence, Karlsruhe, West Germany, August 1983.

[Lenat et al., 1979] D. B. Lenat, F. Hayes-Roth, and P. Klahr. Cognitive economy in artificial intelligence systems. In Proceedings IJCAI-6, International Joint Conferences on Artificial Intelligence, Tokyo, Japan, August 1979.

[Mahadevan, 1985] S. Mahadevan. Verification-based learning: a generalization strategy for inferring problem-decomposition methods. In Proceedings IJCAI-9, International Joint Conferences on Artificial Intelligence, Los Angeles, CA, August 1985.

[Minton, 1985] S. Minton. Selectively generalizing plans for problem-solving. In Proceedings IJCAI-9, International Joint Conferences on Artificial Intelligence, Los Angeles, CA, August 1985.

[Mitchell et al., 1986] T. Mitchell, R. Keller, and S. Kedar-Cabelli. Explanation-based generalization: a unifying view. Machine Learning, 1(1):47-80, 1986.

[Mitchell et al., 1983] T. M. Mitchell, P. E. Utgoff, and R. B. Banerji. Learning by experimentation: acquiring and refining problem-solving heuristics. In Machine Learning, Tioga, Palo Alto, CA, 1983.

[Mostow and Cohen, 1985] J. Mostow and D. Cohen. Automating program speedup by deciding what to cache. In Proceedings IJCAI-9, International Joint Conferences on Artificial Intelligence, Los Angeles, CA, August 1985.
Joshua*: Uniform Access to Heterogeneous Knowledge Structures
Why Joshing is Better than Conniving or Planning

Steve Rowley, Howard Shrobe, Robert Cassels
Symbolics Cambridge Research Center
11 Cambridge Center, Cambridge, MA 02142

Walter Hamscher
MIT Artificial Intelligence Laboratory
545 Technology Square, Cambridge, MA 02139

Howard Shrobe is also a Principal Research Scientist at the MIT Artificial Intelligence Laboratory.

Abstract

This paper presents Joshua, a system which provides syntactically uniform access to heterogeneously implemented knowledge bases. Its power comes from the observation that there is a Protocol of Inference consisting of a small set of abstract actions, each of which can be implemented in many ways. We use the object-oriented programming facilities of Flavors to control the choice of implementation. A statement is an instance of a class identified with its predicate. The steps of the protocol are implemented by methods inherited from the classes. Inheritance of protocol methods is a compile-time operation, leading to very fine-grained control with little run-time cost.

Joshua has two major advantages: First, a Joshua programmer can easily change his program to use more efficient data structures without changing the rule set or other knowledge-level structures. We show how we thus sped up one application by a factor of 3. Second, it is straightforward to build an interface which incorporates an existing tool into Joshua, without modifying the tool. We show how a different TMS, implemented for another system, was thus interfaced to Joshua.

1. A Quandary

Advances in computer science are often consolidated as programming systems which raise the abstraction level and the vocabulary for expressing solutions to new problems. We have seen little permanent consolidation of this form in AI. We believe that there are four causes for the brief tenure of AI programming systems:

1. Some are overly restrictive in their choices of paradigms, data structures and representations.
2. Others provide little guidance in how to usefully employ the grab-bag of tools in the system.
3. Virtually all erect a syntactic barrier between the AI system and its surrounding procedural framework (e.g., Lisp).
4. Finally, it is very difficult to incorporate existing facilities which were not coded within the framework.

These all result from the tension between the expressiveness of a problem solving language and the flexibility and efficiency of its implementation. Fully expressive languages, such as the Predicate Calculus, are invaluable because they provide a uniform framework within which one can capture all aspects of a problem's solution. Historically, the expressiveness of such languages has forced implementors to employ uniform algorithms and data structures capable of supporting their generality. As a consequence it has been difficult to incorporate an external system which uses different data representations, such as a relational database, without special purpose kludgery. Furthermore, each such system requires different kludgery. One of our goals is to provide a framework for incorporating such systems systematically.

In addition, many problem domains don't require all the expressive power of a general purpose language. In such cases, implementors have been able to exploit the limited expressiveness of a domain to build a highly efficient special purpose problem solving language.
Such a language cannot, in principle, support general problem solving, but where applicable it is highly desirable. A common example of this is when we are dealing exclusively with triples of objects, attributes and values. The popularity of frame-like languages is accounted for by the fact that they are very efficient in this limited domain, even though they are incapable of dealing with full quantification. A frame language can very efficiently reason about the properties of Opus and birds in general, although it cannot even express a statement like "everybody likes something, but nobody doesn't like Opus". If all one wants to do is the former kind of reasoning, then a frame language provides a reasonable tradeoff.

Another example is reasoning about physical artifacts, where again the full expressive power of PC is unnecessary. Indeed, it is much more natural and much more efficient to build a representation which emphasizes the objects and their connectivity, mirroring the topology of the artifact in the topology of the data structures. The constraint language of [Sussman and Steele] is an example of such a specialized language. The second of our goals is to be able to exploit many of these specialized techniques in a single system without closing ourselves off from the use of a problem solving language of full expressive power.

In this paper we will illustrate this quandary and Joshua's solution to it using the task of building a trouble-shooter for the digital circuit shown in Figure 1; the trouble-shooter will be one very similar to those in the literature (e.g., [Davis, et al.]). Our goal is to illustrate Joshua's capabilities, not to discuss trouble-shooting. First we will show how the default Joshua facilities can be used to solve this problem, producing a solution of reasonable efficiency. Then we will show how we can include a specialized constraint language for this problem within the broader Joshua framework. This will lead to a dramatic improvement in performance, even though it will leave unchanged all of the knowledge level structures of the original solution.

[Figure 1: A Trouble-Shooting Problem (Which Module's Failure Accounts for the Conflicts?). Circuit diagram; labels include multiplier M3, a product terminal, a multiplicand terminal, Applied: 2, and Expected: 6.]

*"And Joshua burnt Ai, and made it an heap for ever, even a desolation unto this day." -- Joshua 8:28, KJV

Joshua solves the problem using the abstraction power of the Lisp programming environment. In particular, it relies on the object-oriented facilities of Flavors, although the facilities of the emerging Common Lisp object-oriented programming standard would serve as well. The main features of Joshua are:

- There is a Uniformly Accessible but Heterogeneous Data Base of Statements. Two Lisp forms, ASK and TELL, provide the interface to this database. ASK queries the data base, finding facts which explicitly match the query as well as those implied by backward chaining rules and other inferential capabilities. TELL inserts a statement and computes its consequences by invoking forward chaining rules and other inferential capabilities. ASK and TELL may implement their behavior in any manner desired, and the statements may be represented using a variety of different data structures. The contract of ASK and TELL is functional, not implementational.
- There is a Fine-Grained Protocol of Inference in which each distinct step of processing a statement is identified and made accessible. This protocol is hierarchical as well as fine-grained. ASK, TELL, rule compilation, rule triggering and truth maintenance are all parts of this protocol, as are their component steps. To use a novel set of data structures, for example, one needs to change only a few, small steps of the protocol.

- Each step of the protocol is a Generic Function, i.e., an abstract procedure whose concrete implementation is found by dispatching on the data types of its arguments. The generic functions are implemented using object-oriented programming techniques (in particular Flavors). Statements are regarded as instances. Predicates are identified with the classes. The protocol steps are implemented as methods.

- There is a modular inheritance scheme which allows facilities to be identified and reused. The classes corresponding to predicates are the leaves of an inheritance lattice. A more abstract class in this lattice is thought of as a model for implementing part or all of the protocol, supplying methods only for those few protocol steps that it handles in a unique way. Inheritance of methods happens at compile time; there is no run-time cost.

- There is a well-crafted default implementation of each step of the protocol provided in the system. However, the protocol is hierarchical, so modifications can be focused on lower level protocol steps, preserving the gross structure. Most models continue to use most of the default methods, thus satisfying the Principle of Incrementality: the effort required to effect a modification of behavior should be proportional to the size of the changed behavior.

These features allow Joshua to incorporate outside tools easily and use specialized representations where desirable. In the rest of this paper we will illustrate these points. First, we will show a straightforward Joshua implementation of a digital trouble-shooter which is reasonably efficient. However, a solution using a constraint language approach would greatly improve the efficiency. To see how such a representation can be incorporated, we will present the Protocol of Inference in some detail. Then we will show that to incorporate this alternative implementation we will only have to provide a few protocol methods. We will modify no knowledge-level structures of our original implementation. Finally, we will present a brief example of how Joshua can incorporate tools built outside the Joshua framework.

Joshua's syntax is uniform and statement-oriented; statements are delimited by brackets and variables are indicated by a leading equivalence sign. Free variables are, as usual, universally quantified. The core of Joshua is provided by the two generic functions TELL and ASK. TELL adds a statement to the data base of known facts and then performs whatever antecedent inferences are possible. ASK takes two arguments, the first of which is the query; the second argument, called the continuation, is a function which is called in a binding context created by unifying the query and matching statements. The continuation is called once for each statement satisfying the query, whether this statement is explicitly present or is deduced. Many of Joshua's deductive capabilities are built using forward and backward-chaining rules.
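The continuation-passing contract of ASK is easy to model. The following Common Lisp sketch is our own toy illustration of that contract, assuming plain-list statements and a trivial pattern matcher; it is not Joshua's implementation:

    ;; A toy model of the ASK contract: the continuation is called once
    ;; per matching statement. Statements are plain lists here; a symbol
    ;; beginning with = marks a variable, echoing Joshua's syntax.
    (defvar *facts* '((value-of addend a1 10)
                      (value-of addend a2 3)
                      (type-of a1 adder-box)))

    (defun variable-p (x)
      (and (symbolp x) (char= (char (symbol-name x) 0) #\=)))

    (defun match-p (pattern fact)
      (and (= (length pattern) (length fact))
           (every (lambda (p f) (or (variable-p p) (eql p f)))
                  pattern fact)))

    (defun toy-ask (pattern continuation)
      (dolist (fact *facts*)
        (when (match-p pattern fact)
          (funcall continuation fact))))

    ;; (toy-ask '(value-of addend =who =v) #'print)
    ;; prints both VALUE-OF statements, one continuation call each.

The point of the functional contract is that a caller of toy-ask cannot tell whether a fact was stored, indexed cleverly, or deduced on the fly; only the sequence of continuation calls is promised.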
A Truth Maintenance System provides the ability to make and retract assumptions, to explain the reason for believing any statement, or to find the set of statements supporting any conclusion in the database.

Figure 2 shows how one would use Joshua to build a hardware trouble-shooting system similar to those in [Davis et al.], [Genesereth] or [deKleer & Williams]. A simulator for the circuit is built by defining rules which describe adders, multipliers, and wires and then by executing a Lisp procedure which TELLs what components are present and how they are connected. A simulation is run by providing initial values for the inputs of the circuit. A backward-chaining rule captures the notion of a conflict, a point in the circuit at which the predicted and observed values disagree. The trouble-shooter's goal is to find all modules in the circuit whose failure could plausibly account for each conflict. This is done in the procedure FIND-CANDIDATES, which uses the TMS to find the intersection of the sets of assumptions supporting each conflict.

(DEFRULE ADDER-FORWARD (:FORWARD :IMPORTANCE 1)
  ;; Compute adder output from inputs
  IF [AND [TYPE-OF =A ADDER-BOX]
          [STATUS-OF =A WORKING]
          [VALUE-OF INPUT-A =A =V1]
          [VALUE-OF INPUT-B =A =V2]]
  THEN (TELL '[VALUE-OF OUTPUT-SUM ,=A ,(+ =V1 =V2)]))

(DEFRULE MULTIPLIER-INFERENCE (:FORWARD)
  ;; Compute multiplicand from product and multiplier by dividing
  IF [AND [TYPE-OF =M MULTIPLIER]
          [STATUS-OF =M WORKING]
          [VALUE-OF PRODUCT =M =V1]
          [VALUE-OF MULTIPLIER =M =V2]]
  THEN (UNLESS (= 0 =V2)
         (TELL '[VALUE-OF MULTIPLICAND ,=M ,(/ =V1 =V2)])))

(DEFRULE WIRE (:FORWARD :IMPORTANCE 2)
  ;; Compute value at one end of a wire from the value at the other end
  IF [AND [WIRE =TERMINAL1 =OBJECT1 =TERMINAL2 =OBJECT2]
          [VALUE-OF =TERMINAL1 =OBJECT1 =VALUE]]
  THEN [VALUE-OF =TERMINAL2 =OBJECT2 =VALUE])

(DEFRULE DETECT-TERMINAL-CONFLICT (:BACKWARD)
  ;; Infer a conflict from difference of observed and simulated values.
  IF [AND [OBSERVED-VALUE-OF =TERMINAL =OBJECT =OBSERVED-VALUE]
          [VALUE-OF =TERMINAL =OBJECT =COMPUTED-VALUE]
          (# =OBSERVED-VALUE =COMPUTED-VALUE)]
  THEN [CONFLICT-AT =TERMINAL =OBJECT =OBSERVED-VALUE =COMPUTED-VALUE])

(DEFUN FIND-CANDIDATES ()
  ;; find candidates that explain all the conflicts
  (LET ((SUPPORT-SETS NIL))
    (ASK [CONFLICT-AT =TERMINAL =OBJECT =OBSERVED-VALUE =COMPUTED-VALUE]
         #'(LAMBDA (CONFLICT)
             ;; for each conflict derived, record its
             ;; set of supporting assumptions
             (PUSH (SUPPORT CONFLICT :ASSUMPTION) SUPPORT-SETS)))
    ;; now take the intersection of all such sets
    (APPLY #'INTERSECTION SUPPORT-SETS)))

(DEFUN SETUP ()
  (TELL [TYPE-OF M1 MULTIPLIER])
  (TELL [STATUS-OF M1 WORKING] :JUSTIFICATION 'ASSUMPTION)
  (TELL [WIRE OUTPUT PRODUCT M1 INPUT ADDEND A1])
  ...)

(DEFUN SIMULATE ()
  (TELL [VALUE-OF A P1 3])
  (TELL [VALUE-OF B P1 2])
  ...)

Figure 2: Joshua Code for the Trouble Shooter

Joshua provides well-crafted default implementations for all of its standard facilities. Discrimination networks are used for data and rule indexing. Forward chaining rules use a Rete network [Forgy] to merge the bindings from matching the separate trigger patterns. There is a rule compiler that transforms the rule's patterns and actions into Lisp code. Using these default facilities, we achieve a rule-firing rate of about 120 rules/second while running the trouble-shooting example on a Symbolics 3640. This is comparable with other well-implemented tools.

3.1. The Problem
Joshua maintains several internal meters, one of which indicates that during the execution of the trouble-shooting procedure the Rete network's efficiency was only 5%. This means that the system wasted a lot of effort trying to trigger rules. One reason for this is clear: the WIRE rule contains two trigger patterns, each of which contains only variables. This means that the Rete network will try to merge every WIRE statement with every VALUE-OF statement, failing in most cases. There are several other mismatches between the problem and the implementation structure. The uniform statement-oriented syntax of Joshua is a reasonable means for expressing the problem solving strategy. However, the statement-oriented indexing scheme needed to support this expressive generality provides a poor implementation for our specific problem since it cannot exploit its constraints. A constraint language framework like that in [Davis & Shrobe], which uses data structures mirroring the connectivity and topology of the circuit, would better exploit the limitations of our problem domain. However, we want to avoid changing our rules or our trouble-shooting procedures since these constitute the "knowledge level" of the program. Finally, we want to avoid writing a large amount of code simply to take advantage of an existing set of data structures. The key to achieving all three of our goals simultaneously is Joshua's Protocol of Inference.

4. The Protocol of Inference

The structure of the Protocol is shown in Figure 3; each step of the protocol corresponds to a generic function that dispatches on the type of the statement being processed. We implement each statement as an instance of a class, where the class corresponds to the predicate of the statement. The classes are organized in an inheritance lattice with each class providing some protocol methods and inheriting others from more abstract classes. (In our current implementation, the classes are flavors and the statements are flavor instances.) For example, the statement [VALUE-OF ADDEND A1 10] is an instance of the VALUE-OF class; this class inherits from the class for PREDICATION (all statements inherit from this class); in the default implementation it also inherits from the DN-MODEL class, which provides discrimination-network data indexing. The PREDICATION class provides the gross structure of the ASK and TELL protocol steps in its ASK and TELL methods. The DN-MODEL class provides a specific kind of data indexing by supplying methods for the INSERT and LOCATE-TRIGGER protocol steps, which determine where data and rules are stored. The generic function for each protocol step dispatches on the type of a statement to determine, using the inheritance lattice, which method to run. For example, the generic function for the TELL protocol step, when applied to the predication [VALUE-OF ADDEND A1 10], executes the TELL method inherited from the PREDICATION class. This TELL method calls several other generic functions, in particular the one for the INSERT protocol step. This method is inherited from the more specific DN-MODEL class. In the Flavors implementation used by Joshua, inheritance is a compile-time operation which incurs no run-time cost.

The Protocol has major steps for TELL, ASK, the TRUTH-MAINTENANCE entry points, and RULE-COMPILATION; it has minor steps corresponding to the details of how each of the major actions is performed.
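This lattice-plus-generic-function arrangement maps naturally onto CLOS, the Common Lisp descendant of the Flavors facilities the paper uses. The following sketch is our own illustration of the layering; the class names echo the paper's, but the method bodies are stand-ins, not Joshua's:

    ;; Abstract classes ("models") supply protocol steps; predicate
    ;; classes such as VALUE-OF sit at the leaves of the lattice.
    (defclass predication () ())
    (defclass dn-model () ())                       ; discrimination-net indexing
    (defclass value-of (dn-model predication) ())   ; the predicate's class

    (defgeneric tell-step (statement))
    (defgeneric insert-step (statement))
    (defgeneric map-over-forward-triggers-step (statement))

    (defvar *dn-index* '())

    ;; PREDICATION provides the gross structure of TELL ...
    (defmethod tell-step ((s predication))
      (insert-step s)
      (map-over-forward-triggers-step s))

    ;; ... while DN-MODEL provides one particular indexing scheme.
    (defmethod insert-step ((s dn-model))
      (push s *dn-index*))

    (defmethod map-over-forward-triggers-step ((s predication))
      ;; No rules in this sketch; the default method is a no-op.
      nil)

A specialized model need only override insert-step (or the trigger-location step); VALUE-OF can switch indexing schemes by changing its superclass list alone, leaving tell-step and every caller untouched.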
For example, TELL is concerned with installing new information. Its components are JUSTIFY, which is the interface to the TMS; INSERT, which manages the actual data indexing; and MAP-OVER-FORWARD-TRIGGERS, which invokes forward-chaining rules using the Rete network. This, in turn, relies on the LOCATE-TRIGGER protocol step, which manages the indexing of rules. The advantage of exposing this structure is modularity: if one only wants to modify how the data is indexed, one doesn't have to reimplement all the behavior of TELL. Instead one need only provide a new INSERT method; the rest of the behavior can be inherited from the defaults provided with the system. If one wants to modify how rules are indexed, one only has to provide a LOCATE-TRIGGER method. The implementor should define these methods at a place in the lattice of classes so that only the desired statements inherit the new behavior. If, for example, there is a specialized indexing scheme which works well for a restricted class of statements, we can easily make that set of statements take advantage of the technique, while all other statements continue to use the more general techniques provided as the system default.

- Tell: installs new information.
  - Justify: the interface to the TMS.
  - Insert: manages the actual data indexing.
  - Map-Over-Forward-Triggers: finds and invokes rules.
  - Locate-Trigger: manages the indexing to locate relevant rules.
- Ask: retrieves known or implied data.
  - Fetch: manages the data indexing to find statements which might unify with the query.
  - Map-Over-Backward-Triggers: finds and runs relevant rules.
  - Locate-Trigger: manages the indexing to locate relevant rules.
- TMS Protocol: manages deductive dependencies.
  - Justify: installs a new TMS justification.
  - Notice-Truth-Value-Change: allows special processing when statements change truth value.
  - Retract: removes a justification.
  - Explain: prints an explanation of the reason for believing a statement.
  - Support: finds the set of facts or assumptions that a statement depends on.
- Rule Indexing Protocol:
  - Add-Forward-Trigger
  - Remove-Forward-Trigger
  - Add-Backward-Trigger
  - Remove-Backward-Trigger
  - Trigger-Location: used by all four of the above.
- Rule Customization Protocol:
  - Compile-Forward-Trigger: the hook to provide your own matcher for a forward-chaining rule.
  - Positions-Matcher-Can-Skip: informs the match compiler that the data indexing scheme guarantees that certain positions of the statement already match the pattern, so that less match code can be generated.
  - Compile-Backward-Trigger: the same for backward-chaining rules, with its own Positions-Matcher-Can-Skip.
  - Compile-Forward-Action: tailors the behavior of a statement in the THEN part of a forward-chaining rule.
  - Notice-Truth-Value-Change: as above.
  - Compile-Backward-Action: tailors the behavior of a statement in the IF part of a backward-chaining rule.

Figure 3: The Protocol of Inference

Figure 4 shows an implementation technique for the trouble-shooting example which is similar to those used for constraint languages. These structures can be thought of as a set of frames and slots. The frames are used to represent objects, e.g. ADDER-1, and classes of objects, e.g. ADDER. The slots are used to represent terminals, e.g. the ADDEND of ADDER-1; the facets of the slot are used to represent the value of the signal present at the terminal, the set of other terminals wired to it, and the set of relevant rules.
This representation exploits the object-oriented nature of the problem in several ways. First, the topology of the data structures is identical to that of the circuit; to find what other terminals are connected to the ADDEND of ADDER-1, one need only fetch the WIRES facet of the terminal. Second, facts are indexed locally. To find the value of the signal at the ADDEND of ADDER-1, one need only find ADDER-1 and then find its ADDEND slot. Third, rules are indexed locally. To find a rule which is triggered by the statement [VALUE-OF ADDEND A1 10], one need only find the A1 frame, follow its AKO link to the class ADDER, and then find ADDER's ADDEND slot. Thus, to add or retrieve information or to draw an inference, one need only follow a small number of pointers. In particular, notice that wires are represented by direct links between connected terminals, instead of the troublesome WIRE rule shown in Figure 2.

These data structures can be implemented easily using a frame-like subsystem provided with Joshua.(1) However, let us imagine that we already have an implementation of a constraint language, and then consider what we would need to do to make Joshua able to incorporate it. The trouble-shooting program has two broad categories of statements. The first category consists of TYPE-OF and WIRE statements, which describe the topology of the circuit. The second category includes VALUE-OF and OBSERVED-VALUE-OF statements, which carry information about the value (or inferred value) of signals in the circuit. CONFLICT-AT statements also fall in this category, since they capture a discrepancy between the predicted and observed values.

The trouble-shooting program is primarily an antecedent reasoning system, so our attention will be focused on what methods we need to provide for the component steps of the TELL protocol. For the first category of statements our strategy will be as follows: when we TELL a TYPE-OF statement, e.g. [TYPE-OF A1 ADDER], we will build a frame representing A1 that is an instance of the ADDER frame. This frame has slots for each of A1's terminals, and each of these has several facets, one of which is the WIRES facet. When we TELL a WIRE statement, e.g. [WIRE PRODUCT M1 ADDEND A1], we will add pointers to the WIRES facet of both mentioned terminals so that the PRODUCT of M1 points to the ADDEND of A1 and vice versa. The INSERT protocol method is the right level of the TELL protocol to control this. Similarly, we only need to provide an INSERT method for WIRE statements which updates the WIRES facet of the appropriate slot. Also, for each of these statement types we provide a FETCH method (the step of the ASK protocol responsible for locating the data) so that we can retrieve the data. Other than this, all processing of these statements uses the provided facilities.

The second category of statements deals with signal values. Our strategy for these is as follows: we will store VALUE-OF and OBSERVED-VALUE-OF statements in the appropriately named TERMINAL of the circuit; to do this we need only provide an INSERT protocol method for VALUE-OF and OBSERVED-VALUE-OF statements. Since the constraint language provides a means for locating a named terminal, we need only have our protocol method call this procedure.

(1) For space reasons, we won't discuss the Joshua flavor-based frame system here.
[Figure 4: Constraint-Language Style of Implementation. Frame diagram; labels include (Value-of Addend A1 5), (Type-of A1 Adder), an Addend terminal, and a pointer indicating where INSERT puts the value.]

In addition, we want to store our rules locally; for example, a rule about adders with the pattern [VALUE-OF ADDEND =A =V] should store its trigger in the FWRD-RULES facet of the ADDEND slot of the ADDER frame. To do this we only need to provide a LOCATE-TRIGGERS protocol method. The LOCATE-TRIGGERS protocol method is used by the protocol steps for installing rules and for fetching them; thus this one modification changes the complete rule indexing scheme for this type of statement.

These data structures can perform certain deductions far more efficiently than can our rules. Since the data structures exactly mirror the topology of the wires in the circuit being modelled, we should use them to model the propagation of signals along wires. To do this, we only need to add a small amount of code to the INSERT method for VALUE-OF and OBSERVED-VALUE-OF statements. This code examines the WIRES facet of the terminal mentioned in the statement and then propagates the information to the connected terminals, by TELLing a new statement describing the value at the connected terminal. For example, suppose we TELL the system that the value of the PRODUCT of M1 is 6, and this terminal is connected to the ADDEND of A1. The INSERT protocol method for VALUE-OF statements will then TELL the system that the value of the ADDEND of A1 is also 6. (The reason this doesn't create an infinite loop is that part of the contract for INSERT methods is that they must first check to see if the data is already present; if so, they must simply return the stored data.)

CONFLICT-AT statements are also more efficiently deduced within the model. A CONFLICT-AT statement should be deduced any time the VALUE and OBSERVED-VALUE at a terminal disagree. To perform this inference, we again add code to the INSERT protocol method for each of these statements; this checks to see if we know both the VALUE and OBSERVED-VALUE at this terminal. If so, and if they disagree, then we TELL the appropriate CONFLICT-AT statement.

In summary, our system now has the following specialized behavior. When we make an assertion about the TYPE-OF an object, we create a representation of this object and its electrical terminals. This representation is situated in a taxonomic hierarchy below the node representing its type. For example, (TELL [TYPE-OF A1 ADDER]) creates an object of type ADDER and names it A1. When we TELL that a wire connects two terminals, we update facets of their corresponding TERMINALs so that they contain direct pointers to each other. When we make a statement about the value at a terminal (e.g., [VALUE-OF ADDEND A1 2]), we locate the terminal data structure by first locating the object A1 and then finding the terminal named ADDEND. This terminal has direct pointers to all the other terminals that are connected to it, so we just follow these pointers and update the value at these other terminals as well. For each terminal that we update, we find the rules which might trigger by looking in the FWRD-RULES facet. The whole system behaves like a constraint propagation system. However, it is not just behaving like a constraint propagator; it is a constraint propagation system embedded in the more general Joshua framework. A sketch of the specialized INSERT behavior appears below.
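Here is a minimal Common Lisp rendering of that specialized INSERT behavior: store the value at the terminal, propagate it along wires, and signal a conflict when an observed value disagrees. The terminal structure and all names below are our own illustrative assumptions, not Joshua's code:

    (defstruct terminal
      name              ; e.g. ADDEND
      value             ; simulated value, or NIL if unknown
      observed-value    ; observed value, or NIL if unknown
      wires)            ; list of directly connected TERMINAL structures

    (defun insert-value (terminal new-value)
      "Specialized INSERT for VALUE-OF: idempotent store plus wire propagation."
      (cond ((terminal-value terminal)
             ;; Contract: if the datum is already present, just return it.
             (terminal-value terminal))
            (t
             (setf (terminal-value terminal) new-value)
             (check-for-conflict terminal)
             ;; Propagate along every wire; the already-present check
             ;; above is what keeps this from looping forever.
             (dolist (other (terminal-wires terminal))
               (insert-value other new-value))
             new-value)))

    (defun check-for-conflict (terminal)
      ;; Deduce CONFLICT-AT whenever VALUE and OBSERVED-VALUE disagree.
      (let ((v (terminal-value terminal))
            (o (terminal-observed-value terminal)))
        (when (and v o (/= v o))
          (format t "~&Conflict at ~A: observed ~A, computed ~A~%"
                  (terminal-name terminal) o v))))

Wiring two terminal structures into each other's wires lists and calling insert-value on either one then updates both and runs the conflict check at each, which is the behavior the INSERT method contract describes.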
Incorporating this constraint propagator only required a few small protocol methods, without reprogramming "knowledge-level" structures such as our rules and trouble-shooting procedures. Finally, although we've tailored this part of our system to the style of reasoning found in constraint languages, nothing we've done prevents us from using more general purpose facilities in other parts of our system.

We refer to the process we've gone through as modeling. Like the logician's notion of modeling, it maps statements to the objects to which they refer. The fact that each of these protocol steps can be tailored for any class of statements has allowed us to easily implement the object-oriented data-indexing and rule retrieval scheme shown in Figure 4. We have presented enough of the details to show the relative ease with which these facilities can be used. It is worth noting that we did not change the way that Joshua manages all assertions, only those which we felt needed special handling; other statements are handled in the default manner. Given the constraint language representation for circuits, we needed to write about six protocol methods, each of them containing only a few lines of code. The following table, comparing the default and modeled implementations, illustrates the power of this approach:

    Statistic                  General        Specialized
    Rules fired                37             5
    Time                       0.302 sec      0.089 sec
    Rules/sec                  122.54         56.15
    Normalized(2) rules/sec    122.54         415.51
    Merging attempts           421774 (5%)    10126 (38%)

Several facts are worth noticing here. First, the number of rule executions went down by a factor of 7. This is because more of the reasoning happens within the models, i.e., is performed by the specialized procedures. In particular, there is no longer a need for rules to propagate information along the wires. Second, the program ran over 3 times faster. Finally, the specialized implementation is much more selective; far fewer attempts are made to merge assertions through the Rete network and, of these, a much higher percentage succeed.

6.1. Incorporating Other Existing Tools

So far we've seen an example of how the process of providing specialized protocol steps can lead to dramatic improvements in efficiency. But this is not the only advantage. It is probably more significant that modeling provides a simple means for incorporating an existing tool which was designed outside the Joshua context. One brief example of this is the incorporation of an ATMS. The default TMS in Joshua is similar to that in [McAllester]. However, one of us (Hamscher) had previously implemented an ATMS [deKleer] for use in his research. Sometime later, when he decided to use Joshua as the general framework for his project, he also decided to continue to use his existing ATMS code. Interfacing this code involved implementing about five protocol methods.

7. Comparison to Other Approaches

The core problem addressed by Joshua has been studied widely in AI. Much of the literature on meta-level reasoning, for example [Russell], has been motivated by the need to combine disparate systems into a coherent whole. Compared to Joshua, most of these systems pay a price at run-time for their flexibility since, at least in principle, they must deduce how to do any deduction. The Krypton [Brachman, et al.] system also has the goal of combining disparate facilities, using a theorem-prover as the glue. Theory Resolution [Stickel] provides the theoretical framework for this system.
Also, [Nelson & Oppen] describes a means for combining disparate decision procedures into a larger, uniform decision procedure. Joshua lacks the theoretical foundations of these systems. However, it seems to provide a broader and more flexible framework which provides stronger guidance for how to actually implement a heterogeneous system. Our system is quite similar to the Virtual Collection of Assertions notion presented in [Kornfeld], but differs in several ways. Joshua provides more complete integration with Lisp as well as a set of high performance techniques available as defaults. In addition, the Protocol of Inference provides a structure and granularity of control not present in Kornfeld's system.

8. Conclusions

AI has suffered from an inability to consolidate its gains in the form of a programming system which is encompassing and which allows the abstraction level of our problem solving systems to grow. A key failure of previous systems has been their inability to provide strong paradigmatic guidance without implementational handcuffs. Joshua addresses these problems in several ways.

- It removes syntactic barriers. Joshua's deductive facilities and Lisp are closely integrated and easily mixed.

- Joshua is organized around a uniformly accessible heterogeneous database, whose interface is the two generic functions ASK and TELL. These provide the abstraction level necessary to allow statements to be stored in whatever manner is most convenient and efficient. Special purpose inference procedures can be invoked at this interface.

- Joshua's core routines are carefully structured into a Protocol of Inference.
This allows a Joshua programmer to use specialized data structures and procedures without having to abandon the general purpose framework. Specialized approaches can be provided by supplying only a few simple methods.

- The Protocol of Inference also facilitates the assimilation of existing facilities, which enriches the Joshua environment.

Joshua, therefore, creates the possibility of an integrating facility which can combine disparate AI techniques into a coherent total system.

(2) This scales up the firing rate of the modeled version by 37/5, since it does 37 rules' worth of work in just 5 rule firings. The "hidden" rules are done in the representation.

9. References

Brachman, R. J., Fikes, R. E., and Levesque, H. J., 1983. "Krypton: A Functional Approach to Knowledge Representation," IEEE Computer, Vol. 16, No. 10, October 1983, pp. 67-73.

Davis, R. and Shrobe, H., 1983. "Representing Structure and Behavior of Digital Hardware," IEEE Computer, Vol. 16, No. 10, October 1983.

Davis, R., Shrobe, H., Hamscher, W., Wieckert, K., Shirley, M., and Polit, S., 1982. "Diagnosis Based on Descriptions of Structure and Function," AAAI-82, pp. 137-142, Pittsburgh, PA.

deKleer, J. and Williams, B., 1986. "Reasoning About Multiple Faults," AAAI-86, pp. 132-139, Philadelphia, PA.

deKleer, J., 1986. "An Assumption-Based Truth Maintenance System," Artificial Intelligence 28:127-162.

Forgy, C., 1982. "RETE: A Fast Algorithm for the Many Pattern/Many Object Pattern Match Problem," Artificial Intelligence 19:17-38.

Genesereth, M. R., 1984. "The Use of Design Descriptions in Automated Diagnosis," Artificial Intelligence 24:411-436.

Kornfeld, W. A., 1981. Concepts in Parallel Problem Solving. Ph.D. Thesis, MIT Department of Electrical Engineering and Computer Science, October 1981.

McAllester, D. A., 1980. "An Outlook on Truth Maintenance," MIT Artificial Intelligence Laboratory Memo 551, MIT, Cambridge, Mass.

Nelson, G. and Oppen, D., 1978. "A Simplifier Based on Efficient Decision Algorithms," Conference Record of the Fifth ACM Symposium on Principles of Programming Languages, Tucson, Arizona, January 1978, pp. 141-150.

Russell, S., 1985. The Compleat Guide to MRS. Stanford Knowledge Systems Laboratory Report No. KSL-85-12, Stanford Knowledge Systems Laboratory, Stanford, CA.

Stickel, M. E., 1983. "Theory Resolution: Building in Nonequational Theories," AAAI-83, Washington, D.C., August 1983.

Sussman, G. J. and Steele, G. L. Jr., 1980. "CONSTRAINTS: A Language for Expressing Almost-Hierarchical Descriptions," Artificial Intelligence 14:1-39.
Knowledge Level Learning in Soar(1)

Paul S. Rosenbloom, Knowledge Systems Lab., Computer Science Dept., Stanford University, Stanford, CA 94305
John E. Laird, Department of EECS, University of Michigan, Ann Arbor, MI 48109
Allen Newell, Computer Science Dept., Carnegie-Mellon University, Pittsburgh, PA 15213

Abstract

In this article we demonstrate how knowledge level learning can be performed within the Soar architecture. That is, we demonstrate how Soar can acquire new knowledge that is not deductively implied by its existing knowledge. This demonstration employs Soar's chunking mechanism (a mechanism which acquires new productions from goal-based experience) as its only learning mechanism. Chunking has previously been demonstrated to be a useful symbol level learning mechanism, able to speed up the performance of existing systems, but this is the first demonstration of its ability to perform knowledge level learning. Two simple declarative-memory tasks are employed for this demonstration: recognition and recall.

I. Introduction

Dietterich has recently divided learning systems into two classes: symbol level learners and knowledge level learners [3]. The distinction is based on whether or not the knowledge in the system, as measured by a knowledge level analysis [10], increases with learning. A system performs symbol level learning if it improves its computational performance but does not increase the amount of knowledge it contains. According to a knowledge level analysis, knowledge only increases if a fact is added that is not implied by the existing knowledge; that is, if the fact is not in the deductive closure of the existing knowledge. Explanation-based generalization (EBG) [2; 9] is a prime example of a learning technique that has proven quite successful as a mechanism for enabling a system to perform symbol level learning. EBG allows tasks that a system can already perform to be reformulated in such a way that they can be performed more efficiently. Because EBG only generates knowledge that is already within the deductive closure of its current knowledge base, it does no knowledge level learning (at least when used in any obvious ways).

Symbol level learning can be quite useful for an intelligent system. By speeding up the system's performance, it allows the system to perform more tasks while using the same amount of resources, and enables the system to complete [...] and describe some important future work.

(1) This research was sponsored by the Defense Advanced Research Projects Agency (DOD) under contract N00039-86-C-0133 and by the Sloan Foundation. Computer facilities were partially provided by NIH grant RR-00785 to Sumex-Aim. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency, the US Government, the Sloan Foundation, or the National Institutes of Health.

II. Overview of Soar

Soar is based on formulating all goal-oriented processing as search in problem spaces. The problem space determines the set of legal states and operators that can be used during the processing to attain a goal. The states represent situations. There is an initial state, representing the initial situation, and a set of desired states that represent the goal.
An operator, when applied to a state in the problem space, yields another state in the problem space. The goal is achieved when one of the desired states is reached as the result of a string of operator applications starting from the initial state.

Goals, problem spaces, states, and operators exist as data structures in Soar's working memory, a short-term declarative memory. Each goal defines a problem solving context ("context" for short). A context is a data structure in the working memory that contains, in addition to a goal, roles for a problem space, a state, and an operator. Problem solving for a goal is driven by the acts of selecting problem spaces, states, and operators for the appropriate roles in the context. Each of the deliberate acts of the Soar architecture (a selection of a problem space, a state, or an operator) is accomplished via a two-phase decision cycle. First, during the elaboration phase, the description of the current situation (that is, the contents of working memory) is elaborated with relevant information from Soar's production memory, a long-term procedural memory. The elaboration process involves the creation of new objects, the addition of knowledge about existing objects, and the addition of preferences. There is a fixed language of preferences that is used to describe the acceptability and desirability of the alternatives being considered for selection. By using different preferences, it is possible to assert that a particular problem space, state, or operator is acceptable (should be considered for selection), rejected (should not be considered for selection), better than another alternative, and so on. When the elaboration phase reaches quiescence (that is, no more productions can fire), the preferences in working memory are interpreted by a fixed decision procedure. If the preferences uniquely specify an object to be selected for a role in a context, then a decision can be made, and the specified object becomes the current value of the role. The decision cycle then repeats, starting with another elaboration phase.

If an elaboration phase ever reaches quiescence while the preferences in working memory are either incomplete or inconsistent, an impasse occurs in problem solving because the system does not know how to proceed. When an impasse occurs, a subgoal with an associated problem solving context is automatically generated for the task of resolving the impasse. The impasses, and thus their subgoals, vary from problems of selection (of problem spaces, states, and operators) to problems of generation (e.g., operator application). Given a subgoal, Soar can bring its full problem solving capability and knowledge to bear on resolving the impasse that caused the subgoal. When subgoals occur within subgoals, a goal hierarchy results (which therefore defines a hierarchy of contexts). The top goal in the hierarchy is a task goal. The subgoals below it are all generated as the result of impasses in problem solving. A subgoal terminates when its impasse is resolved, even if there are many levels of subgoals below it (the lower ones were all in the service of the terminated subgoal, so they can be eliminated if it is resolved).

Chunking is a learning mechanism that automatically acquires new productions that summarize the processing that leads to results of subgoals. The actions of the new productions are based on the results of the subgoal. The conditions are based on those aspects of the pre-goal situation that were relevant to the determination of the results. Relevance is determined by treating the traces of the productions that fired during the subgoal as dependency structures. Starting from the production trace that generated the subgoal's result, those production traces that generated the working-memory elements in the condition of the trace are found, and then the traces that generated their condition elements are found, and so on until elements are reached that exist outside of the subgoal. These elements form the basis for the conditions of the chunk. Productions that only generate preferences do not participate in this backtracing process; preferences only affect the efficiency with which a goal is achieved, and not the correctness of the goal's results. Once the working-memory elements that are to form the basis of the conditions and actions of a chunk have been determined, the elements are processed to yield the final conditions and actions. For the purposes of this article, the most important part of this processing is the replacement of some of the symbols in the working-memory elements by variables. If a symbol is an object identifier (a temporary place-holder symbol used to tie together the information about an object in working memory), then it is replaced by a variable; otherwise the symbol is left as a constant. This is the minimal generalization required to get any transfer.

Chunking applies to all of the subgoals generated during task performance. Once a chunk has been learned, the new production will fire during the elaboration phase in relevantly similar situations in the future, directly producing the required information. No impasse will occur, and problem solving can proceed smoothly. Chunking is thus a form of goal-based caching which avoids redundant future effort by directly producing a result that once required problem solving to determine. The variablization step is illustrated in the sketch below.
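As a concrete illustration of the variablization step, here is a small Common Lisp sketch (our own, not Soar's code) that replaces object identifiers with variables while leaving other symbols as constants. The identifier naming convention below (symbols like O14) is an assumption made purely for illustration:

    ;; Working-memory elements are lists such as (VALUE-OF ADDEND O14 10),
    ;; where O14 is a temporary object identifier.
    (defun identifier-p (symbol)
      ;; Illustrative convention: identifiers look like O1, O2, O14, ...
      (and (symbolp symbol)
           (let ((name (symbol-name symbol)))
             (and (> (length name) 1)
                  (char= (char name 0) #\O)
                  (every #'digit-char-p (subseq name 1))))))

    (defun variablize (elements)
      "Replace each identifier with a variable, reusing one variable per
    identifier so that equality constraints among conditions are preserved."
      (let ((table (make-hash-table)))
        (labels ((walk (x)
                   (cond ((consp x) (mapcar #'walk x))
                         ((identifier-p x)
                          (or (gethash x table)
                              (setf (gethash x table)
                                    (intern (format nil "=~A" x)))))
                         (t x))))
          (mapcar #'walk elements))))

    ;; (variablize '((value-of addend o14 10) (type-of o14 adder)))
    ;; => ((VALUE-OF ADDEND =O14 10) (TYPE-OF =O14 ADDER))

Note that constants such as VALUE-OF, ADDEND, and 10 survive untouched; only the identifier is generalized, which is exactly the minimal generalization the text describes.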
580 Machine Learning & Knowledge Acquisition simply to copy the object in a subgoal. The chunk that is task. There are two types of trials: training and performance. learned from this experience has actions which generate an On each training trial the system is presented with a new ob- object B that is a copy of object a. ject, and it must learn enough to be able to perform correctly This simple solution glosses over two important problems. on the performance trials. On each performance trial the sys- The first problem is that, if the generation of object B is tern is presented with an object which it may or may not based on an examination of object A, then the conditions of have seen during the training trials. It must respond affirma- the chunk will test for the existence of object A before tively if it has seen the object, and negatively if it has not. generating object B, thus allowing the object to be recalled in The objects that the system deals with are one of two only those circumstances where it is already available. The types: primitive or composite. Primitive objects are those solution to this problem that we have discovered is to split that the system is initially set up to recognize: the letters the act of recalling information into separate generate and a-z, plus the special objects C and 1. A composite object is test phases. A generation problem space is provided in which a hierarchical structure of simpler objects that is eventually new objects can be constructed by generating and combining grounded in primitive objects. The object representation in- objects that the system has already learned to recall. Object eludes two attributes: name and substructure. An object is B is thus constructed from scratch out of objects that the sys- recognized if it has a name. A primitive object has nothing tern already knows, rather than being a direct copy of object but a name. A composite object may or may not have a A. Object A is not examined during this process; instead, it is name, depending on whether it is recognized or not. A com- examined during a test phase in which it is compared with posite object is distinguished from a primitive object by object B to see if they are equivalent. Separate chunks are having a substructure attribute that gives the list of objects learned for the generate and test phases, allowing a chunk to out of which the object is composed. The list always begins be learned that generates object B without examining object with C, ends with I, and has one or more other objects - ei- A. ther primitive or composite - in between. For example, [a The second problem is that, at recall time, the system b cl and C Ca b cl [d e] 1, are two typical composite ob- must both generate the learned object B and avoid generating jects. all of the other objects that it could potentially generate. To learn to recognize a new composite object, an internal The direct effect of the generation chunk is simply to cache task is set up in which the system first recognizes each of the the generation of object B, allowing it to be generated more subobjects out of which the object is composed, and then efficiently in the future (symbol level learning). This, by it- generates a new name for the composite object. The name self, does not enable Soar to discriminate between object B becomes the result of the subgoal, and thus forms the basis and the other objects that could be generated (knowledge for the action of a chunk. The name is dependent on the level learning). 
However, this additional capability can be recognition of all of the object’s subobjects, so the conditions provided if: all of the learned objects can be recalled before of the chunk test for the subobjects’ names. During a perfor- any new objects can be generated; and if a termination signal mance trial, the recognition chunk can be used to assign a can be given after the learned objects have been recalled and name to a presented object if it is equivalent to the learned before any other objects are generated. In Soar, this one, allowing an affirmative response to be made to the capability is provided directly by the structure of the decision recognition query. cycle. The chunks fire during the elaboration phase, allowing In more detail, a training trial begins with a goal to learn learned objects to be recalled directly. After all of the to recognize an object. A recognition problem space is learned objects have been recalled, an impasse occurs. Other selected along with a state that points to the object that is to objects could be generated in the subgoal for this impasse, or be learned - the current object - for example, Ca b cl. If alternatively (and correctly) the impasse can be treated as a the current object is recognized - that is, has a name - the termination signal, keeping other objects from being training trial is terminated because its task is already ac- generated. Soar can thus break through the otherwise seam- complished. There is only one operator in the recognition less interface, in which a cached value looks exactly like a problem space: get-next-element. If the current-object is computed value, by making use of Soar’s ability to reflect on recognized, then the get-next-element operator receives an ac- its own behavior [13] - specifically, its ability to base a deci- ceptable preference, allowing it to be selected as the current sion on whether an impasse has occurred. operator. When the operator is executed, it generates a new Generation chunks thus support symbol level learning state that points to the object that follows the current one. (caching the generation of the object) and knowledge level However, if the current object is not recognized, the get- learning (correct performance on recall tasks). As described next-element operator cannot be selected, and an impasse oc- in the following two sections, rather than actually learning curs. It is in the subgoal that is generated for this impasse test chunks, recognition chunks are learned. These recog- that recognition of the object is learned. The recognition nition chunks speed up the performance of the system on problem space is used recursively in this subgoal, with an in- both recall and recognition tasks (symbol level learning), plus they allow Soar to perform correctly on recognition tasks itial state that points to the object’s first subobject (i.e., [). Because this new current object has a name, the get-next- (knowledge level learning). The abilities to learn to recognize element operator is selected and applied, making the next and recall new objects are two of the most basic, yet most important, data chunking capabilities. If Soar is able to ac- subobject (a, for the current example) the current object. 
If complish these two paradigmatic learning tasks, it would the subobject were not recognized, a second-level subgoal seem to have opened the gates to the demonstration of the would be generated, and the problem solving would again remaining data chunking tasks, as well as to more sophis- recur, but this time on the substructure of the subobject. The recursion is grounded whenever objects are reached that ticated forms of knowledge level learning. the system has previously learned to recognize. Initially this TV. Recognition is just for the primitive objects, but as the system learns to recognize composite objects, they too can terminate the recur- The recognition task is the simplest declarative memory sion. Rosenbloom, bird, and Newell 506 When the system has succeeded in recognizing all of the As described in Section III, on a training trial the general object’s subobjects, a unique internal name, such as approach is to set up a two-phase internal task in which the *PO@!&*, is generated for the object. The new name is object is copied. In the first phase, a new composite object is returned as the result of the subgoal, allowing the problem generated by executing a sequence of operators that recall and solving to proceed in the parent context because now its cur- assemble subobjects that the system already knows. This rent object has a name. The subgoal is thus terminated, and generation process does not depend on the presented object. a chunk is learned that examines the object’s subobjects, and In the second phase, the generated object is tested to see if it generates the object’s name. This recognition production can is equivalent to the presented object. Though this approach fire whenever a state is selected that points to an object that solves the problem discussed in Section III, it also introduces has the same substructure. In schematic pseudo-code, the a smaller but still important technical issue - how to ef- production for the current example looks like the following. ficiently generate the new object without examining the Currenl;-Obj ect(s, [a b cl <z>> --> presented object. Because it is possible to generate any ob- Name (5, *pOO45*) (1) ject that can be constructed out of the already known objects, The variable s binds to the current state in the context. The there is a control problem involved in ensuring that the right variable z binds to the identifier of the current object, whose object is generated. The solution to this problem is to use substructure must be [a b cl. The appearance of the the presented object as search-control knowledge during the relevant constants - [, a, b, c, I, and *pOO45* - in the process of generating the new object. Search-control conditions and actions of this production occur because, in knowledge determines how quickly a problem is solved, not creating a chunk from a set of production traces, constant the correctness of the solution - the goal test determines the symbols are not replaced by variables. correctness - so the result does not depend on any of the If [a b cl is now presented on a performance trial, knowledge used to control the search. Thus, chunks never in- production 1 (above) fires and augments the object with its corporate control knowledge. In consequence, the generation name. The system can then respond that it has recognized process can proceed efficiently, but the chunk created for it the object because there is a name associated with it. If an will not depend on the presented object. 
If an unknown object, such as [x y z], is presented on a performance trial, no recognition production fires, and an impasse occurs. This impasse is used as a signal to terminate the performance trial with a "no" answer.

If the object being learned is a multi-level composite object, then in addition to learning to recognize the object itself, recognition productions are learned for all of the unrecognized subobjects (and subsubobjects, etc.). For example, if the system is learning to recognize the object [[a b c] [d e]], it first uses production 1 to recognize [a b c] and then learns the following two new recognition productions:

  Current-Object(s, [d e] <z>) --> Name(z, *p0046*)   (2)
  Current-Object(s, [*p0045* *p0046*] <z>) --> Name(z, *p0047*)   (3)

Chunks are also learned that allow composite subobjects to be recognized directly in the context of the current object. To recognize a composite subobject without these chunks, the system would have to go into a subgoal in which the subobject could itself be made the current object.

If [[a b c] [d e]] is now presented on a performance trial, productions first fire to recognize [a b c] and [d e] as objects *p0045* and *p0046*. Production 3 then fires to recognize [*p0045* *p0046*] as object *p0047*. The system can then reply in the affirmative to the recognition query.

V. Recall

The recall task involves the memorization of a set of objects, which are later to be generated on demand. From the point of view of the internal task, it is the dual of the recognition task. Instead of incorporating information about a new object into the conditions of a production, the information must be incorporated into the actions. As with recognition, there are training and performance trials. On each training trial the system is presented with a new object, and it must learn to generate the object on demand. On a performance trial, the system receives a recall request, and must respond by producing the objects that it learned to generate on the training trials.
As described in Section III, on a training trial the general approach is to set up a two-phase internal task in which the object is copied. In the first phase, a new composite object is generated by executing a sequence of operators that recall and assemble subobjects that the system already knows. This generation process does not depend on the presented object. In the second phase, the generated object is tested to see if it is equivalent to the presented object. Though this approach solves the problem discussed in Section III, it also introduces a smaller but still important technical issue - how to efficiently generate the new object without examining the presented object. Because it is possible to generate any object that can be constructed out of the already known objects, there is a control problem involved in ensuring that the right object is generated. The solution to this problem is to use the presented object as search-control knowledge during the process of generating the new object. Search-control knowledge determines how quickly a problem is solved, not the correctness of the solution - the goal test determines the correctness - so the result does not depend on any of the knowledge used to control the search. Thus, chunks never incorporate control knowledge. In consequence, the generation process can proceed efficiently, but the chunk created for it will not depend on the presented object.

In more detail, a training trial begins with a goal to learn to recall a presented object. The system selects a recall problem space. An initial state is created and selected that points to the presented object; for example, Presented(s1, [a b c]), where s1 is the identifier of the state. There is only one type of operator in the recall problem space: recall. An instance of the recall operator is generated for each of the objects that the system knows how to recall. To enable the system to find these objects, they are all attached to the recall problem space. This can be a very large set if many objects have been memorized; a problem to which we return in Section VI. Initially the system knows how to recall the same primitive objects that it can recognize: a-z, [, and ]. This set increases as the system learns to recall composite objects.

The presented object acts as search control for the generation process by influencing which recall operator is selected. First the system tries to recognize the presented object. For the current example, production 1 fires, augmenting the object with its name (*p0045*). If the system had not previously learned to recognize the presented object, it does so now before proceeding to learn to recall it. Then, if there is a recall operator that will recall an object with the same name, an acceptable preference is generated for the operator, allowing it to be selected. When a recall operator executes, it creates a new state in which it adds the recalled object to a structure representing the object being generated. If this happens in the top goal, it means that the system has already learned to recall the presented object, and it is therefore done with the training trial.

However, when the system does not already know how to recall the object, as is true in this instance, no recall operator can be selected. An impasse occurs and a subgoal is generated. In this subgoal, processing recurses with the attempt to recall the subobjects out of which the presented object is composed. A new instance of the recall problem space is created and selected. Then, an initial state is selected that points to the first subobject of the presented object (Presented(s2, [)). In this subgoal, processing proceeds just as in the parent goal. If the object is not recognized, the system learns to recognize it. Then, if the object cannot be recalled, the system learns to recall it in a further subgoal. However, in this case the object ([) is a primitive and can thus already be recognized and recalled. The appropriate recall operator is selected and creates a new state with a newly generated [ object in it (Generated(s3, [)). The operator also augments the new state with the successor to the presented object (Presented(s3, a)). This information is used later to guide the selection of the next recall operator. The system continues in this fashion until a state is created that contains a completely generated object (for example, Generated(s7, [a b c])). The one thing missing from the generated object is a name, so the system next tries to recognize the generated object as an instance of some known object. If recognition fails, the subgoal stays around and the system has the opportunity to try again to generate a recognizable object. If recognition succeeds, as it does here, the generated object is augmented with its name (*p0045*). Generation is now complete, so the generated object is added to the set of objects that can be recalled in the parent goal (unless there is already an object with that name in the set). This act makes the generated object a result of the subgoal, causing a chunk to be learned which can generate the object in the future. Execution of this chunk is the basic act of retrieving the remembered object from long-term (production) memory into working memory. In schematic pseudo-code, this chunk looks like the following.

  -Object(recall, *p0045*) --> Object(recall, *p0045* [a b c])   (4)

This production says that the object should be generated and attached to the recall problem space if there is not already an object with that name so attached.

Though generation is now complete, the generated object cannot yet be recalled in the parent goal until a goal test has been performed to ensure that the generated object is equivalent to the presented object. This test is performed by comparing the name of the presented object with the name of the generated object. If the names match, a recall operator can be selected in the parent goal for the generated object, and the subgoal is terminated. The recall operator is then executed, and processing continues. If the names do not match, no recall operator is selected, the subgoal does not terminate, and the system has the opportunity to keep trying.
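Again as an illustration only, the recall side can be mimicked by pairing the recognition table from the earlier sketch with a second table mapping each name back to the object it regenerates. The sketch below elides the search-controlled generate-and-test loop and keeps only its net effect: after training, each learned object can be redeposited into working memory. The function names and tuple encoding are assumptions carried over from the previous sketch.

  # A toy stand-in for recall chunks (like production 4): name -> object.
  recall_chunks = {}

  def learn_to_recall(obj):
      """Training trial: recall subobjects first, then make obj recallable."""
      if not isinstance(obj, str):
          for sub in obj:
              learn_to_recall(sub)          # recurse, as in the subgoals above
      name = learn_to_recognize(obj)        # ensure the object has a name
      recall_chunks.setdefault(name, obj)   # the chunk that regenerates it
      return name

  def recall_all():
      """Performance trial: retrieve every learned composite, in any order."""
      return [obj for obj in recall_chunks.values() if not isinstance(obj, str)]

  learn_to_recall(("a", "b", "c"))
  print(recall_all())                       # -> [('a', 'b', 'c')]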
During a performance trial, the top goal is to recall all of the objects so far learned. A recall problem space is created, selected, and then augmented with the set of objects that the system has learned to recall. Since the goal is to recall all learned objects rather than just a specific one, acceptable and indifferent preferences are created for all of the recall operators, allowing everything that has been so far learned to be recalled in random order - the indifferent preferences state that it doesn't care which of the operators is selected first. Recall performance is terminated when no more recall operators can be selected. This condition is signaled by the occurrence of an impasse. In the resulting subgoal the system could generate more objects, but it should not, because they would not correspond to objects it has seen.

If the object being learned is a multi-level composite object, the system learns to recall the object as well as each subobject, assuming it has not previously learned them. If the system were to learn to recall the object [[a b c] [d e]], given that it has already learned to recognize the object and its subobjects, and to recall the subobject [a b c], the following two new generation productions would be learned.

  -Object(recall, *p0046*) --> Object(recall, *p0046* [d e])   (5)
  -Object(recall, *p0047*) --> Object(recall, *p0047* [*p0045* *p0046*])   (6)

On a performance trial that follows these training trials, the system would recall all three objects.

VI. Discussion

In this article we have demonstrated how Soar can expand its knowledge level to incorporate information about new objects, and thus perform knowledge level learning. This was accomplished with chunking, a symbol level learning mechanism, as the only learning mechanism. One new mechanism was added to Soar for this work: the ability to generate new long-term symbols to serve as the names of objects. However, this capability is only critical for the learning of object hierarchies. Knowledge level learning can be demonstrated for simpler one-level objects without this added capability.

One implication of this demonstration is that caution must be exercised in classifying learning mechanisms as either symbol level or knowledge level. The distinction may not be as fundamental as it seems. In fact, other symbol level learning mechanisms, such as EBG, may also be able to produce knowledge level learning. A second implication of this demonstration is that chunking may not have been misnamed, and that it may be able to produce the full span of data chunking phenomena.

Three important items are left for future work. The first item is to extend the demonstrations provided here to more complex tasks. Work is currently underway on several projects that incorporate data chunking as part of a larger whole. In one such project, data chunking will be used during the acquisition of problem spaces for new tasks [14]. Work is also underway on more complex forms of knowledge level learning. In one such project, based on the work described in [5], analogical problem solving will be used as a basis for bottom-up (generalization-based) induction. In a second such project, top-down (discrimination-based) induction is performed during paired-associate learning (see also the next paragraph). Both of these latter two projects demonstrate what Dietterich termed nondeductive knowledge level learning [3].

The second item is to overcome a flaw in the way recall works.
The problem is that whenever a recall problem space is entered, all of the objects that the system has ever learned to recall are retrieved from production memory into working memory. If the system has remembered many objects, this may be quite a time-consuming operation. We have begun work on an alternative approach to recall that is based on a cued-recall paradigm. In this version, the system builds up a discrimination network of cues that tell it which objects should be retrieved into working memory. Early results with this version have demonstrated the ability to greatly reduce the number of objects retrieved into working memory. The results also demonstrate a form of discrimination-based induction that allows objects to be recalled based on partial specifications.

The third item is to use our data chunking approach as the basis for a psychological model of declarative learning and memory. There are already a number of promising indications: the resemblance between our model of recall and the generate-recognize theory of recall (see, for example, [15]); the resemblance between the discrimination network learned during cued recall and the EPAM model of paired-associate learning [4]; the resemblance of retrieval-by-partial-specification to the description-based memory model of Norman and Bobrow [11]; and the way in which both learning and retrieval are reconstructive processes in the cued recall model. These resemblances came about not because we were trying to model the human data, but because the constraints on the architecture forced us to approach the problems in the way we have.

References

1. Buschke, H. "Learning is organized by chunking." Journal of Verbal Learning and Verbal Behavior 15 (1976), 313-324.
2. DeJong, G., & Mooney, R. "Explanation-based learning: An alternative view." Machine Learning 1 (1986), 145-176.
3. Dietterich, T. G. "Learning at the knowledge level." Machine Learning 1 (1986), 287-315.
4. Feigenbaum, E. A., & Simon, H. A. "EPAM-like models of recognition and learning." Cognitive Science 8 (1984), 305-336.
5. Golding, A., Rosenbloom, P. S., & Laird, J. E. Learning general search control from outside guidance. Proceedings of IJCAI-87, Milan, 1987. In press.
6. Laird, J. E., Newell, A., & Rosenbloom, P. S. "Soar: An architecture for general intelligence." Artificial Intelligence 33 (1987). In press.
7. Laird, J. E., Rosenbloom, P. S., & Newell, A. "Chunking in Soar: The anatomy of a general learning mechanism." Machine Learning 1 (1986), 11-46.
8. Miller, G. A. "The magical number seven, plus or minus two: Some limits on our capacity for processing information." Psychological Review 63 (1956), 81-97.
9. Mitchell, T. M., Keller, R. M., & Kedar-Cabelli, S. T. "Explanation-based generalization: A unifying view." Machine Learning 1 (1986), 47-80.
10. Newell, A. "The knowledge level." AI Magazine 2 (1981), 1-20.
11. Norman, D. A., & Bobrow, D. G. "Descriptions: An intermediate stage in memory retrieval." Cognitive Psychology 11 (1979), 107-123.
12. Rosenbloom, P. S., & Laird, J. E. Mapping explanation-based generalization onto Soar. Proceedings of AAAI-86, Philadelphia, 1986.
13. Rosenbloom, P. S., Laird, J. E., & Newell, A. Meta-levels in Soar. Proceedings of the Workshop on Meta-Level Architecture and Reflection, Sardinia, 1986.
14. Steier, D. M., Laird, J. E., Newell, A., Rosenbloom, P. S., Flynn, R., Golding, A., Polk, T. A., Shivers, O. G., Unruh, A., & Yost, G. R. Varieties of Learning in Soar: 1987. Proceedings of the Fourth International Machine Learning Workshop, Irvine, 1987. In press.
15. Watkins, M. J., & Gardiner, J. M. "An appreciation of generate-recognize theory of recall." Journal of Verbal Learning and Verbal Behavior 18 (1979), 687-704.
1987
90
688
A Declarative Approach to Bias in Concept Learning

Stuart J. Russell
Computer Science Division
University of California
Berkeley, CA 94720

Benjamin N. Grosof
Computer Science Department
Stanford University
Stanford, CA 94305

Abstract

We give a declarative formulation of the biases used in inductive concept learning, particularly the Version-Space approach. We then show how the process of learning a concept from examples can be implemented as a first-order deduction from the bias and the facts describing the instances. This has the following advantages: 1) multiple sources and forms of knowledge can be incorporated into the learning process; 2) the learning system can be more fully integrated with the rest of the beliefs and reasoning of a complete intelligent agent. Without a semantics for the bias, we cannot generally and practically build machines that generate inductive biases automatically and hence are able to learn independently. With this in mind, we show how one part of the bias for Meta-DENDRAL, its instance description language, can be represented using first-order axioms called determinations, and can be derived from basic background knowledge about chemistry. The second part of the paper shows how bias can be represented as defaults, allowing shift of bias to be accommodated in a nonmonotonic framework.

Introduction

The standard paradigm for inductive concept learning as hypothesis refinement from positive and negative examples was discussed by John Stuart Mill (1843), and has since become an important part of machine learning research. The currently dominant approach to concept learning is that of a search through a predefined space of candidate definitions for one that is consistent with the data so far seen.

The approach that we are proposing is to view the process of learning a concept from examples as an inference process, beginning from declaratively expressed premises, namely the instances and their descriptions together with whatever else the system may know, and leading to a conclusion, namely (if the system is successful) a belief in the correctness of the concept definition arrived at. The premises should provide good reasons, either deductive or inductive, for the conclusions. One part of our project, begun in (Russell, 1986a), is therefore to show how existing knowledge can generate extra constraints on allowable or preferable hypotheses, over and above simple consistency with observed instances. These constraints were grouped by Mitchell (1980) under the term bias. This is perhaps an unfortunate term, since it suggests that we have something other than a good reason for applying these constraints. Mitchell himself concludes the paper with:

  It would be wise to make the biases and their use in controlling learning just as explicit as past research has made the observations and their use.

The most important reason for the declarative characterization of bias is that without it, concept learning cannot practically become an integral part of artificially intelligent systems. As long as the process of deciding on a bias is left to the programmer, concept learning is not something an AI system can do for itself. And as Rendell (1986) has shown, in typical AI concept learning systems, most of the information is contained in the choice of bias, rather than in the observed instances. We will therefore try to analyze biases to see what they mean as facts or assumptions about the world, i.e. the environment external to the program.
We will also need a plausible argument as to how a system could reasonably come to believe the premises of the deductive process; they should be automatically acquirable, at least in principle.

We will first describe the Version Space method and candidate elimination procedure of Mitchell (1978), and will show how the various types of bias present in this method can be represented as first-order statements. We illustrate this by formalizing part of the bias used in the Meta-DENDRAL system (Buchanan and Mitchell 1978), and deriving it from basic knowledge of chemistry. The second part of the paper deals with the question of bias shift: the process of altering a bias in response to observations that contradict or augment an existing bias. We show that this process can be formulated as a nonmonotonic deduction. (This paper is a condensation of two longer papers that are in preparation for publication elsewhere.)

Version Space Formulation

In this section we describe how the biases used in the Version Space method can be represented as sentences in first-order logic. The following section describes the process of updating the version space as a deduction from the bias and examples.

The Version Space method is the most standard AI approach to concept learning from examples. It equates the space of possible definitions of a target concept with the elements of a concept language, which is defined on a predicate vocabulary that consists of a set of basic predicates that apply to objects in the universe of instances of the concept. The predicates may be arranged into a predicate hierarchy, defined by subsumption relations between elements of the vocabulary. This in turn helps to define a concept hierarchy on all the possible candidate concept definitions in the concept language, based again on subsumption as a partial ordering. The programmer defines the initial version space to be the concept language, in the belief that the correct definition is expressible in the concept language chosen. In addition to the concept language, there is an instance description language. The system is also given a classification for each instance: either it is a positive example of the target concept Q, or it is a negative example. At any point in a series of observational updates, some subset (possibly a singleton or the empty set) of the candidate definitions will be consistent with all the observed instances. This subset is called the current version space. Further constraints may be used to choose one of the consistent hypotheses as the rule to be "adopted" - the preference criteria of Michalski (1983).

The VS approach has the following difficulties:

1. The framework cannot easily accommodate noisy data.
2. It is hard to incorporate arbitrary background knowledge.
3. It is very difficult to come up with a suitable concept language for complex or unfamiliar concepts. Moreover, there is no semantics attached to the choice, and hence no a priori generating mechanism.

By casting the updating process as a first-order inference, we hope to overcome the second and third problems; the first can be solved within a more complex, probabilistic model, or by using appropriate default rules (see below).

A. Concept descriptions and instances

The concept language, i.e. the initial version space, is a set C of candidate (concept) descriptions for the concept.
The concept hierarchy is a strict partial order defined over C. Each concept description is a unary predicate schema (open formula) C_j(x), where the argument variable is intended to range over instances. Mitchell defines the concept ordering in terms of matching: C_j is less general than C_k if and only if C_j matches a proper subset of the instances matched by C_k. In our formulation, this ordering is a logical relationship between concepts. As in (Subramanian & Feigenbaum 1986), the hierarchy is expressed as a set of facts relating the concepts by implication. The more natural ordering is the non-strict relationship ≤, representing quantified implication, where we define

  (A ≤ B) iff ∀x. A(x) ⇒ B(x)
  (A < B) iff (A ≤ B) ∧ ¬(B ≤ A)

This implication relationship between concept descriptions is also Buntine's generalized subsumption (1986). Background knowledge, including the predicate hierarchy, that can be used to derive ≤ relations between concepts is contained in an articulation theory Th_a (so called because it links different levels of description), so that C_j ≤ C_k iff, for any x, Th_a, C_j(x) ⊨ C_k(x). For example, if we are trying to induce a definition for SuitablePet, Th_a might contain ∀x[BarksALot(x) ⇒ Noisy(x)], which induces an ordering between C_j = Furry(x) ∧ BarksALot(x) ∧ EatsTooMuch(x) and the more general concept C_k = Noisy(x) ∧ EatsTooMuch(x). Thus the implication relations in the concept hierarchy do not have to be encoded explicitly for every pair of concepts.

An instance is just an object a in the universe of discourse. Properties of the instance are represented by sentences involving a. An instance description is then a unary predicate schema D, where D(a) holds. The classification of the instance is given by Q(a) or ¬Q(a). Thus the i-th observation, say of a positive instance, would consist of the conjunction D_i(a_i) ∧ Q(a_i). For example, we might have Cat(Felix) ∧ Furry(Felix) ∧ Eats(Felix, 50g/day) ∧ ... ∧ SuitablePet(Felix). A concept description C_j matches an instance a iff C_j(a). This must be derived on the basis of the description D of the instance; the derivation can use facts from the articulation theory Th_a (which thus links instance-level terms to concept-level terms). In order to have complete matching, which is necessary for the VS process to work (Mitchell, 1978), Th_a must entail either D_i ≤ C_j or D_i ≤ ¬C_j for any instance description D_i and any concept description C_j. When these relationships hold without relying on facts in the articulation theory, we have what is commonly known as the single-representation trick.
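For the restricted case of conjunctive concepts over unary predicates, the ≤ test can be read directly as code. The Python sketch below is a minimal illustration under that restriction, with the articulation theory given as a set of one-step implications (BarksALot ⇒ Noisy, as in the example above); it is not a general first-order entailment procedure, and the names are our own.

  # Th_a as one-step implications between unary predicates; a sketch only.
  articulation = {("BarksALot", "Noisy")}

  def entails(p, q):
      return p == q or (p, q) in articulation

  def leq(cj, ck):
      """cj <= ck: every conjunct of ck follows from some conjunct of cj."""
      return all(any(entails(p, q) for p in cj) for q in ck)

  cj = {"Furry", "BarksALot", "EatsTooMuch"}
  ck = {"Noisy", "EatsTooMuch"}
  print(leq(cj, ck), leq(ck, cj))   # -> True False: cj is strictly less general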
B. The instance language bias

Our orientation towards the handling of instances is considerably different from that in, say, the LEX system (Mitchell et al. 1983), in which instances are identified with syntactic structures, as opposed to being objects which happen to satisfy descriptive predicates. Logically speaking, an instance in Mitchell's system is a complex term, rather than a symbol described by sentences. Thus Felix would be represented by, say, (cat; furry; 50g/day; ...) instead of a set of sentences about Felix. Two instances with the same description become identical (and therefore co-referring) terms; it is therefore logically impossible for them to have different classifications.

This is clearly a non-trivial assumption, since it says that the instance description language contains enough detail to guarantee that no considerations that might possibly affect whether or not an object satisfies the goal concept Q have been omitted from its description. For this reason, we call it the Complete Description Assumption (CDA), and note that it may need to be reasoned about extensively. We therefore prefer to make it an explicit domain fact (or set of facts), i.e.

  (D_i ≤ Q) ∨ (D_i ≤ ¬Q) for every i.

Another way of expressing this fact is to say that D_i determines whether or not Q holds for an object. It therefore corresponds to the determination (Davies & Russell, 1987)

  D_i(x) ≻ kQ(x)

where k is a truth-value variable. The CDA can also be seen as the ability to do single-instance generalization:

  ∀a,k. {D_i(a) ∧ kQ(a)} ⇒ {∀z. D_i(z) ⇒ kQ(z)}

If the instance description language is infinite, then the CDA will be an infinite set of determinations. We can, however, rewrite it in many cases as a small set of axioms by using binary schemata. Take, for example, an instance description language consisting of a feature vector with n components, some of which, e.g. weight, shape, may have an infinite number of possible values. We can reduce the CDA for this infinite language to a single axiom, which says that the conjunction of the features determines whether or not an instance satisfies Q:

  F_1(x, y_1) ∧ ... ∧ F_n(x, y_n) ≻ kQ(x)

where F_j(x, y_j) says that x has value y_j for the j-th feature. Such a language appears in the ID3 system (Quinlan 1983).

It is clear from its formal expression that the instance language bias can only be derived from knowledge concerning the target concept itself. In a later section we will give just such a derivation using the Meta-DENDRAL system as an example.

Another perspective on the CDA is that the determinations underlying it tell us how to recognize and to handle extra or lacking information in observations. If the observational update is E_i(a_i) ∧ kQ(a_i), where E_i is stronger than D_i, the agent uses the determination to deduce the single-instance generalization ∀z. D_i(z) ⇒ kQ(z), not just the weaker ∀z. E_i(z) ⇒ kQ(z). If important information is lacking in the update, the agent can use its knowledge of relevancy both to exploit the partial information, and to generate a new goal to obtain the missing detail. Thus the declarative formulation suggests how to generalize the VS method to less structured learning situations.
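As a small illustration of how a determination licenses single-instance generalization, the sketch below treats the feature-vector CDA operationally: one classified instance immediately yields a rule that applies to every object with the same description. This is a hypothetical rendering for intuition, not a theorem prover; transitive chaining of determinations is omitted, and the names are assumptions.

  # The CDA determination F1 & ... & Fn >- kQ, used operationally:
  # one observation D_i(a) & kQ(a) licenses  forall z. D_i(z) => kQ(z).
  cda_rules = {}

  def observe(description, classified_q):
      cda_rules[tuple(description)] = classified_q   # single-instance generalization

  def classify(description):
      return cda_rules.get(tuple(description))       # None: no rule applies yet

  observe(("Red", "Large", "Circle"), True)          # one positive instance
  print(classify(("Red", "Large", "Circle")))        # -> True, for any such object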
C. The concept language bias

The heart of the Version Space method is the assumption that the correct target description is a member of the concept language, i.e. that the concept language bias is in fact true. We can represent this assumption in first-order form as a single Disjunctive Definability Axiom:

  ∨_{C_j ∈ C} (Q ≡ C_j)

(Here we abbreviate quantified logical equivalence with "≡" in the same way we defined "≤".) This axiom may be very long or even infinite; we can reduce an infinite DDA to a finite set of axioms using determinations, just as for the CDA.

Subramanian and Feigenbaum (1986) introduce the notion of a version space formed from conjunctive factors. We can express such a situation with an axiom that says the target concept Q is equivalent to a conjunction of concept factors Q_i, with an analogue of the DDA for each factor. If we can express the factor DDA's concisely using determinations, we then have a concise axiomatization of the overall DDA, e.g.:

  ∀x. Q(x) ≡ {Q_1(x) ∧ Q_2(x)}
  F_1(x, y_1) ≻ kQ_1(x)
  F_2(x, y_2) ≻ kQ_2(x)

In the Meta-DENDRAL domain, several kinds of information are omitted from the instance descriptions. First, some aspects are generally considered irrelevant, on physical grounds, to the behavior of a molecule in a mass spectroscope, though for other purposes such as reaction-rate calculations or NMR they are highly relevant. Secondly, some properties are ignored for atoms including those, such as identity and history, that we might ascribe to other objects. Few chemists worry about whether the atoms in a sample are known to their friends as Fred. Thirdly, properties that are determined by aspects already taken into account may also be ignored. For example, the mass, valency, electronegativity, and orbital structure of each of the atoms are relevant to the mass spectroscopy process; yet they are omitted from the instance description because they are determined by the chemical element to which the atom belongs.

The following is a derivation of the instance language bias starting from basic chemical facts. We know on quantum-mechanical grounds that, for any atom a,

  OrbitalStructure(a, o) ≻ ChemicalBehaviour(a, b_a)   (1)
  Element(a, e) ≻ OrbitalStructure(a, o)   (2)

implying:

  Element(a, e) ≻ ChemicalBehaviour(a, b_a)   (3)

since determinations on functional relations are transitive. We also have the following determinations for any molecule m:

  BondTopology(m, t) ∧ BehaviourOfNodes(m, b_n) ≻ MolecularChemicalBehaviour(m, b_m)   (4)
  StructuralFormula(m, s) ≻ BondTopology(m, t) ∧ NodeElements(m, n)   (5)
  MolecularChemicalBehaviour(m, b_m) ≻ MassSpectroscopicBehaviour(m, b_s)   (6)
  MassSpectroscopicBehaviour(m, b_s) ≻ kBreaks(m, c_s)   (7)

From (3), using the definitions of the predicates NodeElements and BehaviourOfNodes (omitted here), we can derive

  NodeElements(m, n) ≻ BehaviourOfNodes(m, b_n)   (8)

which we can combine with (4) to give

  BondTopology(m, t) ∧ NodeElements(m, n) ≻ MolecularChemicalBehaviour(m, b_m)   (9)

From (5), (9), (6) and (7) we have, again by transitivity, the instance language bias for Meta-DENDRAL given earlier:

  StructuralFormula(molecule, structure) ≻ Breaks(molecule, site)   (10)

The point of this section has not been to elucidate the intricacies of chemistry, but to show how in a "real-world" domain the factual content of part of the VS bias can be arrived at by a deduction from accepted premises representing background knowledge, and in particular to illustrate the use of determinations in expressing these premises. We can now (at least in part) automate the process of setting up a VS process.

To summarize the first part of the paper:

- We showed how to represent in first-order logic the bias in the pure Version Space (VS) method, which is the most standard AI approach to concept learning from examples. The most important part of the bias is implicit in the choice of the instance and concept candidate description languages. A learning system can thus derive its own initial version space from its background knowledge. We gave an account of such a derivation for the Meta-DENDRAL system for learning cleavage rules in mass spectroscopy.
- We showed the important role of a form of first-order axiom, determinations, in the VS method's bias. We identified a substantive component of the bias in the choice of the instance description language.
- We showed how to represent (pure) VS updating as deduction in first-order logic. Using a general theorem-prover, we can therefore incorporate arbitrary first-order background knowledge into the concept learning process.
- Our declarative analysis of VS bias suggests how to extend the VS method to less structured learning situations. The learning agent can use determination-form knowledge to actively identify the relevant aspects of its inputs.

As designers of learning agents, instead of starting with an algorithm and some contrived inputs, we should instead examine what knowledge is typically available about the target concept, and then show how it may be used efficiently to construct plausible rules for concluding concept definitions, given examples. This more first-principles attitude is facilitated by a declarative approach.

We had difficulty declaratively formulating some other kinds of bias which are defined in terms of computational-resource-oriented bounds on data structures or syntactic properties of descriptions, e.g. limits on the sizes of VS boundary sets, and limits on negation or disjunction. The latter seems sometimes to represent real "semantic" knowledge, e.g. the vocabulary choice in LEX (Mitchell et al. 1983); exactly how is unclear. We suspect that possession of a good vocabulary is a sine qua non of inductive success.

The Version Space method can now be implemented as a deductive process using the instance observations and a declaratively expressed 'bias'. In this part of the paper, we address the issue of inductive leaps, and the shifts of the biases underlying them, in concept learning. We begin by observing that, viewed declaratively, inductive leaps and shifts of bias are non-monotonic. We develop a perspective on shifts of bias in terms of preferred beliefs. We then show how to express several kinds of shifts of "version-space" bias, as deductions in a new, non-monotonic formalism of prioritized defaults, based on circumscription. In particular, we show how to express 1) moving to a different, e.g. less restrictive, concept language when confronted by inconsistency with the observations; and 2) the preference for more specific/general descriptions (definitions) of a concept.
Inductive Leaps and Shifts of Bias are Non-Monotonic

If the target concept is guaranteed to lie within the closure of its initial bias, no "inductive leap" is required to reach a definition for the target concept. The potential for retraction is essential to novelty in an inductive learning process. In other words, useful concept learning must be treated as non-monotonic inference. When we ascribe a declarative status to bias as something that the agent believes about the external world, then the agent's believed set of sentences in general evolves non-monotonically.

Since we have shown the pure VS method to be monotonic deduction, in what sense is it "inductive", in the sense of making inductive leaps? Our answer would be that in practice, the VS method instantiated with a particular initial version space is used as a sub-program: in advance it is not known whether that initial version space will be expressively adequate. The potential for shift of bias, especially of concept language bias, is vital to a VS-style learning program's inductive character. We will use a non-monotonic formalism to study shift of bias in a declarative framework.

Several researchers have identified the automation of the shift of concept language bias, e.g. as in the VS method, as a prime outstanding problem in machine learning.

  "Methods by which a program could automatically detect and repair deficiencies in its generalization language would represent a significant advance in this field." (Mitchell 1982, section 6.1)

  "Automatic provision or modification of the description space is the most urgent open problem facing automatic learning." (Bundy et al. 1985, section 7.3)

One common strategy for a learning agent, e.g. in the STABB system for shifting concept language bias (Utgoff 1984, 1986), and in the Meta-DENDRAL system for learning cleavage rules in mass spectroscopy (Mitchell 1978), is to start with a strong bias, which aids focus and provides a guide to action, and then relax when needful to a weaker bias. This shift is triggered by falling below some acceptability threshold on an evaluation criterion for the working theory. Often the criterion is an unacceptable degree of inconsistency with the observed instances. Note that new information or pragmatic constraints may also lead the agent to strengthen its bias.

At bottom of the declarative impulse is the desire to characterize as stably as possible the justifying basis for the agent's beliefs. In this light, to the extent that bias is formulated in such a way that it shifts, then to that extent its formulation fails to be satisfactorily deep. We thus look for a way to formulate deep bias as a set of premises which are highly stable, yet which suffice to justify shifty bias and shifty belief. The notion of a default in non-monotonic logical formalisms offers the form of exactly such a stable premise. If we represent the trigger condition for retracting bias as strict logical inconsistency of the bias with the instance observations (as in STABB), then we can neatly use a nonmonotonic formalism.

We can view a default as a preferred belief. That is, we prefer to believe the default if it is consistent with our other, non-retractible, beliefs. If the non-retractible beliefs contradict a default, it is retracted. In general, however, defaults may conflict with each other. It is useful, therefore, to express preferences, a.k.a. priorities, between defaults, as well. In cases of conflict, the agent prefers to believe the default with higher priority. If neither has higher priority, then the agent believes merely that one must be false without saying which. We can regard non-retractible beliefs as having infinite priority.

Our approach to shifts of bias, then, is to express them as the results of retracting different concept language biases, represented as defaults. Stronger and weaker retractible biases co-exist: when both are consistent, the stronger ones hide the weaker. When the stronger become inconsistent before the weaker, we see a dynamic relaxation or weakening of bias.

For now, we will treat instance observations as non-retractible. However, we might make them have less than infinite priority if we wished to deal with noise or inaccuracy in observations, or to tolerate a degree of inconsistency with the observations rather than reject elegant hypotheses.

V. Prioritized Defaults

Several different nonmonotonic formalisms can express defaults, more or less. Of these, circumscription (McCarthy 1986; Lifschitz 1986) has a number of advantages. It is relatively well-understood mathematically, especially semantically, and can express priorities gracefully. The formalism we employ to describe biases is a meta-language for specifying circumscriptive theories.

In our language of prioritized defaults, there are four kinds of axioms. A non-monotonic theory NMCLOSURE(A) is defined as the closure under non-monotonic entailment of a set of axioms A.
Base axioms are just non-retractible, first-order axioms:

  bird(Tweety)
  ostrich(Joe)
  ¬flies(Hulk)
  ∀x. ostrich(x) ⇒ bird(x)

Default axioms have the form of labelled first-order formulas. They express preferred, but retractible, beliefs. Default axioms may take the form of open, as well as closed, formulas. An open formula is in effect a schema expressing the collection of defaults corresponding to the instantiations of the schema.

  (d1:)  :>  bird(x) ⇒ flies(x)
  (d2:)  :>  ostrich(x) ⇒ ¬flies(x)

Prioritization axioms express priorities between defaults. One default having higher priority than a second means that in case of conflict between the two, the first rather than the second will be entailed by the non-monotonic theory. Thus the following axiom says that the ostrich default is preferred to the bird default.

  PREFER(d2, d1)

This corresponds to inheritance hierarchies, for example, where the slot value (flying) for a more specific class (ostriches) takes precedence over the slot value for a more general class (birds). Fixture axioms express constraints on the scope of the defaults' non-monotonic effects. They declare that the truth of certain formulas can only be entailed monotonically.

  FIX(bird(x))

Taking the above set of axioms as A, then the non-monotonic theory NMCLOSURE(A) contains flies(Tweety), by default. Both default axioms apply to Joe, since he is both an ostrich and a bird, but they conflict. The prioritization axiom resolves the conflict. It tells us to prefer the ostrich default. Thus NMCLOSURE(A) entails ¬flies(Joe). The fixture axiom comes into play by preventing the conclusion that Hulk is not a bird, which the consistency of the bird default for the instance Hulk seems to tell us to make.

Now we show how to use our logic of prioritized defaults to describe an agent that starts with a strong concept language bias and shifts so as to weaken it in the face of inconsistency with observations. Space limits us to a simple example; we adapt one from (Mitchell 1982). The agent has an initial bias and two weaker, back-up biases, the weakest being just the instance language bias itself.

The available observations describe each instance as a feature vector of color (red or blue), size (large or small), and shape (circle or triangle). The instance language bias says that the target concept is determined by these three features taken together. The initial concept language bias CL1 is that the concept is equivalent to a conjunction of a Color atom and a Size atom. A second, fall-back bias CL2 is that the concept is equivalent to a conjunction of a Color atom, a Size atom, and a Shape atom. The instance language bias IL and the observational updates OU^i are expressed as base axioms. The concept language biases are expressed as defaults. In addition, we assume the Unique Names Assumption (so Red ≠ Blue etc.).
IL:
  { ∀x. ∃!y. Color(x, y) }
  { ∀x. ∃!y. Size(x, y) }
  { ∀x. ∃!y. Shape(x, y) }
  { ∀x,y. Color(x, y) ⇒ {(y = Red) ∨ (y = Blue)} }
  { ∀x,y. Size(x, y) ⇒ {(y = Large) ∨ (y = Small)} }
  { ∀x,y. Shape(x, y) ⇒ {(y = Circle) ∨ (y = Triangle)} }
  { Color(x, y1) ∧ Size(x, y2) ∧ Shape(x, y3) ≻ kQ(x) }

CL1:
  { Color(x, y) ≻ kQF1(x) } ∧ { Size(x, y) ≻ kQF2(x) } ∧ { ∀x. Q(x) ≡ {QF1(x) ∧ QF2(x)} }

CL2:
  { Color(x, y) ≻ kQFF1(x) } ∧ { Size(x, y) ≻ kQFF2(x) } ∧ { Shape(x, y) ≻ kQFF3(x) } ∧ { ∀x. Q(x) ≡ {QFF1(x) ∧ QFF2(x) ∧ QFF3(x)} }

OU1:
  Q(a1) ∧ Q(a2) ∧ ¬Q(a3) ∧
  Color(a1, Red) ∧ Size(a1, Large) ∧ Shape(a1, Circle) ∧
  Color(a2, Red) ∧ Size(a2, Small) ∧ Shape(a2, Circle) ∧
  Color(a3, Blue) ∧ Size(a3, Small) ∧ Shape(a3, Triangle)

OU2:
  ¬Q(a4) ∧ Color(a4, Red) ∧ Size(a4, Large) ∧ Shape(a4, Triangle)

OU3:
  Q(a5) ∧ Color(a5, Blue) ∧ Size(a5, Small) ∧ Shape(a5, Circle)

OU4:
  Q(a6) ∧ Color(a6, Blue) ∧ Size(a6, Large) ∧ Shape(a6, Triangle)

The agent's starting axioms A0 are:

  Base axioms: IL (the instance language bias).
  Default axioms: (d3:) :> CL1   (d4:) :> CL2
  Prioritization axioms: none.
  Fixture axioms: FIX(Color(x))  FIX(Size(x))  FIX(Shape(x))

Let A^m denote A0 ∧ OU1 ∧ ... ∧ OU^m, i.e. the agent's axioms after the m-th observational update. The agent's working inductive theory WIT^m is then equal to NMCLOSURE(A^m).

In WIT^1, i.e. after the first update, the initial concept language bias is uncontradicted, so it holds by default. That is, CL1 is consistent (and thus so is the weaker CL2) and thus holds. The corresponding version space has been refined to a single candidate; the agent's working inductive hypothesis is that the concept is that the color is red.

  WIT^1 ⊨ CL1 ∧ CL2 ∧ {∀x. Q(x) ≡ Color(x, Red)}

The second update, however, contradicts this hypothesis and the initial concept language bias. Thus in WIT^2, CL1 is retracted. However, CL2 is still consistent and thus holds: the agent shifts to the fall-back. The corresponding version space has two members.

  WIT^2 ⊨ ¬CL1 ∧ CL2 ∧ { {∀x. Q(x) ≡ Shape(x, Circle)} ∨ {∀x. Q(x) ≡ {Color(x, Red) ∧ Shape(x, Circle)}} }

After the third update, the version space is again refined to a single candidate:

  WIT^3 ⊨ ¬CL1 ∧ CL2 ∧ {∀x. Q(x) ≡ Shape(x, Circle)}

However, the fourth update contradicts this hypothesis, i.e. even the fall-back bias. Thus in WIT^4 the agent retracts the fall-back, i.e. CL2 as well as CL1.

  WIT^4 ⊨ ¬CL1 ∧ ¬CL2

The agent is then left with a vacuous concept language bias which does not go beyond the instance language bias. The version space consists of all subsets of the "describable instances" consistent with the observations. Here, there are four of these, corresponding to the possible combinations of classifications for blue large circles and red small triangles.

In addition to the concept and instance language bias, we can also represent some types of preference bias, including maximal specificity/generality bias, i.e., the preference for the most specific/general among concept descriptions that are consistent with all observations. This corresponds to minimizing/maximizing the extension of the goal concept Q, and hence to the following default axioms:

  Maximal Specificity Axiom:  (d5:) :> ¬Q(x)
  Maximal Generality Axiom:   (d6:) :> Q(x)

In order to express the fact that an agent employs (say) maximal generality bias, we just include the Maximal Generality Axiom in the agent's axioms, bearing in mind that maximal generality as a preference may conflict with other defaults.
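Before turning to the interaction of preference defaults with the concept language defaults, it may help to re-enact the retraction sequence concretely. The Python sketch below is a toy re-enactment of WIT^1 through WIT^4, not a circumscription engine: each concept language is enumerated as a set of conjunctive hypotheses (None marking an unconstrained attribute), and a bias counts as "retracted" exactly when its version space empties. All encodings and names are our own assumptions.

  from itertools import product

  VALUES = {"Color": ("Red", "Blue"), "Size": ("Large", "Small"),
            "Shape": ("Circle", "Triangle")}

  def language(attrs):
      choices = [list(VALUES[a]) + [None] for a in attrs]
      return [dict(zip(attrs, vs)) for vs in product(*choices)]

  CL1 = language(("Color", "Size"))            # default d3, the stronger bias
  CL2 = language(("Color", "Size", "Shape"))   # default d4, the fall-back

  def matches(h, ex):
      return all(v is None or ex[a] == v for a, v in h.items())

  def version_space(lang, obs):
      return [h for h in lang if all(matches(h, ex) == q for ex, q in obs)]

  a = lambda c, s, sh: {"Color": c, "Size": s, "Shape": sh}
  updates = [  # OU1 ... OU4
      [(a("Red", "Large", "Circle"), True), (a("Red", "Small", "Circle"), True),
       (a("Blue", "Small", "Triangle"), False)],
      [(a("Red", "Large", "Triangle"), False)],
      [(a("Blue", "Small", "Circle"), True)],
      [(a("Blue", "Large", "Triangle"), True)]]

  obs = []
  for m, ou in enumerate(updates, 1):
      obs += ou
      vs1, vs2 = version_space(CL1, obs), version_space(CL2, obs)
      bias = "CL1" if vs1 else "CL2" if vs2 else "vacuous"
      print(f"WIT^{m}: {bias}, candidates: {vs1 or vs2}")

Running this prints CL1 with the single candidate Color=Red after OU1, the two-member CL2 version space after OU2, the single candidate Shape=Circle after OU3, and the vacuous bias after OU4, mirroring the derivations above.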
In our example above, intuitively what we would like is to apply (say) maximal generality only after attempting to adopt the default concept language biases. To express this formally, we need to ensure that the Maximal Generality Axiom has lower priority than the defaults corresponding to the retractible concept language biases, e.g. by including PREFER(d4, d6) in the agent's axioms. Thus in the above example, after the second update the agent would adopt the more general of the two candidates above as its working hypothesis:

  WIT^2_MaxGen ⊨ ¬CL1 ∧ CL2 ∧ { ∀x. Q(x) ≡ Shape(x, Circle) }

In this part of the paper we attempted to show how bias shift could be dealt with in our declarative framework.

- We observed that from a declarative point of view, inductive leaps, and shifts of the biases which justify them, are non-monotonic.
- We showed how to declaratively represent shifts of bias, i.e. "shifty" bias, using a new language of prioritized defaults, based on circumscription, for "version-space"-type concept language bias.
- We showed that the maximal specificity and maximal generality biases are formulable quite simply: as negative and positive default belief, respectively, about the target concept. Thus we have a logical, semantic formulation for these preference-type biases which Dietterich (1986) listed as "syntactic" and "symbol-level".

Thus we can view inference that is non-deductive at the level of first-order logic, i.e. that is inductive, as deduction in another "knowledge level" associated with non-monotonic beliefs. This allows the use of arbitrary-form non-monotonic "background knowledge". The non-monotonic viewpoint suggests formulating shifts among base-level bias sentences as defeasible "shifty" bias sentences. How to efficiently implement such inference is an open question which we are currently investigating. See (Grosof 1987) for a discussion of implementation issues.

Our declarative formulation also poses the question of the source of the defaults and preferences among beliefs which are the "shifty" premise biases of inductively leaping agents. In our view, the justification of inductive leaps arises not just from probabilistic beliefs, but also from the pressure to decide, i.e. the need to act as if one knows. Because the information about which the agent is quite confident is incomplete, it requires an additional basis to decide how to act. (Since the agent acts some way, we can declaratively ascribe a working hypothesis to its decision principle.) A second reason why bias is needed is that the agent has computational limits on how many inductive hypotheses it can consider, and in what sequence. Thus we expect that the justification for bias is largely decision-theoretic, based both on probabilities and utilities.

We are currently investigating, in addition to implementation issues, how to extend our approach to several other aspects of inductive theory formation, including 1) tolerance for noise and errors; 2) preferences for more likely hypotheses; 3) preferences for simpler hypotheses, as in Occam's Razor; and 4) the decision-theoretic basis for bias preferences.

Acknowledgements

We would particularly like to thank Michael Genesereth and Vladimir Lifschitz for their interest, criticism, and technical help. Thanks also to Devika Subramanian, Haym Hirsh, Thomas Dietterich, Bruce Buchanan, David Wilkins, and the participants in the GRAIL, MUGS, and Nonmonotonic Reasoning seminars at Stanford for valuable discussions.
References

[1] Buchanan, B. G., and Mitchell, T. M., "Model-directed Learning of Production Rules". In Waterman, D. A., and Hayes-Roth, F., (Eds.) Pattern-directed Inference Systems. New York: Academic Press, 1978.
[2] Bundy, Alan, Silver, Bernard, and Plummer, Dave, "An Analytical Comparison of Some Rule-Learning Programs". In AI Journal, Vol. 27, 1985.
[3] Buntine, W., "Generalized Subsumption and its Application to Induction and Redundancy". In Proceedings of ECAI-86, Brighton, UK, 1986.
[4] Davies, Todd. "Analogy". Informal Note CSLI-IN-85-4, CSLI, Stanford, 1985.
[5] Davies, Todd R. and Russell, Stuart J., "A Logical Approach to Reasoning by Analogy". In Proceedings of IJCAI-87, Milan, Italy, 1987.
[6] Dietterich, Thomas G., "Learning at the Knowledge Level". In Machine Learning, Vol. 1, No. 3, 1986.
[7] Genesereth, M. R., "An Overview of Meta-Level Architecture". In Proceedings of AAAI-83, pp. 119-124, 1983.
[8] Grosof, Benjamin N., Non-Monotonic Theories: Structure, Inference, and Applications (working title). Ph.D. thesis (in preparation), Stanford University, 1987.
[9] Lifschitz, Vladimir, "Pointwise Circumscription". In Proceedings of AAAI-86, pp. 406-410, 1986.
[10] McCarthy, John, "Applications of Circumscription to Formalizing Common-Sense Knowledge". In Artificial Intelligence, Vol. 28, No. 1, pp. 89-116, Feb. 1986.
[11] Michalski, R. S., "A Theory and Methodology of Inductive Learning." Artificial Intelligence, Vol. 20, No. 2, 1983.
[12] Mill, J. S., System of Logic (first published 1843). Book III, Ch. XX, 'Of Analogy', in Vol. VIII of Collected Works of John Stuart Mill. University of Toronto Press, 1973.
[13] Mitchell, Tom M., Version Spaces: an Approach to Concept Learning. Ph.D. thesis, Stanford University, 1978.
[14] Mitchell, Tom M., "The Need for Biases in Learning Generalizations". Rutgers University TR CBM-TR-117, 1980.
[15] Mitchell, Tom M., "Generalization as Search". In Artificial Intelligence, Vol. 18, No. 2, pp. 203-226, March 1982.
[16] Mitchell, T. M., Utgoff, P., and Banerji, R., "Learning by Experimentation: Acquiring and Refining Problem-Solving Heuristics". In Carbonell, J. G., Michalski, R., and Mitchell, T., (Eds.) Machine Learning: an Artificial Intelligence Approach. Palo Alto, CA: Tioga Press, 1983.
[17] Quinlan, J. R., "Learning Efficient Classification Procedures and their Application to Chess End Games". In Carbonell, J. G., Michalski, R., and Mitchell, T., (Eds.) Machine Learning: an Artificial Intelligence Approach. Palo Alto, CA: Tioga Press, 1983.
[18] Rendell, Larry, "A General Framework for Induction and a Study of Selective Induction". Machine Learning, 1, 1986.
[19] Russell, Stuart J., The Compleat Guide to MRS. Technical Report No. STAN-CS-85-1080, Stanford University, 1985.
[20] Russell, Stuart J., "Preliminary Steps Toward the Automation of Induction." In Proceedings of AAAI-86, Philadelphia, PA, 1986.
[21] Russell, Stuart J., Analogical and Inductive Reasoning. Ph.D. thesis, Stanford University, Dec. 1986.
[22] Subramanian, Devika, and Feigenbaum, Joan, "Factorization in Experiment Generation". In Proceedings of AAAI-86, pp. 518-522, 1986.
[23] Utgoff, P. E., Shift of Bias for Inductive Concept Learning. Ph.D. thesis, Rutgers University, 1984.
1987
91
689
Learning and Representation Change

Jeffrey C. Schlimmer*
Department of Information and Computer Science
University of California, Irvine 92717
schlimmer@ics.uci.edu

To remain effective without human interaction, intelligent systems must be able to adapt to their environment. One useful form of adaptation is to incrementally form concepts from examples for the purposes of inference and problem-solving. A number of systems have been constructed for this task, yet their capability is limited by the language used to represent concepts. This paper presents an extension to the concept acquisition system STAGGER that allows it to utilize continuously valued attributes. The combination of methods employed is able to dynamically acquire appropriate representations, thereby minimizing the impact of initial representational bias decisions. Of additional interest is the distinction between the computational flavor of the learning methods, for one is similar to connectionist approaches while the other two are of a more symbolic nature.

Consider the task of constructing a concept description given a series of examples and non-examples. This has been addressed by a number of learning systems, yet many have limited capability due to inflexibility inherent in the concept representational language. If the language is too restrictive, there will be some concepts which cannot be represented or learned. The restriction imposed by the concept representation language is necessary, though, and without it the learning method could do no better than to guess randomly at the concept's definition (Utgoff & Mitchell, 1982). To alleviate this bind between flexibility and tractability, a system might modify the underlying representational language in some way or another, either increasing the language to accommodate more possible concepts or reducing it to improve the possibility of finding an appropriate concept description.

This paper describes a two part extension to a concept acquisition system called STAGGER (Schlimmer & Granger, 1986). First, a new method is added that discretizes continuously valued attributes. Secondly, by combining this new method with the existing methods that weight and refine a distributed concept description, STAGGER is able to overcome limitations inherent in its initial concept language. After briefly describing some related work, this paper describes each of the three learning methods and then demonstrates their interaction.

*This work was supported by grants ONR N00014-85-K-0854, ONR N00014-84-K-0391, ARI MDA903-85-C-0324, and NSF IST-85-12419.

II. Related Work

A number of researchers have studied the problem of utilizing numerically valued information in symbolic concept learning. For example, Michalski (1983) presents the closing interval generalization rule. It specifies that if two values are found in positive examples, assume that all of the values between them will be. This mapping from continuous to discrete values is similar to the type of approach used by Lebowitz's (1985) UNIMEM which partitions real values in both a generalization and a data-driven manner. In the first, clusters of examples formed on the basis of discrete attributes partition a real range by implicitly grouping the values. The latter, data-driven technique searches a subset of the numeric values present for gaps indicated by the distribution of real values across objects.
Quinlan's (1986) ID3 system forms a number of competing pairs of intervals, centered around potential splits in the real-value range. ID3 then considers these intervals as possible decision tree roots by interpreting them as binary valued attributes.

A distinctly different approach for handling numeric information is taken by Bradshaw (1985) in his speech understanding system. Instead of attempting to map the continuous information representing a verbal utterance into some set of symbolic values, his system retains a set of attribute averages for each word concept.

Rendell's (1986) PLS family of concept induction systems incorporate both partitioning and averaging approaches. In some instances concepts are represented as "rectangles" or value ranges, while in other cases concepts appear to be described in terms of their central values.

Along representational lines, Utgoff and Mitchell (1982) were perhaps the first to address the issue of constrictive representational assumptions. In this and subsequent work (Utgoff, 1986) they develop a method which explicitly identifies shortcomings and triggers procedures for relaxing the descriptive language.

STAGGER's numeric learning belongs in the partitioning class, for it divides real values into a dynamically determined number of discrete ones. Perhaps more importantly, interaction between the three learning components of STAGGER alleviates the impact of an ill-fitting initial concept description language. This representational adjustment occurs in a continuous and natural manner; each of the methods assists the others while performing its own task.

STAGGER uses three interacting learning components and represents concepts as a set of weighted, symbolic description pieces. One of the learning methods adjusts the weights, another adds new Boolean pieces, and the third adds new pieces corresponding to an aggregation of real-values into a few discrete ones. The interaction between these methods may be viewed as a form of representational learning, for they exert influence on each other by changing the substrate from which induction proceeds.

A. Concept representation and matching

Concepts are represented in STAGGER as a set of dually-weighted, symbolic pieces. Each element of the concept description may be a single attribute-value pair, a range of acceptable values for a real-valued attribute, or a Boolean combination. Figure 1 depicts a typical concept description for size=medium & color=red. Each descriptive element is dually weighted to capture positive and negative implication. One weight formalizes the element's sufficiency (solid line), or matched ⇒ example, and the other represents its necessity (dashed line), or ¬matched ⇒ ¬example. These weights are based on the logical sufficiency (LS) and logical necessity (LN) measures used in Prospector (Duda, Gaschnig, & Hart, 1979).

  LS = p(matched | example) / p(matched | ¬example)
  LN = p(¬matched | example) / p(¬matched | ¬example)   (1)

LS ranges from zero to infinity and is interpreted in terms of odds. (To convert odds into probability, divide odds by one plus odds.) A weight greater than one indicates predictiveness; less than one denotes an element that predicts non-examples. LN has the same range but the opposite interpretation. For both, a weight of one indicates irrelevance.

  [Figure 1. A medium & red concept description.]
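As a minimal sketch of how these dual weights behave (anticipating the count-based estimates given as Equation 3 in the next subsection, and the matching rule given as Equation 2), the Python fragment below keeps a 2x2 count table for one description element and derives LS and LN from it. The class name, the 0.5 prior for unseen counts, and the small-denominator guards are our own assumptions, not STAGGER's actual code.

  class Element:
      """One weighted description piece; counts matches against examples."""
      def __init__(self):
          self.n = {(m, e): 0 for m in (True, False) for e in (True, False)}

      def update(self, matched, is_example):
          self.n[(matched, is_example)] += 1       # incremental counting

      def p(self, matched, is_example):
          total = self.n[(True, is_example)] + self.n[(False, is_example)]
          return self.n[(matched, is_example)] / total if total else 0.5

      def ls(self):   # p(matched | example) / p(matched | -example)
          return self.p(True, True) / max(self.p(True, False), 1e-6)

      def ln(self):   # p(-matched | example) / p(-matched | -example)
          return self.p(False, True) / max(self.p(False, False), 1e-6)

  def match_odds(prior_odds, weighted_matches):
      """Equation 2: multiply in LS for matched pieces, LN for unmatched."""
      odds = prior_odds
      for element, matched in weighted_matches:
          odds *= element.ls() if matched else element.ln()
      return odds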
Given a new example, all of the weighted concept elements influence expectation of its identity. Following the mechanism used in (Duda et al., 1979), the prior expectation of a positive example is metered by multiplying in the LS weight of each matching piece and the LN weight of each unmatched one.

odds(E | features) = odds(E) × ∏_{i ∈ matched} LS_i × ∏_{j ∈ ¬matched} LN_j    (2)

The resulting matching score is the odds in favor of a positive example and reflects the degree of match between the concept description and the example. This holistic flavor of matching differs from many machine learning systems in which a single characterization completely influences concept prediction.

B. Modifying element weights

The weights associated with each of the concept description elements are easily adjusted by incrementally counting the number of different matches between an element and examples; these counts are used to compute estimates of the probabilities in Equation 1.

LS = (|matched & example| / |examples|) / (|matched & ¬example| / |¬examples|)
LN = (|¬matched & example| / |examples|) / (|¬matched & ¬example| / |¬examples|)    (3)

Keeping counts of matchings between elements and examples also allows calculating the prior expectation for an example: odds(example) = |examples| / |¬examples|.

By adjusting the weights associated with each of the descriptive elements, STAGGER is behaving as a single layer connectionist model. Without hidden units, these models suffer from the same representational limitations that STAGGER does without its Boolean learning method. Both are unable to assign weights to a combination of values, and this severely limits the number of discoverable concepts to only linearly-separable ones. From a representational point of view, the weight method can only form concepts in terms of existing description elements. Instead of including an element for all possible Boolean combinations of the attribute-values, this method begins with the relatively strong bias of only single attribute-value elements.

C. Forming new Boolean combinations

STAGGER selectively adds new elements by beam-searching through the space of all possible Booleans. The initial search frontier is the set of single attribute-value pairs. The Boolean method uses three search operators, specialization, generalization, and inversion, to add new elements to the search frontier. The search is limited by proposing a new element only when STAGGER makes an expectation error. This cautiously extends the descriptive power of the weight adjusting process while retaining the constructive properties of a limited representation.

For instance, when a non-example is expected to be a positive example, STAGGER is behaving too inclusively, too generally, and thus a more specific element may be needed. So, STAGGER expands the search frontier by tentatively adding a new AND formed from two elements which are necessary for the concept. The selection of component elements is based on two observations: at least one necessary element is unmatched in a non-example, and necessary elements typically have strong logical necessity weights.

The other type of prediction error also triggers the expansion. A guess that a positive example is negative is overly specific. To correct for this underestimation, search is expanded to include a more general element; a new OR formed from two sufficient elements is tentatively added. Both predictive errors are opportunities to invert poor elements.
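Before moving on, the weighting machinery of Equations 1-3 can be made concrete. The following is a minimal Python sketch under our own assumptions; the Element class, the Laplace-style smoothing of the count ratios, and all names are illustrative, not part of STAGGER's actual implementation.

class Element:
    """One descriptive piece, e.g. color=red, with incremental match counters."""
    def __init__(self, attribute, value):
        self.attribute, self.value = attribute, value
        # counts[(matched?, positive?)] -> number of examples seen
        self.counts = {(m, p): 0 for m in (True, False) for p in (True, False)}

    def matches(self, example):
        return example.get(self.attribute) == self.value

    def update(self, example, positive):
        self.counts[(self.matches(example), positive)] += 1

    def _ratio(self, matched):
        pos, neg = self.counts[(matched, True)], self.counts[(matched, False)]
        n_pos = sum(self.counts[(m, True)] for m in (True, False))
        n_neg = sum(self.counts[(m, False)] for m in (True, False))
        # smoothing to avoid division by zero is our own assumption
        return ((pos + 1) / (n_pos + 2)) / ((neg + 1) / (n_neg + 2))

    def ls(self):   # logical sufficiency: p(matched|example) / p(matched|~example)
        return self._ratio(True)

    def ln(self):   # logical necessity: p(~matched|example) / p(~matched|~example)
        return self._ratio(False)

def match_odds(elements, example, prior_odds):
    """Equation 2: multiply prior odds by LS of matched, LN of unmatched pieces."""
    odds = prior_odds
    for e in elements:
        odds *= e.ls() if e.matches(example) else e.ln()
    return odds

e = Element("color", "red")
e.update({"color": "red"}, positive=True)
e.update({"color": "blue"}, positive=False)
print(e.ls(), e.ln(), match_odds([e], {"color": "red"}, prior_odds=1.0))

The holistic character of Equation 2 shows up here directly: every element contributes a factor to the score, so no single element can veto or force a prediction on its own.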
Further details concerning the Boolean method are documented in (Schlimmer & Granger, 1986).

Though the space of possible Boolean combinations is large, it does not include states for numerically valued attributes. Therefore, STAGGER has a third learning method which extends this space by adding discrete values for real value ranges.

D. Partitioning real-valued attributes

In order to carve up an attribute's real-valued range into a set of discrete intervals, STAGGER retains a simple statistic for a number of potential interval end-points. These end-points are taken from processed examples and, through a beam-search, the best are utilized to naturally break up the range into discrete values. Each new example supplies a value to update these statistics, and in turn this method transforms successive examples into a palatable form for the weight and Boolean learning methods.

Specifically, for each potential end-point, a two by two record is kept of the number of positive and negative examples with values less and greater than this potential end-point. A measure applied to these numbers indicates useful divisions in the value range. By interpreting these divisions as the end points of discrete values, STAGGER maps the real-valued attributes of subsequent instances into discrete values. The utility measure is similar to Equation 2, for it involves the prior odds of each class and a conditional probability ratio similar to LS and LN.

U(end-point) = Σ_{i=1}^{|classes|} odds(class_i) × p(class_i | value < end-point) / p(class_i | value > end-point)    (4)

These conditional probabilities may be computed from the number of positive and negative examples with values less and greater than the measured end-point. Since the potential end-points are taken from actual examples, the method is independent of scale considerations and does not entail any assumptions about the range of values. Furthermore, because partitioning is driven by a statistic based on class information, the method is able to uncover effective partitionings even when the values are uniformly distributed across all classes, something a gap finding method (Lebowitz, 1985) is unable to do.

A straightforward strategy for fractioning the real-value range would be to choose the end-point with a maximal utility and thereby divide the range into two discrete values: greater and less. However, for concept learning tasks which require finer distinctions, it would be more effective to divide the range into a number of discrete values. So after applying a local smoothing function, STAGGER chooses the end-points that are locally maximal. These end-points represent pivotal values, for Equation 4 favors those that are predictive of concept identity. This approach has the advantage that it naturally selects appropriate end-points and an effective number of discrete values.

Having aggregated the real-valued range, attributes with continuous values in successive examples may be transformed into their discrete counterparts. This mapping results in example descriptions that are consistent with the input requirements of both the weight and Boolean learning methods described above. Furthermore, it embodies a type of representational learning, because by partitioning values into ranges, the concept description language for the other learning methods is expanded.
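A sketch of the end-point statistic follows, assuming the two by two count record described above; the smoothing constants, the local-maxima rule, and all names are illustrative assumptions, not STAGGER's own code.

def table_for(examples, end_point):
    """Build the 2x2 record for one candidate end-point.
    examples: list of (value, positive?) pairs."""
    t = {(c, b): 0 for c in (True, False) for b in (True, False)}
    for value, positive in examples:
        t[(positive, value < end_point)] += 1
    return t

def utility(table):
    """Equation 4 over the two classes, with smoothed estimates."""
    n = {c: table[(c, True)] + table[(c, False)] for c in (True, False)}
    below = table[(True, True)] + table[(False, True)]
    above = table[(True, False)] + table[(False, False)]
    u = 0.0
    for c in (True, False):
        prior_odds = (n[c] + 1) / (n[not c] + 1)
        p_below = (table[(c, True)] + 1) / (below + 2)   # ~ p(class | value < e-p)
        p_above = (table[(c, False)] + 1) / (above + 2)  # ~ p(class | value > e-p)
        u += prior_odds * p_below / p_above
    return u

def local_maxima(candidates, score):
    """Keep candidates whose utility beats both neighbors; these become
    the boundaries of the discrete values."""
    s = [score(c) for c in candidates]
    return [c for i, c in enumerate(candidates)
            if 0 < i < len(candidates) - 1 and s[i] > s[i - 1] and s[i] > s[i + 1]]

# The concept of the next section: positive iff 5 <= size < 15.
examples = [(v, 5 <= v < 15) for v in range(20)]
peaks = local_maxima(list(range(1, 20)),
                     lambda c: utility(table_for(examples, c)))
print(peaks)   # -> [5, 15] for this data

On this toy data the two surviving end-points fall exactly at the concept boundaries, which is the behavior Figure 3 (below) reports for STAGGER.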
E. Interactions between the learning methods

STAGGER's three learning methods cooperatively interact as Figure 2 depicts. [Figure 2: Interaction between STAGGER's three methods — the numerical learning method feeds both the Boolean and weight learning methods.] The Boolean learning method alters the representational base for the weight adjusting method; it is restructuring the input to the weight adjusting method so the latter is able to capture the concept's description. The numeric learning method has this administrative role for both the weight and Boolean learning processes; it rewrites the real-valued attributes into a form suitable for induction by the latter methods. The dependent learning methods also exert influence on the representational processes which counsel them. The Boolean method draws its components from the pool of ranked elements maintained by the weight learner. The weighting method also provides a weak form of feedback for the numeric method: the similarity between the numeric evaluation function (Equation 4) and the matching equation (Equation 2) implicitly ensures that the division of real attributes will be amenable to weighting and matching.

IV. Empirical Performance

The interaction of the three learning methods is perhaps best illustrated by examining their behavior on concept learning tasks. Consider STAGGER's acquisition of a pair of simple object concepts. Each object is describable in terms of its size (real value between 0 and 20), its color (one of 3 discrete values), and shape (3 discrete values).

For the first concept, an object is a positive example if red and between 5 and 15 in size. Optimally, the size attribute should be divided into three ranges: size < 5, 5 < size < 15, and 15 < size. In each of 10 executions, STAGGER's numeric partitioning method discovers this three-way split, and its Boolean combination technique forms a conjunction combining the middle value of size and the color red. The weight adjusting method further gives this element more influence over matching than any other. The following is typical of the elements formed by the cooperative action of the three learning methods.

To illustrate the functioning of the partitioning method, consider the set of potential end-points for this task depicted in Figure 3. Note that the partitioning measure (Equation 4) clearly identifies the two local maxima near 5 and 15 that are used to partition the size attribute into three discrete values. [Figure 3: End-points for 5.0 ≤ size < 15.0 & red; U(end-point) plotted against size from 0 to 20.]

The complete, conjunctive element does not appear suddenly. Though the methods are not explicitly synchronized, they appear to operate in a staged manner as Figure 2 indicates. First, the numerical partitioning method begins to search for a reasonable way to partition real ranges. At the same time, the weight adjusting method searches for appropriate element strengths. The weight of color=red is adjusted at this time, but combination with the size attribute must wait until the numerical method settles down. After processing about 50 examples, the tripartite division of the size attribute is stable, and the weight method is able to assign a strong LN weight to the middle value of the size attribute. After this, the Boolean method combines the size and color elements to form the element depicted above. Weight adjusting finishes the job by giving this element strong LS and LN values.
Regrettably, these three learning methods are not sufficient for all concept learning tasks; there are some concepts for which the numerical method is unable to uncover an effective partitioning. For example, consider the concept of objects that have a size between 5 and 15 or are red but not both. Figure 4 indicates that the numerical method is unable to identify a reasonable partitioning in this case. [Figure 4: End-points for 5.0 ≤ size < 15.0 ⊕ red; U(end-point) plotted against size from 0 to 20, with no clear local maxima.] This limitation also arises if we consider the capabilities of the weight method used without the other methods; the weight learner alone is only able to describe linearly-separable concepts (which does not include exclusive-or). The Boolean method allows it to overcome this limitation by rewriting its representational language. If the Boolean method could direct the numerical method, then the latter would also be able to move beyond linearly-separable concepts. This is an area for future study.

V. Conclusions and Future Work

This paper describes a three part approach to the task of learning a concept from examples. A connectionist style learning method modifies a simple concept description by changing the weights associated with descriptive elements. A second, symbolic learning method forms Boolean combinations of these descriptive elements and allows the first method to overcome a representational shortcoming. The third learning method divides real-valued attributes into a set of discrete ranges, so other methods are able to construct descriptions of numerical concepts. Overall, the interaction between these three methods is a type of cooperative representational learning. The numeric method changes the bias for both the Boolean and weight method. Similarly, the Boolean method forms new compound elements and thus increases the representational capabilities of the weight adjusting method.

One drawback illustrated in the previous section arises because cooperation between the methods is incomplete. Dynamic feedback is lacking for the numeric method. Though it attempts to form value partitions that allow effective learning by the other methods, it does not use information about the progress of learning at those levels to alter its course of action. Consequently the numerical method can only uncover partitions that can be utilized by the weight method: if a concept involves an exclusive-or of a real-value range, STAGGER is unable to discover it.

Acknowledgements

Thanks to Rick Granger and Michal Young who provided much of the early foundations for this work; to Ross Quinlan for his assistance in formulating the numeric learning method; to Doug Fisher for insight on the interactions between the weight, Boolean, and numeric learning methods; and to the machine learning group at UCI for ready discussion and suggestions.

References

Bradshaw, G. L. (1985). Learning to recognize speech sounds: A theory and model. PhD thesis, Department of Psychology, Carnegie-Mellon University, Pittsburgh, PA.

Duda, R., Gaschnig, J., & Hart, P. (1979). Model design in the Prospector consultant system for mineral exploration. In D. Michie (Ed.), Expert systems in the microelectronic age. Edinburgh: Edinburgh University Press.

Lebowitz, M. (1985). Categorizing numeric information for generalization. Cognitive Science, 9, 285-308.

Michalski, R. S. (1983). A theory and methodology of inductive learning. In R. S. Michalski, J. G. Carbonell, & T. M.
Mitchell (Eds.), Machine learning: An artificial intelligence approach. Los Altos, CA: Morgan Kaufmann.

Quinlan, J. R. (1986). Induction of decision trees. Machine Learning, 1, 81-106.

Rendell, L. (1986). A general framework for induction and a study of selective induction. Machine Learning, 1, 177-226.

Schlimmer, J. C., & Granger, R. H., Jr. (1986). Incremental learning from noisy data. Machine Learning, 1, 317-354.

Utgoff, P. E., & Mitchell, T. M. (1982). Acquisition of appropriate bias for inductive concept learning. Proceedings of the National Conference on Artificial Intelligence (pp. 414-417). Pittsburgh, PA: Morgan Kaufmann.

Utgoff, P. E. (1986). Shift of bias for inductive concept learning. In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine learning: An artificial intelligence approach (Vol. 2). Los Altos, CA: Morgan Kaufmann.
An EBL System that Extends and Generalizes Explanations*

Jude W. Shavlik†    Gerald F. DeJong
Coordinated Science Laboratory
University of Illinois
Urbana, IL 61801

Many concepts require generalizing number. For example, concepts such as momentum and energy conservation apply to arbitrary numbers of physical objects, clearing the top of a desk can require an arbitrary number of object relocations, and setting a table involves an arbitrary number of guests. In addition, there is recent psychological evidence [Ahn87] that people can generalize number on the basis of one example. A domain-independent, explanation-based approach to the problem of "generalizing to N" is presented in [Shavlik87b]. That paper presents a theory of generalizing number. It also motivates the need for augmenting explanations, discusses other approaches to generalizing the structure of explanations [Cheng86, Prieditis86, Shavlik85, Shavlik87a], and briefly discusses how this approach handles examples from several domains.

*This research was partially supported by the National Science Foundation under grant NSF IST-85-11542.
†University of Illinois Cognitive Science/Artificial Intelligence Fellow.

This paper describes the details of a working system based on that theory. The system analyzes and generalizes structures of the form shown in the left-hand side of figure 1. Observation of the repeated application of a rule or operator indicates that generalizing the number of rules in the explanation may be appropriate. The desired form of structural recursion is manifested as repeated application of an inference rule in such a manner that a portion of each consequent is used to satisfy some of the antecedents of the next application. When such a sequence is detected, it is determined how an arbitrary number of instantiations of this rule can be concatenated together. This indefinite-length sequence of rules is conceptually merged into the explanation, replacing the specific-length collection of rules, and a standard explanation-based algorithm produces a new rule from the augmented explanation. An additional requirement is that the preconditions for the N rule applications be fully specified in terms of the state of the world when the new rule is applied. That is, the preconditions do not depend on the results of intermediate applications of the underlying rule. [Figure 1: Augmenting the Explanation — a chain of applications 1, 2, ..., N supporting the goal, shown before and after the specific-length chain is replaced by an arbitrary-length one.]

II. THE BAGGER SYSTEM

The BAGGER system (Building Augmented Generalizations by Generating Extended Recurrences) analyzes predicate calculus proofs and attempts to construct concepts that involve generalizing to N. Most of the examples under study use the situation calculus to reason about actions. One problem solution analyzed by BAGGER is shown in figure 2s. The goal is to clear block x. The system is provided low-level domain knowledge about blocks, including how to transfer a block from one location to another. Briefly, to move a block it must have nothing on it and there must be free space at which to place it. Additional inference rules (many of which are frame axioms) are used to reason about the effects of moving an object. [Figure 2s: Unstacking a Specific Tower.] The system produces a situation calculus proof validating the actions shown in figure 2s,
in which two blocks must be moved to clear the desired block. By analyzing this example, the system acquires a general plan for clearing an arbitrary block contained in a tower of arbitrary height. The acquired plan applies, for example, to the problem of clearing block z in figure 2g. Note that there may be a different number of blocks on z than on x. [Figure 2g: A General Plan for Unstacking Towers.]

In another example, the system observes several blocks being stacked upon one another in order to satisfy the goal of having a block at a specified height. Extending the explanation of these actions produces a plan for stacking any number of blocks in order to reach any given height (provided enough blocks exist). Figure 3 illustrates this general plan. [Figure 3: A General Plan for Building Towers.]

Unlike many other block-manipulation examples, in these examples it is not assumed that blocks can support only one other block. This means that moving a block does not necessarily clear its supporting block. Another concept learned by BAGGER is a general plan for clearing an object directly supporting any number of blocks. This plan is illustrated in figure 4. [Figure 4: A General Plan for Clearing Objects.]

The domain of digital circuit design has also been investigated. By observing the repeated application of DeMorgan's law to implement two cascaded and gates using or and not gates, BAGGER produces a general version of DeMorgan's law which can be used to implement N cascaded and gates with N or gates and one not gate.

The next section describes how BAGGER constructs these general plans. Complete details on these examples, including the initial set of inference rules used, the situation calculus proofs, and the acquired inference rules, can be found in [Shavlik87c].

III. GENERALIZATION IN BAGGER

The system begins its analysis of a specific solution at the goal node. It then traces backward, looking for repeated rule applications. These repeated applications need not directly connect - there can be intervening rules. The general rule repeatedly applied is called a focus rule. Once a focus rule is found, BAGGER ascertains how an arbitrary number of instantiations of this rule and any intervening rules can be concatenated together (as illustrated in figure 1). This indefinite-length collection of rules is conceptually merged into the explanation, replacing the specific-length collection, and a new rule is produced from the augmented explanation.

Three classes of terms must be collected to construct the antecedents of a new rule. First, the antecedents of the initial rule application in the arbitrary length sequence of rule applications must be satisfied. To do this, the antecedents of the focus rule are used. Second, the preconditions imposed by chaining together an arbitrary number of rule applications must be collected. These are derived by analyzing instantiations of the focus rule in the sample proof. Those applications that provide enough information to be viewed as the arbitrary ith application produce this second class of preconditions. Third, the preconditions from the rest of the explanation must be collected. This determines the constraints on the final applications of the focus rule.

The consequents of the new rule are produced by collecting the consequents of the last application in the chain of focus rule applications and any other terms in the goal expression. For example, the consequents for the rules illustrated by figures 2g and 4 state that in the final situation the object originally under the last object moved is clear. In the rule represented by figure 3, the consequents state that the last block moved is at the goal location.
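To fix intuitions before turning to preconditions, here is a hedged sketch of the kind of transfer rule these proofs are built from. The state encoding (a dict from block to its support) and the simplification that any clear object offers free space are our own assumptions, not BAGGER's actual rule set.

def clear(state, obj):
    """An object is clear if nothing is on it; tables are always clear."""
    return obj.startswith("table") or all(support != obj for support in state.values())

def transfer(state, block, dest):
    """Move `block` onto `dest`; models Do(Transfer(block, dest), state)."""
    assert block != dest, "a block cannot be placed on itself"
    assert clear(state, block) and clear(state, dest)
    new_state = dict(state)
    new_state[block] = dest
    return new_state

# The specific tower of figure 2s: c on b on x; two moves clear x.
s0 = {"c": "b", "b": "x", "x": "table1"}
s1 = transfer(s0, "c", "table1")   # move c to the table
s2 = transfer(s1, "b", "c")        # place b on the block just moved
assert clear(s2, "x")              # the goal: x is clear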
In order to package a sequence of rule applications into a single macro-rule, the preconditions that must be satisfied at each of the N rule applications must be collected and combined. The preconditions for applying the resulting extended rule must be specifiable in terms of the initial state, and not in terms of intermediate states. This insures that, given that the necessary conditions are satisfied in the initial state, a plan represented in an extended rule will run to completion without further problem solving, regardless of the number of intervening states necessary. For example, there is no possibility that a plan will lead to moving N - 2 blocks and then get stuck. If the preconditions for the ith rule application were expressed in terms of the result of the (i-1)th application, each of the N rule applications would have to be considered in turn to see if the preconditions of the next are satisfied. In the approach taken, extra work during generalization and a possible loss of generality are traded off for a rule whose preconditions are easier to check.

When a focus rule is concatenated an arbitrary number of times, variables need to be chosen for each rule application. A sequence of p-dimensional vectors, called the rule instantiation sequence (RIS), is used to represent this information. The general form of the RIS is:

⟨v_{1,1}, ..., v_{1,p}⟩, ⟨v_{2,1}, ..., v_{2,p}⟩, ..., ⟨v_{n,1}, ..., v_{n,p}⟩    (1)

In the unstacking example of figure 2s, p = 3: the current state, the object to be moved, and the object where the moved object will be placed. Depending on the rule used, the choice of elements for this sequence may be constrained. For example, certain elements may have to possess various properties, specific relations may have to hold among various elements, some elements may be constrained to be equal to or unequal to other elements, and some elements may be functions of other elements.

To determine the preconditions in terms of the initial state, each of the focus rule instantiations appearing in the specific proof is viewed as the ith application of the underlying rule. The antecedents of this rule are analyzed as to what must be true of the initial state in order that it is guaranteed the ith collection of antecedents are satisfied when needed. This involves analyzing the proof tree, considering how each antecedent is proved. An augmented version of a standard explanation-based algorithm [Mooney86] is used to determine which variables in this portion of the proof tree are constrained to be identical.¹ Once this is done, the variables are expressed as components of the p-dimensional vectors described above, and the system ascertains what must be true of this sequence of vectors so that each antecedent is satisfied when necessary.

¹The rules used in the specific proof are replaced by their general versions and the algorithm determines which unifications must hold to maintain the veracity of the proof. That is, expressions must be unified wherever a rule consequent is used to satisfy an antecedent of another rule.

All antecedents of the chosen instantiation of the focus rule must be satisfied in one of the following ways for generalizing to N to be possible:

(1) The antecedent may be situation-independent. Terms of this type are unaffected by actions.

(2) The antecedent may be supported by a consequent of an earlier application of the focus rule. Terms of this type place inter-vector constraints on the sequence of p-dimensional vectors.

(3) The antecedent may be supported by an "unwindable rule." When this happens, the antecedent is unwound to the initial state and all of the preconditions necessary to insure that the antecedent holds when needed are collected. This process is elaborated later. It, too, may place inter-vector constraints on the sequence of p-dimensional vectors.

(4) The antecedent is supported by other terms that are satisfied in one of these ways.
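As a concrete illustration of an RIS and of an inter-vector constraint of type (2), the following minimal Python sketch encodes the unstacking example's sequence; the tuple layout and the constraint checker are illustrative assumptions, not BAGGER's representation.

# Each vector has p = 3 components: the state, the block moved, the destination.
ris = [
    ("s0", "c", "table1"),   # <v_1,1, v_1,2, v_1,3>: first transfer
    ("s1", "b", "c"),        # <v_2,1, v_2,2, v_2,3>: place b on the block just moved
]

def check_chaining(ris):
    """Inter-vector constraint from the learned unstacking rule: after the
    first step, each block is placed on the previously moved block."""
    for (_, prev_block, _), (_, _, dest) in zip(ris, ris[1:]):
        if dest != prev_block:
            return False
    return True

assert check_chaining(ris)

Because such constraints relate only components of the RIS (and the initial state), they can be checked before any action is taken, which is the property the text above is after.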
Notice that antecedents are considered satisfied when they can be expressed in terms of the initial state, and not when a leaf of the proof tree is reached. Conceivably, to satisfy these antecedents could require a large number of inference rules. If that is the case, it may be better to trace backwards through these rules until more operational terms are encountered. This operationality/generality trade-off [Mitchell86] is a major issue in explanation-based learning, and, except where it relates directly to generalizing to N, will not be discussed further here.

A second point to notice is that not all proof subtrees will terminate in one of the above ways. If this is the case, this application of the focus rule cannot be viewed as the ith application.² The possibility that a specific solution does not provide enough information to generalize to N is an important point in explanation-based approaches to generalizing number. A concept involving an arbitrary number of substructures may involve an arbitrary number of substantially different problems. Any specific solution will only have addressed a finite number of these sub-problems. Due to fortuitous circumstances in the example some of the potential problems may not have arisen. To generalize to N, a system must recognize all the problems that exist in the general concept and, by analyzing the specific solution, surmount them.

²One solution to this problem would be to have the system search through its collection of unwindable rules and incorporate a relevant one into the proof structure. To study the limits of this paper's approach to generalizing to N, we are requiring that all necessary information be present in the explanation: no problem-solving search is performed during generalization. Another solution would be to assume the problem solver could overcome this problem at rule application time. This second technique, however, would eliminate the property that a learned plan will always run to completion whenever its preconditions are satisfied in the initial state.

Inference rules of a certain form
BAGGER analyzes all applications of the general focus rule that appear in the specific example. When several instantiations of the focus rule provide sufficient information for number generalization, BAGGER collects the preconditions for satisfying their antecedents in a disjunction of conjunctions (one conjunct for each acceptable instantiation). Common terms are factored out of the disjunction. Knowledge about the independence of the methods of satisfying the antecedents can be used to further simplify the disjunction of conjunctions. The learned rule illustrated in figure 2g only allows clearing towers by unstacking each block (after the first) on the previously moved one. The first transfer of figure 2s provides no information that can be used to guarantee that the block to be moved in step i is clear at that step. The acquired rule would be more general if it contained provisions for placing moved objects in any of the types of locations mentioned above. When an example of unstacking a four-block tower is presented to the system. where one intermediate block is placed on the table. a disjunctive rule is learned. In this case. the learned rule provides a choice of places to locate moved blocks. The disjunctive rule represented by figure 3 involves a choice of where to get the next block for the tower being constructed. Either a block that is clear in the initial state is used. or a block that is cleared by earlier transfers is chosen. Figure 5 contains a portion of the proof for the unstacking example. Portions of two consecutive transfers are shown. All variables are universally quantified. Arrows run from the antecedents of a rule to its consequents. Double-headed arrows represent terms that are equated in the specific explanation. The generalization algorithm used enforces the unification of these paired terms. There are four antecedents of a transfer. To define a transfer, the block moved (x- >, the object on which it is placed (y >, and a state (s > must be specified, and the constraints among these variables must be satisfied. One antecedent, the one requiring a block not be placed on top of itself, is type 1 - it is situation-independent. The next two antecedents are type 2. transfer, (Clear ?x, (Do (Transfer ?x, ?y, ) ?s, )) (State (Do (Transfer ?x, ?y, ) ?s, )) I b (FlatTop ?z 1 (Clear ?z ?s ) (FreeSpace ?z ?s ) . .* (State ?s, ) (# %, ?yj+~2Li:J~~) transfer, Figure 5. Satisfying Antecedents by IPrevious Consequents 518 Machine learning & Knowledge Acquisition Two of the consequents of the ith transfer are used to satisfy these antecedents of the jth transfer. During transfer, . in state s, object x, is moved on to object y, . The consequents of this transfer are that a new state is produced, the object moved is clear in the new state. and x, is on y, in the resulting state. (The On term is not shown.) The state that results from transfer, satisfies the second antecedent of transfer, . Unifying these terms completely defines sJ in terms of the previous variables in the RIS. Another antecedent requires that. in state sJ , there be space on object y, to put block xJ . This antecedent is type 4. Another inference rule specifies that a clear object with a flat top has free space. The cleainess of x, after transfer, is used. Unifying this collection of terms leads, in addition to the redundant definition of s, , to the equating of y, with z and x, . This means that the previously moved block always provides a clear spot to place the current block. 
No provisions need be made to insure the existence of a clear location to place intermediate blocks. P(x,,,.... ~x,,p,yl,l,... JI,yrsJ and ‘JkE2,...,i Q (Xl ,,. - . . ~-%,p’Yk--1,1~~ * - *Yk-l,v*YK,l.. . - ’ yk ,Y) and Sk =Do (x1 l,...,X 1 ,p’ vk -1 ,I 1 * - * 9 Yk -l,y, *yk -1) + P(~j,l”..~X~,~,Y,,1,... ,y~,“?sJ (3) Frame axioms often satisfy the form of equation 2. Figure 6 shows one way to satisfy the need to have a clear object when placing the ith block in a tower. On the left-hand side of figure 6 is a portion of the proof of a tower-building example. Block x, is clear in state s, because it is clear in state s;-~ and the block moved in transfer,-, is not placed upon x, . Unwinding this rule leads to the result that block x, will be clear in state s, if it is clear in state s r and x, is never used as the new support block in any of the intervening transfers. The fourth antecedent. that Xj be liftable, is also type 4. A rule (not shown) states that an object is liftable if it is a clear block. Block x, is determined to be clear because the only object it originally supports is moved in transfer,. Tracing backwards from the liftable term leads to several situation-independent terms and the term (Supports ?x, (?x, > ?s, ). Fortunately. although this term contains a situation variable, it is satisfied by an “unwindable rule.” and is type 3. A Portion of the Explanation Unwound Subgraph (Clear ?Z ?S > ( f ?Z ?y > (Clear ?x, ?s 1> (?c ?x, ?y r> Y F-Y--- (Clear ?z (Do (Transfer ?x ?y > ?s)) (Clear ?n, ?s 2) (f ?x, ?y 2) unwindable. The consequent must match one of the antecedents Equation 2 presents the form required for a rule to be of the rule. Hence, the rule can be applied recursively. This feature is used to “unwind” the term from the it?1 state to the initial state.3 The variables in the rule are divided into three groups. First. there are the x variables. These appear unchanged in both the consequent’s term P and the antecedent’s term P. Second. there are the y variables which differ in the two P’s. Finally, there is the state variable (s >. There can be additional requirements of the x and y variables (via predicate Q>, however, these requirements cannot depend on a state variable. Only the definition of the next state can depend on the current state, as it is assumed the sequence of repeated rule applications completely determines the sequence of states. (Cl ear ?X, ?S, > . (Clear ?X, ?S, > Figure 6. Unwinding a Rule Similar reasoning is used is used in the unstacking example to insure that. up until the state in which it is moved, a block supports only one other block (and that block is moved in the previous transfer). This means that for the new rule to apply, an initial state block configuration must have successive support relations - in the initial state, the block to be moved in step i must support the one to be moved in step i -1 (the first block moved must be clear). As expected. a tower of blocks will be unstacked from the top downward. The new rule applies to the goal of clearing any object involved in the tower (including the table. provided there is another table on which to stack). Each P~x~,l,-.~~~j,Lc’y~-l,l~..-~y*-,,y~~~l-~~ block moved is placed on top of the object previously moved because that block is known to be clear at that time. This and constraint leads to the building of a new, inverted tower. The Q (Xl ,I 9 * - - * .q ,p’ Y2 -I,1 * - - - 1 YI -1 ,vr Y, ,I* * * - 9 Yt ,J first object moved can be placed anywhere that is clear - on the and table. 
on another table, or on another cleared block. In the initial state, every block to be moved must be supported by an object that is supporting no other block. If a supporting object supported more than one block, it would not be clear when it is its turn to be moved, or, for the "goal" object, after the new rule is applied.

Applying equation 2 recursively i times produces equation 3. This rule determines the requirements on the initial state so that the desired term can be guaranteed in state i.

P(x_1, ..., x_p, y_{1,1}, ..., y_{1,v}, s_1)
  ∧ ∀k ∈ 2, ..., i [ Q(x_1, ..., x_p, y_{k-1,1}, ..., y_{k-1,v}, y_{k,1}, ..., y_{k,v})
      ∧ s_k = Do(x_1, ..., x_p, y_{k-1,1}, ..., y_{k-1,v}, s_{k-1}) ]
  → P(x_1, ..., x_p, y_{i,1}, ..., y_{i,v}, s_i)    (3)

Except for the definition of the next state, none of the antecedents depend on the intermediate states. Notice that a collection of y variables must be specified. Any of these variables not already contained in the RIS are added to it.

³Actually, recursive rules are not always unwound to the initial state. If two (or more) rules of this form are in a pathway, the first is unwound from state i to state t and the second is unwound from state t' to the initial state. For example, a block can be supporting another block during some number of transfers, can be cleared, can remain clear during another sequence of transfers, and finally be added to a tower.

Notice that information not contained in the focus rule, but appearing in the example, is incorporated into the extended rule. In the unstacking example, additional rules are used to determine when an object becomes clear. The rule for transferring a block says nothing about the clearness of the block's original support after the block is moved. It applies to objects supporting any number of blocks. Other rules state that the supporting object is now clear if the moved block was the only one it formerly supported. The combination of these rules means that the new rule only applies to towers where each object (other than the top one) only directly supports one block. Unfortunately, while more broadly applicable than a plan for clearing three-block towers, the newly-acquired rule cannot clear objects directly supporting more than one block. The specific example did not address this multiple-support problem. Hence, the explanation-based BAGGER system did not solve it.

Once the repeated rule portion of the extended rule is determined, the rest of the explanation is incorporated into the final result. In the unstacking example of figure 2s, this involves the proof that x is clear in the final state. It is accomplished in a manner similar to the way antecedents are satisfied in the repeated rule portion. The main difference is that the focus rule is now viewed as the Nth rule application. As before, antecedents must be of one of the four specified types.

Rules produced by BAGGER have the important property that their preconditions are expressed in terms of the initial state - they do not depend on the results of intermediate applications of the focus rule. If the preconditions are met, the results of multiple applications of the focus rule are immediately determined. There is no need to apply the rule successively, each time checking if the preconditions for the next application are satisfied.
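A hedged sketch of what unwinding buys, in the spirit of Equations 2 and 3: the precondition for the ith application is checked once, against the initial state plus per-step constraints, never against an intermediate state. The predicate below is an illustrative stand-in for the figure 6 example, not BAGGER's actual machinery.

def stays_clear(block, initially_clear, destinations):
    """Unwound precondition: the block is clear in the initial state
    (the P term of y_1 and s_1 in Equation 3) and no intervening transfer
    uses it as the new support (the per-step Q constraints)."""
    return (block in initially_clear
            and all(dest != block for dest in destinations))

# Checked entirely against the initial state and the planned destinations:
assert stays_clear("a", initially_clear={"a", "table1"}, destinations=["c", "b"])
assert not stays_clear("a", initially_clear={"a"}, destinations=["c", "a"])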
The example in figure 6 did not result in any new variables being added to the RIS. Other examples of unwinding do add to the variables in that sequence. Often this occurs during the process of specifying the rest of the explanation in terms of the initial state. For example, when building a tower, the y-coordinate of the last block added is determined by an unwindable rule. Unwinding this rule adds two terms to each vector in the RIS: the height of the block moved (x_i) and the y-coordinate of this block following the transfer.

A problem solver that applies BAGGER's learned rules has been implemented. An acquired rule can be applied if its antecedents are satisfied in a state of the world. Satisfying the antecedents will produce an RIS. Next, N actions are executed, one for each vector. Note that the problem solver need not evaluate each action's preconditions immediately before performing it. The learned rule guarantees that they will be met.

IV. CONCLUSION

Most research in explanation-based learning involves relaxing constraints on the variables in an explanation, rather than generalizing the structure of the explanation. This paper presents an explanation-based approach to the problem of generalizing to N. To illustrate the approach, situation calculus examples from the blocks world are analyzed. The approach presented leads to efficient plans that can be used to clear an object directly supporting an arbitrary number of other objects, build towers of arbitrary height, and unstack towers containing any number of blocks. A generalized version of DeMorgan's law is also learned. Generalizing structure is an important property currently lacking in most explanation-based systems. This research contributes to the theory and practice of explanation-based learning by developing and testing methods for extending the structure of explanations during generalization.

The fully-implemented BAGGER system analyzes explanation structures (in this case, situation calculus proofs) and detects repeated, inter-dependent applications of rules. Once a rule on which to focus attention is found, the system determines how an arbitrary number of instantiations of this rule can be concatenated together. This indefinite-length collection of rules is conceptually merged into the explanation, replacing the specific-length collection of rules, and a standard explanation-based algorithm produces a new rule from the augmented explanation. The specific example guides the extension of the focus rule into a structure representing an arbitrary number of repeated applications. Information not contained in the focus rule, but appearing in the example, is often incorporated into the extended rule. In particular, "unwindable rules" provide the guidance as to how preconditions of the ith application can be specified in terms of the current state.

A concept involving an arbitrary number of substructures may involve any number of substantially different problems. However, a specific solution will have necessarily only addressed a finite number of them. To properly generalize to N, a system must recognize all the problems that exist in the general concept and, by analyzing the specific solution, surmount them. If the specific solution does not provide enough information to circumvent all problems, generalization to N cannot occur because BAGGER is designed not to perform any problem-solving search during generalization.
When a specific solution surmounts, in an extendible fashion, a sub-problem in different ways during different instantiations of the focus rule, disjunctions appear in the acquired rule.

REFERENCES

[Ahn87] W. Ahn, R. J. Mooney, W. F. Brewer and G. F. DeJong, "Schema Acquisition from One Example: Psychological Evidence for Explanation-Based Learning," CSL Technical Report, University of Illinois, Urbana, IL, February 1987.

[Cheng86] P. Cheng and J. G. Carbonell, "The FERMI System: Inducing Iterative Macro-operators from Experience," Proceedings of AAAI-86, pp. 490-495, Philadelphia, PA, August 1986.

[Fikes72] R. E. Fikes, P. E. Hart and N. J. Nilsson, "Learning and Executing Generalized Robot Plans," Artificial Intelligence 3, pp. 251-288, (1972).

[Mitchell86] T. M. Mitchell, R. Keller and S. Kedar-Cabelli, "Explanation-Based Generalization: A Unifying View," Machine Learning 1, 1, pp. 47-80, (January 1986).

[Mooney86] R. J. Mooney and S. W. Bennett, "A Domain Independent Explanation-Based Generalizer," Proceedings of AAAI-86, pp. 551-555, Philadelphia, PA, August 1986.

[O'Rorke87] P. V. O'Rorke, "Explanation-Based Learning via Constraint Posting and Propagation," Ph.D. Thesis, Department of Computer Science, University of Illinois, Urbana, IL, January 1987.

[Prieditis86] A. E. Prieditis, "Discovery of Algorithms from Weak Methods," Proceedings of the International Meeting on Advances in Learning, pp. 37-52, Les Arcs, France, 1986.

[Rosenbloom86] P. Rosenbloom and J. Laird, "Mapping Explanation-Based Generalization into Soar," Proceedings of AAAI-86, pp. 667-669, Philadelphia, PA, August 1986.

[Shavlik85] J. W. Shavlik, "Building a Computer Model of Learning Classical Mechanics," Proceedings of the Seventh Annual Conference of the Cognitive Science Society, pp. 351-355, Irvine, CA, August 1985.

[Shavlik87a] J. W. Shavlik and G. F. DeJong, "Analyzing Variable Cancellations to Generalize Symbolic Mathematical Calculations," Proceedings of the Third IEEE Conference on Artificial Intelligence Applications, pp. 100-105, Orlando, FL, February 1987.

[Shavlik87b] J. W. Shavlik and G. F. DeJong, "An Explanation-Based Approach to Generalizing Number," Proceedings of IJCAI-87, Milan, Italy, August 1987.

[Shavlik87c] J. W. Shavlik, "Augmenting and Generalizing Explanations in Explanation-Based Learning," Ph.D. Thesis, Department of Computer Science, University of Illinois, Urbana, IL, forthcoming.
This paper extends previous work which provided a theory for the interpretation of and necessity for clue words in a particular kind of discourse - namely, one-way arguments. Previous work described a taxonomy of connective clues (words such as "hence" or phrases such as "as a result"), where each clue, classified according to the taxonomy, would set in place a default interpretation of its containing proposition, with respect to the representation for the argument so far. In this paper, we examine how to combine the restrictions for clues with a basic processor for the discourse, offering an integrated processing algorithm, which takes advantage of clues to reduce processing and to detect incoherent arguments, and can still produce an analysis in the absence of clues. We conclude with some suggestions for incorporating clues of redirection and clues that signal exceptional transmissions. We also demonstrate the implications of our results for discourse in general.

I. Preamble

This paper extends the work of (Cohen 1984) (see also (Cohen 1983)), which provided a theory for the interpretation of and necessity for clue words in arguments. The arguments referred to one-way dialogue where the speaker tries to convince the hearer of a particular point of view. Previous work described a taxonomy of connective clues (words and phrases), where each clue, classified according to the taxonomy, would set in place a default interpretation of its containing proposition, with respect to the representation for the argument so far. For example, consider the processing of an utterance containing the clue phrase "as a result". "As a result" belongs to an "inference" category, which specifies that the containing proposition must find some prior proposition which supplies evidence to (acts as son to, in the tree diagram for the argument) the containing proposition. (In a sense, this work extended the ideas of (Hobbs 76), where a few special words are shown to signal particular coherence relations in discourse.) The previous paper also discussed the necessity for clue words, describing particular transmissions recognized as exceptional to the basic processing strategy of the argument understanding model, but nonetheless coherent, in the presence of a clue.

This paper first of all addresses the issue of actually processing an argument with clues. We indicate how to combine the restrictions indicated by a connective and its taxonomic interpretation rule with the basic processing strategy, outlined to deal with all arguments (including the cases where no clues exist). We examine tradeoffs in ordering of restrictions suggested by both sources, and propose algorithms for accommodating clue recognition. We also strengthen our arguments for the necessity of clues with transmissions exceptional to the basic characterization, by illustrating the processing that would occur in the absence of clues. The basic premise is that interpretations with less computational effort would be preferred by a hearer, and would be drawn if clues were not available to override. The overall conclusion is that clue interpretation processes can be specified, for at least some clues, as a step towards a full processing model of discourse. We are operating in a framework of a model for analyzing discourse by interpreting each new utterance in turn, with respect to the discourse so far. In this sense, each clue provides information for processing, to be integrated into the other tests for interpreting the contained proposition.
We will argue for the usefulness of these results for discourse in general.

2. The basic processing algorithm

In order to understand the proposed analysis of clue words in discourse, we offer background on the model for analyzing the structure of arguments, used as a basis for our study of clues. This model (described in more detail in (Cohen 1983) and (Cohen 1981)) first proposes that the interpretation for each new utterance in the discourse be done by comparison to a restricted list of prior propositions eligible to relate to a new proposition. The type of discourse is restricted to one turn from a speaker, with a top level goal of convincing the hearer of some point of view (hence, an argument). The representation for the structure of the argument is drawn as a tree, where the relation between a son and its father is one of "evidence". A very simplified summary of "evidence" is that: a proposition P is evidence for a proposition Q if there is a rule of inference such that P is premise to Q's conclusion. The main step in processing is thus to test for possible evidence relations between a new proposition and those already stated, to continue building the tree. The restricted reception algorithm for building the representation is presented below:

L: last eligible node; NEW: current proposition
Tree has a dummy root; succeeds as father to all (used to simulate a stack)

forever do:
  if NEW evidence for L then
    if no sons of L are evidence for NEW then
      /* just test rightmost son for evidence */
      attach NEW below L
      set L to NEW
      exit forever loop
    else
      attach all sons of L which are evidence for NEW below NEW
      attach NEW below L
      exit forever loop
    endif
  else
    set L to father(L)
  endif
end forever loop

This is termed a hybrid reception, because sub-arguments may be inserted claim first (pre-order) or claim last (post-order).

3. Interpreting clues

An argument, regardless of presence of clues, is processed in our model according to some proposed restrictions, based on recognizing only coherent transmission orderings from the speaker (as encoded in the algorithm above). Clue words have been observed to have two functions: to further restrict processing for the speaker, or to signal an exceptional transmission, for the hearer to accept beyond his basic processing strategy. In (Cohen 83) (see also (Cohen 87)) we argue that only certain kinds of exceptional strategies should be accepted as well. The preferred interpretation will always be one where the basic processing restrictions hold (the hybrid algorithm). To motivate why this is true, consider the example below.

EX1:
1) The park benches are rotting
2) The parks are a mess
3) The highways are run down
4) (Another problem with the parks is that) the grass is dying
5) This city is in sad shape

Without the clue in 4, redirecting to proposition 2, to add more evidence out of turn, a coherent representation could be built just the same, as below:

[tree diagram: 5 at the root, with 2, 3 and 4 as its sons, and 1 as the son of 2]

If the speaker intends 4 to add detail to 2, he cannot expect the hearer to recover this structure without a clue, simply because a more effortless interpretation can be recovered as above. In the absence of a clue word, a coherent interpretation still results, and will be drawn by the model, attempting to satisfy the hybrid constraints. The clue in 4 may signal a different transmission, acceptable because no eligible candidates will otherwise satisfy the semantic constraints of the clue. (Note that 2 is not eligible to receive new evidence, since the hybrid algorithm closes off earlier brothers at a level.)
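For concreteness, here is a minimal Python rendering of one step of the hybrid reception algorithm of section 2. The Node class and the evidence relation are illustrative assumptions; the oracle is stubbed out as a lookup over a hand-built evidence set.

class Node:
    def __init__(self, text, father=None):
        self.text, self.father, self.sons = text, father, []

def attach(father, son):
    son.father = father
    father.sons.append(son)

def receive(last, new, is_evidence_for):
    """One reception step: climb from the last eligible node toward the
    dummy root, attach NEW under the first node it is evidence for, and
    re-attach any of that node's sons that are evidence for NEW."""
    l = last
    while True:
        if is_evidence_for(new, l):
            movers = [s for s in l.sons if is_evidence_for(s, new)]
            for s in movers:
                l.sons.remove(s)
                attach(new, s)
            attach(l, new)
            return new          # NEW becomes the last eligible node
        l = l.father            # the dummy root accepts anything

# A fragment of EX1 received claim-last (post-order):
evidence = {("benches rotting", "parks a mess"),
            ("parks a mess", "city in sad shape")}
oracle = lambda p, q: q.text == "dummy" or (p.text, q.text) in evidence

root = Node("dummy")
last = root
for text in ["benches rotting", "parks a mess", "city in sad shape"]:
    last = receive(last, Node(text), oracle)
assert [s.text for s in root.sons] == ["city in sad shape"]

Each claim-last proposition first attaches under the dummy root and is then pulled below its father when that father arrives, which is exactly the post-order half of the hybrid reception.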
Note as well that with one of the "acceptable exceptional strategies", absence of clues merely produces a different interpretation to the hearer than the one possibly intended by the speaker. In the case of a parallel structure, for example:

EX2:
1) The city has problems
2) The parks are a mess
3) The highways are a mess
4) The buildings are a mess
5) (As for the parks) the benches are broken
6) (And for the highways) there are potholes in the autoroutes...

with a representation, in the absence of clues:

[tree diagram: 1 at the root, with 2, 3, 4, 5 and 6 all as its sons]

The parallel structure, described in (Cohen 84) as an exceptional strategy, involves a return to a previously closed proposition to add evidence, to then add evidence for each of the brothers of that closed proposition, in turn. The intended representation of EX2, recognizable with clues, is:

[tree diagram: 1 at the root, with 2, 3 and 4 as its sons; 5 the son of 2, and 6 the son of 3]

Since following the hybrid algorithm is basically the preferred interpretation, it makes sense that the restrictions embodied by this algorithm govern the processing of clues. We will first study particular classes of the taxonomy of connectives and propose processing that meshes with the hybrid for each case. We can then reflect on what the relationship between clue processing and basic search is. We will discuss clues of redirection (as in EX1 above) briefly after the study of connective clues.

4. Processing connective clues

When clues appear in an argument, these should signal to the basic processor that additional information is being provided by the speaker. This information further restricts the tests for determining the interpretation of the proposition containing the clue. Connective clues provide the additional information of HOW the new proposition relates to some prior proposition (see the definitions in the taxonomy; the categories are drawn from (Quirk 72)). In the table below, S represents the proposition with the clue; P is the prior proposition which "connects" to S.

Part of taxonomy of clue words, from (Cohen 83):

category   | relation: S to P        | example
parallel   | brother                 | in addition
detail     | son                     | in particular
inference  | father                  | as a result
summary    | father to multiple sons | in sum

We envision a general system architecture consisting of (i) a proposition analyzer, which performs the basic processing algorithm, (ii) a clue interpreter, which is called when a clue is detected, and then controls the proposition analyzer, and (iii) an evidence oracle, which is passed two propositions by the proposition analyzer and responds yes or no whether one is evidence for the other. Since the oracle has a difficult task, the overall efficiency of processing would be improved if either calls to the oracle were avoided, or additional information were available to the oracle to facilitate its testing. (Note that the "oracle" is eventually given some specifications, and is more than just a black box. The processing of the oracle is another topic altogether; see (Cohen 83) for more details.)
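The taxonomy table can be rendered directly as the lookup a clue interpreter might consult; the sketch below uses only the sample phrases from the table, and the function name is our own.

TAXONOMY = {
    "parallel":  {"relation": "brother",                 "examples": ["in addition"]},
    "detail":    {"relation": "son",                     "examples": ["in particular"]},
    "inference": {"relation": "father",                  "examples": ["as a result"]},
    "summary":   {"relation": "father to multiple sons", "examples": ["in sum"]},
}

def classify_clue(phrase):
    """Map a connective phrase to (category, expected relation of S to P),
    or None when no clue is recognized."""
    for category, entry in TAXONOMY.items():
        if phrase in entry["examples"]:
            return category, entry["relation"]
    return None

assert classify_clue("as a result") == ("inference", "father")
assert classify_clue("moreover") is None   # unlisted phrases carry no default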
Our research on integrating clue interpretation with the basic processor is still in progress, but we offer the following algorithm as a first version. Note that this algorithm would then replace the basic processing algorithm (described in section 2). We will explain the main features of the algorithm after its listing.

clue1: true if proposition has a "parallel" clue;
clue2: for detail; clue3: for inference; clue4: for summary

forever do:
  /* before testing NEW for L */
  if L = dummy then
    if clue2 then
      INTERRUPT-DISCOURSE (and exit loop)    (( 1 ))
    endif
  endif
  /* see if rightmost son exists */
  if (clue1 v clue3 v clue4) & no rightmost son of L then
    if L = dummy then
      INTERRUPT-DISCOURSE (and exit loop)    (( 2B ))
    else
      set L to father of L    (( 3 ))
    endif
  endif
  if NEW evidence for L then
    /* see if sons will re-attach */
    if no sons of L evidence for NEW then
      if (clue3 v clue4) then
        if L = dummy then
          INTERRUPT-DISCOURSE (and exit loop)    (( 2 ))
        else
          set L to father of L
        endif
      else
        /* normal attaching */
        attach NEW below L
        set L to NEW
        exit forever loop
      endif
    else
      /* some son wants to re-attach */
      attach all sons of L which are evidence for NEW below NEW
      attach NEW below L
      exit forever loop
    endif
  else
    /* if NEW not evidence for L */
    set L to father of L
  endif
end forever loop

The first point is that some calls to the evidence oracle can be avoided, if one follows the restrictions of the clue interpretation rules for the taxonomy. Consider the following example:

EX3:
1) The city is in serious trouble
2) There are some fires going
3) Three separate blazes have broken out
4) In addition, a tornado is passing through

The clue in 4 requires 4 to be a brother of some prior proposition. This is realized in our processing model by finding a father from which an attached son may serve as brother. The hybrid algorithm would have 4 first test to be son to 3 (the last eligible). Since a simple test can confirm that 3 has no sons, it is not considered at all. Thus, one possible call to the oracle has been avoided, due to the presence of the clue. This is illustrated in part ((3)) of the algorithm above.

We now consider incoherent arguments. We can specify criteria for recognizing an incoherent transmission, which would then be detected earlier than if the clue did not exist to constrain the required relationship. For instance, in the case where we expect a son prior in the argument, if all tests for a father that can also pick up a son fail (we must now be at the dummy top to realize this), then we can label the argument incoherent and interrupt - the expectation of the clue has not been met. Without a clue, we could expect to find later propositions acting as son to the current one; as such, we would not detect incoherence until the end, when no common father exists at the top.
Likewise, if no prior proposition exists to connect to the proposition with a connective clue, regardless of the relationship expected, the argument is again incoherent and the hearer would interrupt (as in part ((1)), where a detail clue expects a non-dummy father prior in the argument).

Examining when an argument is incoherent is also important for studying when clues are used to signal exceptions, rather than just to additionally constrain the basic hybrid case. So, the clarification of when connectives fail in their default interpretations is important as a processing indication to then test for exceptional strategies. (The semantics of the clue and the representation of propositions are also critical; see the discussion in section 5.)

Are there additional constraints to processing that clues can provide? One possibility we examined was whether some connective clues suggest altering the order of tests performed by the hybrid algorithm. We decided that the order of nodes visited from the eligible list should not change (connectives merely indicate HOW, not WHERE, propositions relate). But we examined the effects of testing for a son before testing to be a son at any given node in the tree. To explain: the inference class, for example, requires a son to be found earlier in the tree. As each eligible node L is examined, should we test sons of L as son to NEW before we test NEW as son to L? Our conclusion is that it is costlier to test for sons first. A defense of this conclusion is offered below.

The standard algorithm, when we are dealing with a statement with an inference clue, can be stated as follows:

do
    if L is-father-of NEW then
        attach NEW as son of L
        re-attach sons of L below NEW
        BREAK
    else
        set L to father-of(L)
    endif
enddo

If we modify this to check first for a son of NEW, then we have:

do
    if NEW is-father-of rightmost-son(L) then
        if L is-father-of NEW then
            attach NEW as son of L
            re-attach sons of L below NEW
            BREAK
        else
            set L to father-of(L)
        endif
    else
        set L to father-of(L)
    endif
enddo

Suppose Li is the father of NEW. Under the standard method the following tests will be performed:

NEW is-evidence-for L1 -> FAIL
NEW is-evidence-for L2 -> FAIL
...
NEW is-evidence-for Li-1 -> FAIL
NEW is-evidence-for Li -> SUCCEED
(then re-attach sons of Li)

If we use the modified algorithm, and test for a son of NEW first, then we have:

L1 is-evidence-for NEW -> SUCCEED *
NEW is-evidence-for L1 -> FAIL
L2 is-evidence-for NEW -> SUCCEED *
NEW is-evidence-for L2 -> FAIL
...
Li-1 is-evidence-for NEW -> SUCCEED *
NEW is-evidence-for Li -> SUCCEED
(then re-attach sons of Li)

The tests marked * above all succeed because of the transitive nature of the evidence relationship. That is, since Li-1 is evidence for NEW, anything which is evidence for Li-1 will also be evidence for NEW. Thus, any test for an Lj to be a son of NEW (with j < i) will succeed. From this we can rewrite the modified algorithm. It is essentially:

do
    evidence-oracle call which always succeeds
    if L is-father-of NEW then
        attach NEW as son of L
        re-attach sons of L below NEW
        BREAK
    else
        set L to father-of(L)
    endif
enddo

Thus, this algorithm will use more evidence oracle calls than the standard method of checking NEW to be a son of L first. In fact, trying to find a son of NEW first will take on the order of twice as many calls. In short, we adhere to the basic algorithm's testing of NEW to be a son, before testing to re-attach propositions as sons of NEW, regardless of clue.
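The call counts can be checked mechanically. The small Python sketch below simply simulates the two orderings on a chain of eligible nodes L1 (deepest) through Li, where Li is the true father of NEW; the son-first test is assumed to succeed at every level, per the transitivity argument above. The function and its names are illustrative only.

    def count_oracle_calls(i, son_first):
        # climb L1, L2, ..., Li; Li is NEW's true father
        calls = 0
        for j in range(1, i + 1):
            if son_first:
                calls += 1      # "rightmost son of Lj is-evidence-for NEW":
                                # succeeds at every level, by transitivity
            calls += 1          # "NEW is-evidence-for Lj": fails until j = i
            if j == i:
                break
        return calls

    for depth in (1, 3, 10):
        print(depth, count_oracle_calls(depth, False), count_oracle_calls(depth, True))
    # depth 10: 10 calls versus 20 calls - roughly twice as many, as argued above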
For the taxonomy classes of detail, inference, summary and parallel (conjunction type versus list type (first, secondly, etc.)), we offer the following results: (i) for these classes, it is not effective to alter the tests at a particular node; (ii) it is possible to cut one test to the oracle; (iii) one additional advantage that the connective clues provide is to detect incoherent arguments from a speaker earlier (if the expectations associated with the clue are not satisfied by some prior proposition as required).

5. Re-direction clues and future work

Clues which redirect the processing should have the following relationship to the hybrid: (i) they can alter the order of nodes visited; (ii) unless the clue also has a kind of connective specified, they cannot alter the order of testing at a node or add constraints to the node (i.e., must have sons).

Clues such as "first, secondly, etc." can now be examined as a redirection indication. They are parallel connectives, expecting brothers prior in the argument, but they also expect to connect at a particular location - namely, at the head of the sub-argument tagged by the specific clue that is one earlier in the list (e.g., "thirdly" expects to connect to "secondly"). For future work, this kind of clue word should be examined to lead in to an incorporation of re-direction clues into the processing algorithm.

In general, redirection clues are supposed to provide some insight into which prior proposition relates to the one with the clue. Connective clue words only indicate which relation holds with some prior proposition. It is worth investigating how the proper prior proposition can be selected, especially in exceptional cases which override the eligibles for the hybrid. This research will require a deeper investigation of the semantic representation for propositions used in the analysis.

Another consideration for future work is the role that clue words have on the work of the oracle. In particular, if a connective clue carries certain semantic constraints, how are these precisely communicated to the oracle to facilitate its processing? The answer is obviously influenced by the underlying representation used for the knowledge bases accessed by the oracle and the form of the propositions, when "parsed", made available to the oracle.

We are currently developing an implementation of the algorithm described here to incorporate clues, together with upcoming solutions for handling other kinds of clues, building on the initial implementation of the basic processor, completed in (Smedley 86). Refinement of the clue interpretation rules and the integrated algorithm is another topic for future work.

In (Cohen 83) we offer some motivation for why the interpretation rules as formulated hold for the associated class of clues in the taxonomy. In developing an algorithm for implementation, additional constraints and characterizations may occur. We include a brief discussion of two additional constraints to investigate.

With a clue of the "parallel" category, a brother earlier in the discourse must be found. (Note: it is still coherent to have the father not yet appear in the discourse.) According to our integrated algorithm, it is possible for the proposition with the clue (NEW) to find a father (L) and to re-attach the sons of L. Some modification to this test must be made to prevent all the sons of L from re-attaching, thus leaving no brother for NEW.
However, it is worth studying whether re-attachment of sons of L is in itself a signal of incoherence.

For the "summary" category, more than one son is to be found earlier in the discourse. So, the integrated algorithm should have an additional test to ensure that when sons are re-attached, more than one re-attachment occurs. But what of the case when the son that re-attaches is in effect a tree, so that there are "multiple sons" for NEW, but not all at the same level? One interpretation is that this structure is, as well, incoherent.

The general problem raised by these suggestions for incoherence is how to consider the interaction between different types of clue words, when more than one clue word occurs, either within one sentence or between two sentences which are being tested for a relation (e.g., "So, for example..." or "So, next..."). The interacting occurrences may allow for certain relations to be tested in the integrated algorithm which on the surface seem indicators of incoherence. Studying how multiple constraints may be satisfied is again a topic for future research.

We have provided some new insights into how to incorporate clue interpretation into our model for analyzing arguments, to mesh with the basic processing restrictions. In the process, we have discovered some worthwhile properties of clues: (i) they signal overrides to the processing (for exceptional transmissions); (ii) they provide additional information on where to process or which relationship to find in the prior argument; (iii) indications of which relation to find do not constrain the basic processor, except to rule out one test at the last proposition (possibly) or in cases where the argument is incoherent.

We feel that these results carry over to the case of discourse analysis in general. If coherence constraints for the processing of discourse are postulated, the clues should help constrain further. Other researchers have studied the role of clue words in discourse (e.g., (Reichman 81), (Grosz and Sidner 85), (Polyani and Scha 83)). If one allows a processing of discourse that does not contain clues, one must comment on how the presence of clues alters the basic processing. In this paper, we suggest how a clue interpretation module would constrain the processor for certain connectives, and point to ongoing work on the analysis of redirection clues. We also provide insight into when an argument is considered incoherent, and when exceptional transmissions are recognizable (when the clue exists by necessity). But most of the saving in processing for connectives should come from demanding more specialized semantic relationships (the part tested in our model by the oracle). We have to describe these operations more precisely in future work, to also gain insight into interpreting redirection clues. We feel that current studies of intonation as a clue (Hirschberg and Pierrehumbert 86) can be treated in a similar fashion. We would then propose an analysis in terms of operations saved, on average, when clues indicate where to test for relations.

Acknowledgements

I am indebted to Trevor Smedley for discussions on this research and comments on earlier drafts of this paper. This research was supported by NSERC (Natural Sciences and Engineering Research Council of Canada).

References

(Cohen 81) Cohen, R.; "Investigation of Processing Strategies for the Structural Analysis of Discourse"; Proceedings of ACL81, 1981.
(Cohen 83) Cohen, R.; "A Computational Model for the Analysis of Arguments"; University of Toronto Computer Systems Research Group Technical Report No. CSRG-151, 1983. (Ph.D. thesis)

(Cohen 84) Cohen, R.; "A Computational Theory of the Function of Clue Words in Discourse"; Proceedings of COLING84, 1984.

(Cohen 87) Cohen, R.; "Analyzing the Structure of Argumentative Discourse"; to appear in Computational Linguistics, 1987.

(Grosz and Sidner 85) Grosz, B. and Sidner, C.; "The Structures of Discourse Structure"; Bolt, Beranek and Newman (BBN) Report No. 6097, 1985. (also Report No. CSLI-85-39)

(Hirschberg and Pierrehumbert 86) Hirschberg, J. and Pierrehumbert, J.; "The Intonational Structure of Discourse"; Proceedings of ACL86, 1986.

(Hobbs 76) Hobbs, J.; "A Computational Approach to Discourse Analysis"; City University of New York Department of Computer Sciences Research Report No. 76-2, 1976.

(Hobbs 78) Hobbs, J.; "Why is Discourse Coherent?"; SRI Technical Note No. 176, 1978.

(Polyani and Scha 83) Polyani, L. and Scha, R.; "On the Recursive Structure of Discourse"; in Connectedness in Sentence, Discourse and Text, K. Ehlich and H. van Riemsdijk, eds., 1983.

(Quirk 72) Quirk, R. et al.; A Grammar of Contemporary English; Longmans, 1972.

(Reichman 81) Reichman, R.; "Plain Speaking: A Theory and Grammar of Spontaneous Discourse"; BBN Report No. 4681, 1981.

(Smedley 86) Smedley, T.; "An Implementation of a Computational Model for the Analysis of Arguments: An Introduction to the First Attempt"; University of Waterloo Department of Computer Science Technical Report No. CS-86-26, 1986.
1987
94
692
UNITRAN: An Interlingual Approach to Machine Translation

Bonnie Dorr
M.I.T. Artificial Intelligence Laboratory

Abstract

Machine translation has been a particularly difficult problem in the area of Natural Language Processing for over two decades. Early approaches to translation failed in part because interaction effects of complex phenomena made translation appear to be unmanageable. Later approaches to the problem have succeeded but are based on many language-specific rules. To capture all natural language phenomena, rule-based systems require an overwhelming number of rules; thus, such translation systems either have limited coverage, or poor performance due to formidable grammar size. This paper presents an implementation of an "interlingual" approach to natural language translation.1 The UNITRAN system relies on principle-based descriptions of grammar rather than rule-oriented descriptions.2 The model is based on linguistically motivated principles and their associated parameters of variation. Because a few principles cover all languages, the unmanageable grammar size of alternative approaches is no longer a problem.

1 This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for this work has been provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contracts N00014-80-C-0505 and N00014-85-K-0124, and also in part by NSF Grant DCR-85552543 under a Presidential Young Investigator's Award to Professor Robert C. Berwick.

2 The name UNITRAN stands for UNIversal TRANslator; that is, the system serves as the basis for translation across a variety of languages, not just two languages or a family of languages.

The problem addressed in this paper is to construct a translation model that operates cross-linguistically without relying on complex language-specific rules. Many machine translation systems depend heavily on context-free rule-based systems. For example, the METAL system [Slocum, 1984], [Slocum and Bennett, 1985] is a transfer approach that relies on a large database of rules per language, solely for syntactic processing. The aim of this paper is to present the computational framework for UNITRAN, a syntactic translation system currently operating bidirectionally between Spanish and English, and to put into perspective how the design of the system differs from and compares to other translation designs. The distinction between rule-based (non-interlingual) and principle-based (interlingual) systems will be presented, and the advantages of the principle-based design over other designs will be discussed. Finally, an overview of the UNITRAN design will be given, and a translation example will be shown.

The model that has been constructed is based on abstract principles of the "Government and Binding" (GB) [Chomsky, 1981] framework. The grammar is viewed as a modular system of principles rather than a large set of language-specific rules. Distinctions among languages are handled by settings of parameters associated with the principles. Several types of phenomena are handled without sacrificing cross-linguistic application (table 1 shows some examples).

Table 1: Sentences handled by UNITRAN

Verb Preposing:  ¿Qué vio Juan?  'What did John see?'
Null Subject:    Vio al hombre.  '{He, She} saw the man.'3
(row label not recoverable):  The man that John saw that ate dinner left.  'El hombre a quién Juan vio que comió la cena salió.'

3 The "{.., ..}" notation denotes optionality. Thus, the subject of the sentence may either be he or she.
The system gives the user access to parameter settings, thus enabling additional languages to be handled. Interaction effects of the principles are handled by the system, not the user, thus eliminating the task of spelling out the details of rule applications. Before the source language processing (parsing) takes place, the parameters are set according to the source language values, and are then reset according to the target language values before target language processing (generation) occurs. For example, a "constituent order" parameter is associated with a universal principle that requires a language-dependent ordering of constituents with respect to a phrase. The user should set this parameter to be head-initial for a language like English, but head-final for a language like Japanese.

Translation is primarily syntactic; thus, there is no global contextual "understanding" (the system translates one sentence at a time). Semantics is incorporated only to the extent of locating possible antecedents of pronouns (e.g., linking himself with he in the sentence he dressed himself), and assigning semantic roles (e.g., designating he as "agent-of-action" in he ate dinner) to certain arguments of verbs.4 It should be noted that determining the mapping between arguments of semantically equivalent verbs is nontrivial.5 For example, although the Spanish verb gustar is semantically equivalent to the English verb like, the argument structures of these two verbs differ. The subject of like is the agent, whereas the object of gustar is the agent. Because of such cases of thematic divergence, the argument structure of a source language verb must be matched with the argument structure of the corresponding target language verb before substitution takes place.

4 This is not to say that semantic issues should be ignored in machine translation; on the contrary, semantics may be the next step in the evolution of the translation system presented here.

5 In general, an argument of a verb is a subject or an object of the verb, as specified in the verb's dictionary entry.

This section compares a non-interlingual (rule-based) system to the interlingual (principle-based) design of UNITRAN.

A. The Transfer Approach

A transfer approach to translation has been taken in [Slocum, 1984], [Slocum and Bennett, 1985]. In this approach there is a parser and a generator for each source and target language. In addition, there is a set of transfer components, one for each source-target language pair (see figure 1). The transfer phase is actually a third translation stage in which one language-specific representation is mapped into another. The METAL system currently translates from German into Chinese and Spanish, as well as from English into German.

[Figure 1: The transfer translation approach as found in METAL (1984): a separate parser, transfer component, and generator for each language pair (English-German shown).]

[Figure 2: The interlingual design of UNITRAN: a single parser and generator, shared across languages, mapping the source sentence to an interlingual form and then to the target sentence.]

The malady of the transfer approach is that each of the parsing, generation and transfer components is entirely language-specific.6 Because the system has no access to universal principles, there is no consistency across the components; thus, each component has an independent theoretical and engineering basis.
Rather than abstracting principles that are common to all languages into separate modules that are activated during the translation of any language, each component must independently include all of the information required to translate that language, whether or not the information is universal. For example, agreement information must be encoded into each rule in the METAL system; there is no separate agreement module that can apply to other rules. Consequently, in order to account for a wide range of phenomena, thousands of idiosyncratic rules are required for each language, thus increasing parse time. Furthermore, there is no "rule-sharing" - all rules apply to only one language.

B. The Interlingual Approach

The translation model described in this paper moves away from the language-specific rule-based design, and moves toward a linguistically motivated principle-based design. The approach is interlingual (i.e., the source language is mapped into a form that is independent of any language); thus, there are no transfer modules or language-specific rules. The interlingual approach has been taken in the past [Sharp, 1985]; however, the UNITRAN system differs from Sharp's system in three respects. First, the system uses the same parser and generator for all languages, whereas Sharp's system requires the user to supply a parser for each source language and a generator for each target language. Second, the user is allowed to specify parameter values to the principles - thus modifying the effect of the principles from language to language - while in Sharp's system, the user has limited access to the parameters of the system (e.g., the "constituent order" parameter mentioned in section I is not available for modification). Third, the system generates rules on the fly using linguistically motivated principles; by contrast, in Sharp's system context-free rules (set up for English-like languages) are hardwired into the code; thus, languages (like German or Japanese) that do not have the same order of constituents as English cannot be handled by the system. The result is that the class of languages that can be translated is limited.

The approach presented here more closely approximates a true universal approach since the principles that apply across all languages are entirely separate from the language-specific characteristics expressed by parameter settings.8 Figure 2 illustrates the design of the model.

[Tables 2 and 3: parameter values for Spanish and English, and the effects of those settings, illustrated with examples such as Constituent Order: "The man ate cheese." vs. *"The man cheese ate."7; the Null Subject and Inversion rows are not fully recoverable.]

6 In Slocum's system, the type of grammar formalism is allowed to vary from language to language; however, regardless of the type of grammar formalism employed, each parser is nevertheless based on a large database of language-specific rules. For example, the German parser is based on phrase-structure grammar, augmented by procedures for transformations, and the English parser employs a modified GPSG approach.

7 The equivalent structure for *the man cheese ate (= *el hombre queso comió) is illegal in Spanish also. On the other hand, the sentence is legal for Japanese and other head-final languages.

8 The approach is "universal" only to the extent that the linguistic theory is "universal." There are some residual phenomena not covered by the theory that are consequently not handled by the system in a principle-based manner. For example, the language-specific English rules of it-insertion and do-insertion cannot be accounted for by parameterized principles, but must be individually stipulated as idiosyncratic rules of English. Happily, there appear to be only a few such rules per language since the principle-based approach factors out most of the commonalities across languages.
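Before moving on to the system overview, the parameter mechanism can be made concrete with a minimal Python sketch. The values shown follow the discussion and Tables 2 and 3 (both languages head-initial; null subject TRUE for Spanish, FALSE for English), but the dictionary layout and the function are illustrative assumptions, not UNITRAN's actual data structures.

    PARAMETERS = {
        # values taken from the discussion and Tables 2 and 3
        "spanish": {"constituent_order": "head-initial", "null_subject": True},
        "english": {"constituent_order": "head-initial", "null_subject": False},
    }

    def requires_overt_subject(language):
        # a sentence lacking an overt subject is ruled out only when
        # the null subject parameter is FALSE
        return not PARAMETERS[language]["null_subject"]

    print(requires_overt_subject("spanish"))   # False: "Vio al hombre." is well-formed
    print(requires_overt_subject("english"))   # True:  *"Saw the man." is ruled out

A head-final language like Japanese would differ only in its constituent_order entry; nothing in the parser or generator itself would change, which is the point of the design.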
[Figure 3: The structure-building and linguistic constraint modules of UNITRAN, exchanging control between the source sentence and the target sentence.]

The parser and generator are user-programmable: all of the principles associated with the system are associated with parameters that are set by the user. Thus, the user does not need to supply a source language parser or a target language generator, since these are already part of the translation system. The only requirement is that the built-in parser and generator be programmed (via parameter settings) to process the source and target languages. For example, the user must specify that an English sentence requires a subject, but that a Spanish sentence does not require a subject. This is done by setting the "null subject" parameter to TRUE; by contrast, this parameter must be set to FALSE for English. (For details on the null subject parameter, see [van Riemsdijk and Williams, 1986].) Table 2 shows some examples of the parameters and their settings for Spanish and English. Table 3 describes the effects of each of these parameters respectively.9 A dictionary for each language must also be supplied. The next section describes the system in more detail.

III. Overview of UNITRAN

The translation system consists of three stages: First, the parser takes a morphologically analyzed input and returns a tree structure that encodes structural relations among elements of the source language sentence. (This structure is the "interlingual" representation that underlies both languages.) Second, substitution routines replace the source language constituents with the thematically corresponding target language lexical entries. Third, the generator performs movement and morphological synthesis, thus deriving the target language sentence.

All three translation stages operate in a co-routine fashion: the flow of control is passed back and forth between a structure-building module and a linguistic constraint module. (See figure 3.) At each of the three stages of translation, processing tasks are divided between the two modules as shown in table 4.

[Table 4: Translation tasks of the structure-building and linguistic constraint modules at each stage.]

During the parsing stage the structure-building component, an implementation of the Earley algorithm (see [Earley, 1970]), applies predicting, scanning and completing actions, while the linguistic constraint component, an implementation of GB principles, enforces well-formedness conditions on the structures passed to it. The phrase-structures that are built by the structure-building component are underspecified (i.e., they do not include information about agreement, abstract case, semantic roles, argument structure, etc.); the basis of these structures is a set of templates derived during a precompilation phase according to certain source language parameters.10 The linguistic constraint component eliminates or modifies the underspecified phrase-structures according to principles of GB (e.g., agreement filters, case filters, argument requirements, semantic role conditions, etc.). This design is consistent with several studies that indicate that the human language processor initially assigns a (possibly ambiguous or underspecified) structural analysis to a sentence, leaving lexical and semantic decisions for subsequent processing. (See [Frazier, 1986].) Because the linguistic constraints are available during parsing, the structures built by the structure-building module need not be elaborate; consequently the grammar size need not, and should not, be as large as is found in many other parsing systems.11 Thus, the system avoids computational costs due to large grammar size.

Just prior to the lexical substitution stage, the source language sentence is in an underlying form, i.e., a form that can be translated into any target language according to conditions relevant to that target language. This means that all participants of the main action (e.g., agent, patient, etc.) of the sentence are identified and placed in a "base" position relative to the main verb. At the level of lexical substitution, the structure-building module simply replaces source language words with their equivalent target language translations while the linguistic constraint module applies tests for semantic mismatches, as in the gustar-like example mentioned in section I, and fulfills argument structure requirements.

During generation, the structure-building module transforms the sentence into a grammatically acceptable form with respect to the target language; in English the underlying form was called John would be transformed into the surface form John was called. Tests for grammaticality are made by the linguistic constraint module according to structural and morphological constraints, which are parameterized to satisfy the target language requirements.

9 An asterisk (*) denotes ill-formedness.

10 The precompilation phase is discussed in [Dorr, 1987], but is not the focus of this paper. In a nutshell, it consists of compiling the principles of a GB subtheory (X-bar Theory) concerning phrase structure templates. These templates are generated according to certain parameter settings (e.g., constituent order, choice of specifiers, etc.) of the source language. The precompiled phrase structures are then used to drive the parsing mechanism.

11 In fact, the number of phrase structure templates that are generated per language generally does not exceed 150, since there is a limited number of configurations per language allowed by the principles of X-bar Theory. Thus, the running time of the parser is not subject to the same slow-downs that are found in other systems. (As noted in [Barton, 1984], in a typical parsing system the description of a language is lengthy, thus increasing the running time of many parsing algorithms. For example, the Earley algorithm for context-free language parsing can quadruple its running time when the grammar size is doubled.)

12 Since Spanish is a head-initial language, NP must precede VP. This would not be the case for non-head-initial languages.

IV. A Translation Example

This section demonstrates the parsing, substitution and generation stages for the translation of the following sentence:

(1) Comió una manzana.
    '{He, She} ate an apple.'

A. Parsing Stage

As mentioned in section II.B, there is a "null subject" parameter that is set to TRUE for Spanish. The parser must access this parameter to "know" that a missing subject in (1) does not rule out the sentence (as it would in English). Figure 4 gives snapshots of the parser in action. First the Earley structure-building component predicts that the sentence has a noun phrase (NP) and a verb phrase (VP) (see (a)), the order of which is determined by the "constituent order" parameter at precompilation time.12 The only structures available for prediction by the Earley module are those generated at precompilation time; thus, at this point no further information about the structure is available until the linguistic constraint module takes control.
The constraint module accesses the "null subject" parameter, which dictates that the empty element attached to NP is a subject; the [+pro] (pronominal) feature is associated with the node (see (b)) so that the subject will accommodate both null-subject source languages and overt-subject source languages.13

[Figure 4: Snapshots of the parser in action: (a) NP and VP predicted for comió una manzana; (b) the empty [+pro] subject attached; (c) VP expanded and comer scanned, with the θ-role agent assigned to the empty subject; (d) the completed parse, with una manzana attached as the object and assigned its θ-role.]

In snapshot (c), the Earley module expands VP and scans the first input word comer.14 Now the Earley module cannot proceed any further; thus, the constraint module takes over again. First a semantic role (or θ-role, as it is called in GB Theory) of agent is assigned to the empty subject of the sentence. This information is determined from the dictionary entry of comer, which dictates that this verb requires both an agent (assigned to the subject or external argument of the verb) and a theme (assigned to the object or internal argument of the verb). The dictionary entry for comer is encoded as follows:

(comer: [ext: agent]
        [int: theme]
        V
        (english: eat)
        (french: manger)
        ...)

In order to parse the final two words, the constraint module first predicts that a noun phrase (corresponding to the internal argument of comer) follows the verb. Then the Earley module scans the final two words, thus completing the NP and allowing the constraint module to assign a θ-role of theme to una manzana. Snapshot (d) shows the completed parse. The sentence is now in the underlying (interlingual) form required for the substitution and generation phases. That is, all participants (agent and theme) of the main action (comer) have been identified, and all arguments (subject and object) are in their "base" positions (external and internal) with respect to the verb comer. The equivalent target language sentence can now be derived via the generator (which is programmed to operate on the basis of the target language parameter settings).

13 For example, Italian and Hebrew do not require an overt subject, but English and French do; thus, during a later stage (generation), e[+pro] will either be left as is, or lexicalized to a pronominal form (e.g., he or she in English) that agrees with the main verb.

14 The verb comió has been changed to the infinitive form comer (with person, tense, and number features) via a morphological analysis that precedes the parsing stage. The details of the morphological analysis stage will not be discussed here.

[Table 5: Thematic correspondence (comer and eat) vs. thematic divergence (gustar and like).]
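Before turning to the substitution stage, the dictionary entries and the thematic-divergence test can be pictured with a minimal Python sketch. The entry for comer follows the encoding above, and the gustar/like argument structures follow the discussion in section I; everything else (the dictionary layout, function names, the realignment logic) is an illustrative assumption.

    LEXICON = {
        "comer":  {"ext": "agent", "int": "theme", "english": "eat"},
        "gustar": {"ext": "theme", "int": "agent", "english": "like"},
        "eat":    {"ext": "agent", "int": "theme"},
        "like":   {"ext": "agent", "int": "theme"},
    }

    def substitute(source_verb, ext_arg, int_arg):
        # Map each filler to its thematic role under the source entry,
        # then re-position the fillers according to the target entry.
        src = LEXICON[source_verb]
        tgt_verb = src["english"]
        roles = {src["ext"]: ext_arg, src["int"]: int_arg}
        tgt = LEXICON[tgt_verb]
        return tgt_verb, roles[tgt["ext"]], roles[tgt["int"]]

    print(substitute("comer", "e[+pro]", "una manzana"))
    # ('eat', 'e[+pro]', 'una manzana'): roles correspond, arguments stay in situ
    print(substitute("gustar", "la musica", "Juan"))
    # ('like', 'Juan', 'la musica'): thematic divergence, the arguments swap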
B. Substitution Stage

There are two parts to the substitution stage. First, a mapping between thematic roles takes place. That is, the argument structure of the source language verb comer is examined to determine the position of the agent and the theme for the target language verb eat. In the example presented here, the positioning of agent and theme is the same for both Spanish and English, i.e., the agent is external and the theme is internal in both cases. Thus, the thematic divergence test is not required; the agent and theme are directly translated in situ. However, this direct mapping does not always apply, e.g., in the case of the gustar-like divergence discussed in section I. Table 5 illustrates the distinction between the argument structures of comer and gustar. In such cases of thematic divergence, a more complex mapping is required.

The second part of the substitution stage is lexical replacement. All verbs and arguments are replaced by the corresponding equivalent forms found in the lexical entries of the source language words. The resulting target language underlying form is shown in figure 5.

[Figure 5: The target language underlying form: an empty e[+pro] subject with θ = agent, the verb eat carrying the tense and agreement features of comió, and the object an apple.]

C. Generation Stage

Generation is both structural and morphological. First, structural routines check to see whether movement (e.g., passivization, raising, etc.) is required. Because the sentence is a simple active sentence, no such movement is required. Next, morphological routines take over to generate the correct form of the main verb, and also to realize the subject of the sentence, which up until this point has been empty. In order for this realization (or lexicalization) to take place, the generator must "know" that English requires a subject - otherwise, the subject will incorrectly be left unrealized. Thus, the "null subject" parameter mentioned in section II.B is accessed at generation time. The final target language sentences are:

(2) He ate an apple.
    She ate an apple.

Note that the form e[+pro] has been lexicalized as both he and she to match the person and number of the verb eat. The translation has revealed an ambiguity that exists implicitly in the Spanish source sentence: without context, the subject of the Spanish sentence may be interpreted as either he or she.
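The lexicalization step just shown can also be sketched in a few lines of Python. The pronoun table, the feature encoding, and the function are illustrative assumptions; what the sketch mirrors from the text is that e[+pro] is realized only when the target language's null subject parameter is FALSE, and that the he/she ambiguity is preserved.

    NULL_SUBJECT = {"spanish": True, "english": False}
    PRONOUNS = {("sg", "3p"): ("he", "she")}     # the ambiguity is preserved

    def realize_subject(features, target_language):
        # e[+pro] is left empty in a null-subject language, and is
        # lexicalized to every agreeing pronoun otherwise
        if NULL_SUBJECT[target_language]:
            return ("",)
        return PRONOUNS[features]

    for pronoun in realize_subject(("sg", "3p"), "english"):
        print((pronoun + " ate an apple.").strip().capitalize())
    # He ate an apple.
    # She ate an apple.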
The system described here is based on modular theories of syntax which include systems of principles and parameters rather than complex, language-specific rules. The contribution put forth by this investigation is two-fold: (a) from a linguistic point of view, the investigation allows the principles of GB to be realized and verified; and (b) from a computational perspective, descriptions of natural grammars are simplified, thus easing the programmer's and grammar writer's task. The model not only permits a language to be described by the same set of parameters that specify the language in linguistic theory, but it also eases the burden of the programmer by handling interaction effects of universal principles without requiring that the effects be specifically spelled out.

Currently the UNITRAN system operates bidirectionally between Spanish and English; other languages may easily be added simply by setting the parameters to accommodate those languages.15

15 Experiments with Warlpiri and other "non-standard" languages are currently underway.

Acknowledgements

I would like to thank Bob Berwick, Ed Barton, Sandiway Fong and Dave Braunegg, all of whom provided useful guidance and commentary during this research.

References

[Barton, 1984] Barton, G. Edward, Jr. Toward a Principle-Based Parser. Technical Report AI Memo 788, Massachusetts Institute of Technology, July 1984.

[Chomsky, 1981] Noam A. Chomsky. Lectures on Government and Binding, the Pisa Lectures. Volume 9 of Studies in Generative Grammar, Foris Publications, Dordrecht, 1981.

[Dorr, 1987] Bonnie J. Dorr. UNITRAN: A Principle-Based Approach to Machine Translation. Master's thesis, Massachusetts Institute of Technology, 1987.

[Earley, 1970] Jay Earley. "An Efficient Context-Free Parsing Algorithm." Communications of the ACM, 13, 1970.

[Frazier, 1986] Lyn Frazier. Natural Classes in Language Processing. November 1986. Presented at the Cognitive Science Seminar, MIT.

[Sharp, 1985] Randall M. Sharp. A Model of Grammar Based on Principles of Government and Binding. Master's thesis, The University of British Columbia, October 1985.

[Slocum, 1984] Jonathan Slocum. "METAL: The LRC Machine Translation System." In Proceedings of ISSCO Tutorial on Machine Translation, Lugano, Switzerland, 1984.

[Slocum and Bennett, 1985] Jonathan Slocum and Winfield S. Bennett. "The LRC Machine Translation System." Computational Linguistics, 11:111-121, 1985.

[van Riemsdijk and Williams, 1986] Henk van Riemsdijk and Edwin Williams. Introduction to the Theory of Grammar. MIT Press, Cambridge, MA, 1986.
1987
95
693
Recovering from Erroneous Inferences

Kurt P. Eiselt*
Department of Information and Computer Science
University of California
Irvine, California 92717

Introduction

As we read, we make unconscious decisions about the meaning of ambiguous words, sentences, or passages based on incomplete information. Often those decisions are wrong and we must revise our understanding of the text. For example, consider the following simple story:

Text 1: Fred asked Wilma to marry him. Wilma began to cry.

Interpreting this text requires that a causal relationship between Fred's proposal and Wilma's tears be inferred. One such possible relationship is that Wilma was happy about Fred's proposal and was crying "tears of joy." Another equally likely inference is that Wilma was crying because she was saddened or upset by the proposal.1 Now consider this variation of Text 1:

Text 2: Fred asked Wilma to marry him. Wilma began to cry. She was saddened by the proposal.

Assuming that after processing the first two sentences of Text 2, the text understander has inferred that Wilma is happy, how does the understander resolve that inference with the contradictory third sentence? One solution is to postpone making inferences for as long as possible so that potential conflicts are resolved before any decisions are made. However, this solution becomes less viable as texts increase in length. A better solution is to make inferences as the opportunities arise, then revise initial inferences if later text shows them to be incorrect. This paper describes how one model of text understanding, ATLAST, simplifies the error recovery process by remembering the alternative inferences it could have made but did not, and reconsidering those alternatives when subsequent text suggests they might now be correct.

* This research was supported in part by the National Science Foundation under grants IST-81-20685 and IST-85-12419 and by the Naval Ocean Systems Center under contracts N00123-81-C-1078 and N66001-83-C-0255.

1 Experimental evidence indicates that either interpretation is equally likely when this text is presented to human subjects (Granger & Holbrook, 1983).

Most models of language understanding fail to address the problem of recovery from erroneous inferences, but there have been exceptions. Granger's ARTHUR (1980) was able to supplant incorrect inferences by maintaining a map of pointers to all inferences generated during the processing of a text, whether or not they appeared in the final representation. O'Rorke (1983) designed a story understander called RESUND that used non-monotonic dependencies to correct false assumptions. Norvig's FAUSTUS (1983) temporarily stored rejected inferences using a process similar to the retention process discussed in this paper. FAUSTUS represented inferences as frames, and rejected frames were stored in a separate data base in case later text forced revision of earlier decisions.

ATLAST's ability to revise its interpretation of a text depends in large part on the use of a relational network to represent knowledge. ATLAST uses marker-passing to search its relational network for paths that connect meanings of open-class words from the input text. A single path is a chain of nodes, representing objects or events, connected by links, corresponding to relationships between the nodes. Any nodes in a path which are not explicitly mentioned in the text are events or objects that are inferred; therefore, these paths are called inference paths.
A set of inference paths that joins all words in the text into a connected graph represents one possible interpretation of the text. In this respect ATLAST resembles a number of other models of text understanding that utilize marker-passing or spreading activation (e.g., Charniak, 1983; Cottrell, 1984; Hirst, 1984; Quillian, 1969; Riesbeck & Martin, 1986; Waltz & Pollack, 1985). The paths that make up the current interpretation are called active paths.

For any given text, however, there may be a great number of possible interpretations, many of which are nonsensical. The problem then is determining which of the possible interpretations provides the best explanation of the text. ATLAST deals with this problem by applying inference evaluation metrics. These metrics are used to compare two competing inference paths and select the more appropriate one. Two inference paths compete when they connect the same two nodes in the relational network via different combinations of links and nodes. The path that fits better with the current interpretation is activated (i.e., it becomes part of the interpretation). The other path is de-activated but not discarded. Instead, that path is retained in order to facilitate error recovery as described below. The choice of one inference path over another is made as soon as ATLAST discovers that the two paths compete; ATLAST does not postpone inference decisions. As the marker-passing search mechanism finds more paths, ATLAST constructs an interpretation consisting of those paths which survive the evaluation process. When the marker-passing and evaluation processes end, the surviving active paths make up the final interpretation of the text.

In addition to the assumption of a specific representation scheme, ATLAST relies on two key processing features for error recovery: the ability to remember inference paths that it originally decided should not be part of its interpretation of the input text, and a mechanism for recognizing when these rejected paths should be reconsidered. Without a mechanism for knowing when and how to re-evaluate the retained paths, the retention feature alone provides no benefit.

There are two ways in which the re-evaluation of a retained path can be initiated. The first is through direct rediscovery of the retained path by the search process. Because the passing of markers begins in different places at different times during the processing of text, the same inference path may be discovered (or more appropriately, rediscovered) more than once. If a rediscovered path is not currently part of ATLAST's interpretation of the text (i.e., the path has been discovered earlier, rejected by the evaluation metrics, but retained), that path is re-evaluated against the competing path which is part of the interpretation. This rediscovery process initiates reconsideration of some of the retained paths, but it is not dependent upon retention because these paths would be reconsidered even if they had not been retained.
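Before turning to the second, indirect route, the activate/retain bookkeeping just described can be pictured with a minimal Python sketch. The shared-nodes metric and the fallback preference for older paths (the "perseverer" setting used in the example below) follow the text; the data layout (paths as dictionaries with "ends", "nodes" and a discovery "time") and the function names are illustrative assumptions.

    active, retained = [], []

    def shared_nodes(path):
        others = {n for p in active if p is not path for n in p["nodes"]}
        return len(set(path["nodes"]) & others)

    def winner(a, b):
        sa, sb = shared_nodes(a), shared_nodes(b)
        if sa != sb:
            return a if sa > sb else b              # prefer the better-connected path
        return a if a["time"] <= b["time"] else b   # otherwise prefer the older path

    def consider(path):
        rival = next((p for p in active
                      if p["ends"] == path["ends"] and p is not path), None)
        if rival is None:
            if path not in active:
                active.append(path)                 # no competition: simply activate
            return
        best = winner(path, rival)
        loser = rival if best is path else path
        if loser in active:
            active.remove(loser)
        if loser not in retained:
            retained.append(loser)                  # de-activated but not discarded
        if best in retained:
            retained.remove(best)
        if best not in active:
            active.append(best)

Direct rediscovery then amounts to calling consider() again on a retained path; the indirect route described next simply generates additional consider() calls for the sub- and superpaths of whatever was just (re)discovered.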
In this way, ATLAST attempts to limit re-evaluation to those paths that are cur- rently relevant .2 Without the ability to force re-evaluation of paths rejected early in processing but not rediscovered later, ATLAST’s final interpretation probably will be in- correct. Indirectly initiating the re-evaluation of previously rejected inference paths is essential to ATLAST’s error re- covery capability and is dependent upon inference reten- tion. An example of ATLAST processing a simple but poten- tially misleading text will illustrate the program’s capacity for error recovery. This section describes the operation of ATLAST as it arrives at an interpretation for a simplified version of Text 2: Text 3: Fred proposed. Wilma cried. Wilma was sad. Although this is a simplified version of the original text (because ATLAST’s syntactic abilities are limited), the rel- evant inference decisions should be the same for both texts. In the following example, many of the steps are left out for the sake of brevity. The corresponding memory structure is shown in Figure 1. As ATLAST reads the first sentence from left to right, it finds a path from ProPosed to Fred. At this point, there is no candidate interpretation for the text, thus no competing inference paths, so this path becomes the first member of the set of active paths: path0 from PRQPOSE-MARRINE to FRED: PROPOSE-MURIAGE has the role-filler GENERIC-HUMAN GENERIC- I has the instance FRED activating path0 While processing the second sentence of the text, AT- LAST finds a path denoting a causal relationship between proposed and cried. This path represents the inference that the crying results from a state of happiness which in turn results from the proposal of marriage. This path is added to the set of active paths: Some retained paths, though, will not be rediscovered, but the inferences made lFrom later text may change the in- terpretation in such a way that these paths now should be included. ATLAST uses a method of “piggy-backing” the re-evaluation of these paths onto the evaluation of paths which are directly discovered or rediscovered by the search 2Cons6r&ing reconsideration to just those paths that completely contain or are compleltely contained by the (re)discovered path has proven to be too restrictive for another sample text. In that case, one retained path which should have been part of the final representation was neither directly nor indirectly chosen for m-evaluation. Relaxing 6he constraints allowed ATEAST to recover while still avoiding the re-evaluation of every retained path. Eiselt 541 Figure 1: The organization of nodes in the memory structure for Text 3. path4 from CRY-TEARS to PROPOSE-MARRIAGE: CRY-TEARS is a result of HAPPY-STATE HAPPY-STATE is a result of HAPPY-EVENT HAPPY-EVENT has the instance PROPOSE-MARRIAGE activating path4 Next, ATLAST discovers a path that provides an alternate interpretation to that offered by the previous path. 
During this example, ATLAST was instructed to give preference to older paths over newer paths when no other evaluation metric was able to make a decision3 Thus, the newer path is not added to the set of active paths: path5 from CRY-TEARS to PROPOSE-MARRIAGE: CRY-TEARS is a result of SAD-STATE SAD-STATE is a result of SAD-EVENT SAD-EVENT has the instance PROPOSE-MARRIAGE path4 older than path5 de-act ivat ing path5 ATLAST now finds a path that connects cried to Wilma and adds it to the set of active paths: path9 from CRY-TEARS to WILMA: CRY-TEARS is a result of SAD-STATE SAD-STATE is an instance of HUMAN-MENT-STATE HUMAN-MENT-STATE is an attribute of GENERIC-HUMAN GENERIC-HUMAN has the instance WILMA act ivat ing path9 The interpretation now contains three paths: path 0, path 4, and path 9. There is a semantic contradiction among the active paths at this time in that path 9 is an in- ference that Wilma cried because she was sad while path 4 says that the tears were shed due to a state of happiness 3This tendency to prefer older inferences over newer ones results from the work on differences in human inference decision behavior noted in an earlier footnote. The theory that was proposed to explain the differences suggests that some subjects prefer older inferences when faced with a choice between competing inferences, while other subjects prefer newer inferences. The people who prefer older inferences are called “perseverers” while those who prefer newer inferences are called %ecencies.n ATLAST is capable of modeling either hind of behavior by changing one of its evaluation metrics; it recovers from erroneous inferences in either mode. induced by the marriage proposal. ATLAST does not no- tice the contradiction because the two paths are not com- peting paths. This is the best interpretation based on the paths discovered so far. ATLAST then finds a competing path from Wilma to cried. This new path, path 11, shares more nodes with other active paths than does its compet- ing path, path 9; this is one of the criteria employed to decide which path explains more of the input. In this case, path 11 explains more input so it is added to the set of active paths and path 9 is moved to the set of retained paths: path11 from CRY-TEARS to WILMA: CRY-TEARS is a result of HAPPY-STATE HAPPY-STATE is an instance of HUMAN-MENT-STATE HUMAN-MENT-STATE is an attribute of GENERIC-HUMAN GENERIC-HUMAN has the imstance WILMA path11 has more shared nodes than path9 de-activating path9 activating pathif As the final sentence is processed, ATLAST discovers a path connecting proposed to sad. This path is added to the set of active paths. In addition, this new path has four superpaths among the set of retained paths, and these paths are re-evaluated. 0ne of these superpaths, path 5, is now preferred over the active path 4 because it is rein- forced by path 15 ( i.e., it contains the active path 15 as a subpath). 
Path 4 is moved from the set of active paths to the retained paths, and path 5 is moved from the retained paths to the active paths:

path15 from SAD-STATE to PROPOSE-MARRIAGE:
    SAD-STATE is a result of SAD-EVENT
    SAD-EVENT has the instance PROPOSE-MARRIAGE
also reconsidering: (path13 path10 path5 path2)
activating path15
path11 shorter than path13
de-activating path13
path4 shorter than path10
de-activating path10
path5 has more shared nodes than path4
de-activating path4
activating path5
path0 shorter than path2
de-activating path2

The previous step demonstrates the need for inference path retention. Path 5 has been found directly several times prior to this point. Each time, the evaluation metrics have determined that path 4 fits better with the context. Now that path 15 is part of that context, path 5 is determined to be more appropriate than path 4. Had path 5 not been retained after being rejected earlier, it could not have been reconsidered at this time, nor would it ever have been reconsidered, because the search process will not find path 5 again. If path 5 had not been retained, path 4 would incorrectly end up in the final representation of the story. In fact, this is what happens when ATLAST's retention capability is disabled while processing Text 3.

The principle of retaining rejected inference paths is inspired by experimental work which has led to a theory of lexical disambiguation called conditional retention (Granger, Holbrook, & Eiselt, 1984). According to this theory, lexical disambiguation is an automatic process in which all meanings of an ambiguous word are retrieved, the meaning most appropriate to the preceding context is chosen, and the other meanings are temporarily retained. In the case where the ambiguous word appears within a short text, the meanings are retained until the end of the text. Should later text contradict the initially chosen meaning, the retained meanings for that word are reconsidered in light of the updated context, and a new meaning is selected without repeating the lexical retrieval process. The theory of conditional retention thus offers an explanation of how readers can recover from an incorrect choice of word meaning without reprocessing the text. Because the choice of a word meaning will affect the inferences which are made during the understanding of a text, the theory of conditional retention has implications for making inference decisions at levels other than the lexical level. Following this assumption, ATLAST uses the inference retention mechanism described in Sections II and III to recover from both incorrect lexical inferences as well as erroneous pragmatic inferences.

Continuing with the example, ATLAST finds a new path from cried to sad and adds it to the active paths. This new path also forces the reconsideration of several retained superpaths, including path 9, which is now preferred over its old competitor, path 11, because path 9 now shares more nodes with other active paths than does path 11.

path18 from SAD-STATE to CRY-TEARS:
    SAD-STATE has the result CRY-TEARS
also reconsidering: (path16 path13 path9 path14 path7 path6 path3)
activating path18
path15 shorter than path16
de-activating path16
path11 shorter than path13
de-activating path13
path9 has more shared nodes than path11
de-activating path11
activating path9
Path 9 is returned to the set of active paths and path 11 becomes a retained path, again illustrating the usefulness of inference retention.

ATLAST then discovers the last new path to be added to the set of active paths. This path connects Wilma and sad.

path20 from SAD-STATE to WILMA:
    SAD-STATE is an instance of HUMAN-MENT-STATE
    HUMAN-MENT-STATE is an attribute of GENERIC-HUMAN
    GENERIC-HUMAN has the instance WILMA
activating path20

The marker-passing mechanism will uncover nine more new paths to be considered and rediscover many others; these paths will in turn force the re-evaluation of a number of retained subpaths and superpaths of those paths. However, none of these paths will be incorporated into the final interpretation of the text, which consists of paths 0, 5, 9, 15, 18, and 20.

However, the theory of conditional retention is by no means widely accepted, and the criticisms of conditional retention should be taken into consideration when evaluating ATLAST's utility as a cognitive model. One argument against conditional retention is a large body of experimental evidence which shows that, almost immediately after a meaning of an ambiguous word has been selected, the alternate meanings seem as if they had never been recalled (e.g., Seidenberg, Tanenhaus, Leiman, & Bienkowski, 1982). This has been interpreted by some as proof that retention does not occur. On the other hand, these experiments were not specifically designed to look for evidence of retention. Also, as shown by Holbrook, Eiselt, Granger, and Matthei (1987), the results of some experiments (e.g., Hudson & Tanenhaus, 1984) can be interpreted in such a way as to support the theory of conditional retention, though not conclusively. The one experiment to date that was designed to look for retention (Granger et al., 1984) also yielded inconclusive results.

A frequent and deserved criticism of the conditional retention theory is that it offers no concrete answer to the question of how long alternate choices are retained; it says only that the choices are retained until the end of the text if the text is short. The experiment described by Granger et al. (1984) did not address this issue, but new work with ATLAST may suggest some answers. ATLAST has been modified so that a path is given a time stamp indicating the time at which it was added to the set of retained paths. In addition, a limit has been placed on the amount of time that a path can be retained without being reconsidered. With these modifications, the minimum duration of retention that is sufficient to allow ATLAST to arrive at the correct interpretation of a given text can be determined empirically. This in turn will enable us to investigate, for example, the possibility of a correlation between the duration of retention and structural cues such as clause boundaries. If interesting predictions do arise from this work, it may be possible to test these predictions in the laboratory with human subjects.

Another problem with the conditional retention theory is that it assumes human readers recover from errors without rereading the text.
However, as Carpenter and Daneman (1981) demonstrate through studies of eye fixations of human subjects while reading, there are texts that cause a reader to backtrack when a semantic inconsistency is discovered in an ambiguous text. Carpenter and Daneman propose that a human reader's error recovery heuristics include checking previous words that caused processing difficulty, and that this heuristic might utilize a memory trace of previous word-sense decisions, though this is not the only interpretation they offer. Thus, while ATLAST differs in many ways from the model of Carpenter and Daneman, especially in regard to the issue of reprocessing the input text, the latter model at least recognizes the plausibility of the principle of retention in explaining a reader's ability to recover from incorrect inferences made while reading misleading text.

The principle of retaining rejected inference paths within the larger framework of a relational network provides a simple but effective mechanism for recovering from erroneous inferences during text understanding, but only if there is a way to locate and re-evaluate the retained paths at the appropriate times. From a practical perspective, the principle of inference retention could be incorporated into new or existing text understanding systems in order to enable them to correct erroneous decisions. From a cognitive modeling perspective, however, the jury is still out on the issue of inference retention. While a model like ATLAST demonstrates the plausibility of the theory, only psycholinguistic experiments designed specifically to test for retention will be able to confirm or deny the validity of the theory.

References

Carpenter, P.A., & Daneman, M. (1981). Lexical retrieval and error recovery in reading: A model based on eye fixations. Journal of Verbal Learning and Verbal Behavior, 20, 137-160.

Charniak, E. (1983). Passing markers: A theory of contextual influence in language comprehension. Cognitive Science, 7, 171-190.

Cottrell, G.W. (1984). A model of lexical access of ambiguous words. Proceedings of the National Conference on Artificial Intelligence, Austin, TX.

Granger, R.H. (1980). When expectation fails: Towards a self-correcting inference system. Proceedings of the First Annual National Conference on Artificial Intelligence, Stanford, CA.

Granger, R.H., & Holbrook, J.K. (1983). Perseverers, recencies, and deferrers: New experimental evidence for multiple inference strategies in understanding. Proceedings of the Fifth Annual Conference of the Cognitive Science Society, Rochester, NY.

Granger, R.H., Holbrook, J.K., & Eiselt, K.P. (1984). Interaction effects between word-level and text-level inferences: On-line processing of ambiguous words in context. Proceedings of the Sixth Annual Conference of the Cognitive Science Society, Boulder, CO.

Hirst, G. (1984). Jumping to conclusions: Psychological reality and unreality in a word disambiguation program. Proceedings of the Sixth Annual Conference of the Cognitive Science Society, Boulder, CO.

Holbrook, J.K., Eiselt, K.P., Granger, R.H., & Matthei, E.H. (1987). (Almost) never letting go: Inference retention during text understanding. In S.L. Small, G.W. Cottrell, & M.K. Tanenhaus (Eds.), Lexical ambiguity resolution in the comprehension of human language. Los Altos, CA: Morgan Kaufmann (to appear).

Hudson, S.B., & Tanenhaus, M.K. (1984). Ambiguity resolution in the absence of contextual bias.
Proceedings of the Sixth Annual Conference of the Cognitive Science Society, Boulder, CO.

Norvig, P. (1983). Six problems for story understanders. Proceedings of the National Conference on Artificial Intelligence, Washington, DC.

O'Rorke, P. (1983). Reasons for beliefs in understanding: Applications of non-monotonic dependencies to story processing. Proceedings of the National Conference on Artificial Intelligence, Washington, DC.

Quillian, M.R. (1969). The teachable language comprehender: A simulation program and theory of language. Communications of the ACM, 12(8), 459-476.

Riesbeck, C.K., & Martin, C.E. (1986). Direct memory access parsing. In J.L. Kolodner & C.K. Riesbeck (Eds.), Experience, memory, and reasoning. Hillsdale, NJ: Lawrence Erlbaum Associates.

Seidenberg, M.S., Tanenhaus, M.K., Leiman, J.M., & Bienkowski, M. (1982). Automatic access of the meanings of ambiguous words in context: Some limitations of knowledge-based processing. Cognitive Psychology, 14, 489-537.

Waltz, D.L., & Pollack, J.B. (1985). Massively parallel parsing: A strongly interactive model of natural language interpretation. Cognitive Science, 9(1), 51-74.
Interpretation in Generation

Eduard H. Hovy
Information Sciences Institute of USC¹
4676 Admiralty Way
Marina del Rey, CA 90292-6695
Telephone: 213-822-1511
HOVY@VAXA.ISI.EDU

Abstract

The computer maxim garbage in, garbage out is especially true of language generation. When a generator slavishly follows its input topics, it usually produces bad text. In order to find more appropriate forms of expression, generators must be given the ability to interpret their input topics. Often, newly formed interpretations can help generators achieve their pragmatic goals with respect to the hearer. Since interpretation requires inference, generators must exercise some control over the inference process. Some general strategies of control, and some specific techniques geared toward achieving pragmatic goals, are described here.

Simply put, the generator's task, for a given sentence topic, is to find a form of expression - either a syntactic rule or a phrase - that will enable it to select and to order aspects of the topic in order to build a sentence. The straightforward approach is to define a fixed correspondence between topic representation types on the one hand and grammatical rules and lexical elements on the other. This approach has a flaw: the results are invariably bad or boring. How bad, of course, depends on the representation, but anything detailed enough to be useful for other purposes, such as learning or diagnosing, simply does not make great prose in practice. A good example is furnished by the following text, in which the generator's input consists of a list of topics, where each topic describes some episode in a fight between two people². Straightforward generation produces:

(a) FIRST, JIM BUMPED MIKE ONCE, HURTING HIM. THEN MIKE HIT JIM, HURTING HIM. THEN JIM HIT MIKE ONCE, KNOCKING HIM DOWN. THEN MIKE HIT JIM SEVERAL TIMES, KNOCKING HIM DOWN. THEN JIM SLAPPED MIKE SEVERAL TIMES, HURTING HIM. THEN MIKE STABBED JIM. AS A RESULT, JIM DIED.

This example is an extreme case because it contains only two main representation types, ACTION and STATE, which can relate in only one way, RESULT. When the generator knows only one way to express this combination, what more can we hope for?

Correcting this inflexibility seems straightforward. Though there is nothing wrong with the sentence form used above, namely,

    [ [say-time #TIME] [say-sentence #ACTION] , [say-participle #STATE] ]

one can add to the grammar a few more sentence forms expressing actions and their results, more time words, and more verbs, and then make the generator cycle through its options whenever it encounters a choice point:

(b) FIRST, JIM BUMPED MIKE ONCE AND HURT HIM. THEN MIKE SMACKED JIM, HURTING HIM. NEXT, JIM HIT MIKE ONCE. THE RESULT WAS THAT HE KNOCKED HIM DOWN. AFTER THAT, MIKE SMACKED JIM SEVERAL TIMES AND KNOCKED HIM DOWN. JIM SLAPPED MIKE SEVERAL TIMES, HURTING HIM. AFTER THAT, MIKE STABBED JIM. AS A RESULT, JIM DIED.

¹This work was done while the author was at Yale University Computer Science Department, 2158 Yale Station, New Haven, CT 06520. This work was supported in part by DARPA, monitored by the ONR under contract N00014-82-K-0149. It was also supported by AFOSR contract F49820-87-C-0005.

²The input was produced by the JUDGE program (see [Bain 86] and [Bain 84]), a case-based expert system that models the sentencing behavior of a judge. As input, JUDGE accepts the representation of a fight - a set of actions and resulting states - and as output it produces a set of interpretations of each action.
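A minimal sketch of this "cycle through the options" fix follows; the time words, sentence forms, and function names are all invented for illustration and do not reflect PAULINE's or JUDGE's actual code. Each ACTION/RESULT topic is realized by taking the next time word and sentence form in round-robin fashion, which yields surface variety of the kind seen in (b) without any deeper change:

    ;;; Hypothetical sketch: naive generation by cycling through options.

    (defvar *time-words* '("FIRST," "THEN" "NEXT," "AFTER THAT,"))

    (defvar *result-forms*
      ;; Each form turns (action-clause result-clause) into one sentence.
      (list (lambda (act res) (format nil "~a AND ~a." act res))
            (lambda (act res) (format nil "~a, ~a." act res))
            (lambda (act res) (format nil "~a. THE RESULT WAS THAT ~a." act res))))

    (defun nth-cyclic (n list)
      (nth (mod n (length list)) list))

    (defun say-episode (i action-clause result-clause)
      "Realize the I-th ACTION/RESULT topic, cycling through the options."
      (format nil "~a ~a"
              (nth-cyclic i *time-words*)
              (funcall (nth-cyclic i *result-forms*) action-clause result-clause)))

    ;; (say-episode 0 "JIM BUMPED MIKE ONCE" "HE HURT HIM")
    ;;   => "FIRST, JIM BUMPED MIKE ONCE AND HE HURT HIM."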
Yet this produces no real improvement! Clearly, simply extending the number of phrase patterns for each representation type does not solve the problem. When we speak, we do a lot more than simply cast input topics in different forms; for example, we might say:

(c) JIM DIED IN A FIGHT WITH MIKE.

(d) AFTER JIM BUMPED MIKE ONCE, THEY FOUGHT, AND EVENTUALLY MIKE KILLED JIM.

(e) AFTER JIM BUMPED MIKE ONCE, THEY FOUGHT, AND EVENTUALLY HE WAS KNOCKED TO THE GROUND BY MIKE. HE SLAPPED MIKE A FEW TIMES. THEN MIKE STABBED JIM, AND JIM DIED.

Illustrated this way, the problem seems rather simple. Obviously, the solution is to group together similar enough topics, where the similarity criterion can be varied depending on external factors, and then to generate the groupings instead of the individual actions. Grouping together contiguous actions of similar force, PAULINE³ produced variants (c), (d), and (e). (In the first variant, all actions were grouped together; in the second, all actions more violent than bumping but less violent than killing were accepted; and in the third, the grouping resulted from defining four levels of violence: bumping, hitting and slapping, knocking to the ground, and killing.)

Clearly, though it improves the JUDGE examples, the technique of grouping actions by levels of force is very specific and not very useful. However, when "group" is used in a wider sense to mean "interpret", this technique becomes both difficult and interesting, and provides a very powerful way to increase the expressive flexibility and text quality of a generator. So the questions are: what interpretation/grouping criteria are general and still useful? When and how should the generator interpret input topics? How should it find appropriate grouping criteria?

An Example of Interpretation

In a second example, PAULINE produces a number of versions describing a hypothetical primary election between Carter and Kennedy during the 1980 Democratic Presidential nomination race. In this election, Kennedy narrows Carter's lead⁴. When PAULINE is given as input the outcome for each candidate, straightforward generation produces:

(f) IN THE PRIMARY ON 20 FEBRUARY CARTER GOT 1860 VOTES. KENNEDY GOT 2186.

However, PAULINE can notice that both outcomes relate to the same primary, and can say instead:

(g) IN THE PRIMARY ON 20 FEBRUARY, KENNEDY BEAT CARTER BY 335 VOTES.

(or any of a number of similar sentences using "beat", "win", and "lose"). But why stop there? If PAULINE examines the input further, it can notice that Carter's current delegate count is greater than Kennedy's, that this was also the case before the primary, and that this primary is part of a series that culminates in the final election, the nomination.

³All the texts in this paper were generated by PAULINE (Planning And Uttering Language In Natural Environments), a program that can realise a given input in a number of different ways, depending on how its pragmatic goals are set. An overview description can be found in [Hovy 87a, 87b]. The program consists of over 12,000 lines of T, a Scheme-like dialect of LISP developed at Yale.

⁴This event is represented using about 80 elements of a representation scheme similar to Conceptual Dependency [Schank 72, 75, 82], defined in a property-inheritance network such as described in [Charniak, Riesbeck & McDermott 80].
In other words, PAULINE can recognize that what happened in this primary was:

(h) IN THE PRIMARY ON 20 FEBRUARY, KENNEDY NARROWED CARTER'S LEAD BY GETTING 2186 VOTES TO HIS 1860.

If we want good text from our generators, we have to give them the ability to recognize that "beat" or "lose" or "narrow lead" can be used instead of only the straightforward sentences (f). This ability is more than a simple grouping of the two outcomes. It is an act of generator-directed inference, of interpretation, forming out of the two topics a new topic, perhaps one that does not even exist in memory yet. And the new topic is not simply a generator construct, but is a valid concept in memory. The act of determining that "beat" is appropriate is the act of interpreting the input as an instance of BEAT - denying this is to imply that "beat" can logically be used where BEAT is not appropriate, which is a contradiction. This is not an obvious point; one could hold that the task of finding "beat" to satisfy a syntactic or pragmatic goal is a legitimate generator function, whereas the task of instantiating it and incorporating it into memory is not. However, it is clearly inefficient for a generator to interpret its input, say it, and then simply forget it again! - especially when there is no principled reason why generator inferences should be distinct from other memory processes. Thus, after interpretation, the newly built instance of the concept should be added to the story representation, where it can also be used by other processes, or by the generator the next time it tells the story. In this way the content of memory can change as a result of generation. This is consistent with the fact that you often understand a topic better after you have told someone about it: the act of generating has caused you to make explicit and to remember some information you didn't have before.

Immediately, this view poses the question: which process is responsible for making these inferences? The two possible positions on this issue reflect the amount of work one expects the generator to do. According to the strict minimalist position - a position held by most, if not all, generator builders today - the generator's responsibility is to produce text that faithfully mirrors the input topics with minimal deviation: each sentence-level input topic produces a distinct output sentence (though perhaps conjoined with or subordinated to another). Naturally, this inflexible attitude gave rise to the JUDGE texts (a) and (b). To circumvent this problem, in practice, most generator builders employ in their programs a number of special-purpose techniques, such as sophisticated sentence specialists that are sensitive to the subsequent input topics. Of course, this is a tacit acknowledgement that the strict position does not hold. However, on renouncing the hard-line position, one must face the question: how much generator-directed inference are you prepared to do?

I do not believe that a simple answer can be given to this question. The issue here, I think, is economic: a tradeoff exists between the time and effort required to do interpretation (which includes finding candidate interpretations, making them, and deciding on one) on the one hand, and the importance of flowing, good text on the other. Greater expense in time and effort produces better text. Thus pragmatic criteria are appropriate for treating this question.
Hence a reasonable answer is: I'll do as much inference as I can do, given the available time, the pragmatic constraints on what I want the hearer to know, and the richness of my memory and my lexicon. Of these three factors, the most difficult is clearly the pragmatic constraints on what the hearer is to be told. When does the hearer need to know the details of the topic? What is the effect of telling him only interpretations? Or of telling him both? The answer can be summarized as: if you can trust him to make the interpretations himself, then all you need give him are the details. Thus, if the hearer is a political pundit who is following the nomination race with interest, then clearly (f) is better, since he can draw the conclusion without difficulty, and, in addition, he has precise numerical information. If, in contrast, the hearer has only minimal knowledge about or interest in the nomination procedure, then (h) is better, since it doesn't burden him with details and require him to do the interpretation himself. What must you say, however, if the hearer is interested and has a limited amount of knowledge - say, he is a student of the political process - or if he is knowledgeable but unlikely to make the right interpretation - say, he is a strong Kennedy supporter, whereas you are pro-Carter? In both these cases you must ensure that the hearer understands how you expect him to interpret the facts. So you tell him details and the interpretations:

(i) KENNEDY NARROWED CARTER'S LEAD IN THE PRIMARY ON 20 FEBRUARY. HE GOT 2186 VOTES AND CARTER GOT 1850.

In summary, you must be as specific as the hearer's knowledge of the topic allows: if you are too specific he won't understand, and if you are too general you run the risk of seeming to hide things from him, or of being uncooperative. In the first case, you violate the goal to be intelligible, and in the second, you violate the goal to avoid unacceptable implications. In either case, you violate Grice's maxim of quantity to say neither more nor less than is required (see [Grice 75]).

The problem in interpretation is to find valid interpretations easily and quickly.

Bottom-Up Interpretation: One solution to this problem is to try inferences directly on the input topics. This bottom-up method of interpretation uses the structure of the memory network itself. In PAULINE, bottom-up interpretation inferences reside in memory and the lexicon as part of the definitions of concept types. In order to enable bottom-up interpretations, links are defined from concept types to the interpretations in which they could take part. (This scheme forms a concept representation network slightly different from the usual multi-parent schemes used in, say, [Stefik & Bobrow 85], [Charniak, Riesbeck & McDermott 80], and [Bobrow & Winograd 77].) Of course, this is not a wonderful solution - it depends on the right links being defined beforehand - but it is practical in limited domains. The program collects possible inferences from the type of each input topic.

Top-Down Interpretation: Another way to find interpretations is top-down: to run only the inferences likely to produce results that serve the generator's pragmatic goals. Potentially useful inferences can be explicitly included in the plans, and can be tried on candidate sentence topics whenever they are collected. Since interpretation is a powerful way of slanting the text, the pragmatic goals to communicate opinions are an eminently suitable source of guidance⁵.
Indeed, many of these goals can only be achieved through interpreting the input topics appropriately.

PAULINE's strategies for slanting its text include a number of top-down interpretation inferences. For example, one strategy for an unsympathetic action is: interpret as confrontation - state that the actor you oppose (X) did some action (ACT) as a confrontation with some actor you support (Y). This rule can be represented as:

    IF   X has the goal that some actor B must do some action C
    AND  Y has the goal that B must do C'
    AND  C' conflicts with C
    AND  X's action ACT forces B to do C' (disregarding Y)
    THEN interpret ACT as a confrontation

In order to interpret the input topics as instances of some concept, the interpretation process must recognize when the topics (or some of them) conform to the definition (or part of the definition) of the concept. Thus, either concepts must be defined in such a way as to allow a general process to read their definitions, or inferences must exist that fire when a definition is matched - in other words, the antecedent of an inference is the definition and the consequent asserts the existence of the new concept. PAULINE was implemented with the second approach, using patterns called configurations. A configuration is the description of the way in which a collection of concepts must relate to one another to form a legitimate instance of a high-level concept. It contains a pattern, in the form of a list of triplets (type ?var pattern), where

- type is either the type (in the property-inheritance memory network) of the concept currently to be matched, or a variable ?var which must have been encountered before.
- ?var is either 0, or a variable ?var by which the current concept will be identified later in the match, or two such variables that have to be bound to different concepts for a match.
- pattern is a list of (aspect config) pairs, where the filler of each aspect must recursively match the config, which is again a pattern.

Configuration patterns obviously depend on the exact representations used. For example, the configuration for the concept BEAT is

    (VOTE-OUTCOME ?X                ; ?X is someone's VOTE-OUTCOME
      (instance (ELECTION ?Y))      ; in some primary ?Y,
      (relations
        (REL-GREATER 0              ; and it is greater than
          (conc1 (?X))              ; another VOTE-OUTCOME
          (conc2 (VOTE-OUTCOME 0    ; in ?Y
                   (instance (?Y)))))))

which means: some concept is a VOTE-OUTCOME; its aspect RELATIONS contains a GREATER relation of which the greater part is that same concept and the smaller part is another VOTE-OUTCOME in the same primary. Thus, since Kennedy's outcome resulted from a primary and it is greater than Carter's outcome, the two form an instance of BEATing. Most configurations are considerably more complex.

During its planning stage, PAULINE gathers likely interpretation inferences, both top-down and bottom-up, and then, using a simple pattern-matcher, applies their configurations to the candidate topics and collects all the matches (a sketch of such a matcher is given below). Its strategies for selecting configurations are based upon the pragmatic factors knowledge, slant, and time, described above. If an instance of a newly made interpretation does not yet exist in memory, PAULINE creates one and indexes it following the memory organization principles described in [Schank 82], so that it can be found again and used in future.

⁵How PAULINE is given opinions and some of its techniques for slanting are described in [Hovy 86].
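The following Common Lisp sketch shows one way such a matcher for (type ?var pattern) triplets might look. It is an illustrative reconstruction under stated assumptions - the concept structure, the accessor names, and the simplified handling of variables (no type inheritance, no distinct-variable pairs) are mine, not PAULINE's:

    ;;; Hypothetical sketch of configuration matching; not PAULINE's code.

    (defstruct concept type aspects)   ; aspects: alist of (aspect . concept)

    (defun var-p (x)
      (and (symbolp x) (char= (char (symbol-name x) 0) #\?)))

    (defun config-match (config concept bindings)
      "Match a (type ?var . pattern) triplet against CONCEPT; return the
    updated bindings (an alist from ?vars to concepts) or :FAIL."
      (destructuring-bind (type var &rest pattern) config
        ;; TYPE is either a concept type or a previously bound variable.
        (cond ((var-p type)
               (unless (eq concept (cdr (assoc type bindings)))
                 (return-from config-match :fail)))
              ((not (eq type (concept-type concept)))   ; no inheritance here
               (return-from config-match :fail)))
        ;; Bind ?VAR to this concept (0 means "no variable").
        (unless (or (null var) (eql var 0))
          (push (cons var concept) bindings))
        ;; Each (aspect config) pair must match the aspect's filler.
        (dolist (pair pattern bindings)
          (destructuring-bind (aspect subconfig) pair
            (let ((filler (cdr (assoc aspect (concept-aspects concept)))))
              (when (null filler) (return-from config-match :fail))
              (setf bindings (config-match subconfig filler bindings))
              (when (eq bindings :fail) (return-from config-match :fail)))))))

    ;; Matching the BEAT configuration against Kennedy's outcome might then
    ;; be invoked as (config-match beat-config kennedy-outcome '()), which
    ;; returns bindings for ?X and ?Y on success, or :FAIL.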
A final example: PAULINE generates over 100 versions (neutral, slanted in various ways, in various styles) of an episode that took place at Yale in April 1986. The episode requires about 120 representation elements. A neutral version is:

(j) IN EARLY APRIL, A NUMBER OF STUDENTS BUILT A SHANTYTOWN ON BEINECKE PLAZA. THE STUDENTS WANTED YALE UNIVERSITY TO DIVEST FROM COMPANIES DOING BUSINESS IN SOUTH AFRICA. ONE MORNING, OFFICIALS DESTROYED THE SHANTYTOWN AND POLICE ARRESTED 76 STUDENTS. FINALLY, THE UNIVERSITY ALLOWED THE STUDENTS TO REBUILD IT, AND ANNOUNCED THAT A COMMISSION WOULD GO TO SOUTH AFRICA IN JULY TO EXAMINE THE SYSTEM OF APARTHEID.

When generating with the goal to slant the input, PAULINE uses top-down inferences such as those mentioned above to interpret the input topics. For example, an anti-protester text is:

(k) IN EARLY APRIL, A SMALL NUMBER OF STUDENTS [WERE INVOLVED IN A CONFRONTATION]a WITH YALE UNIVERSITY OVER YALE'S INVESTMENT IN COMPANIES DOING BUSINESS IN SOUTH AFRICA. THE STUDENTS [TOOK OVER]b BEINECKE PLAZA AND CONSTRUCTED A SHANTYTOWN NAMED WINNIE MANDELA CITY [IN ORDER TO FORCE]c THE UNIVERSITY TO DIVEST FROM THOSE COMPANIES. YALE REQUESTED THAT THE STUDENTS ERECT IT ELSEWHERE, BUT THEY REFUSED TO LEAVE. LATER, AT 6:30 AM ON APRIL 14, OFFICIALS HAD TO DISASSEMBLE THE SHANTYTOWN. FINALLY, YALE, [BEING CONCILIATORY]d TOWARD THE STUDENTS, NOT ONLY PERMITTED THEM TO RECONSTRUCT IT, BUT ALSO ANNOUNCED THAT A COMMISSION WOULD GO TO SOUTH AFRICA IN JULY TO EXAMINE THE SYSTEM OF APARTHEID.

PAULINE made the interpretations confrontation (a), appropriation (b), coercion (c), and conciliation (d), none of which were contained in the original input story.

As generators become larger and more complex, and as they are increasingly used together with other programs, they should use the capabilities of those programs to further their own ends. Therefore, we should study the kinds of tasks that generators share with other processes and the purposes generators require them to fulfill. The considerations and strategies described here determine some of the kinds of demands a generator can be expected to place on a general-purpose inference engine. And even with PAULINE's limited inferential capability, the program can greatly enhance the quality of its text and the efficiency of its communication of non-literal pragmatic information.

References

1. Bain, W.M., Toward a Model of Subjective Interpretation, Yale University Technical Report no. 324, 1984.

2. Bain, W.M., Case-Based Reasoning: A Computer Model of Subjective Assessment, Ph.D. dissertation, Yale, 1985.

3. Bobrow, D.G. & Winograd, T., An Overview of KRL, a Knowledge-Representation Language, in Cognitive Science vol 1 no 1, 1977.

4. Charniak, E., Riesbeck, C.K.
& McDermott, D.V., Artificial Intelligence Programming, Lawrence Erlbaum Associates, 1980.

5. Grice, H.P., Logic and Conversation, in The Logic of Grammar, Davidson, D. & Harman, G. (eds), Dickinson Publishing Company, 1975.

6. Hovy, E.H., Integrating Text Planning and Production in Generation, IJCAI Conference Proceedings, 1985.

7. Hovy, E.H., Putting Affect into Text, Proceedings of the Eighth Conference of the Cognitive Science Society, 1986.

8. Hovy, E.H., 1987a, Some Pragmatic Decision Criteria in Generation, in Natural Language Generation: Recent Advances in Artificial Intelligence, Psychology, and Linguistics, Kempen, G. (ed), Kluwer Academic Publishers, 1987.

9. Hovy, E.H., 1987b, Generating Natural Language under Pragmatic Constraints, in Journal of Pragmatics, vol XI no 6, 1987, forthcoming.

10. Schank, R.C., 'Semantics' in Conceptual Analysis, in Lingua vol 36 no 2, 1972, North-Holland Publishing Company.

11. Schank, R.C., Conceptual Information Processing, North-Holland Publishing Company, 1975.

12. Schank, R.C., Dynamic Memory: A Theory of Reminding and Learning in Computers and People, Cambridge University Press, 1982.

13. Stefik, M. & Bobrow, D.G., Object-Oriented Programming: Themes and Variations, in AI Magazine Vol 6, No 4, 1986.
Word-Order Variation in Natural Language Generation¹

Aravind K. Joshi
Department of Computer and Information Science
Room 555 Moore School
University of Pennsylvania
Philadelphia, PA 19104

ABSTRACT

In natural language generation the grammatical component has to be systematically interfaced to the other components of the system, for example, the planning component. Grammatical formalisms can be studied with respect to their suitability for generation. The tree adjoining grammar (TAG) formalism has been previously studied in terms of incremental generation. In this paper, the TAG formalism has been investigated from the point of view of its ability to handle word-order variation in the context of generation. Word-order cannot be treated as a last minute adjustment of a structure; this position is not satisfactory cognitively or computationally. The grammatical framework has to be able to deal with the word-order phenomena in a way such that it can be systematically interfaced to the other components of the generation system.

I Introduction

Natural language generation is a very active area of research in AI natural language processing. In principle, comprehension and generation can be viewed as inverses. However, there are some interesting asymmetries. In comprehension, it may be possible, under certain circumstances, to eschew structural (grammatical) information by the use of other knowledge sources. However, in generation, no matter how much higher level knowledge is available, it is not possible to bypass the grammatical component, as the output has to be well-formed and acceptable to the user². What this implies is that the grammatical component has to be systematically interfaced to the other components of the generation system, for example, the planning component.

Grammatical formalisms can be viewed as neutral with respect to comprehension or generation, or they may be investigated from the point of view of their suitability for comprehension and generation separately. Although the view that grammatical formalisms can be neutral with respect to generation or comprehension is viable from a purely theoretical perspective, we do not think it is justified cognitively and computationally. This is because comprehension may be largely heuristic but generation is not. Therefore, generation requires a systematic interaction between the grammatical component and the planning component, as we have stated above. A particular aspect of this interface is a kind of flexibility that leads to incremental generation, including the possibility of detaching part of the representation produced by the planner for the generation of a sentence, and using it for the generation of the next sentence, without affecting the well-formedness of the first sentence.

In an earlier paper, Joshi (1986) has investigated the Tree Adjoining Grammar (TAG) formalism from the point of view of generation. This formalism has been investigated extensively by Joshi and his co-workers (e.g., Joshi, Levy and Takahashi (1975), Joshi (1983, 1985), Kroch and Joshi (1986), Vijay-Shanker, Weir, and Joshi (1986), and other works).

¹This work was partially supported by ARO grant DAA29-84-9-0027, NSF grants MCS-82-07294 and DCR-84-10413, and DARPA grant N00014-85-K-0018. I want to thank Mark Steedman, K. Vijay-Shanker, and D. Weir for their valuable comments.

²One might think that the use of templates would avoid this problem; however, this approach is very limited and certainly fails to provide textual coherence.
In Joshi (1986), the TAG formalism was studied from the point of view of its suitability for incremental generation. McDonald and Pustejovsky (1985) have also investigated the TAG formalism for generation with respect to the generation system of McDonald (1980). In Joshi (1986), the problem of word-order variation in generation was raised and briefly discussed. The main goal of this paper is to investigate, in some detail, the TAG formalism from the point of view of its ability to handle word-order variation in the context of generation. We will also discuss the relationship of our work to other formalisms, particularly with respect to the extent to which they can deal with the issues discussed in this paper. Specifically, we will consider context-free grammar based formalisms such as the Generalized Phrase Structure Grammar (GPSG), and the Functional Unification Grammar (FUG) of Kay, which has been used in some generation systems (e.g., in the TEXT system of McKeown (1985)). In Section 2, we will give a brief introduction to the TAG formalism together with some examples. In Section 3 we will deal with the problem of word-order variation.

The main characteristics of TAG's are as follows: 1) TAG is a tree generating system. It consists of a finite set of elementary trees (elaborated up to preterminal (terminal) symbols) and a composition operation (adjoining) which builds trees out of elementary trees and trees derived from elementary trees by adjoining. A TAG should be viewed primarily as a tree generating system, in contrast to a string generating system such as a context-free grammar or some of its extensions. 2) TAG's factor recursion and dependencies in a novel way. The elementary trees are the domain of dependencies, which are statable as co-occurrence relations among the elements of the elementary trees and also relations between elementary trees. Recursion enters via the operation of adjoining. Adjoining preserves the dependencies. Localization of dependencies in this manner has both linguistic and computational significance. Such localization cannot be achieved directly in a string generating system. 3) TAG's are more powerful than context-free grammars, but only "mildly" so. This extra power of TAG is a direct corollary of the way TAG factors recursion and dependencies.

A tree adjoining grammar (TAG) is a pair G = (I, A) where I and A are finite sets of elementary trees. The trees in I will be called the initial trees and the trees in A, the auxiliary trees. A tree α is an initial tree if it is of the form in (1), and a tree β is an auxiliary tree if it is of the form in (2). [The tree schemas (1) and (2) are not reproduced here.] That is, the root node of α is labelled S and the frontier nodes are all terminals, and the root node of β is labelled X where X is a non-terminal and the frontier nodes are all terminals except one which is labelled X, the same label as that of the root. The node labelled X on the frontier will be called the "foot node" of β. The internal nodes are non-terminals. The initial and the auxiliary trees are not constrained in any manner other than as indicated above. The idea, however, is that both the initial and auxiliary trees will be minimal in some sense. An initial tree will correspond to a minimal sentential tree (i.e., without recursion on any non-terminal) and an auxiliary tree, with root and foot node labelled X, will correspond to a minimal recursive structure that must be brought into the derivation, if one recurses on X.
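Before turning to the composition operation, it may help to fix one concrete (and entirely hypothetical) encoding of these trees. The structure and function names below are assumptions made for illustration; they are not part of the TAG formalism:

    ;;; One plausible encoding of TAG elementary trees (names are assumptions).

    (defstruct node
      label        ; S, NP, VP, V, ... or a terminal symbol
      children     ; list of NODEs; NIL at the frontier
      foot-p)      ; T only on the foot node of an auxiliary tree

    (defun frontier (tree)
      "Left-to-right list of frontier nodes of TREE."
      (if (null (node-children tree))
          (list tree)
          (mapcan #'frontier (node-children tree))))

    (defun auxiliary-tree-p (tree)
      "Simplified check: exactly one foot node on the frontier,
    labelled the same as the root (terminal checks omitted)."
      (let ((feet (remove-if-not #'node-foot-p (frontier tree))))
        (and (= (length feet) 1)
             (eq (node-label (first feet)) (node-label tree)))))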
We will now define a composition operation called adjoining (or adjunction), which composes an auxiliary tree β with a tree γ. Let γ be a tree containing a node (address n) labelled X and let β be an auxiliary tree whose root node is also labelled X. (Note that β must have, by definition, a node (and only one such) labelled X on the frontier.) Then the adjunction of β to γ at node n will be the tree γ' that results when the following operation is carried out: 1) The sub-tree of γ at n, call it t, is excised; 2) The auxiliary tree β is attached at n; 3) The sub-tree t is attached to the foot node of β. Figure 1 illustrates this operation. [Figure 1, showing γ, β, and the resulting tree γ', is not reproduced here.]

The intuition underlying the adjoining operation is a simple one, but the operation is distinct from other operations on trees that have been discussed in the literature. In each elementary tree, any two nodes (or any set of nodes) are dependent simply by virtue of the fact that they belong to the same tree. Of course, some specific dependencies are of interest. These are indicated by co-indexing the nodes (or showing a link between nodes).

2.2 Derivation in a TAG

Although we shall not describe formally the notion of derivation in a TAG, we want to give the reader a more precise understanding of the concept than (s)he might form from the description of the operation of adjoining. Adjoining is an operation defined on an elementary tree, say γ, an auxiliary tree, say β, and a node (i.e., an address) in γ, say n. Thus, every instance of adjunction is of the form "β is adjoined to γ at n," and this adjunction is always and only subject to the local constraints associated with n. Although we very often speak of adjoining a tree to a node in a complex structure, we do so only for convenience. Strictly speaking, adjoining is always at a node in an elementary tree; and, therefore, it is more precise to talk about adjoining at an address in an elementary tree. More than one auxiliary tree can be adjoined to an elementary tree as long as each tree is adjoined at a distinct node. After all these auxiliary trees are adjoined to the elementary tree, only nodes in the auxiliary trees are available for further adjunction.

Now suppose that α1 is an initial tree and that β1, β2, ... are auxiliary trees in a TAG, G. Then the derivation structure corresponding to the generation of a particular tree and the corresponding string in L(G) might look as follows: α1 is an initial tree. β6, β9 and β10 are adjoined at nodes n1, n2, and n3 respectively in α1, where n1, n2, and n3 are all distinct nodes. β1 and β3 are adjoined to β6 at nodes m1 and m2 respectively. Again, m1 and m2 are distinct. β9 has no further adjunctions, but β8 is adjoined to β10 at node p1. This is a top-down derivation; a bottom-up derivation can be defined also, and it is more appropriate for the multicomponent adjunction discussed in Kroch and Joshi (1986). Note that the derivation structure D implicitly characterizes the surface tree that is generated by it. D also serves as the basis for defining a compositional semantic interpretation (Vijay-Shanker 1986). In this way the derivation structure can be seen as the basic formal object constructed in the course of sentence generation.
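Continuing the hypothetical encoding above, the three-step adjunction just described can be written directly. Again this is an illustrative sketch, not a reference implementation; it mutates β in place (a real system would copy it), and addresses are lists of 1-based child positions, so node 2.1 is (2 1):

    ;;; Adjoining BETA into GAMMA at an address (continuing the sketch above).

    (defun subtree-at (tree address)
      "ADDRESS is a list of child positions, e.g. (2 1) for node 2.1."
      (if (null address)
          tree
          (subtree-at (nth (1- (first address)) (node-children tree))
                      (rest address))))

    (defun adjoin-tree (gamma beta address)
      "Destructive sketch of adjunction: 1) excise the subtree t at the
    address, 2) attach BETA there, 3) attach t at BETA's foot node."
      (let* ((target  (subtree-at gamma address))
             (excised (copy-node target))               ; step 1: excise t
             (foot    (find-if #'node-foot-p (frontier beta))))
        (assert (eq (node-label beta) (node-label target)))
        ;; step 2: BETA's structure now hangs at the adjunction node
        (setf (node-children target) (node-children beta))
        ;; step 3: the foot node is identified with the root of t
        (setf (node-children foot) (node-children excised)
              (node-foot-p foot) nil)
        gamma))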
Associated with the derivation structure will be two mappings, one to a surface syntactic tree and the other to a semantic interpretation, as below:

    surface tree <--- derivation structure ---> semantic interpretation

2.3 Some Linguistic Examples

We will give some simple linguistic examples that illustrate the applicability of the TAG formalism to the description of natural language phenomena. Let G = (I, A) be a TAG where I is the set of initial trees and A is the set of auxiliary trees. We will list only some of the trees in I and A, those relevant to the derivation of our illustrative sentences. Rather than introduce all these trees at once, we shall introduce them as necessary. [The tree diagrams for the initial trees α1 (a transitive frame, DET N V DET N) and α2 (an intransitive frame, DET N V) are not reproduced here.]

Tree α1 corresponds to a "minimal sentence" with a transitive verb, as in (1), and α2 corresponds to a minimal sentence with an intransitive verb, as in (2):

(1) The man met the woman.
(2) The man fell.

Initial trees as we have defined them require terminal symbols on the frontier. In the linguistic context, the nodes on the frontier will be preterminal lexical category symbols such as N, V, A, P, DET, etc. The lexical items are inserted for each of the preterminal symbols as each elementary tree enters the derivation. Thus, we generate the sentence in (1) by performing lexical insertion on α1, yielding: (1) The man met the woman.

As we continue the derivation by selecting auxiliary trees and adjoining them appropriately, we follow the same convention, i.e., as each elementary tree is chosen, we make the lexical insertions. Thus in a derivation in a TAG, lexical insertion goes hand in hand with the derivation. This aspect of TAG is highly relevant to generation and is discussed in Joshi (1986). Each step in the derivation selects an elementary tree together with a set of appropriate lexical items. Note that as we select the lexical items for each elementary tree we can check a variety of constraints, e.g., agreement and subcategorization constraints on the set of lexical items. These constraints can be checked easily because the entire elementary tree that is the domain of the constraints is available as a single unit at each step in the derivation.

As the reader will have noted, we require different initial trees for the sentences "John fell" and "the man fell" because the expansion of NP is different in the two cases. Since the structure of these two sentences is otherwise identical, we cannot be content with a theory that treats the two sentences as unrelated. In a fully articulated theory of grammar employing the TAG formalism, the relationships among initial trees are expressed in an independent module of the grammar that specifies the constraints on possible elementary (initial or auxiliary) trees. And we can even provide schemata or rules for obtaining some elementary structures from others. In any case, these rules are abbreviatory. The most important point regarding the source of elementary trees is that using the TAG formalism allows us to treat as orthogonal the principles governing the construction of minimal syntactic units and those governing the composition of these units into complex structures.

Next we give a topicalized structure and a WH-question structure. [The tree diagrams α3 (topicalization, with a fronted PP "to Mary") and α4 (a WH-question, with COMP marked +wh) are not reproduced here.] These correspond to

(3) To Mary John gave a book.
(4) Who met Mary.

Thus far all of the initial trees that we have given correspond to minimal root sentences.
We now a= S’ * a= S' PRO to invite Mary I ei who PRO to invite Tree aA will be used in the derivation of sentences like (4) and (5): (i) John persuaded Bill PRG to invite Mary. (5) John tried PRO to invite Mary. a8 will be used m deriving sentences like (6): (6) Who did John try to invite? (PRG stands for the missing subject). Now we introduce auxiliary trees that will adjoin to the above infinitival initial trees to produce complete independent sentences: j.j= S’ I 4’ S’ I 4” S’ I A A A N,P A N,P A AUX N,P VP N V NP S’ N V S’ NV S John persuaded Bill s’ John tried S did John try S The reader can easily check that the sentences (4) - (6) will be derived if the appropriate auxiliary trees in (4) are adjoined at the starred nodes of the initial trees in (3). that Now let us introduce some auxiliary trees generate sentences with relative clauses: will allow us to 552 Natural language 4= A, NP A COMP S I A Nq. NP VP COMP s I A NP, NP VP who met Mary who met Mary Tree Ph can be used to build sentences with subject relatives, as in (6); and j34 can be used to build sentences with object relatives, as in (7): (6) The boy who met Mary left and (7) The boy who Mary met left. 3 -order variation It is well known that all languages allow for word-order variation, but some allow for considerably more than others, the extreme case bein the so-called “free” word-order. The linguistic relevance o B word-order variation for generation is as follows. First of all, the different word orders (if not all) carry some ragmatic information (topic/new information, for example). 5 he component shoul 8 uestion is at what point the grammatical decide on the word order and what point it should reorder the words (or planner can certainly give hrases) to reflect this order. The tK e pragmatic information to the % rammatical component lon before all the descri tions are uilt or even tR lanned. In a f structures, e AG, if we work with e ementary P information imm e&Y mmatical component can use this ‘ately and select the ap structure. The correct word-order will tR ropriate elementary en be preserved as the sentence is incrementally built. Even if a particular word order has no pragmatic significance, it is difficult to see how the complex patterns can be realized just by reordering the terminals after the sentence is built because man not realizable b patterns are $~m;Illlty of %i! ‘ust permuting the siblings o r some node. G to specify a given word order at the 3 between structure level appears to provide a better interface e planner and the rammatrcal corn h nent. We will now describe how word-or er variation can e handled in a %” TAG. This feature of TAG is a direct conse uence of the extended domain of locality (as corn ared to 2F of TAG and the o c ration of adjoining. FU 8 6) withTA . shares the first aspect We will now take the elementary trees of a TAG as elementary domination strwctures (initial structures and auxiliary structures) over which linear precedences can be defined. In fact, from now on we will define an elementary structure (ES) as consisting of the domination structure and linear precedences 3 Thus, a below is the domination structure of an ES 1 NP VP2 2.1 v NP 2.2 The addresses for nodes serve to identify the nodes. They are not to be taken as defining the tree ordering. They are just labels for the nodes. 
Let LPY be a set of associated with a linear precedence statements where if x c y (x precedes y) then (1) x and y are nondominating nodes (i.e., x does not dominate not dominate x) and (2) if x dominates z1 and y B and y does ominates z2, then z1 c 3. Note that c is partial. Note that LPY corresponds exactly to the standard tree ordering. Given LPY the only terminal string that is possible with the ES (a, WY), where a is the domination structure and k?L.PF is the linear precedence statement. (1) m, v 9 If instead of LJ’Y, we have LPF First note that in 1 c 2.1, 2.1 is not a sister of 1. We can define precedences between non-sister nodes because the precedences are defined over a, the domain of locality. Once again, the only terminal string that is possible with the ES (a, LPF) is (2) Nq v N$? but there is an important difference between (a, Lpy) and (a, LPF), which will become clear when we examine what happens when an auxiliary tree is adjoined to a. Before we discuss this point, let us consider %he idea of factoring constituency (domination) relationships and linear order is basicahy similar to the lDLP format of GPSG (Gazdar, Klein, Pullum, and Sag (1985)). However, there are important differences. First the domain of locality is the elementary structures (and not the rewrite rules or local trees), secondly, we have defined the LFJ for each elementary structure. Of course, a compact description of LP over a set of elementary structures can be easily defined, but when it is compiled out, it will be in the form we have stated here. The lD/LP format of GPSG cannot capture the range of word-order variation permitted by the TAG framework. FUG can capture some word-order variations beyond what ID/LF’ format can do, but it cannot capture the Ml range of variations that TAG can. LP;=+ i.e., there are no precedence constraints. In this case, we will get all six possible orderings (GPSG with the IDLP format cannot do this)P ‘?here are many improvements of the IIYLP frmwork of GPSG that have been suggeste4 e.g., Uzkomit (1986). joshi 553 LP’(= LP; ULPP = V NP, NP2, and NP2 NP, V Let us return to (a, WY) and (a, LIP;). As we have seen before, both ES give the same terminal string. Now let us consider an ES which is an auxiliary structure (analogous to an auxiliary tree) with an associated LP, LPP. LPP=[1<2] 1G -VP2 When B is adjoined to a at the VP node in a. We have We have put indices on NP and V for easy identification. Nl?,, V,, NP2 belong to a and V2 belongs to p. If we have LPF associated with a and LPP with LP’s are updated in the obvious manner. p, after adjoining the II 1<2 LPp 2.2.1 < 2.2.2 LPP = [2.1 < 2.21 The resulting LP for y is LPY= LP; ULPP = Thus y with LPT gives the terminal string (3) N-q V2 v, NP, Instead of LPY, if we associate LPT with p to a as before, the updated LP’s are LPP = c2.1 < 2.21 The resulting LP for y is a then after adjoining Thus y with LPY gives the terminal strings (4) NP, V,V, NP, (5) V2 ml V, NP, (4) is the same as (3), ,but in (5) V2 has ‘moved’ past NPl. 
If we adjoin β once more, to γ at the node VP at 2, then with LP¹α associated with α we will get

(6) NP₁ V₃ V₂ V₁ NP₂

and with LP²α associated with α we will get

(7) NP₁ V₃ V₂ V₁ NP₂
(8) V₃ NP₁ V₂ V₁ NP₂
(9) V₃ V₂ NP₁ V₁ NP₂

Let us consider another LP for α, say

    LP⁴α = [1 < 2.1]

Then we have the following terminal strings for α (among others):

(11) NP₁ NP₂ V

It can be easily seen that given LP⁴α associated with α and LPβ associated with β, with LPβ = ∅, after two adjoinings with β we will get (among other strings)

(12) NP₁ V₃ V₂ V₁ NP₂
(13) NP₁ V₃ V₂ NP₂ V₁
(14) NP₁ V₃ NP₂ V₂ V₁
(15) NP₁ NP₂ V₃ V₂ V₁

and, of course, several others. In (13), (14), and (15), NP₂, the complement of V₁ in α, has 'moved' past V₁, V₂, and V₃ respectively.

Karttunen (1986) discusses several problems centering around word-order variations in Finnish in the context of a categorial unification grammar. In particular, he deals with auxiliaries and verbs taking infinitival complements. The word order variations lead to dependent elements arbitrarily far apart from each other (i.e., long distance dependencies). These long distance dependencies are reminiscent of the long distance dependencies due to topicalization or wh-movement (which we have seen in Section 2.3). There is a difference however. In topicalization or wh-movement, the 'moved' element occupies a grammatically defined position in the structure. The 'moved' element in a long distance dependency of the type Karttunen is concerned about does not move into any structurally defined slot; it 'moves' freely in the host clause.
Thus we R ips are ave a uniform treatment of these two kinds o dependencies; however, the crucial difference between these two kinds (as pointed by Karttunen) clearly shows up in our framework. The elementary trees of TAG have four properties that can be well matched to incremental building of concentual structures. These properties ax: local de&ability of all dependencies, local@ of feature checking, locality of the Af ar ument structure, and preservation of argument structure. 1 these properties have to do with the constituent and “movement” of constitutents to grammatical y defined r structure 4” sitions, as in WI&movement and topicalization. In Joshi 1986) it was shown how these propertres help in incremental generation. Our discussion in this section shows that the word-order variation (although distinct from constituent movement as described above) can be localized to elementary trees. The word-orders are specified for structures and adjoining preserves them. Th de ctures, as described in th am maintain the iwcrementa ding word-order variatioxa. References [Appelt, 19851 Ap It, D.E. r Planning En hsh Sentences. In tudies in Natural Lan F recessing, Cambridge University Press, Cambri B uage ge, 1985. [Appelt, 19831 A pelt, D.E. Telegram. In Proceedings IJCAI 1981 Karlsruhe, August 1983. [Gazdar et al, 19851 Gazdar, G., Klein, J.M.E., Pullum, G.K., and Sag, LA. Generalized Phrase Structure Grammar. Blackwell, Oxford, 1985. [Joshi, 19851 Joshi, A.K. I-Iow much context- sensitivity is required to provide reasonable structural descriptions: tree adjoining grammars. In D. Dowty, L. Karttunen, and A. ZwicQ, eds. Natural Language Processing: Psycholinguisttc, Computational And Theoretical Perspectives. Cambridge University Press, New York, 1985. (Originally presented in May 1983 at the Workshop on Natural Language Parsing at the Ohio State University.) [Joshi, 19861 Joshi, A.K. The relevance of tree adjoining grammar to 2nd International Wor b eneration. In Proceedings of the The Netherlands, 1986. hop on Generation, Nljmegen, [Karttunen, 19861 Karttunen, L. Radical lexicalism. To annear in M. Baltin and A. Kroch, eds., hr,w Cdnceptions of Phrase Structure, -MIT , Press, Cambridge, Massachusetts, 1987. [Kroch and Joshi, 19851 Kroch, A. and Joshi, A.K. Linguistic significance of tree adjoinin rammars. To 69 appear in Linguistics and Philosophy, 1 8 . &roch and Joshi, 19861 Kroch, A. and Joshi, A.K. Analyzing extraposition in a tree adjoining grammar. To appear in G. Huck and A. Ojeda, eds., Syntax and Semantics (Discontinuous Constituents), Academic Press, 1986. [McDonald, 19801 MchDFald,. D.D. *Natural language generation. 0 . dissertation, MIT, Cambridge, Mass., 1980. [McDonald and Pusteiovskv, 19851 McDonald, D.D. and Pustejovsky; J. TAG’s as a grammatical formalism for generation, In Proceedings of the 23rd.Association f?;8Zomputattonal Lmgutsttcs (ACL), Chicago, June, . [McKeown, 19851 McKeown, K.R. Text Generation. In Studies in Natural Language Processing, Cambridge University Press, 1985. [Pullum, 19821 Pullum, G.K. Free word order and phrase structure rules. In J. Pustejovsky and P. Sells, HIS., Proceedings of NELS (North Eastern Linguistic Society), Amherst, Massachusetts, 1982. [Shieber, 19861 Shieber, S. Unification Based A cp proaches to Grammars, University of Chicago Press, hicago, Illinois, 1986. [Uzkoreit, 19861 Uzkoreit, II. Constraints on order. Technical Report CSLI-86-46, Center for Study. of kg7 % uage and Information (CSLI), Stanford Umvernty, . 
[Vijay-Shanker and Joshi, 1985] Vijay-Shanker, K. and Joshi, A.K. Some computationally significant properties of tree adjoining grammars. In Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, Chicago, June, 1985.
Kalish and Cox
1 Burlington Rd., Bedford, Massachusetts 01730
Mail Stop A040

This research was sponsored by Rome Air Development Center under contract F19628-86-C-0001.

Abstract

The KING KONG linguistic interface was developed at MITRE to be a portable natural language interface for expert systems. It is possible to port KING KONG from one expert system to another without writing more than a modest amount of code, regardless of backend architecture. We describe porting it from its original expert system backend to another expert system which was radically different in domain and representation.

I. Introduction

The KING KONG linguistic interface was developed at MITRE to be a portable natural language interface for expert systems. KING KONG has two characteristics that make it portable: it has a modular architecture, including domain-independent syntactic and morphological components, and a knowledge representation scheme which strongly adheres to the principle of declarative representation. Because of these two characteristics, it is possible to port KING KONG from one expert system to another, without writing more than a modest amount of code, regardless of backend architecture, in a matter of months. In this paper we describe what makes KING KONG portable and how we ported it from its original expert system backend to another expert system which was radically different in domain and in knowledge representation.

II. Background

In 1985 MITRE began development of a natural language interface for task-oriented expert systems that would have the following properties: it would make no assumptions about either the domain or the architecture of the backend, it would require no changes in the design of the backend, it would minimize coding requirements on the programmer doing the port, and it would be easily extensible. KING KONG, the interface developed to achieve these goals, was first implemented as the front end to KRS, an air mission planning program also developed at MITRE. In 1986, KONG developers selected ISFI, an automatic programming system with a radically different architecture from KRS, to serve as the testbed backend in an experiment designed to demonstrate KONG's portability. Within six months, one member of the group was able to port the interface to ISFI without having to write more than a small number of ISFI-specific accessing functions. KING KONG now serves as an interface to these two expert systems; we anticipate porting it to other systems in the future.

III. The two expert system backends

III.A. KRS

KING KONG has been used as an interface to the KRS air mission planning program for one year. The user interacts with KRS by typing to a Lisp window at the top of which is a picture of a mission template. As KRS fills in slots in response to the user's commands, the mission template on the screen is also filled in. The KRS database stores information about the domain as FRL frames; KRS plans using a generate and test algorithm in which all possible plans are generated and checked by constraints. For details about KING KONG as it runs on KRS see Zweben86.

III.B. ISFI

A programmer using the ISFI system is required to specify the constraints between the significant variables in a problem. The ISFI system then performs the problem solving necessary to derive a computation and write the corresponding computer program in a chosen target language. Lisp, C, and ADA are supported at present.
A problem specification in the ISFI system consists of a network of structures representing objects and constraints, with the system relying on a knowledge base containing information about certain classes of objects and types of constraints. An ISFI user specifies a network using the available object classes and constraint types, and ISFI derives a computational path through the network by propagating along constraints. In propagating through a network, the ISFI system makes use of a number of problem solving devices, among them inheritance of network structure stored with the object classes in the knowledge base, and application of network transformation rules. The ISFI system has so far succeeded in producing computer programs in the domains of numerical computation, graphics display and databases (see Cleveland86, Brown85).

User interactions with the ISFI system generally fall into one of two categories: specifying a problem in a way that will allow the ISFI system to derive a computation, and, less frequently, adding to ISFI's knowledge base so that it can write programs in a new domain. The ISFI system has a graphics interface which displays sections of a network and provides menu driven inspection and editing facilities. Two major problems with this interface are its inability to display a network of significant size, and its forcing the user inspecting a network to work through several levels of menu displays before finding needed information. These considerations made ISFI a good candidate for a natural language interface.

As noted in the introduction, KONG has a modular architecture which aids portability. By maintaining independent syntactic, morphological, semantic, and contextual components and explicitly specifying the interactions between them, KONG allows most of the information required in porting to be specified declaratively, both speeding and easing the porting task.

The morphological component of KONG employs a simple affix-stripping model. Its syntactic component is a modified Marcus parser, enhanced with strategies to handle grammatical relations. These components need not be modified during the port, since they embody information about English rather than the domain of the target backend. KONG's knowledge of linguistics means that a small amount of declarative information is enough to specify the morphosyntactic behavior of a given word.

A semantics definition identifies a word with some concept in KONG's model of the backend: either an object, which is located in a simple AKO hierarchy, or a relation, which may be located in a matrix of relation types, such as EXTENT or INTERVAL, or relation families, such as SPACE or TIME (see Bayer86). A word identified with a relation must further specify the correspondence between its semantic arguments (derived via parsing) and the arguments of the chosen relation. Relations for KRS include speed, size, operationality, range, carrying ability, etc. Relations for ISFI include scope, location, complexity and other concepts that apply to automatic programming. KONG achieves its independence from particular backend architectures by building this model of the domain and filtering all interactions with the backend through it.

IV.C. Context

Contexts are captured using data structures called scenes.
A scene in the KONG interface is a stereotypical context which records the kinds of objects expected to be in prominence at a given point in an expert system user's interaction with the system, along with the user's expected action at that point. During a discourse, instances of a scene are used to record the objects mentioned in the discourse, to perform basic focus tracking for anaphora resolution and to constrain inferencing on the user's goals in the interaction. For example, as part of the KONG interface to the mission planner, there is a "CHOOSE-TARGET" scene, with prominent objects (known as scene roles) being a friendly airbase, an aircraft and a hostile airbase, and with the expected action being the filling of the target airbase slot in a KRS mission template.

In order to derive actions and queries from linguistic input, KING KONG performs syntactic and semantic analysis and maps objects from the arguments and modifiers of the resulting case frame to arguments of relations and to the roles of a scene.

V. The porting task

In porting KING KONG from one backend to another, one must do the following:

1. Define scenes to represent contexts for the new domain
2. Define objects and relations for the domain
3. Define new vocabulary for the new backend
4. "Glue" the interface to the backend by writing the code that invokes backend commands or data base searches. This is the only task in porting that requires the writing of code.

Since the first three tasks involve only declarative information, it is possible to enter it via menu-driven facilities. KONG currently supports facilities for word and scene definitions; other facilities are being planned.

Scenes model a user's interaction with the target system. The user thinks of KRS as a tool to fill in slots on a mission template, and KONG thus contains scenes for filling in each slot. The user of ISFI, on the other hand, regards ISFI as a tool for representing information about a programming problem, using the structures - nodes, constraints and so on - which ISFI makes available. The port of KONG to ISFI includes scenes modeling the user's actions in specifying nodes and constraints in a network, and in testing the ability of ISFI to write a program from the resulting network.

One must also specify the roles of a scene, the parts that various prominent objects play in the current context. In ISFI, a scene for transformation would have a role for the transformation itself, and for the network fragment matched and transformed.

Finally, one must locate the scenes in a hierarchy with respect to each other. There may be links between scenes such as parent, child, or likely successor. The degree of structure this hierarchy exhibits corresponds to the strictness of the task structure of the target system. As noted above, the ISFI user typically either builds and updates a network representing some problem, or adds to the system's knowledge base. The user's progress in these tasks will vary dramatically, depending on the difficulty of the problem to be solved and the amount of problem-solving information ISFI is able to bring to bear in the domain. The structure of the scene hierarchy for ISFI is correspondingly much more flexible than that for KRS.
An example of a scene definition from the automatic programming system is:

(def-scene isfi-node
  :goal :fill-central-role
  :lexical-triggers isfi-node-mapping
  :inferiors (isfi-obj-class connect-to-constraint isfi-node-state)
  :superior in-problem-network
  :prominent-roles (obj-class constraint constraint-terminal node))

This scene represents the context of talking about a node in the network. It may have inferior contexts in which a user talks about constraint connections to the node, the state in which the node resides, or the object class of the object inside the node. In the node context, prominent roles are likely to be constraints, object classes, the node itself, and the constraint terminals that connect nodes to constraints. It is not necessary to write this definition by hand; the menu-driven scene definition tool provides facilities for specifying all these options and produces the definition.

It is possible to perform a partial port by defining vocabulary but by not defining scenes. In fact, the initial port to ISFI was carried out this way. Users were still able to ask questions about the automatic programming system in English, but they did not have the benefits of the context and discourse tracking. This meant that they could not use pronouns or most forms of paraphrase in referring to objects, nor was KONG able to make any inferences about their goals in asking questions. The result was a "dumb" interface which processed individual sentences, but not discourse segments. As scenes were added to the system, KONG was able to make limited inferences and understand various forms of context dependent reference.

Word definitions

We will now offer two examples of word definitions and how they were performed for KRS and for ISFI. We shall show the actual code, even though some of its details may be a little obscure, because we wish to prove that KING KONG's knowledge representations are declarative.

Here is an example of how the verb "destroy" was defined for both expert systems. First, the morphosyntactic definition which applies to both domains:

(defkong destroy
  (make-word newform 'destroy
             features (copytree *VERB-DEFAULT*)
             subcategorization '(:direct-object)
             semantics (make-kernel part-of-speech 'v)))

KONG has extensive knowledge about syntax so the definition can take advantage of information about defaults. All one has to tell the system is that "destroy" is a verb and it subcategorizes for a direct object. All of this is accomplished through the menu-driven word definition tool.

We wish KONG to respond to an input such as "Destroy Mermin" by filling in a slot on a mission template that corresponds to the target. We specify this by defining a mapping from this sense of "destroy" to the two backend goals "fill-slot" and "change-slot." This, also, is done through a menu which contains choices for all the backend goals possible for KRS. KING KONG then writes the following code:

(defmapping destroy-target-mapping
  ((obj . target) (instr . aircraft)) ; destroy the target with a plane
  nil
  (:fill-slot :change-slot))

The action to achieve this goal is associated with a particular scene, such as CHOOSE-TARGET, through another declarative definition. This action is now available for all verbs in this context, and does not need to be defined again.
(defbackend-action fill-slot choose-target
  :backend-goals (:fill-slot :change-slot)
  :discourse-goals (:act)
  :role-names ((mission :optional (roles))
               (target :present (clause))
               (airbase :absent (clause))
               (aircraft :absent (clause))
               (ordnance :absent (clause))
               (td :absent (clause))
               (tot :absent (clause))
               (unit :absent (clause))
               (ac-num :absent (clause))
               (pd :absent (clause))))

The actual execution of this backend action is one of the only aspects of the port which involves programming. A stripped down version of "fill-slot" follows:

(defmethod (backend-action-fill :execute) (scene)
  (send self :select-window scene nil)
  (let* ((actual-roles (get-roles-with-values
                         (remove (send self :mission-type scene)
                                 (send scene :prominent-roles))
                         roles-to-use))
         (mission (get-role-value
                    (get-role (send self :mission-type scene))
                    roles-to-use))
         (backend-mission (get-backend-object mission)))
    (loop for role in actual-roles
          for slot-name = (get-backend-object (get-role-name role))
          and backend-role-value = (get-backend-object (get-role-value role))
          do (user:dump-mission-values backend-mission
                                       (list (cons slot-name backend-role-value)))
          finally (send scene :set-goal-filled t)
                  (return :success))))

This code matches the arguments of the verb to the roles of the current scene, locates the actual slot item by accessing KRS's database, and puts it into the mission template. It is included only to show readers exactly what must be written by hand as part of the domain specific interface "glue."

ISFI

In ISFI, an input containing "destroy" is likely to be "destroy node x in the network." To interpret sentences like this, one defines the mapping to backend goals through a menu to produce:

(defmapping destroy isfi-node-mapping
  ((obj . node))
  nil
  (delete))

The action to achieve the :delete goal is declared as follows:

(defbackend-action backend-action-delete isfi-node
  :backend-goals (:delete)
  :discourse-goals (:act)
  :role-names ((network :present (roles))
               (node :present (clause))
               (transformation :absent (clause roles))))

Now one must write domain specific code for defining the action "delete":

(defmethod (backend-action-delete :execute) (things-to-destroy)
  (loop for isfi-object in things-to-destroy
        for object-window = (loop for window in (send (get-right-graphics-window)
                                                      :exposed-inferiors)
                                  if (eq isfi-object (send window :displayed-object))
                                  return window)
        if object-window
        do (send object-window :erase)
           (selectq (typep isfi-object)
             (isfi:node (isfi:destroy isfi-object 'isfi:node))
             (isfi:constraint (isfi:destroy isfi-object 'isfi:constraint)))
        finally (return :success)))

The second example is the preposition "in", a word whose basic meaning applies to both the KRS and ISFI domains. We start by showing the definition for the word's morphosyntactic behavior:

(defkong in
  (make-word newform 'in
             semantics (make-kernel part-of-speech 'p)))

All this definition does is specify that the word is a preposition; since prepositions, unlike nouns and verbs, do not exhibit complex morphosyntactic behavior such as declension, there is no need to specify more.

Now, to add meaning to this preposition, one needs to tie it in to KONG's relational model of semantics. First, one must define a relation LOCATION with which the word will be associated.
The definition for this relation is:

(def-db-relation location (object position)
  :type position
  :family space
  :default-relation-p t)

This definition says that LOCATION relates two concepts, an object and a position, and locates this relation in the type-family matrix.

Both these definitions are general across domains. But one needs domain specific code to tie the interpretation of a sentence like "the runway at Halfort" to the actual database objects it designates. To do this, one defines what is called a "relation-action" which associates a relation and its arguments with a set of object messages to send to the relation in order to extract the relevant information from the backend database. Below are several examples of such messages. The first, ":country-of-airport", associates an airport's position with its home country. These messages are the only part of the port which requires programming.

(def-relation-action location
  '(((airport country) . ((position . :country-of-airport)))
    ((airport lat//long) . ((position . :lat-of-airport)))
    ((object lat//long) . ((position . :lat-of-object)))))

(defmethod (location :country-of-airport) (obj ignore)
  (car (user:mget (kong-instance-backend-object obj) 'user:apo)))

(defmethod (location :lat-of-airport) (obj ignore)
  (car (user:mget (kong-instance-backend-object obj) 'user:lat//long)))

(defmethod (location :lat-of-object) (obj ignore)
  (car (user:mget (kong-instance-backend-object obj) 'user:lat//long nil '(user:apo))))

Here is the domain specific code for the automatic programming system's interpretation of "in":

(def-relation-action location
  '(((inherit node) . ((object . :inherit-event)
                       (position . :inherited-on)))))

(defmethod (location :inherit-event) (ignore node)
  (mapcar #'get-accessor-from-backend-name
          (loop with node-ref = (get-referent node)
                with net = (isfi:node-network node-ref)
                for creator-record in (union (mapcar #'get-creation-record
                                                     (everything-created-in-network net)))
                when (and (typep creator-record 'isfi:inherit-creator)
                          (eq node-ref (send creator-record :node)))
                collect creator-record)))

(defmethod (location :inherited-on) (to-inherit ignore)
  (send (get-referent to-inherit) :node))

As with backend actions, the code for these database accesses need only be written once; as soon as they are defined, reference to them is possible through all relevant menu-driven definition facilities, and they are available for all subsequent definitions.

VI. Conclusions

Clearly, we have made some assumptions about expert systems and how they are used. We believe that in most cases, there is a limited enough number of things a user will want to do, so that one can capture the contexts he is likely to enter by defining a small number of scenes. For both KRS and ISFI this has, in fact, been true. Expert systems are usually designed to carry out a few specific tasks; if we encountered a system in which we could not identify a clearly defined, fairly small set of such tasks, we would find our scene mechanism inadequate to capture context or so bulky that it would be impossible to use. We do not believe that this is likely, but it is a good reason for believing that our system would be incapable of understanding narrative, for example. Similar reasoning applies to our use of a relational model of the backend; if we were faced with a huge array of relations including extensive overlap in the meaning of some relations we would find word definition to be prohibitively difficult.
When defining words one is still forced to think about the specific sentences in which they will be used. This is unfortunate since it introduces ad hoc, domain specific reasoning into the definition process; it also means that word definitions are rather simplistic. KING KONG has no ability to reason about word meaning in any sophisticated way, lacking the abilities of CD based systems to reason about consequence, for example. This is a serious weakness, but one that expert systems interfaces can, by and large, live with for the reasons described above. There is nothing to prevent an extension of KING KONG to a richer semantics; however, we are not comfortable with the semantic models available today because they are all, even CD's, difficult to represent declaratively. We hope to have shown that the portability of KING KONG follows directly from its modularity and declarativeness.

References

Bayer, S., Kalish, C. E., and Joseph, L. E. (1986) "Grammatical Relations as the Basis for Language Parsing and Text Understanding," presented at IJCAI-85, August 1985, Los Angeles. Proceedings AAAI-86, pp. 788-790.

Bayer, S. (1986) "A Relational Representation of Modification," Proceedings AAAI-86, pp. 1074-1077.

Brown, R. H. (1985) "Automation of Programming: The ISFI Experiments," M85-21, June 1985. Presented at the Expert Systems in Government Symposium, October 1985, McLean, VA.

Cleveland, G. A. (1986) "Mechanisms in ISFI: A Technical Overview," M86-17, April 1986. Presented at the Canadian Society for Computational Studies of Intelligence 1986 Conference, Montreal, Canada.

Zweben, M., Chase, M. P. and Kalish, C. E. (1986) "Tracking Discourse & Context for an Expert System Interface," Proceedings of the Second Aerospace Applications of Artificial Intelligence Conference, 1986, pp. 200-209.
Stephanie A. Miller & Lenhart K. Schubert
Department of Computing Science
University of Alberta
Edmonton, Alberta, Canada T6G 2H1

Abstract

Theorem provers are prone to combinatorial explosions, especially when the reasoning chains needed to establish a desired result are lengthy. To alleviate this problem, special purpose inference methods have been developed that exploit properties of certain domains to shorten chains of reasoning with types, temporal relations, colors, numeric relations, and sets, to name a few. The problem investigated here is how to use these efficient, but limited methods in a more general environment. Although much research has been done on this problem, most of the resulting systems either restrict what they can represent and reason with, limit the types of special mechanisms that can be used, or are difficult to extend with other specialists.

We develop a uniform interface to specialists which allows them to assist a resolution-based theorem prover in function evaluation, literal evaluation, and generalized resolving and factoring. The specialists incorporated into this system include a temporal reasoner, a type hierarchy, a number specialist, a set specialist, and a simple color specialist. Each new specialist was found to make possible fast proofs of questions previously beyond the scope of the theorem prover. Examples from the fully operational hybrid system are included.

1. Introduction

General theorem provers all suffer from combinatorial explosions. However, for some frequently encountered subdomains, special purpose inference methods have been developed that reason faster than any general method can by exploiting the properties of those subdomains. These specialists may use completely different representations and methods to achieve their performance. For example, a type specialist can use a type hierarchy to short-cut the chain of reasoning involved in determining that a wolf is a living-thing (the sequence of inferences wolf -> warm-blooded-quadruped -> larger-animal -> animal -> creature -> living-thing can be reduced to wolf -> living-thing, using a preorder numbering scheme in the hierarchy). Similarly, a time specialist would compress reasoning about transitive temporal orderings, such as inferring that event1 is before event4, given that event1 is before event2, event2 is before event3, and event3 is before event4. Other specialists could deal with arithmetic relationships, sets, spatial relationships, colors, and so on.

The question then arises as to how to use these efficient, but limited methods in a more general environment. This is the problem central to this paper. Ideally, the specialists should be integrated with the general mechanism in such a way that the specialists will be used when appropriate, and the general method when no specialists apply. This would give us a system with a wider domain than all the specialists combined, while avoiding much of the combinatorial searching usually associated with a large domain.

We shall describe such a hybrid approach, in which a resolution-based theorem prover which operates on a semantic net is combined with several specialists (building on the ideas in [Schubert et al., 1987]). The specialists include a temporal inference specialist, a number/arithmetic specialist, a type hierarchy specialist, a set specialist and a very simple color specialist.
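To make the temporal-ordering example concrete, the following sketch shows the basic idea of a graph-based time specialist: assert before-links, and answer transitive ordering queries by reachability. This is only our illustration of the idea - the implemented timegraph is more refined - and all names here are invented.

(defvar *before-edges* (make-hash-table :test #'eq)
  "Maps an event to the list of events asserted to be directly after it.")

(defun assert-before (a b)
  "Record that event A is strictly before event B."
  (push b (gethash a *before-edges*)))

(defun before-p (a b &optional visited)
  "T if A can be shown before B by following asserted links; NIL means unknown."
  (let ((successors (gethash a *before-edges*)))
    (or (and (member b successors) t)
        (some (lambda (c)
                (and (not (member c visited))
                     (before-p c b (cons a visited))))
              successors))))

;; (assert-before 'event1 'event2)
;; (assert-before 'event2 'event3)
;; (assert-before 'event3 'event4)
;; (before-p 'event1 'event4)   ; => T, in a single specialist call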
Although hybrid approaches have been tried for these subdomains before, most use disjoint specialists, and do not systematically address the problem of communication among specialists. The combined inference system developed here is intended for low level inferencing in a natural language understanding system (ECoSystem [de Haan and Schubert, 1986]) under development at the University of Alberta. The portion of ECoSystem presented here is called ECoNet. It accepts assertions in the form of first order predicate logic propositions, and answers questions phrased in the same form. The system is implemented in Lucid Common Lisp and runs on a Sun 3/75. A related paper [Miller and Schubert, 1988] contains details on the temporal specialist incorporated into this system.

2. Specialists

Before going any further, we should indicate what is meant by specialists, or special purpose inference mechanisms, and what can be gained by using these methods. A special inference method takes advantage of special properties of the predicates, terms and functions in the domain it works with, using efficient representations and methods for reasoning in that domain. Its reasoning steps may shortcut lengthy chains of standard inferences. For example, the temporal specialist uses a graph structure to represent times and temporal relations passed to it, and uses efficient graph algorithms to do its reasoning. Because lengthy chains of temporal connections may determine the relationship between two times, establishing such relationships by general methods can be computationally expensive.

Any inference made by a specialist must be sound, but there is no requirement that the specialist be complete, as the general method can fill in any inference gaps, albeit less efficiently. Also, since the specialists are to be used to accelerate the system, they must ALWAYS return an answer, and do so quickly (unknown is an acceptable answer).

Schubert et al. [Schubert et al., 1987] suggest several ways for a specialist to accelerate a theorem prover using derived rules of inference, including literal evaluation and generalized resolution and factoring. They discuss the relationship to Stickel's theory resolution [Stickel, 1983] in detail. In addition, a specialist can evaluate functional terms to simplify literals.

Literal evaluation uses a specialist's special representation to evaluate literals to true or false, and hence to simplify input clauses, and resolvents generated by the theorem prover. For example, if "a strictly before b" was represented in the temporal specialist's representation (the timegraph), it can be used to simplify the literal [a after b] to false. Function evaluation simplifies a term by evaluating it (for example, (max-of n1) may be simplified to a number, say 3, by the number specialist).

Generalized resolving and factoring quickly determine incompatibility or subsumption of one literal by another. This allows resolution and factoring to be done where they usually cannot (in one step). For example, the type specialist can resolve [d dog] against ¬[x animal], to get the null clause, even though the two predicates are distinct. Similarly, the color specialist can resolve [x blue] against [c red] to get the null clause, even though the signs are the same. The type specialist can give us the generalized factor [d animal] from literals [d dog] and [x animal].
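These one-step type inferences rest on the preorder numbering scheme mentioned in the introduction. The sketch below is our reconstruction of the general technique, not the system's own code: number the hierarchy in preorder, record for each type the interval of numbers spanned by its subtree, and "dog is a subtype of animal" then reduces to a constant-time interval test - which is what licenses resolving [d dog] against ¬[x animal] in one step.

(defstruct type-node name children low high)

(defun number-hierarchy (node &optional (counter 0))
  "Assign preorder interval [low, high] to NODE's subtree; return next free number."
  (setf (type-node-low node) counter)
  (incf counter)
  (dolist (child (type-node-children node))
    (setf counter (number-hierarchy child counter)))
  (setf (type-node-high node) (1- counter))
  counter)

(defun subtype-p (sub super)
  "T iff SUB lies in SUPER's subtree, i.e. SUB's interval is inside SUPER's."
  (and (<= (type-node-low super) (type-node-low sub))
       (<= (type-node-high sub) (type-node-high super))))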
Similarly, the temporal specialist can factor [[x during a] or [x during b]] to [x during b] (if we have [a during b] represented in the timegraph). In addition, even though we may not have incompatibility, a specialist can determine the conditions for incompatibility and return these as a residue. A residue (from Stickel's partial theory resolution) is a literal or a set of literals whose negation would make the two literals being resolved incompatible (resolvable in one or more steps to the null clause). For example, if we resolve [a before b] against [a after b], we get a residue of [a equal b]. If that residue is later determined to be false, we have the null clause.

As long as the operations a specialist is allowed to perform are equivalent to sets of standard deductive steps, the specialist is guaranteed to be logically sound. This restriction is satisfied by the operations we have implemented (literal evaluation, generalized resolving, etc.).

3. Overview of the System

[Figure 1. Architecture of ECoNet. Arrows from the theorem prover to the specialists carry literals for entry or evaluation, functional terms for simplification, and pairs of literals for resolving or factoring; arrows back carry simplified functional terms, evaluations of literals, residues from resolving, factors from factoring, and interested party specifications for concepts.]

Having discussed how a specialist can assist a resolution-based theorem prover, we now need to consider the design of a general interface that will allow the theorem prover to invoke the specialists when appropriate. This interface should be general enough to handle any specialist we might imagine, and efficient enough that its cost is modest in comparison with the savings made possible by the specialists.

ECoNet's architecture is shown in Figure 1. The core of the system is a resolution-based theorem prover which has been under development at the University of Alberta for several years, most recently by de Haan [de Haan and Schubert, 1986]. The theorem prover uses a semantic net representation, and features automatic topical classification of entered clauses and organization in a topic hierarchy. The inference method used is resolution, enhanced by topical retrieval of clauses to resolve against the problem clauses. Inference is also accelerated by a concept hierarchy that enables type inheritance. Since the hierarchy specialist is used both for organizing knowledge in the theorem prover, and as a type specialist, it is shown inside the theorem prover box.

The most elaborate specialist in this system is the temporal specialist. The system is to be used for natural language understanding, which often deals with a number of temporally related events or episodes. Reasoning about these orderings, as well as the quantitative aspects of time (durations and dates), is required. As mentioned earlier, such inferences in a general theorem prover can be computationally expensive. Details of the temporal specialist can be found in [Miller and Schubert, 1988].

Temporal reasoning is certainly not the only domain requiring potentially explosive inferencing. Reasoning about numbers, and about set membership, are also problems which we will have to deal with, even for understanding ordinary discourse. For example, if we are told that eleven of twelve jurors agreed on a guilty verdict, we should be able to figure out that exactly one juror demurred. Similarly, if we are told that John is a member of the curriculum committee, and all members of the curriculum committee are members of the faculty council, we should be able to figure out that John is a member of the faculty council. To accelerate reasoning in these areas, the number/arithmetic specialist and the set specialist were incorporated. The number specialist uses a graph structure to represent and reason about orderings on integers and real numbers, and can evaluate numeric functions like add. The set specialist maintains the contents of sets, and can do simple operations on sets, like union, intersection, and testing set membership.
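A minimal sketch of what such a set specialist might look like (the representation and names here are our own; the paper does not show this code): set contents are stored in a table, and the specialist evaluates set-valued functional terms and membership literals against them, answering unknown when it has no information.

(defvar *set-contents* (make-hash-table :test #'eq))

(defun assert-set-contents (name members)
  (setf (gethash name *set-contents*) members))

(defun eval-set-function (op s1 s2)
  "Evaluate (union-of s1 s2) or (intersection-of s1 s2); NIL if either set is unknown."
  (let ((a (gethash s1 *set-contents*))
        (b (gethash s2 *set-contents*)))
    (when (and a b)
      (ecase op
        (union-of (union a b))
        (intersection-of (intersection a b))))))

(defun eval-membership (x s)
  "Evaluate the literal [x member-of s]; assumes stored contents are complete."
  (multiple-value-bind (members known) (gethash s *set-contents*)
    (cond ((not known) :unknown)
          ((member x members) :yes)
          (t :no))))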
In addition, there is a very simple color specialist, which currently assumes that all color predicates are disjoint (e.g. blue and red are incompatible). A much better specialist based on a geometric three-dimensional color space has been designed [Schubert et al., 1987], which can handle subsumption of colors (e.g., crimson is subsumed by red), intermediate shades (e.g., blue-green), and some modified shades (e.g., sort of brown), and will eventually replace this one.

4. The Specialist Interface

There were several issues to consider in designing the interface. When and how does a specialist get invoked? How is the decision made that a particular specialist is likely to be helpful? How can useful information be transmitted between the specialists and the general theorem prover, and between specialists? To answer these questions, we first need to review how the general theorem prover works. Figure 2 shows a high level abstraction of the algorithm used by the general theorem prover, with notes showing where the various specialist operations described earlier fit.

An agenda is used to keep track of possible actions which can be used to carry out a proof - resolving actions and access actions. A successful resolving action causes the resolvent to be entered; a successful access action causes clauses to be retrieved which are likely to resolve against a particular clause. Agenda items are ordered so that the action most likely to result in success (i.e. the null clause) is at the top. When given a question, the theorem prover starts with the clauses corresponding to the question, and the clauses corresponding to its negation.
Assertion Simplify asserted clause -> literal evaluation -> function evaluation Classify asserted clause Enter into Semantic net and concept and topic hierarchies -> entry into specialists’ representations Question Loop through agenda; take top item from agenda: If Resolving Action: Calculate resolvent Simplify resolvent -> literal evaluation -> function evaluation Classify resolvent and add relevant access actions to agenda Check for factoring possibilities -> generalized factoring Enter into semantic net and concept and topic hierarchies If Access action (involves ckzuse, fopic, and concepf): Compare each clause retrieved under topic of concept with the given clause If resolvable, add a resolving action to agenda -B generalized resolving Figure 2. Overview of Algorithm During an access action where literals are being compared for resolvability, if the traditional test fails (same predicate, different signs), a specialist could potentially find a generalized resolving action. “Generalized resolutions” found by the spe- cialists are added directly to the agenda. Similarly, if a factor- ing test fails (same predicate, same sign), a specialist might find a generalized factor. Clause simplification involves both literal evaluation and function evaluation (a form of term simplilication), both of which a specialist may assist with. Even if a literal or its nega- tion have not been asserted before, a specialist may be able to detect its truth or falsity. In all of these cases, a decision process must be invoked that can quickly decide which specialists, if any, apply, and invoke them. 4.1. Extensions to the Theorem R-over To get maximum flexibility for the specialists and the spe- cialist interface without sacrificing efficiency, some enhance- ments to the theorem prover were needed. 4.1.1. Expressiveness First, some syntactic refinements were needed. To enable the specialist interface to decide when a specialist is appropriate for some problem, terms serving as predicate arguments should be sortally distinct, so that predications about physical objects, events, numbers, and so on, are easily distinguished from each other. (Predicates alone do not necessarily determine their argument sorts in our system. For example, the predicate equal may relate a number of different kinds of entities, including events or numbers.) This is accomplished by allowing a “sort tag” to accompany an argument, expressed by following the term with an underscore and the sort. Possible sorts include: physical object, event/episode, time, number, symbolic expres- sion and set. Sorts are considered pairwise disjoint (in contrast with types, which in this system are unary predicates whose extensions may overlap). Also, some entities, such as numbers, structured values, and symbolic expressions are not easily expressed as semantic net concepts, or are too numerous to represent that way. For exam- ple, we do not want an individual concept for every possible number! To avoid this problem, we allow quoted expressions as terms. For example, the functional expression (date ‘1987 mm-number ‘1 ‘0 ‘0 ‘0) evaluates to a quoted expression representing a structured value, ‘(time 1987 mm I 0 0 0), which is an absolute time representation recognized by the time spe- cialist. A quoted atom is assumed to denote the string of char- acters making up the atom. For example, the denotation of ‘Mary is the string of alphabetic characters ‘Mary”, and the denotation of ‘35 is the string of numerals “35”. 
By formally identifying the natural numbers with the strings of numerals normally used to represent them (in base 10), we ensure that quoted numerals are denotationally equivalent to unquoted ones¹. The denotation of '(t1 t2 ... tn) is the tuple consisting of the denotation of t1, followed by the denotation of t2, ..., followed by the denotation of tn. Thus, the above structured value is the 7-tuple whose first element is the string "time", and whose remaining elements are numbers (the second one of these being whatever number is denoted by mm). Note that although this structured value looks very similar to the original functional term, if the terms for the function had been more complex (like (add '1 '1986) instead of '1987), the quoted expression would look considerably simpler than the original term. Term simplification is done bottom up, and guarantees to the specialists that when they are invoked with a literal or functional term, the terms have been simplified as much as possible. This allowed simpler and more elegant implementations of the interface and specialists.

Because the "generalized resolutions" performed by specialists do not necessarily involve identical predicates, and may do the work of many ordinary resolution steps, the order of unification of arguments in two literals need not be the same. This is decided by the specialists, and details depend on the axiomatization (theory) assumed to underlie the specialist's domain. After the specialists have decided in what order the arguments should be unified when resolving or factoring, they invoke the unification process themselves. If a factoring or resolving action results, the specialist passes these substitutions back to the theorem prover.

Specialists may also return a residue (see example given earlier). The theorem prover has to incorporate this residue into the resolvent.

Also, when attempting to resolve, specialists may provide more than just substitutions and residues - they may also supply the evaluations of the two literals after substitution. Then, even if the two literals are not incompatible, we can use the evaluations to simplify the clauses. For example, when resolving ¬[x before a] in [¬[x before a] | [x during a]] against a clause containing the literal [b after y], and if furthermore, we have "a strictly before b" represented in the timegraph, we can simplify the first clause to ¬[b before a]. This simplification will then be added to the clauses to be considered.

¹ This enables us to confine numbers to quoted contexts, which turns out to be computationally convenient.

4.2. The Decision Process

This section briefly outlines how the specialists are selected for each operation they may perform. Details are in [Miller, 1988]. The mechanism for deciding which specialists should be called is just a cursory check to find specialists that are likely to be interested. Since much of the time no specialist will be involved, we do not want to waste too much time deciding which, if any, to call. To get quick and easy access to the specialists that may be useful, they are kept on the predicate's or function's property list.

4.2.1. Literal Entry and Evaluation

For literal entry and evaluation, the predicate involved, and its first argument, are used to determine which specialist to call. The specialist itself is responsible for checking the other arguments to ensure that they are in its domain.
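As an illustration of this property-list dispatch (our sketch; the names are invented, and the real system's bookkeeping is surely richer), specialists can be registered on predicate symbols and consulted in turn until one returns a definite answer:

;; Register the interested specialists directly on each predicate symbol.
;; TIME-SPECIALIST etc. name evaluation functions supplied by each specialist.
(setf (get 'before 'specialists) '(time-specialist))
(setf (get 'member-of 'specialists) '(set-specialist))

(defun applicable-specialists (literal)
  "LITERAL is written (arg1 predicate arg2 ...), as in [t1 before t2]."
  (get (second literal) 'specialists))

(defun evaluate-literal (literal)
  "Ask each candidate specialist in turn; stop at the first definite answer."
  (dolist (spec (applicable-specialists literal) :unknown)
    (let ((answer (funcall spec literal)))
      (unless (eq answer :unknown)
        (return answer)))))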
Another possibility was to use pattern matching on the predicate-argument patterns, but this can be quite slow. On entry, all applicable specialists are called, as any one might later be required to use the information in inferencing. For literal evaluation, each specialist is called, one at a time, until an answer other than unknown is returned. For example, asserting [n1_integer less-than r2_real] would cause the number specialist to be invoked to enter it.

4.2.2. Generalized Resolution and Factoring

For generalized factoring and resolution, the two predicates alone are used to decide which specialists to call, using the intersection of the lists of interested specialists for each. No checking is done on arguments yet, as unification has not taken place (and cannot until the specialist decides how it is to be done), and substitutions may be involved. Final checks on whether the resulting literals (after unification and substitution) are in a specialist's domain must be done by the specialist. For example, when resolving [t1_time before t2_time] vs [x equal t3_time], both predicates have the temporal specialist on their property lists (although equal has other specialists as well), and so only that specialist would be called to try to resolve them. All applicable specialists are called for both generalized resolving and factoring, as each may find different resolving or factoring actions.

4.2.3. Function Evaluation

The function alone is used to determine which specialist(s) to call. All arguments are simplified before the function is evaluated, recursively. For example, (date (add '1987 '1) '4 '1 '12 '0 '0) would first use the number specialist to calculate the year argument, and then the temporal specialist to calculate the absolute time with all the arguments.

5. Communication between Specialists

Sometimes a specialist may need additional information to complete its task, and it is possible that another specialist may be able to supply it. For example, assertion of [e1_episode before (date '1987 mm_number '01 '12 '00 '00)] would require that the time specialist ask about the bounds of mm (the territory of the number specialist). Similarly, a naive or qualitative physics specialist might need to know if the event of the robot hand going across the table happened before or after the vase of flowers was placed there in order to determine the current position of the vase. Also, when the information requested is not immediately available, or likely to be further constrained by clauses added later (for example, bounds on numbers), we want the specialist to be notified if and when it is.

The essential idea of our approach is to channel all communication between specialists through the interface. Thus specialists need not know which other specialists can help them. Two types of communication have been implemented for the specialists: immediate evaluation, where a specialist asks for the evaluation of a particular functional term or literal, and delayed communication, where the specialist is notified that something of interest in another domain has been asserted. Immediate evaluation may be useful during either assertion or question answering, while delayed communication can only be used to further enhance asserted information.

5.1. Immediate Evaluation

For immediate evaluation, the functional term or literal is sent out from the specialist to the specialist interface for evaluation.
The interface decides which specialist(s) will be able to evaluate it as described earlier, and passes the item on. The result (a concept or quoted expression for a functional term; yes, no, or unknown for a literal) is sent back to the specialist that requested it. In the previous example, the temporal specialist could request the value of (max-of mm)², and use that value in the absolute time specification.

5.2. Delayed Communication

A specialist can communicate to the interface a particular concept (currently only constants) it wants "watched", and a literal to reassert when something involving that concept is asserted. This information is kept on an interested party list for the concept. Whenever anything new is asserted about that concept, each literal on the interested party list is reasserted to the specialist that put it there. There is no point asserting the new literal to an interested specialist because it may not recognize the predicates and terms involved. In the previous example, the temporal specialist would add an entry to mm's interested party list, consisting of the specialist (time-specialist) and the literal given to the temporal specialist (after term evaluation), [e1 before '(time 1987 mm 1 12 0 0)]. Later, if [mm less-than '3] were asserted, [e1 before '(time 1987 mm 1 12 0 0)] would be re-asserted to the temporal specialist.

6. Example

The system can answer questions which involve mixed reasoning - both specialist inference and ordinary resolution. The example in Figure 3 from the story of Little Red Riding Hood shows a few capabilities of the system. The example does not show either the number or set specialists, as examples involving them were too lengthy to include. Currently the number specialist is used mainly to support the temporal specialist by assisting it in maintaining the most constrained absolute times and durations. The set specialist is quite new and its capabilities have not been fully exploited in our Little Red Riding Hood knowledge base yet.

7. Comparison with Other Approaches

ECoNet's representational capabilities allow it to handle first order predicate calculus, minus equality. (The specialists provide partial handling of equality, but full equality reasoning should be added to the general theorem prover.) The specialists, in principle, are used only to accelerate inference. This differentiates it from systems like KL-TWO [Vilain, 1985], where the core of the system is a computationally efficient subset of first order logic, and the specialist (a terminological inference mechanism) adds both to the overall expressiveness and reasoning power of the system.

² We always want the most complete, but correct, information available. Since mm can never get bigger than its maximum, this is the safest bound to use on assertion. However, if we were trying to evaluate this literal, we would use the minimum, since if e1 were less than the minimum of that time, we can be assured that it will be less than any other value that time might have.

; did a creature kill the wolf while he was in the cottage?
==> ?(E y (E z_ep (y kill W z) & (y creature) & (z during wolf-in-cot)))
Entering proof clauses:
((- U-VAR-1 CREATURE) | (- U-VAR-1 KILL W EPISODE-VAR-1)
 | (- EPISODE-VAR-1 DURING WOLF-IN-COT))

Using the temporal specialist:
Time Specialist: Trying to resolve (- EPISODE-VAR-1 DURING WOLF-IN-COT)
 against (WOLF-DEMISE DURING-1-0 WOLF-IN-COT)
Time Specialist: Resolved with evaluation: NO
 with substitutions: ((EPISODE-VAR-1 by WOLF-DEMISE))
Resolved (- EPISODE-VAR-1 DURING WOLF-IN-COT)
 in the proof clause
 ((- U-VAR-1 CREATURE) | (- U-VAR-1 KILL W EPISODE-VAR-1)
  | (- EPISODE-VAR-1 DURING WOLF-IN-COT))
 against (WOLF-DEMISE DURING-1-0 WOLF-IN-COT)
 in (WOLF-DEMISE DURING-1-0 WOLF-IN-COT)
 yielding ...
((- U-VAR-1 CREATURE) | (- U-VAR-1 KILL W WOLF-DEMISE))

Ordinary resolution:
Resolved (- U-VAR-1 KILL W WOLF-DEMISE)
 in the proof clause ((- U-VAR-1 CREATURE) | (- U-VAR-1 KILL W WOLF-DEMISE))
 against (WOODCUTTER KILL W WOLF-DEMISE)
 in (WOODCUTTER KILL W WOLF-DEMISE)
 yielding ...
(- WOODCUTTER CREATURE)

Using the type hierarchy specialist:
Resolved (- WOODCUTTER CREATURE)
 in the proof clause (- WOODCUTTER CREATURE)
 against (WOODCUTTER MAN)
 in (WOODCUTTER MAN)
 yielding the null clause.
YES.

; Were all the capes blue?
==> ?(A x (x cape) => (x blue))
Entering disproof clauses:
((- U-VAR-1 CAPE) | (U-VAR-1 BLUE))

Using the color specialist:
Resolved (U-VAR-1 BLUE)
 in the disproof clause ((- U-VAR-1 CAPE) | (U-VAR-1 BLUE))
 against (SCON-340 RED)
 in (SCON-340 RED)
 yielding ...
(- SCON-340 CAPE)
(SCON-340 CAPE) evaluated to YES. The clause evaluated to NO.
NO

Figure 3. Example of ECoNet in operation³

³ This example shows actual output of the system, edited for clarity and brevity. Bold print is user input; comments in italics; the rest, system output. The timegraph contained, among other relations, that wolf-demise (the episode corresponding to the wolf being killed) is during wolf-in-cot (the episode corresponding to the wolf being in the cottage). [a during-1-0 b] => [(start-of a) > (start-of b)] & [(end-of a) = (end-of b)].

Another hybrid, Krypton [Brachman et al., 1985], has a powerful core system, also enhanced with a terminological specialist. It is unclear whether the technique used to integrate this specialist could be used for other specialists as well (like time). Krypton is one of the few other systems that actually allow the specialists to do generalized resolving. However, the residues calculated by Krypton can be quite complex, involving quantification. Although ECoNet has no constraint on the complexity of residues that a specialist may create, the specialists implemented so far generate quite simple residues, further simplifying the resolving process.

The domain independent special mechanisms used by Bundy et al. [Bundy et al., 1982] allow for more than one specialist, but all must be in the form of specialized logical rules (precluding more efficient representations, like graphs). The only limitations on the specialists that may be incorporated into ECoNet are that they must make sound inferences.

Another thing that makes ECoNet unique is the communication among specialists. Although some earlier systems did have some forms of communication (for example, Nelson and Oppen [Nelson and Oppen, 1979], and Shostak [Shostak, 1984]), only equalities could be communicated. ECoNet is much more flexible, and unlike these systems, does not require that the specialists be disjoint (no overlapping functions or predicates).

8. Conclusions
Our intent was to combine efficient special methods with a general purpose theorem prover in such a way that the efficiency of the special methods averts the combinatorial explosions usually associated with a general theorem prover. Previous approaches limit the overall domain, or cannot easily accommodate a variety of specialists, or don't fully exploit the specialists' capabilities.

The interface we developed allows specialists to accelerate the resolution-based theorem prover in literal evaluation, function evaluation, and generalized resolution and factoring. In addition, assertions are given to the specialists to store in their own representations. Several specialists were incorporated into the system using the interface, including a type specialist, a temporal specialist, a number/arithmetic specialist, a set/list specialist, and a very simple color specialist. Communication between specialists is achieved by allowing specialists to request evaluation of functional terms or literals, and by maintaining interested party lists to notify specialists that something of interest to them has been derived. The temporal and number specialists communicate through absolute time and duration specifications.

Experience with the interface confirms that new specialists can be added with relative ease. This is because the specialists interact with the general theorem prover in a small, fixed set of ways. Each new specialist was found to make possible fast proofs of questions previously beyond the scope of the theorem prover.

References

[Brachman et al., 1985] Ronald J. Brachman, Victoria Pigman Gilbert and Hector J. Levesque, An Essential Hybrid Reasoning System: Knowledge and Symbol Level Accounts of KRYPTON, Proceedings of IJCAI-85 I, (1985), 532-539.

[Bundy et al., 1982] Alan Bundy, Lawrence Byrd and Chris Mellish, Special-Purpose, but Domain-Independent, Inference Mechanisms, Proc. ECAI-82, Orsay, France, 1982, 67-74.

[de Haan and Schubert, 1986] Johannes de Haan and Lenhart K. Schubert, Inference in a Topically Organized Semantic Net, Proc. AAAI-86 I, (1986), 334-338.

[Miller and Schubert, 1988] Stephanie A. Miller and Lenhart K. Schubert, Time Revisited, CSCSI-88, Edmonton, Alberta, 1988.

[Miller, 1988] Stephanie A. Miller, Time Revisited, M.Sc. Thesis, Department of Computing Science, University of Alberta, 1988.

[Nelson and Oppen, 1979] Greg Nelson and Derek C. Oppen, Simplification by Cooperating Decision Procedures, ACM Transactions on Programming Languages and Systems 1, 2 (1979), 245-257.

[Schubert et al., 1987] L.K. Schubert, M.A. Papalaskaris and J. Taugher, Accelerating Deductive Inference: Special Methods for Taxonomies, Colours and Times, in The Knowledge Frontier, N. Cercone and G. McCalla (eds.), 1987.

[Shostak, 1984] Robert E. Shostak, Deciding Combinations of Theories, Journal of the ACM 31, 1 (1984), 1-12.

[Stickel, 1983] Mark E. Stickel, Theory Resolution: Building in Nonequational Theories, Proc. AAAI-83, Washington, D.C., 1983, 391-397.

[Vilain, 1985] Marc Vilain, The Restricted Language Architecture of a Hybrid Representation System, Proceedings of IJCAI-85 I, (1985), 547-551.
Rina Dechter
Cognitive Systems Laboratory
Computer Science Department
University of California, Los Angeles, CA 90024

Avi Dechter
Department of Management Science
California State University, Northridge, CA 91330

* This work was supported in part by the National Science Foundation, Grant #DCR 85-01234, and by the Air Force Office of Scientific Research, Grant #AFOSR-88-0177.

Abstract

This paper presents a constraint network formulation of belief maintenance in dynamically changing environments. We focus on the task of computing the degree of support for each proposition, i.e., the number of solutions of the constraint network which are consistent with the proposition. The paper develops an efficient distributed scheme for calculating and revising beliefs in acyclic constraint networks. The suggested process consists of two phases. In the first, called support propagation, each variable updates the number of extensions consistent with each of its values. The second, called contradiction resolution, is invoked by a variable upon detecting a contradiction, and identifies a minimal set of assumptions that potentially account for the contradiction.

1. Introduction

Reasoning about dynamic environments is a central issue in Artificial Intelligence. When dealing with a complex environment, we normally have only a partial description of the world known explicitly at any given time. A complete picture of the environment can only be speculated by making simplifying assumptions which are consistent with the available information. When new facts become known, it is important to maintain the consistency of our view of the world so that queries of interest (e.g., is a certain proposition believed to be true?) can be answered coherently at all times. Various non-monotonic logics as well as truth-maintenance systems have been devised to handle such tasks [Reiter 1987, Doyle 1979, de Kleer 1986].

In this paper we show that constraint networks and their associated constraint satisfaction problems provide an attractive paradigm for modeling dynamically changing environments. The language of constraint networks was originally developed for expressing static problems, i.e., problems that require a one-time solution of a system of constraints representing all the available information (for example, picture processing [Montanari 1974, Waltz 1975]). A substantial body of knowledge for solving such problems has been developed [Montanari 1974, Mackworth 1977, Freuder 1982, Dechter 1987]. Structuring knowledge by means of constraint networks leads, as we will show, to efficient algorithms for consistency maintenance and query processing. Indeed, truth-maintenance systems often utilize algorithms found in constraint processing in general, e.g., dependency-directed backtracking, constraint propagation, etc. [Stallman 1977, McAllester 1980, Doyle 1979].

The use of constraint networks as the framework for modeling the task of dynamic belief management allows us to develop an efficient processing algorithm built upon techniques used in the solution of constraint satisfaction problems. Two characteristic features of these techniques are that they are "sensitive" to the structure of the problem so as to take advantage of special structures, and that their performance can be analyzed and predicted. Such theoretical treatment is usually not available in current TMS research.

The paper is organized as follows.
Section 2 provides a brief review of the constraint network model and discusses the problem of belief maintenance in its context. The suggested belief revision process consists of two phases, presented first for singly connected binary constraint networks. The first, support propagation, is described in Section 3, and the second, contradiction resolution, is the subject of Section 4. In Section 5 the algorithm is extended to acyclic networks. Section 6 discusses the extension of the algorithm to general networks, and Section 7 contains a summary and some final remarks.

2. The Constraint Network Model

A constraint network (CN) involves a set of n variables, X1, ..., Xn, their respective domains, R1, ..., Rn, and a set of constraints. A constraint Ci(Xi1, ..., Xij) is a subset of the Cartesian product Ri1 x ... x Rij that specifies which values of the variables are compatible with each other. A binary constraint network is one in which all the constraints are binary, i.e., involve at most two variables. A binary CN may be associated with a constraint graph in which nodes represent variables and arcs connect those pairs of variables for which constraints are given. Consider, for instance, the CN presented in Figure 1 (modified from [Mackworth 1977]). Each node represents a variable whose values are explicitly indicated, and each link is labeled with the set of value-pairs permitted by the constraint between the variables it connects (observe that the constraint between connected variables is a strict lexicographic order along the arrows).

[Figure 1: An example of a binary CN]

A solution (also called an extension) of a constraint network is an assignment of values to all the variables of the network such that all the constraints are satisfied. The (static) constraint satisfaction problem associated with a given constraint network is the task of finding one or all of the extensions. In this paper we focus on a related problem, that of finding, for each value in the domain of certain variables, the number (or relative frequency) of extensions in which it participates. We call these figures supports and assume that they measure the degree of belief in the propositions represented by those values. (If the set of all solutions was assigned a uniform probability distribution, then the degree of support is precisely the marginal probability of the proposition, namely, its "belief" in the corresponding Bayes network [Pearl 1986].) In particular, we say that a proposition is believed if it holds in all extensions (i.e., is entailed by the current set of formulas). The support figures for the possible values of each variable constitute a support vector for the variable.

A dynamic constraint network (DCN) is a sequence of static CNs each resulting from a change in the preceding one, representing new facts about the environment being modeled. As a result of such an incremental change, the set of solutions of the CN may potentially decrease (in which case it is considered a restriction) or increase (i.e., a relaxation).

Restrictions occur when a new constraint is imposed on a subset of existing variables (e.g., forcing a variable to assume a certain value), or when a new variable is added to the system via some links. Restrictions always expand the model, i.e., they add variables and add constraints so that the associated constraint graph (representing the knowledge) grows monotonically.
3. Support Propagation in Trees

It is well known that a constraint network whose constraint graph is a tree can be solved easily [Freuder 1982, Dechter 1987]. Consequently, the number of solutions in which each value in the domain of each variable participates (namely, the support of this value) can also be computed very efficiently on such tree-structured networks. In this section we present a distributed scheme for calculating the support vectors for all variables, and for updating them to reflect changes in the network.

Consider a fragment of a tree-network as depicted in Figure 2.

Figure 2: A fragment of a tree-structured CN

The link (X,Y) partitions the tree into two subtrees: the subtree containing X, T_{YX}(X), and the subtree containing Y, T_{XY}(Y). Likewise, the links (X,U), (X,V), and (X,Z), respectively, define the subtrees T_{XU}(U), T_{XV}(V) and T_{XZ}(Z). Denote by s_X(x) the overall support for value x of X, by s_X(x|Y) the support for X = x contributed by subtree T_{XY}(Y) (i.e., the number of extensions of this subtree which are consistent with X = x), and by s_Y(y|\neg X) the support for Y = y in T_{XY}(Y). (These notations will be shortened to s(x), s(x|Y) and s(y|\neg X), respectively, whenever the identity of the variable is clear.) The support for any value x of X is given by

s(x) = \prod_{Y \in \text{neighbors}(X)} s(x \mid Y),   (1)

namely, it is a product of the supports contributed to x by each neighboring subtree. The support that Y contributes to X = x can be further decomposed as follows:

s(x \mid Y) = \sum_{\{y \,:\, (x,y) \in C(X,Y)\}} s(y \mid \neg X),   (2)

where C(X,Y) denotes the constraint between X and Y. Namely, since x can be associated with several matching values of Y, its support is the sum of the supports of these values. Equalities (1) and (2) yield:

s(x) = \prod_{Y \in \text{neighbors}(X)} \; \sum_{\{y \,:\, (x,y) \in C(X,Y)\}} s(y \mid \neg X).   (3)

Equation (3) lends itself to the promised propagation scheme. Suppose that variable X gets from each neighboring node, Y, a vector of restricted supports (referred to as the support vector from Y to X),

(s(y_1 \mid \neg X), \ldots, s(y_k \mid \neg X)),

where the y_i are the values in Y's domain. It can then calculate its own support vector according to equation (3) and, at the same time, generate an appropriate message to each of its own neighbors. The message X sends to Y, s(x \mid \neg Y), is the support vector reflecting the subtree T_{YX}(X), and can be computed by:

s(x \mid \neg Y) = \prod_{\{Z \,\in\, \text{neighbors}(X),\, Z \neq Y\}} \; \sum_{\{z \,:\, (x,z) \in C(X,Z)\}} s(z \mid \neg X).   (4)

The message generated by a leaf-variable is a vector consisting of ones and zeros, representing, respectively, the legal and illegal values of this variable.

Assume that the network is initially in a stable state, namely, all support vectors correctly reflect the constraints, and that the task is to restore stability when a new input causes a momentary instability. The updating scheme is initiated by the variable directly exposed to the new input. Any such variable will recalculate and deliver the support vector for each of its neighbors. When a variable in the network receives an update-message, it recalculates its outgoing messages, sends them to the rest of its neighbors, and at the same time updates its own support vector. The propagation due to a single outside change will pass through the network only once (no feedback), since the network has no loops. If the new input is a restriction, then it may cause a contradictory state, in which case all the nodes in the network will converge to all-zero support vectors.
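A runnable sketch of the message passing of equations (3) and (4) on the same toy network (again hypothetical; the two depth-first sweeps stand in for the distributed, asynchronous updating described above):

```python
from collections import defaultdict

def prod(values):
    p = 1
    for v in values:
        p *= v
    return p

# Hypothetical tree-structured binary CN (the paper's Figure 1 network is
# not fully recoverable from the text, so this topology is illustrative).
domains = {"X1": ["a", "b"], "X2": ["a", "b", "c"], "X3": ["a", "b"]}
allowed = {("X1", "X3"): {("a", "a"), ("a", "b"), ("b", "b")},
           ("X2", "X3"): {("a", "a"), ("b", "b"), ("c", "b")}}
for (u, v), pairs in list(allowed.items()):          # symmetric closure
    allowed[(v, u)] = {(b, a) for (a, b) in pairs}
neighbors = defaultdict(set)
for u, v in list(allowed):
    neighbors[u].add(v)

def message(src, dst, msgs):
    """Equation (4): the support vector src sends to dst."""
    return {x: prod(sum(msgs[(z, src)][zv] for zv in domains[z]
                        if (x, zv) in allowed[(src, z)])
                    for z in neighbors[src] - {dst})
            for x in domains[src]}       # a leaf sends an all-ones vector

def propagate(root):
    """Two sweeps compute every directed message once (no feedback needed,
    since the network has no loops)."""
    msgs = {}
    def inward(x, parent):
        for y in neighbors[x] - {parent}:
            inward(y, x)
            msgs[(y, x)] = message(y, x, msgs)
    def outward(x, parent):
        for y in neighbors[x] - {parent}:
            msgs[(x, y)] = message(x, y, msgs)
            outward(y, x)
    inward(root, None)
    outward(root, None)
    return msgs

msgs = propagate("X3")
for x in domains:          # equation (3): combine the incoming messages
    support = {v: prod(sum(msgs[(y, x)][w] for w in domains[y]
                           if (v, w) in allowed[(x, y)])
                       for y in neighbors[x])
               for v in domains[x]}
    print(x, support)      # matches the brute-force counts computed earlier
```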
To illustrate the mechanics of the propagation scheme described above, consider again the problem of Figure 1. In Figure 3(a) the support vectors and the different messages are presented. The order within a support vector corresponds to the order of values in the originating variable, e.g., the message (8,1) from X3 to X1 represents (s_{X3}(a | \neg X1), s_{X3}(b | \neg X1)). Suppose now that an assertion stating the value X2 = b has arrived. In that case X2 will originate a new message to X3 of the form (0,1,0). This, in turn, will cause X3 to update its supports and generate updated messages to X1, X4 and X5, respectively. The new supports and the new updated messages are illustrated in Figure 3(b).

Figure 3: Support vectors before and after a change

If one is not interested in calculating numerical supports, but merely in indicating whether a given value has some support (i.e., participates in at least one solution), then flat support-vectors, consisting of zeros and ones, can be propagated in exactly the same manner, except that the summation operation in (3) should be replaced by the logical OR operator, and the multiplication can be replaced by AND.

4. Handling Assumptions and Contradictions

When, as a result of new input, the network enters a contradictory state, it often means that the new input is inconsistent with the current set of assumptions, and that some of these assumptions must be modified in order to restore consistency. We assume that certain variables of the network are designated as assumption variables, which initially are assigned their default values but may at any time be assigned other values as needed. The task of restoring consistency by changing the values assigned to a subset of the assumption variables is called contradiction resolution.

The subset of assumption variables that are modified in a contradiction resolution process should be minimal, namely, it must not contain any proper subset of variables whose simultaneous modification is sufficient for that purpose (i.e., like the maximal assumption sets in [Doyle 1979]). A sufficient (but not necessary) condition for this set to be minimal is for it to be as small as possible. Other criteria for conflict resolution sets are suggested in [Petrie 1987]. In this section we show how to identify, in a distributed fashion, the minimum number of assumptions that need to be changed in order to restore consistency. Unlike the support propagation scheme, however, the contradiction resolution process has to be synchronized.

Assume that a variable which detects a contradiction propagates this fact to the entire network, creating in the process a directed tree rooted at itself. Given this tree, the contradiction resolution process proceeds as follows.
With each value v of each variable V we associate a weight w(v), indicating the minimum number of assumption variables that must be changed in the directed subtree rooted at V in order to make v consistent in this subtree. These weights obey the following recursion:

w(v) = \sum_{i} \; \min_{\{y_{ij} \,:\, (v,\, y_{ij}) \in C(V, Y_i)\}} w(y_{ij}),   (5)

where \{Y_i\} is the set of V's children and their domain values are indicated by y_{ij}; i.e., y_{ij} is the j-th value of variable Y_i (see Figure 4).

Figure 4: Weight calculation for node v

The weights associated with the values of each assumption variable are "0" for the value currently assigned to this variable, and "1" for all other possible values. For leaf nodes which are not assumption variables, the weights of their legal values are all "0". The computation of the weights is performed distributedly and synchronously from the leaves of the directed tree to the root. A variable waits to get the weights of all its children, computes its own weights according to (5), and sends them to its parent. During this bottom-up propagation a pointer is kept from each value of V to the values in each of its child-variables where a minimum is achieved. When the root variable X receives all the weights, it computes its own weights and selects one of its values that has a minimal weight. It then initiates (with this value) a top-down propagation down the tree, following the pointers marked in the bottom-up propagation, a process which generates a consistent extension with a minimum number of assumptions changed. At termination this process marks the assumption variables that need to be changed and the appropriate changes required.

There is no need, however, to activate the whole network for contradiction resolution, because the available support information clearly points to those subtrees where no assumption change is necessary. Any subtree rooted at V whose support vector to its parent, P, is strictly positive for all "relevant" values can be pruned. Relevance can be defined recursively as follows: the relevant values of V are those values which are consistent with some relevant value of its parent, and the relevant values of the root, X, are those which are not known to be excluded by any outside-world change, independent of any change to the assumptions.
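A sketch of the bottom-up weight pass of equation (5), reusing the tree encoding of the earlier sketches; charging an internal assumption variable for deviating from its default is our reading of the initialization rule:

```python
INF = float("inf")

def weights(v, parent, domains, allowed, neighbors, assumption_default):
    """Bottom-up pass of contradiction resolution (equation (5)).

    Returns {value: (weight, pointers)} for variable v: weight is the
    minimum number of assumption changes in v's directed subtree needed
    to make the value consistent there; pointers records, per child, the
    child value achieving the minimum (followed by the top-down pass,
    which then marks the assumption changes to perform).
    """
    children = neighbors[v] - {parent}
    child_w = {c: weights(c, v, domains, allowed, neighbors,
                          assumption_default) for c in children}
    result = {}
    for val in domains[v]:
        total, ptrs = 0, {}
        for c in children:
            options = [(cv, child_w[c][cv][0]) for cv in domains[c]
                       if (val, cv) in allowed[(v, c)]]
            if not options:              # no consistent child value at all
                total = INF
                break
            cv, w = min(options, key=lambda o: o[1])
            total += w
            ptrs[c] = cv
        # Assumption variables pay "0" for their currently assigned value
        # and "1" for any other value.
        if v in assumption_default and total < INF:
            total += 0 if assumption_default[v] == val else 1
        result[val] = (total, ptrs)
    return result
```

The root then selects a value of minimal weight and follows the recorded pointers top-down to extract the assumption changes.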
To illustrate the contradiction resolution process, consider the network given in Figure 5(a), which is an extension of the network of Figure 1 (the constraints are strict lexicographic order along the arrows). Variables X1, X6 and X7 are assumption variables, with the current assumptions indicated by the unary constraints associated with them. The support messages sent by each variable to each of its neighbors are explicitly indicated. (The overall support vectors are not given explicitly.) It can easily be shown that the value a for X3 is entailed and that there are 4 extensions altogether. Suppose now that a new variable X8 and its constraint with X3 are added (this is again a lexicographic constraint). The value a of X8 is consistent only with value b of X3 (see Figure 5(b)). Since the support for a of X3 associated with this new link is zero, the new support vector for X3 is zero and it detects a contradiction. Variable X3 will now activate a subtree for contradiction resolution, considering only its value b as "relevant" (since value a is associated with a "0" support coming from X8, which has no underlying assumptions). In the activation process, X4 and X5 will be pruned, since their support messages to X3 are strictly positive. X1 will also be pruned, since it has only one relevant value c and the support associated with this value is positive. The resulting activated tree is marked by heavy lines in Figure 5(b). Contradiction resolution of this subtree will be initiated by both assumption variables X6 and X7, and it will determine that the two assumptions X6 = c and X7 = c need to be replaced by assuming d for both variables (the process itself is not demonstrated).

Once contradiction resolution has terminated, all assumptions can be changed accordingly, and the system can get into a new stable state by handling those changes using support propagation. If this last propagation is not synchronized, the amount of message passing on the network may be proportional to the number of assumptions changed. If, however, this message updating is synchronized, the network can reach a stable state with at most two messages passing on each arc. Figure 5(c) gives the new updated messages after the system has stabilized.

Figure 5: The contradiction resolution process

5. Support propagation in acyclic networks

The support propagation algorithm presented in Section 3 for tree-structured binary networks can be adapted for use with general, non-binary networks whose dual constraint graphs are trees. The dual constraint graph can be viewed as the primal graph of an equivalent binary constraint network, where each of the constraints of the original network is a variable (called a c-variable) and the constraints call for equality of the values assigned to the variables shared by any two c-variables. For example, Figure 6(a) depicts the dual constraint graph of a network consisting of the variables A, B, C, D, E, F, with constraints on the subsets (AEF), (ABC), (CDE), and (ACE) (the constraints themselves are not specified).

The graph of Figure 6(a) contains cycles. Observe, however, that the arc between (AEF) and (ABC) can be eliminated, because the variable A is common along the cycle (AEF)-A-(ABC)-AC-(ACE)-AE-(AEF), so the consistency of the variable A is maintained by the remaining arcs. Similar arguments can be used to show that the arcs labeled C and E are redundant and may be removed as well, thus transforming the dual graph into a tree (Figure 6(b)).

Figure 6: A dual constraint graph of a CSP

A constraint network whose dual constraint graph can be reduced to a tree is said to be acyclic. Acyclic constraint networks are an instance of acyclic databases, and the tree-structured dual constraint graph is a join-tree of the database (see, for example, [Beeri 1983]).

Now, consider the fragment of a tree-structured dual constraint graph, whose nodes represent the constraints C, U_1, U_2, U_3, and U_4, given in Figure 7.

Figure 7: A fragment of a dual constraint graph

We denote by t^C an arbitrary tuple of C. With each tuple t^C we associate a support number s(t^C), which is equal to the number of extensions in which all values of t^C participate. Let s(t^C | U) denote the support of t^C coming from subtree T_{CU}(U), and let s(t^U | \neg C) denote the support for t^U restricted to subtree T_{CU}(U) (we use the same notational conventions as in the binary case). The support for t^C is given by:

s(t^C) = \prod_{U \in \text{neighbors}(C)} s(t^C \mid U).   (6)

The support U contributes to t^C can be derived from the support it contributes to the projection of t^C on C∩U, denoted by t^C_{C∩U}, and this, in turn, can be computed by summing all the supports of tuples in U, restricted to subtree T_{CU}(U), that have the same assignments as t^C for the variables in C∩U. Namely:

s(t^C \mid U) = s(t^C_{C∩U} \mid U) = \sum_{\{t^U \,:\, t^U_{C∩U} = t^C_{C∩U}\}} s(t^U \mid \neg C).   (7)

Equations (6) and (7) yield

s(t^C) = \prod_{U \in \text{neighbors}(C)} \; \sum_{\{t^U \,:\, t^U_{C∩U} = t^C_{C∩U}\}} s(t^U \mid \neg C).   (8)

The propagation scheme emerging from (8) has the same pattern as the propagation for binary constraints. Each constraint calculates the support vector associated with each of its outgoing arcs using:

s(t_{C∩U} \mid \neg C) = \sum_{\{t^U \,:\, t^U_{C∩U} = t_{C∩U}\}} s(t^U \mid \neg C).   (9)

The message which U sends to C is the vector (s(t^U_{C∩U} | \neg C))_i, where i indexes the projections of constraint U on C∩U. Using this message, C can calculate its own support (using (8)) and will also generate updating messages to be sent to its neighbors. Having the supports associated with each tuple in a constraint, the supports of individual values can easily be derived by summing the corresponding supports of all tuples in the constraint having that value.
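A small sketch of equations (6)-(9) on a hypothetical three-constraint join-tree (the relations of Figures 6 and 7 are not specified in the text, so the ones below are invented):

```python
# Scopes and relations of three constraints, plus a join-tree over them.
scopes = {"C1": ("A", "B"), "C2": ("B", "C"), "C3": ("B", "D")}
relations = {
    "C1": {("0", "1"), ("1", "1")},
    "C2": {("1", "0"), ("1", "1")},
    "C3": {("1", "0")},
}
tree = {"C1": {"C2", "C3"}, "C2": {"C1"}, "C3": {"C1"}}

def prod(values):
    p = 1
    for v in values:
        p *= v
    return p

def shared(c, u):
    """Variables common to the scopes of c and u, in a canonical order."""
    return tuple(sorted(v for v in scopes[c] if v in scopes[u]))

def project(c, tup, onto):
    """Projection of a tuple of c onto a subset of its variables."""
    pos = {v: i for i, v in enumerate(scopes[c])}
    return tuple(tup[pos[v]] for v in onto)

def restricted_support(u, parent):
    """s(t^U | not-parent): combine the messages of U's other neighbors."""
    msgs = {n: message(n, u) for n in tree[u] - {parent}}
    return {tup: prod(m.get(project(u, tup, shared(u, n)), 0)
                      for n, m in msgs.items())
            for tup in relations[u]}

def message(u, c):
    """Equation (9): per projection on the shared variables, sum the
    restricted supports of U's tuples agreeing with that projection."""
    msg = {}
    for tup, s in restricted_support(u, c).items():
        key = project(u, tup, shared(u, c))
        msg[key] = msg.get(key, 0) + s
    return msg

# Equations (6)/(8): tuple supports at a chosen root constraint.
root = "C1"
msgs = {n: message(n, root) for n in tree[root]}
support = {t: prod(msgs[n].get(project(root, t, shared(root, n)), 0)
                   for n in tree[root]) for t in relations[root]}
print(support)   # here: each tuple of C1 participates in 2 extensions
```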
Contradiction resolution can also be modified for acyclic networks using the same methodology. Support propagation and contradiction resolution take, on join-trees, the same amount of message passing as their binary-network counterparts. Thus, the algorithm is linear in the number of constraints and quadratic in the number of tuples in a constraint (in fact, due to the special nature of the "dual constraints", being all equalities, the dependency of the complexity on the number of tuples t can be reduced from t^2 to t log t, using an indexing technique). An illustration of this process is provided in the full paper [Dechter 1988a], where an application of this technique to a circuit diagnosis problem is discussed.

6. Extension to general networks

When the constraint network is not acyclic, the method of tree-clustering [Dechter 1988b] can be used prior to application of the propagation schemes described above. This method uses aggregation of constraints into equivalent constraints involving larger clusters of variables, in such a way that the resulting network is acyclic. In this more general case, the complexity of the procedure depends on the complexity of solving a constraint satisfaction problem for each cluster and is exponential in the size of the largest cluster. For more details see [Dechter 1988b].

7. Summary and conclusions

We presented efficient algorithms for support propagation and for contradiction resolution in acyclic dynamic constraint networks, and indicated how these algorithms can be extended to general networks using the tree-clustering method. The propagation scheme contains two components: support updating and contradiction resolution. The first handles non-contradictory inputs and requires one pass through the network. The second finds a minimum set of assumption changes which resolves the contradiction. Contradiction resolution may take five passes in the worst case: activating a diagnosis subtree (one pass), determining a minimum assumption set (two passes) and updating the supports with new assumptions (two passes).
The belief-maintenance mechanism presented here is particularly useful for cases involving minor topological changes, for example, when observations arrive regarding the restriction of an existing constraint rather than the introduction of a new (non-unary) constraint. In such cases the structure of the acyclic network, which may be compiled initially via tree-clustering, does not change.

We do not consider this work to be a proposal for another TMS, since some basic assumptions currently obeyed by TMS developers are not followed here. TMSs try to model the reasoning process of a general knowledge-based system, where the knowledge part is purposely separated from the reasoning part. The TMS, whose input is provided by the reasoner, performs a limited amount of deduction, detects contradictions, and performs dependency-directed backtracking, keeping track of which assertions are assumptions and premises and which ones were deduced. In this view, the TMS is normally not a complete inference procedure; its existence is justified by maintaining, in an efficient way, the consistency of the explicated reasoning process.

Our view is different in that no separation is made between the knowledge and the reasoning process based on this knowledge. We provide a knowledge base in its declarative form with a given amount of derivation already performed on it, without keeping track of the derivation process. The dependencies in the knowledge base are also declarative and undirected. Our goal is to maintain the knowledge consistent and complete under possible changes coming from the outside world (e.g., observations). The dependency structure explicates dependencies in the knowledge structure and not in a particular reasoning path which is based on this knowledge (although these are related). Explanations are not an integral task, although they can be given a declarative definition and can be easily derived from the knowledge.

References

[Beeri 1983] Beeri, C., R. Fagin, D. Maier, and M. Yannakakis, "On the desirability of acyclic database schemes," JACM, Vol. 30, No. 3, July 1983, pp. 479-513.

[Dechter 1987] Dechter, R. and J. Pearl, "Network-based heuristics for constraint-satisfaction problems," Artificial Intelligence, Vol. 34, No. 1, December 1987, pp. 1-38.

[Dechter 1988a] Dechter, R. and A. Dechter, "Belief maintenance in dynamic constraint networks," UCLA, Los Angeles, CA, Tech. Rep. R-108, February 1988.

[Dechter 1988b] Dechter, R. and J. Pearl, "A tree-clustering scheme for constraint processing," in Proceedings AAAI-88, St. Paul, MN, August 1988.

[de Kleer 1986] de Kleer, J., "An assumption-based TMS," Artificial Intelligence, Vol. 28, No. 2, 1986.

[Doyle 1979] Doyle, J., "A truth maintenance system," Artificial Intelligence, Vol. 12, 1979, pp. 231-272.

[Freuder 1982] Freuder, E.C., "A sufficient condition for backtrack-free search," Journal of the ACM, Vol. 29, No. 1, January 1982, pp. 24-32.

[Mackworth 1977] Mackworth, A.K., "Consistency in networks of relations," Artificial Intelligence, Vol. 8, No. 1, 1977, pp. 99-118.

[McAllester 1980] McAllester, D.A., "An outlook on truth maintenance," MIT, Cambridge, Massachusetts, Tech. Rep. AI Memo No. 551, August 1980.

[Montanari 1974] Montanari, U., "Networks of constraints: fundamental properties and applications to picture processing," Information Science, Vol. 7, 1974, pp. 95-132.

[Pearl 1986] Pearl, J., "Fusion, propagation and structuring in belief networks," Artificial Intelligence, Vol. 29, No. 3, September 1986, pp. 241-288.
[Petrie 1987] Petrie, C.J., "Revised dependency-directed backtracking for default reasoning," in Proceedings AAAI-87, Seattle, Washington, July 1987, pp. 167-172.

[Reiter 1987] Reiter, R., "A logic for default reasoning," in Readings in Nonmonotonic Reasoning, M.L. Ginsberg, Ed., Los Altos, CA: Morgan Kaufmann, 1987, pp. 68-93.

[Waltz 1975] Waltz, D., "Understanding line drawings of scenes with shadows," in The Psychology of Computer Vision, P.H. Winston, Ed., New York, NY: McGraw-Hill Book Company, 1975.
1988
10
699
Design for Testability

Peng Wu
MIT AI Lab
545 Technology Sq., Rm 833
Cambridge, MA 02139

Abstract

This paper presents an implemented system for modifying digital circuit designs to enhance testability. The key contributions of the work are: (1) setting design for testability in the context of test generation, (2) using failures during test generation to focus on testability problems, (3) indexing from these failures to a set of suggested circuit modifications. This approach does not add testability features to the portions of the circuit that a test generator can already handle; therefore, it promises to reduce the area and performance overhead necessary to achieve testability. While the system currently has only a small body of domain knowledge, it has demonstrated its ability to integrate different DFT techniques and to introduce only sharply focused modifications on a textbook microprocessor, an ability that is missing in previous DFT systems.

1 Introduction*

*This paper describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the author's research is provided by the Digital Equipment Corporation, Wang Corporation, and the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-85-K-0124.

The key contributions of this work are: (1) setting design for testability in the context of test generation, (2) using failures during test generation to focus on testability problems, (3) indexing from these failures to a set of suggested circuit modifications.

Figure 1: MAC-2: A textbook microprocessor

Figure 1 shows a textbook microprocessor. The left half is the datapath, the right half is the micro sequencer. The components in the sequencer have testability problems because they are internal to the circuit and are not easily accessible from outside.

To solve the testability problem for one of those components, the read-only memory (ROM), our system starts by consulting its library to see how a ROM can be tested. According to the library, a ROM can be tested by applying an exhaustive counting sequence on its address input, then verifying that its outputs are correct. When trying to apply a counting sequence to the ROM address, the system fails because it doesn't have direct access to the ROM input (hence it cannot directly input a counting sequence), and because even in normal use (i.e., getting addresses from the uPC), only a fraction of all the addresses might be applied. Our system then suggests that a register and an incrementor can be used as a counter (and hence provide a counting sequence) when connected in a loop, and indicates that the uPC, the Incrementor and the Multiplexor can do this if the Multiplexor always connects the uPC and the Incrementor during testing.

The output side of the ROM is more difficult: according to the system's TG algorithm, there is no way in the current design to observe the output, so the system encounters another test generation failure. A heuristic associated with this particular kind of failure indicates that the output can be observed by adding a shift function to a register.
In this case, it suggests adding a shift function to the microinstruction register uIR, connecting the additional shift-out port to an output of the circuit, then using the uIR to shift the ROM contents out so that they can be verified.

The system's overall approach to finding testability problems in a circuit and fixing them by modifying the circuit is a four step process:

1. It runs an external test generator [Shirley, 1986] on the circuit to identify the untestable components.
2. It further examines the testing problem by attempting to generate tests for the untestable components and analyzing the reasons for failure. The test generator for this purpose has a simple algorithm, and our system has access to its internals.
3. When it encounters a test generation failure, it selects a modification according to the nature of the failure.
4. Finally, the system modifies the circuit and repeats the process until all untestable components are processed.

Previous approaches to DFT have used heuristic testability definitions that assume a limited test generation capability, i.e., that of a classical, combinational test generator. This may result in false testability problems. For instance, the LSSD (level-sensitive scan design) design rule approach [Horstmann, 1983] defines testability problems as design rule violations. Since LSSD reduces sequential circuit testing to combinational circuit testing, this approach assumes that the test generator is only able to handle combinational circuits, and as a result would find "testability problems" that are not problems for some existing test generator. For example, according to the LSSD design rule approach, every register in the datapath part of the MAC-2 must be changed into a shift register, despite the fact that existing test generators (e.g., [Shirley, 1986]) can test these parts as they are.

2 Test Generation

The purpose of testing a circuit is to verify its behavior. This is done by exercising the circuit, i.e., by applying inputs to it and comparing its outputs against the expected values. Each set of inputs and expected outputs is called a test pattern. Circuits as complex as microprocessors are tested using divide-and-conquer: first, partition the circuit into components, then test each component. This partitioning for testing usually follows the schematic or a partitioning suggested by the designer.

Testing a specific component (the focus of the test) involves three steps:

1. Work out test patterns for the focus.
2. Work out how to apply inputs to the focus via the surrounding components.
3. Work out how to observe the focus outputs, again via the surrounding components.

Working out how to test a component is a recursive subproblem that bottoms out at primitive logic gates, e.g., AND, OR, and NOT, for which there are simple, well-known tests. To test components internal to a VLSI chip, the test patterns must be executed through the surrounding components. This typically involves routing signals through the circuit, which we refer to as routing tasks.

3 Testability and Test Generation

For the purposes of this paper, we define the testability of a circuit relative to a test generation algorithm. We say that a circuit is testable by an algorithm if the algorithm can generate a test for the circuit. If a circuit is not testable by that algorithm, then it is the job of our DFT system to suggest design changes which will enable the algorithm to succeed.
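Schematically, the four steps form a repair loop. The sketch below is ours, not the paper's code, and every name on the test generator and repair library is a hypothetical stand-in:

```python
def design_for_testability(circuit, test_generator, repair_library):
    """Repeat: find untestable components, diagnose each TG failure,
    apply the modification indexed by that failure, until done."""
    while True:
        # Step 1: an external test generator identifies untestable parts.
        untestable = [c for c in circuit.components
                      if not test_generator.can_generate_test(circuit, c)]
        if not untestable:
            return circuit
        for focus in untestable:
            # Step 2: re-run a simple, inspectable generator on the focus
            # and capture why it fails.
            failure = test_generator.diagnose_failure(circuit, focus)
            # Step 3: index from the failure type to a suggested repair.
            modification = repair_library.lookup(failure)
            # Step 4: modify the circuit and iterate.
            circuit = modification.apply(circuit)
```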
Although DFT's goal has been understood in this way previously, only recently have developments in TG technology made it possible to do so in an automatic DFT system.

In our system, when a component has a testability problem, the problem must be in routing, since its test pattern is assumed known (we have built up a library of test patterns for components by consulting testing experts). For example, in Figure 1 the ROM cannot get a counting sequence since none of the five components in the sequencer are directly connected to primary inputs in the original circuit. Thus routing a counting sequence to the ROM fails, and this is one of the reasons for the ROM being untestable.

Figure 2: DFT in the context of TG failure. (The figure shows a test focus with the tasks of supplying a signal at its input and observing its response at its output.)

Figure 2 shows how our DFT system will fix routing problems for a specific test focus. In order to figure out what kind of modifications are helpful, the system checks each of a focus' neighboring components to determine whether it can help to solve a routing task. A neighbor can help in any of several ways:

- The component may be able to complete the task. For example, assume that the focus is a ROM that needs a counting sequence to exercise it. If the neighboring component driving it, X, happens to be a shift-register connected to a primary input, the task of providing a counting sequence can be accomplished by using X in its shifting mode (and the test equipment will drive the primary input with the counting sequence).
- The component may be able to pass the task along for other components to solve. If X has a mode in which its output can be made equal to its input, it can be made "transparent," and the task of providing a counting sequence can be passed to components further "upstream."
- The component may be able to solve part of the task and cooperate with others. For example, suppose X is an ordinary register that happens to be in a loop with an incrementor (the dashed lines). By using them together we can generate the counting sequence. Thus, X accomplishes part of the original task.

In each of the three ways of accomplishing a routing task, design modifications may be involved. For instance, if X is used to complete the task of providing a counting sequence for testing the ROM, i.e., used in the first way as shown above, but it is originally a register without a shift function, it needs a modification.
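The three ways can be read as three match rules against a neighbor's repertoire of modes. The toy model below is ours; the mode names and the Component encoding are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    modes: tuple               # e.g., ("shift", "load", "transparent")
    at_primary_input: bool = False

def ways_to_help(needed_signal, neighbor):
    """Return (how, remaining_task) options for routing `needed_signal`."""
    options = []
    # 1. Complete the task: a shift register fed from a primary input can
    #    deliver an arbitrary sequence (e.g., a counting sequence) itself.
    if "shift" in neighbor.modes and neighbor.at_primary_input:
        options.append(("complete", None))
    # 2. Pass it along: a transparent mode (output == input) pushes the
    #    same task one component upstream.
    if "transparent" in neighbor.modes:
        options.append(("pass-upstream", needed_signal))
    # 3. Cooperate: contribute part of a compound component, e.g., a
    #    register looped with an incrementor acts as a counter.
    if "load" in neighbor.modes and needed_signal == "counting-sequence":
        options.append(("cooperate-as-counter", "loop-with-incrementor"))
    return options

upc = Component("uPC", modes=("load",))
print(ways_to_help("counting-sequence", upc))
# -> [('cooperate-as-counter', 'loop-with-incrementor')]
```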
Component behaviors are represented as I/O mapping functions; this predicts how a component will re- spond to a signal such as a counting sequence. The map- ping functions are used to determine subgoals for handling routing tasks, for example, to determine what signals must appear on each I/O port of a component when passing a signal. 4 Domain Knowledge The mechanism presented in the previous section offers a framework for our DFT system. The domain knowledge needed to complete the system consists of (i) test gener- ation knowledge, such as test patterns for different com- ponents and the way of using components to accomplish routing tasks, and (ii) TG failure repair strategies, such as the component modifications that help routing tasks. The system uses compound component templates to specify how components can work together to accomplish routing tasks that none of the individual components can accomplish alone. A compound component template spec- ifies the required components, the connections between them, the kinds of routing tasks the compound component can handle, the I/O ports at which the compound compo- nent handles the routing tasks, and the routing tasks for the system to accomplish further. The test patterns specify the input stimulus to exer- cise components and the predicted responses of the com- ponents. Each type of component has its own test pattern specifications; we get these from experts. Counting, Exhaustive (4) Required Components - Incrementor - Register (6) Required Connections- Test Pattern for a type of component Test Patterns for functionalities (working modes) Test Patterns for single functionality Test Phases in test pattern 4 / . Signal Requirements on ports Observe all-bit-l APPLY APPLY APPLY walking-l walking-0 add on Output on input-l on input-0 on control Figure 3: Test pattern for ALU A test pattern for a specific type of component is repre- sented in our system as a tree (Figure 3). The root node indicates the component type, an ALU in this case. The second level nodes represent test patterns for each function that this type of component can perform. For example, an ALU can Add, And, Shift, etc. The third level nodes are alternative test patterns for testing the same function. Each of the functions has its own test patterns because a component may not have all the functions mentioned in the generic test pattern definition and its test pattern should vary accordingly. For instance, an ALU without logic operations should be tested differently from one with (6) Further RTs (7) Handling - Port Figure 4: A compound component Figure 4, for example, shows a compound counter that can generate a counting sequence. Its required compo- nents are a register and an incrementor, connected in a loop feeding each other. To match compound component templates to a circuit, each of the required components must match a compo- nent, and each of the required connections must match a signal path. The template of Figure 4 matches the circuit in Figure 1, with the register matched to the UPC, the incrementor matched to the Incrementor, the connection from the output of the incrementor to the input of register matched to the Multiplexor working at the proper mode, etc. So far we have discussed the system’s knowledge about test generation. Now we introduce how components can be modified to repair test generation fai1ures.l Our com- ponent modifications are all additive, that is, components are modified to perform more functions, never fewer. 
Con- ‘There are other strategies, such as swapping components to get different test patterns as in [Zhu and Breuer, 19851, that are not currently included. 360 CommonSenseReasoning sequently, the circuit can always perform its original func- tions after the modification. The system uses masimum function sets to represent what functions can be efficiently added to a particular component. Each type of component has its maximum function set, which includes all the functions commonly as- sociated with it. For instance, the maximum function set for register includes load, shift, linear-feedback-shift, etc. When a function is needed for a component to handle a routing task, and the function is in the maximum function set of the component but is not currently implemented, it can be added to the component through modification. stsaint elaxat i This system uses a constraint relaxation mechanism for three purposes: to control the search, to represent prefer- ences between solutions, and to represent criteria for solu- tion validity. For example, without a constraint explicitly ruling out solutions that loop (i.e., a signal going in a cir- cle), the program would produce many such low-quality solutions. Category Concern sharing Incompatible control for controls test segments sharing Incompatible modifications modifications to the same component control-observe Control path crosses intersection observation path signal Two control paths intersect or intersection two observation paths intersect loop A signal path intersects itself protect focus A focus is used to test itself Table 1: Constraint Categories use-component-once-except-for-focus loop-signal-stable (validity boundary) Table 2: The “loop” constraint category In our system, most of the constraints on the solu- tions are organized in a two-level hierarchical structure. Constraints are first divided into 6 categories according to the parts of the solutions they are concerned about (Table 1). For instance, the “loop” constraint category concerns whether a signal path intersects with itself, i.e., whether it forms a loop. Constraints in each category are then organized by a strictness ordering, that is, if a constraint is violated, all the constraints in the same category that are stricter are also violated. Therefore the violation of constraints in one category can be characterized by the weakest constraint violated. Table 2 shows the constraints in the “loop” cat- egory. Search Control To reduce the search space, the system first generates solutions incrementally in order to take advantage of the fact that whenever a constraint is violated in a partial so- lution, it is violated in any solution built from the partial solution. Whenever the system adds a building block to a partial solution, it checks whether the resulting partial so- lution is violating any constraint; if so, the resulting partial solution will be suspended. Second, the system starts with the strictest constraints - stricter than needed to guarantee the validity of solutions - and it relaxes the constraints gradually when there are not enough solutions under the enforced constraints. Since the stricter the constraints, the smaller the search space, and heuristically, the higher quality the solutions, the system is likely to be searching in the smallest search space that contains the best solutions. Solution Validity and Preference In addition to search control, the constraint relaxation mechanism also captures knowledge about solution validity and preferences. 
Preference is represented as a relaxable constraint as explained above. Validity is represented as a validity boundary in each constraint category. A validity boundary is the weakest constraint in a category that still guarantees the validity of a solution. An example of the validity boundary is the “loop-signal-stable” category in Table 2, which checks resource contention within a signal path. Usually the system will not relax the constraints be- yond the validity boundary. Mowever, if the system cannot find any valid solution, it will relax the constraint further, producing a partial solution for examination by the de- signer, to help him fix the remaining testability problems. Constraint Relaxation When all partial solutions are suspended before the sys- tem finds a given number of solutions, the constraints are relaxed so that some of the suspended solutions may be completed. Each time, the system relaxes the constraints minimally, that is, just enough to re-invoke at least one suspended partial solution. This is done in three steps: 1. For each of the constraint categories, collect the weak- est violated constraints from each of the suspended partial solutions into a Weakest-violation set. If a constraint in this set is relaxed, at least one of the suspended partial solutions will be re-invoked. Collect the strictest constraint in each of the cat- egories from the Weakest-violation set to form a Strictest-Weakest-violation set. This set contains the candidates for a minimal relaxation. Relax the constraints in the Strictest-weakest- violation set one at a time according to the category order in Table 1 until one suspended partial solution is re-invoked. Unlike the constraints within a category, the constraint categories do not have a clean logical relationship, i.e., given that a partial solution violates some constraints in one category, little can be said about whether the par- tial solution violates any constraint in other categories. Therefore the order of relaxation among the categories is based on a heuristic: relax the constraint first that is re- lated to the latest stage of solution construction. For in- wu 341 stance, given that our system constructs one signal path at a time, the ControLobserve intersection category (involving two signal paths) is related to a later construction stage than that of the loop category (involving only one signal path). Hence the former is relaxed earlier. This heuristic can be justified by noting that, among all the suspended partial solutions, those at the latest stage of construction are closest to completion, therefore re-invoking them first is likely to yield complete solutions with least constraint relaxation. 6 Related Work This research has been inspired by the flexibility and preci- sion demonstrated by human DFT experts. For instance, multiplexors are used to partition the MC68020 and on- chip signature analysis is used only where the accessibility is poor [Kuban and Salick, 19841. As one test expert re- marked, the strategy is to “introduce just enough hardware to make the circuit testable.” Our research is an effort to automate some of the techniques used by human experts. The work on test generation in [Shirley, 1986; Shirley et cd., 19871 h as had a strong impact on this research. Shirley’s work recognizes that test generation effort can be traded off against DFT effort. 
Therefore, it may be appropriate for a test generator to give up quickly on the hardest portions of a circuit, when DFT techniques can solve the problem more inexpensively. This is the kind of test generator needed to identify testability problems. The point at which the test generator gives up can be chosen based on the relative costs of generating tests vs modifying the circuit. Horstmann’s DFT system [Horstmann, 19831 takes a de- sign rule approach, using rules from LSSD design stan- dards. Abadir’s DFT system [Abadir and Breuer, 19851 uses a “testable structure embedding” approach, employ- ing general circuit structure models, similar to our com- pound components, to represent structured DFT methods. Our approach differs from these DFT systems in the fol- lowing aspects. o These systems tend to prevent testability problems from arising while our system solves testability prob- lems as they arise. Previous DFT systems define a testability problem to be either a design rule viola- tion [Horstmann, 19831 or a testable structure mis- match [Abadir and Breuer, 19851. Rule violations and structure mismatches are only heuristically related to real testability problems in a circuit. This uncertainty forces a conservative strategy that can result in unnec- essary modifications. l Our approach examines more of a circuit’s potential behavior than previous systems and, therefore, can use existing components in a larger variety of ways. For example, our system can use a register as part of a counter but previous systems do not. e Our approach can employ a larger variety of DFT techniques, both structured and ad hoc DFT tech- niques, flexibly. In comparison, Horstmann’s sys- tem specializes only in LSSD; Abadir’s system bun- dles components that accomplish control/observation tasks with the focus, and thus has a coarser granular- 362 Common Sense Reasoning ity of testable structure than our system does if both are viewed in terms of testable structure matching. Zhu’s system [Zhu and Breuer, 19851 is more an opti- mization system than a DFT system. This system special- izes in replacing components, i.e., selecting from candidate replacements according to trade-offs among incompatible requirements. In a sense, we are solving a different prob- lem; Zhu’s system repairs TG failures caused by compo- nents that have no test patterns by swapping in compo- nents with known test patterns; our system repairs TG failures in signal routing. This research has been inspired in part by [Horstmann, 1983; Abadir and Breuer, 19851. However, in our view these DFT systems fail to answer adequately the critical question about what a testability problem is, an issue that has been central to this research. We define a testability problem as a test generation failure, use a test generator to locate testability problems, and organize circuit modi- fications according to TG failures they repair. 7 Current Implementation Status This research is still at the prototype stage, demonstrat- ing the plausibility of our approach. We think that test generation knowledge accounts for large part of the flex- ibility and precision of human DFT experts. The exam- ple shown in the introduction of this paper is interesting because it shows that our system does not introduce DFT hardware on portions of the circuit which we already know how to test; and where real testing problems exist, the system introduces DFT hardware to solve the actual prob- lems, e.g., it introduces only a modification to the output side of the ROM. 
Additional solutions for all the five components in the sequencer part of the MAC-2, and for a circuit from [Abadir and Breuer, 1985], can be found in [Wu, 1988]. To date the system has not been tested on real circuits. What remains to be seen is how this approach scales up to real circuits and whether precise DFT modifications actually yield lower total DFT overhead than would result from a more structured approach.

Limitations

The test generation process underlying our DFT process is computationally intensive, since it involves satisfying conjunctive goals. When more capability is added to the system in order to make wider changes to circuits (e.g., adding connections between internal circuit nodes), the complexity problem will become more acute. Using more abstract or hierarchical circuit representations might help, but more experiments with real circuits are needed.

Our approach is TG-failure-driven. Therefore the system can employ only DFT techniques that can be viewed as repairs to TG failures. Other techniques, such as partitioning a circuit, or using bus structure to reduce the TG complexity, fall outside our framework.

Our approach provides a framework for suggesting precise DFT modifications. However, due to a lack of related knowledge, the system is currently incapable of directly evaluating the resulting chip area overhead (requiring layout knowledge), test time (requiring details of signal sequences), fault coverage (requiring the quality of test patterns), etc. The work in [Wu, 1988] considers this issue in more detail.

Finally, our approach is not intended to deal with gate-level circuits. That level of detail is, first of all, computationally impractical for microprocessor-scale circuits. In addition, in order to avoid expensive late design changes, our system is intended to work at early design stages, when only high-level circuit descriptions are available. This seems appropriate since many DFT techniques are concerned with only high-level circuit structures, e.g., built-in self-testing.

Future Directions

Ideally, testability should be considered while designing a circuit. However, due to the scale of VLSI circuits, simply getting the device functionality correct is very difficult. As a result, it is common practice to pay attention only to functionality at first, then deal with secondary goals like testability by debugging, minimally perturbing the design while maintaining the primary goals. Our DFT system accomplishes a variety of "minimum perturbation" because it works only on true testing problems (as defined by test generation failures) and because it has a library of minimal design modifications indexed by failure type.

The general idea employed in our approach is to use a simulation process to find defects in a given design, then use the defects to guide the redesign process. Possible additional applications of this idea are design for manufacturability and design for diagnosability. For instance, simulating how parts of a machine can be put together might reveal assembly problems in the design and the solutions to them.

8 Conclusion

Knowledge about test generation is critical to constructing a competent DFT system, yet this knowledge has not been used previously. This research proposes that test generation knowledge can be introduced into a DFT system by following the principle of repairing test generation failures.

The implemented system currently has only primitive domain knowledge and needs more work.
It can modify a circuit by adding functions to components, but cannot add connections due to a lack of circuit layout knowledge. Except for the constraint relaxation mechanism, it does not have a sophisticated evaluation function. However, armed with the knowledge of circuit testing behavior and test generation, it has already demonstrated its ability to integrate different DFT techniques and to introduce only sharply focused modifications on a textbook microprocessor, an ability that is missing in previous DFT systems.

Acknowledgments

The author wishes to thank Prof. Randall Davis for supervising this work and Mark Shirley for many discussions. Both have made great contributions to the ideas presented here and to the presentation of the paper. Gordon Robinson, of GenRad, provided many discussions on testing. Walter Hamscher and Reid Simmons carefully read the draft and offered many useful comments, while Choon P. Goh supplied encouragement, a careful reading of the draft and many useful comments.

References

[Abadir and Breuer, 1985] Magdy S. Abadir and Melvin A. Breuer. A knowledge-based system for designing testable VLSI chips. IEEE Design & Test of Computers, pages 56-68, August 1985.

[Bennetts, 1984] R. G. Bennetts. Design of Testable Logic Circuits, chapter Foreword. Addison-Wesley Publishing Company, 1984.

[Davis, 1982] Randall Davis. Expert systems: Where are we? And where do we go from here? Technical Report A.I. Memo No. 665, MIT Artificial Intelligence Laboratory, June 1982.

[Davis, 1983] Randall Davis. Reasoning from first principles in electronic troubleshooting. Int. J. Man-Machine Studies, (19):403-423, 1983.

[Horstmann, 1983] Paul W. Horstmann. Design for testability using logic programming. In Proceedings of the 1983 International Test Conference, pages 706-713, 1983.

[Kuban and Salick, 1984] John R. Kuban and John E. Salick. Testing approaches in the MC68020. VLSI Design, pages 22-30, November 1984.

[Lai, 1981] Kwok-Woon Lai. Functional Testing for Digital Systems. Technical Report CMU-CS-148, Carnegie-Mellon University, 1981.

[McCluskey, 1985] Edward J. McCluskey. Built-in self-test techniques. IEEE Design and Test of Computers, 2(2):21-28, April 1985.

[Robinson, 1985] Gordon D. Robinson. What is testability? February 1985. Available from the author.

[Shirley et al., 1987] M. Shirley, P. Wu, R. Davis, and G. Robinson. A synergistic combination of test generation and design for testability. In International Test Conference 1987 Proceedings, pages 701-711, The Computer Society of the IEEE, 1987.

[Shirley, 1983] Mark H. Shirley. Digital Test Generation from Hierarchical Models and Failure Symptoms. Master's thesis, Massachusetts Institute of Technology, May 1983.

[Shirley, 1986] Mark Harper Shirley. Generating tests by exploiting designed behavior. In Proceedings of the Fifth National Conference on Artificial Intelligence (AAAI-86), pages 884-890, AAAI, August 1986.

[Williams and others, 1973] M. J. Y. Williams and others. Enhancing testability of large-scale integrated circuits via test points and additional logic. IEEE Transactions on Computers, C-22(1):46-60, January 1973.

[Wu, 1988] Peng Wu. Test Generation Guided Design for Testability. Master's thesis, Massachusetts Institute of Technology, May 1988.

[Zhu and Breuer, 1985] Xi-an Zhu and Melvin A. Breuer. A knowledge based system for selecting a test methodology for a PLA. In Proc. 22nd Design Automation Conference, pages 259-265, June 1985.
1988
100