| { |
| "paper_id": "T75-2036", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T07:43:10.437117Z" |
| }, |
| "title": "THE COMMONSENSE ALGORITHM AS A BASIS FOR COMFUTER MODELS OF HUMAN MEMORY, INFERENCE, BELIEF AND CONTEXTUAL LANGUAGE COMPREHENSION", |
| "authors": [ |
| { |
| "first": "Chuck", |
| "middle": [], |
| "last": "Rieger", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Maryland", |
| "location": {} |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "", |
| "pdf_parse": { |
| "paper_id": "T75-2036", |
| "_pdf_hash": "", |
| "abstract": [], |
| "body_text": [ |
| { |
| "text": "The reason for this seems to be that the original experiences which caused the structures and processes to exist in the first place come in chunks themselves; knowledge is never gained outside of some context, and in gaining some piece of knowledge X in context C, X and C become inseparable. This suggests that it is meaningless to model \"a piece of knowledge\" without regard for the larger structures of which it is a part.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "If our goal is to build a robot which behaves and perceives in manners similar to a human, this means that the process by which the robot selects a piece of knowledge as being applicable to the planning, executory, inferential or interpretive process at hand at the moment is a function not only of the specific problem, but also of the larger context in which that instance of planning, execution, inference or interpretation occurs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "If, for example, our robot sees his friend with a wretched facial expression, the inference he makes about the reasons for his friend's misery will reflect the larger picture of which he is aware at the time: his friend has just returned from a trip to purchase opera tickets vs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "his friend has Just eaten the cache of mushrooms collected yesterday vs .....", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The same pervasiveness of context exists in the realm of the robot's interpretations of visual perceptions: the very same object (visible at eye level) will 18o be perceived out of the corner of his eye in one situation as the cylindrical top of his electric coffee grinder (he is at home in his kitchen), but as the flasher of a police car (he is speeding on the freeway) in another.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "This suggests that at every moment, some fairly large swatch of his knowledge about the world somehow has found its way to the foreground to exert its influence; as our robot moves about, swatches must fade in and out, sometimes coalescing, so that at any moment, just the right one is standing by to help guide acts of planning, infePence and perception.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 5, |
| "end": 180, |
| "text": "suggests that at every moment, some fairly large swatch of his knowledge about the world somehow has found its way to the foreground to exert its influence;", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Marvin Minsky has captured this whole idea very neatly in his widely-circulated \"Frames\" paper [MI] . While this paper describes an overall approach to modeling human memory, inference and beliefs, we still lack any specific formulation of the ingredients which make up the large, explicitly-unified structures which seem to underlie many higher-level human cognitive functions.", |
| "cite_spans": [ |
| { |
| "start": 95, |
| "end": 99, |
| "text": "[MI]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "It is the purpose of this paper to define the notion \"commonsense algorithm\" (CSA) and to propose the CSA as the basic Cognitive structure which underlies the human processes of planning, inference and contextual interpretation of meaning. ", |
| "cite_spans": [ |
| { |
| "start": 77, |
| "end": 82, |
| "text": "(CSA)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "EVOLUTION OF THE CSA IDEA", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "III.", |
| "sec_num": null |
| }, |
| { |
| "text": "The next section will define a CSA as a network-like structure consisting of events tied together by primitive links.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "III.", |
| "sec_num": null |
| }, |
| { |
| "text": "Taken as a whole, the CSA specifies a process: how to get something done, how something works, etc.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "III.", |
| "sec_num": null |
| }, |
| { |
| "text": "A computer scientist's first reaction to this type of structure is \"Oh yes, that's an AND/OR problem-reduction graph\" (see Nilsson [NI] for example). Figure I shows an AND/OR graph for how to achieve the goal state \"a", |
| "cite_spans": [ |
| { |
| "start": 131, |
| "end": 135, |
| "text": "[NI]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 150, |
| "end": 159, |
| "text": "Figure I", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "III.", |
| "sec_num": null |
| }, |
| { |
| "text": "McDonald's hamburger is in P's stomach.\" Edges with an arc through them specify AND successors of a node (subgoals, all of which achieved imply the parent node has been achieved); edges with no arc through them specify OR successors (subgoals, any one of which being sufficient to achieve the parent goal). Figure 7 shows both how to conceive of the algorithm and how the algorithm will actually run.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 307, |
| "end": 315, |
| "text": "Figure 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "III.", |
| "sec_num": null |
| }, |
| { |
| "text": "As a computer algorithm, this is not as fully explicit as might be desired: it lacks explicit iteration and explicit termination criterion testing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "III.", |
| "sec_num": null |
| }, |
| { |
| "text": "These will have to be worked out before the theory adequately handles repetition. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "III.", |
| "sec_num": null |
| }, |
| { |
| "text": "The algorithmic content of a CSA can be described at many different levels of resolution.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VII. LEVELS OF RESOLUTIONS IN CSA'S", |
| "sec_num": null |
| }, |
| { |
| "text": "For example, the \"action\" \"take a plane to San Francisco\" is quite a bit higher in level and more abstract than the action \"grasp a saw\". In the former, the act of taking a plane somewhere is not really an action at all, but rather a description of an entire set of actions, themselves related in a CSA; \"take a plane to San Francisco\" is a high level surrogate for a low level collection of true actions in the sense of actually performing physical movements, etc. in the real world (things like grasping a saw, reaching into pants pocket for some money, and so on). That is, for a car, OKFOR means \"gas in tank\", \"tires inflated\", \"battery charged\",..., whereas (OKFOR TOILET FLUSHING) means quite a different set of things.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VII. LEVELS OF RESOLUTIONS IN CSA'S", |
| "sec_num": null |
| }, |
| { |
| "text": "The basic issue is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VII. LEVELS OF RESOLUTIONS IN CSA'S", |
| "sec_num": null |
| }, |
| { |
| "text": "should the memory plan and interpret in the abstract realm of OKFORedness, then instantiate with details later, or must the details serve as the primary planning basis, with the abstract ideas being reserved for other higher level processes such as reasoning by analogy, generalization and so forth? There is probably no cut-and-dried answer;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VII. LEVELS OF RESOLUTIONS IN CSA'S", |
| "sec_num": null |
| }, |
| { |
| "text": "however, the tendency in a CSA system would be to favor the details over the abstract.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VII. LEVELS OF RESOLUTIONS IN CSA'S", |
| "sec_num": null |
| }, |
| { |
| "text": "But the CSA representation is intended to be flexible enough to accomodate both the abstract and the concrete. The answer seems to be \"yes\", since it seems reasonable to regard enablement as a flow which can be cut off in much the same way as causality. is its inability to specify time sequencing; adoption of some traditional flowchart concepts will probably prove adequate for this.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VII. LEVELS OF RESOLUTIONS IN CSA'S", |
| "sec_num": null |
| }, |
| { |
| "text": "(3) There is no convenient way to model decision-making processes on the part of the planner of a CSA. This will have to be developed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VII. LEVELS OF RESOLUTIONS IN CSA'S", |
| "sec_num": null |
| }, |
| { |
| "text": "APPLICATIONS OF THE CSA How do such situations get judged \"anomalous\", and how does the perceiver try to explain or cope with them? The answer undoubtedly relates to expectancies and a knowledge of algorithms for putting things on one-another, getting somewhere in a hurry and antagonistic states when eating outdoors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "IX.", |
| "sec_num": null |
| }, |
| { |
| "text": "By playing experience against CSA's we discover things which would not otherwise be discovered.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "IX.", |
| "sec_num": null |
| }, |
| { |
| "text": "If a person is perceiving in a noisy or incomplete environment, having CSA's available to guide his interpretations of perceptions provides enough momentum to fill in missing details, scarcely noticing their absence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "For filling in missing information", |
| "sec_num": "4." |
| }, |
| { |
| "text": "If John is hammering a nail into the wall with his hand on the backswing, but the object in his hand is occluded, it requires very little effort to surmise that it is a hammer. If we believe that Mary is going to McDonald's to buy a hamburger, but she comes back into the house saying \"It won't start\", we have a pretty good idea \"it\" refers to the car.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "For filling in missing information", |
| "sec_num": "4." |
| }, |
| { |
| "text": "This application of CSA's corresponds to the notion of a specification inference in (RI).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "For filling in missing information", |
| "sec_num": "4." |
| }, |
| { |
| "text": "Instead of a conclusion, I will simply state the order in which research along CSA lines should, and hopefully will at the University of Maryland, progress:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "X. CONCLUSIONS", |
| "sec_num": null |
| }, |
| { |
| "text": "I. Reimplementation of the conceptual overlays prototype system described in (R3) to reflect the new CSA ideas and replace the ad-hoc AND/OR graph approach described in that report.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "X. CONCLUSIONS", |
| "sec_num": null |
| }, |
| { |
| "text": "2. Implementation of a mechanism simulator which could accept, in CSA terms, the definition of a complex mechanism (electronic circuit or toilet), simulate it, respond to artificially-induced malfunctions, and answer questions about the mechanism's cause and effect structure.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "X. CONCLUSIONS", |
| "sec_num": null |
| }, |
| { |
| "text": "3. Engineering of a new total conceptual memory, along the lines of the original one of (RI), but incorporatng CSA's and the new idea of a tendency. This would involve reimplementing the inference mechanism and various searchers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "X. CONCLUSIONS", |
| "sec_num": null |
| }, |
| { |
| "text": "CSA interpreter which could not only use CSA's as data structures in the various cognitive processes, but also could execute them to drive itself. e.,~\\e.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Development of a", |
| "sec_num": "4." |
| }, |
| { |
| "text": "Hamburger algorithm, with actions, states, causality and enablement explicit. Computer algorithm to compute the average of TABLE(1),...,TABLE(N) expressed as a commonsense algorithm.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 2", |
| "sec_num": null |
| }, |
| { |
| "text": "(NOTE: Initialization has not been shown. The assumptions are that AC3 begins with zero, that ACI begins with zero, and that N and TABLE(1),...,TABLE(N) exist in core.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 2", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "They have provided considerable intellectual stimulation. We hope to continue as a group and eventually issue a working paper and computer system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "acknowledgement", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "The Structure of Belief Systems", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Abelson", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abelson, R., \"The Structure of Belief Systems,\" in Schank and Colby (eds.),", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "num": null, |
| "text": "Figure !Unrestricted AND/OR graph for getting aMcDonald's hamburger into stomach.", |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "num": null, |
| "text": "FIGURE 6 Vicious Cycles.", |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "html": null, |
| "content": "<table><tr><td>phrase</td><td>is</td><td>intended</td><td>to</td><td colspan=\"2\">encompass</td><td>such</td><td>I</td></tr><tr><td colspan=\"7\">seemingly diverse robot/human activities as: I. communicating with another robot/human (e.g., how to transfer</td><td>I</td></tr><tr><td colspan=\"3\">information,</td><td colspan=\"2\">instill</td><td colspan=\"2\">wants,</td></tr><tr><td colspan=\"6\">convince, etc.) 2. getting about in the world 3. building things (both physical</td><td>and</td><td>I</td></tr><tr><td colspan=\"2\">mental)</td><td>and</td><td colspan=\"3\">understanding</td><td>the</td></tr><tr><td colspan=\"7\">operation of things by others 4. conceiving, designing already implementing computer programs and built .and</td><td>I</td></tr><tr><td colspan=\"2\">other</td><td colspan=\"2\">commonsense</td><td colspan=\"2\">algorithms</td><td>(a</td></tr><tr><td colspan=\"5\">special form of building) 5. interpreting sequences perceptions (e.g.,</td><td colspan=\"2\">of language</td><td>I</td></tr><tr><td colspan=\"5\">utterances) in context</td><td/><td/></tr><tr><td colspan=\"7\">6. making inferences from perceptions contextually meaningful</td><td>I</td></tr><tr><td colspan=\"7\">I am convinced that all the world can and should be expressed in such dynamics a uniform of CSA formalism built around a relatively small number of cognitively</td><td>i</td></tr><tr><td colspan=\"4\">primitive ingredients.</td><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>I do not have a</td><td>complete</td><td>theory</td><td>yet:</td><td>I</td></tr><tr><td/><td/><td/><td/><td/><td/><td>the</td><td>intent</td><td>of</td><td>this</td><td>paper</td><td>is to record a</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">memory dump of ideas past few months and to show how II. 
accumulated THE SCOPE OF THE CSA'S APPLICABILITY over</td><td>the</td><td>i II</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>Most</td><td>of</td><td>human</td><td>knowledge</td><td>can</td><td>be</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">classified as either static or dvnamic. example, a person's static knowledge of automobile tells him its general physical For an shape, size, position of steering wheel,</td><td>I</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">wheels,</td><td>engine,</td><td>seats, etc.; these are the</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">abstract aspects of a</td><td>car</td><td>which,</td><td>although</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">many</td><td>differ</td><td>in detail from car to car, are</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">inherently unchanging.</td><td>They are in</td><td>essence</td></tr><tr><td/><td/><td/><td/><td/><td/><td>the</td><td>physical</td><td>definition</td><td>of a car.</td><td>On the</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">other hand, a person s dynamic knowledge</td><td>of</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">a car tells him the functions of the various</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">components and how</td><td>and</td><td>why</td><td>to</td><td>coordinate</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">them</td><td>when</td><td>the car is applied to some goal.</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">The static knowledge tells the person</td><td>where</td></tr><tr><td/><td/><td/><td/><td/><td/><td>to</td><td>expect</td><td>the steering wheel to be when he</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">gets in; the dynamic knowledge tells him how</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">to get in in the first place, and what to 
do</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">with the wheel (and why) once he is in.</td><td>For</td></tr><tr><td/><td/><td/><td/><td/><td/><td>a</td><td>robot</td><td>immersed</td><td>in</td><td>a</td><td>highly</td><td>kinematic</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">world --physically,</td><td>psychologically</td><td>and</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">socially --a very large part of his beliefs</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">and knowledge must relate to</td><td>dynamics:</td><td>how</td></tr><tr><td/><td/><td/><td/><td/><td/><td>he</td><td>can</td><td>effect</td><td>changes</td><td>in himself and his</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">world, and how</td><td>he</td><td>perceives</td><td>other</td><td>robots</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">effecting changes.</td><td>It is the purpose of the</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">CSA to capture the dynamics of the world</td><td>in</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">belief</td><td>structures</td><td>which</td><td>are</td><td>amenable</td><td>to</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">computer manipulation</td><td>of</td><td>plans,</td><td>inference</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">and contextual interpretation.</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>It should be stressed that</td><td>the</td><td>phrase</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">\"dynamics</td><td>of</td><td>the world\" is intended in its</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">broadest</td><td>possible</td><td>sense.</td><td>As</td><td>will</td><td>be</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">elaborated</td><td>upon</td><td>in</td><td>a</td><td>later</td><td>section, the</td></tr></table>", |
| "num": null, |
| "text": "they can unify my past ideas on inference and memory, as well as the ideas of others.", |
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "html": null, |
| "content": "<table><tr><td colspan=\"2\">AND/OR graphs adequate in practice have aspects of problem-solving been for guiding various demonstrated behavior in existing robots (see [$I] for example). However, they are intuitively not theoretically adequate structures for representing general knowledge of world dynamics: their principal deficiency is that they are ad-hoc constructions which express neither the implicit conceptual relationships among their components, nor the inherent types of their components. Because of this, there is no constraint on their organization, and this means that two AND/OR graphs which accomplish or model the same thing might bear very little resemblance to one-another when in fact they are conceptually very similar. This may be little more than a nuisance in practice, but it is undesirable in principle because it makes learning, reasoning by analogy, sharing of subgoals, etc. tedious if not impossible in a generalized problem solver. a time aspect; statechanges are characterized by an value and object, a continuous state scale (temperature, degree of anger, distance, etc.), a time aspect and beginning and end points on the scale. It is the notion of a tendency which is new and which serves to unify a class of problems which have been continually experienced in representing processes. Basically, a tendency is an actorless action. Tendencies are characterized by specifying a set of enabling conditions ,and a set of result states and/or statechanges. Whenever the enabling conditions are satisfied, the tendency, by some unspecified means, causes the states and statechanges specified as the tendency's results. Hence, a tendency may be regarded as a special type of non-purposive action which must occur whenever all its enabling conditions are satisfied. 
Contrasting the notion of a tendency with the notion of an action yields a rather compact definition of what makes a \"volitional\" action volitional: a volitional action is an action which need not occur even though all its physical enabling conditions are met. The reason it may not occur is, of course, that the actor does not desire it to occur; tendencies have no such desires. The abstract notion of a tendency is meant to be general-purpose, to characterize a wide variety of phenomena which are not actions, but action-like. Examples of tendencies are: I. GRAVITY, PRESSURE, MAGNETISM, ATOMIC-FISSION, HEAT-FLOW, and the host of other physical principles. Commonsense GRAVITY might be captured as follows:** ((TYPE . TENDENCY) (REFERENCE-NAME . GRAVITY) (ENABLEMENTS . (UNSUPPORTED OBJ) (LESSP (DISTANCE OBJ EARTH) (ORDERMILES)) (RESULTS . (STATECHANGE OBJ VELOCITY X X+d (LOC OBJ) (LOC EARTH))) 2. human biological functions: a tendency to GROW-HUNGRY, GROW-SLEEPY, GROW-OLDER (sole enabling condition is the passage of time!), GROW-LARGER, etc. For example: ((TYPE . TENDENCY) (REFERENCE-NAME GROW-HUNGRY) (ENABLEMENTS . (iNOT (LOC NUTRIENTS STOMACH)) (DURATION * ORDERHOURS))) (RESULTS . (WANT P (INGEST P NUTRIENTS MOUTH STOMACH)))) 3. human psychological functions: the tendency to GROW-LONELY, the tendency to FORGET, etc. For example: **The LISP notation reflects some concurrent thinking on how a commonsense algorithm system might actually be engineered. 
A forthcoming States $I, S2 or statechangges SCI, tendency's primary use is at the edge of the A refinement of the notion of an AND/OR graph introduces the concepts of causality and enablement, and actions and states (statechanges); edges in the graph are distinguished as either causal or enabling, the nodes are distinguished as either actions or states, and the graph obeys the syntactic contraints: (a) actions cause states (b) states enable actions Bob Abelson [AI] was among the first to employ these historically very old concepts in the framework of a computer model of human belief, and since then, numerous computer-oriented systems of knowledge reDresentation (e.g., Schank's conceptual deDendency[S2], Schmidt's models of personal causation [$4]), as well as systems of inference (Rieger [RI], Charniak [CI]) have found these four concepts to be vital to meaning representation and inference. In some sense, enablement, causality, states and actions seem to be cognitive primitives. Figure 2 is a refinement of Figure I which makes explicit the nature of each node and each connecting arc, and hence the underlying gross conceptual structure of the algorithm. While the inclusion of these four concepts (and their resulting syntactic constraints) in the basic paradigm makes for a theoretically more coherent representation, the scheme is still too coarse to capture the kinds of detailed knowledge of algorithms people possess. The following section proposes an extended framework of event types and event connectors based on these four notions and some others. These event types and connectors will be regarded as model-primitives which hopefully are in correspondence with \"psychological primitives\" in humans. IV. DEFINITION OF THE COMMONSENSE ALGORITHM In the new formalism, a CSA consists of nodes of five types: I. WANTS 2. ACTIONS 3. STATES 4. STATECHANGES 5. 
TENDENCIES The first four types are not new (see [$3] for example), and will not be covered here beyond the following brief mention. A WANT is some goal state which is desired by a potential actor. An action is something an (animate) actor does or can do: it is enabled by certain states (certain conditions which must be true in order for the action to begin and/or proceed), and in turn causes other states (discrete) or statechanges (continuous) to occur. Actions are characterized by an actor, a model-primitive action, a time aspect, a location aspect, and a conceptual case framework which is specific to each model-primitive action. States are characterized by an object, an attribute, a ((TYPE \u2022 TENDENCY) (REFERENCE-NAME . GROW-LONELY) (ENABLEMENTS ((ALONE P) iDURATION * ORDERDAYS)) (RESULTS . (WANT P (COMMUNICATE P X)))) ((TYPE . TENDENCY) (REFERENCE-NAME . FORGET) (ENABLEMENTS . (INHEAD ITEM P) ((UNREFERENCED ITEM P) (DURATION * ORDER??))) (RESULTS (STATECHANGE ITEM REFERENCE-DIFFICULTY X X+d)) Tendencies, thus characterized, will play an important role in modeling algorithmic processes via CSA's. In fact, adopting the notion of a tendency as a model primitive points out a rather ubiquitous principle: humans spend a large amount of time in planning either how to overcome tendencies which stand in the way of their goals, or how to harness them at the proper times in place of an action (e.g., dropping the large rock on the coconut). Although a world model, where things happen simply because \"that's the way things are\", it will probably be desirable to have the ability to regard as tendencies things which in fact can be explained. Characterizing something as a tendency even though it may be reduceable to further algorithms is probably one tactic a human employs when confronted with the analysis of very complex, olny partially understood processes. 
Even though something ~ be further explained, the system of representation should allow that something to be treated as though it were a tendency. Tendencies have numerous aspects which will require explicit characterization in a computer model. Two such aspects relate to (I) the inherent rapidity with which a tendency exerts itself and (2) the tendency's periodicity, if any. how quickly does a person become hungry (slope of curve), how long does it take to forget something, how rapidly does an object accelerate, how fast does the water flow through the nozzle, etc.? If the tendency is periodic, what are the parameters describing its periodicity? The primitive CSA links described in the next section will serve in part to capture such aspects, but they not yet adequate. The CS~ nrimitive Lin~ Using these five event-types as building blocks (WANTS, ACTIONS, STATES, STATECHANGES, TENDENCIES), the goal is to be able to express the dynamics of just about anything, be it a physical device, a psychological tactic employed by one person on another, how a person purchases a McDonald's hamburger, or how a computer program functions or was constructed. There are 25 primitive links in the current formulation. They will only be defined here, leaving Justificaion and details of their use for the examples which will follow, and for subsequent papers on the subject. In the following definitions, W, A, S, SC and T will stand for WANT, ACTION, STATE, STATECHANGE and TENDENCY, respectively. SC2 are causally coupled; because of this coupling, changes in $I or SCI are synonomous with changes in $2 or SC2. This link provides a way of capturing the relatedness of various aspects of the same situation. TYPE 8: GATED CAUSAL STATE COUPLING State $2 or statechange (causally coupled to) $I states in [S] are true. SC2 is synonymous with or SCI, provided that all This link is similar to ungated state coupling, except for the existence of factors which could disrupt the coupling. 
To illustrate, the flow of a fluid into a container (a statechange in location of the water) is synonymous with an increase in the amount of water in As with ordinary dynamic antagonism, $I, $2 antagonistic states. Typically, $2 is required as an enabling state (continuous) for some action, but that action, or some other action, produces $I as a byproduct; this gives rise to the need for corrective action therby preserving the original required enablement. This link is intended to capture the execution dynamics of a situation in which antagonistic states are expected to arise. That is, it will provide representation wherein antagonisms can be anticipated in advance of the SCA's actual execution. An example of motivating dynamic antagonism is included in the next section. gravity is continually moving water from the tank to the bowl. This is synonomous with a optimization marker allows this phenomenon to be recorded. Its interpretation is: when state S becomes true, consider performing acton A~ because action A also has S as an enabling state. sewer. This action ceases becomes empty. Meanwhile, the tendency performed. when the bowl for all the compensating actions which were ~ denotes a savings. tendency, in turn, sustains the flow of water from the bowl to the waste channel, continually moving waste water into the after a successful plan involving antagonisms has actually been executed, this link provides a means of recording once and action may coincide with those needed for another. The the tendency capillary action. This such antagonisms before execution). Also, a When several actions arise in a plan, they may share enabling states. 
TYPES 9,10,11,12: (ONE-SHOT/CONTINUOUS, GATED/NON-GATED) BYPRODUCT
State S or statechange SC is a causal byproduct of action A, relative to goal state Sg or SCg. That is, the actor of A, wishing to achieve state Sg or statechange SCg, also produces state S or statechange SC. The byproduct link is truly a causal link; what is and is not a byproduct must obviously relate to the motive of the actor in performing the action. Where gated, all states in [S] must be satisfied in order for the byproduct to occur.

TYPE 13: ORIGINAL INTENT
Want W is the original desire (goal state) of an actor. W is external to the CSA in that its origin is not explicable within the CSA itself; it is the outside directive which motivated the invocation of some action. Within an algorithm for achieving some goal, motivations are explicable: every subaction is, by its nature, designed to produce subgoal states which, taken together, meet the original intent.

TYPE 14: ACTION CONCURRENCY
Actions A1,...,An must be concurrently executed. This link will arise in the dynamics of an actual plan, rather than be stored originally in the algorithmic base explicitly. As plans evolve and an actor learns to perform the concurrency by rote, the link will begin to appear in the algorithmic base as well. Action concurrency is nearly always caused by multiple enabling states for some other action, all of which must be continually present, or one-time synchronized as a collection of one-shot enablements.
TYPE 15: DYNAMIC ANTAGONISM
State S1 or statechange SC1 is antagonistic to state S2 or statechange SC2 along some dimension. This link relates two states or statechanges which are opposites in some sense; typically, the link will make explicit the final link in some sort of feedback cycle in an algorithm.

TYPE 16: MOTIVATING DYNAMIC ANTAGONISM
As with ordinary dynamic antagonism, S1, S2 are antagonistic states. Typically, S2 is required as an enabling state (continuous) for some action, but that action, or some other action, produces S1 as a byproduct; this gives rise to the need for a corrective action A which can suppress the byproduct, thereby preserving the original required enablement. This link is intended to capture the execution dynamics of a situation in which antagonistic states are expected to arise. That is, it will provide representation wherein antagonisms can be anticipated in advance of the CSA's actual execution. It should be pointed out that points of antagonism could alternatively be detected at the execution time of the CSA, and compensatory solutions dynamically fabricated; this would likely occur via some sort of interrupt mechanism. But the antagonism link allows for planning ahead (detecting such antagonisms before execution). Also, when two arbitrary algorithms are selected to accomplish a task, their coexistence will probably not always be without antagonism; this link allows the planning mechanism to anticipate and solve such antagonisms. An example of motivating dynamic antagonism is included in the next section.
TYPE 17: GOAL-REALIZATION COUPLING
State S is an alternative way of expressing original goal W or subgoal Sg. This link supplies a way of specifying termination criteria for CSAs involving repetition. Its use is illustrated in one of the examples.

TYPE 18: COMPOUND GOAL STATE DEFINITION
State S is a shorthand for expressing the set of goal states S1,...,Sn. This link allows a "situation" to be characterized as a collection of goal states. When all goal states are satisfied, the situation is satisfied. An example of a compound goal state would be: "get the kids ready for the car trip", where this means a set of things rather than one thing.

TYPES 19,20,21,22: DISENABLEMENT (ONE-SHOT/CONTINUOUS, GATED/NON-GATED)
Action A or tendency T one-shot/continually causes state S or statechange SC not to exist. These four forms are shorthands for causality in conjunction with antagonism. They will be principally useful for representing acts of disenabling unwanted tendencies.
TYPE 23: REPETITION UNTIL THRESHOLD
Action A or tendency T occurs repeatedly until some state S becomes true. Normally, the action or tendency will, directly or indirectly, causally produce a statechange along some scale; this statechange will eventually threshold at state S. This link provides for the repeated application of an action or tendency.

TYPE 24: INDUCEMENT
State S's or statechange SC's existence induces want W in a potential actor. Origins of wants can be explicitly represented via this link. Typically, W will be a state which is antagonistic to S or SC. For example, if the temperature is too high in the room, the want is that the temperature become lower; if the tendency PRESSURE has been enabled, allowing blood to flow out of P's body, the induced want is that this tendency be disenabled.

TYPE 25: OPTIMIZATION MARKER
State S is an enabling condition for action A, and this relationship makes possible an optimization during the execution of A in a particular environment. When several actions arise in a plan, they may share enabling states. This means that when the plans are considered together, some of the states needed for one action may coincide with those needed for another. The optimization marker allows this phenomenon to be recorded; it denotes a savings. Its interpretation is: when state S becomes true, consider performing action A, because action A also has S as an enabling state. After a successful plan involving antagonisms has actually been executed, this link provides a means of recording once and for all the compensating actions which were performed. The link is hard to describe outside the context of an example; examples of its use will appear in the next section.

These are the commonsense algorithm primitive links. It is felt that they are conceptually independent enough of one another so that unique algorithms will be forced into unique, or at least similar, representations under this formalism. Although it is the eventual intent of the theory to be able to capture all the nuances of intentional human problem-solving behavior, there is no real feeling yet for the completeness of this set of links in this regard; all that can be said now is that they do seem to suggest a reasonable approach to representing large classes of purposive human behavior. The adequacy of these primitives for representing devices and mechanisms, on the other hand, is easier to see, at least intuitively; the links seem to be adequate for some fairly complex "purposive" mechanisms. Accordingly, the first example of their use will be to characterize a mechanism very dear to most of us.
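One way to make the link vocabulary concrete is a small data-structure sketch. This is purely illustrative and not from the paper, which specifies no implementation; the names `Node`, `Link`, and `EVENT_TYPES` are invented here, and only the shape of a typed, optionally gated edge between event-type nodes is being shown.

```python
from dataclasses import dataclass, field

# Hypothetical encoding (not the paper's): the five CSA event-types,
# and primitive links as tagged edges between event-type nodes.
EVENT_TYPES = {"WANT", "ACTION", "STATE", "STATECHANGE", "TENDENCY"}

@dataclass
class Node:
    kind: str    # one of EVENT_TYPES
    label: str   # human-readable description

@dataclass
class Link:
    type_num: int   # 1..25, following the paper's numbering
    name: str       # e.g. "ONE-SHOT CAUSALITY"
    src: Node
    dst: Node
    gates: list = field(default_factory=list)  # the [S] gating states, if any

# Example: TYPE 1 (one-shot causality) -- an action causing a persistent state.
push = Node("ACTION", "push trip handle")
raised = Node("STATE", "flush ball raised")
link = Link(1, "ONE-SHOT CAUSALITY", push, raised)
```

A gated link (TYPES 3,4) would simply carry a non-empty `gates` list; an interpreter would refuse to propagate causality across the edge until every gating state held.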
V. EXAMPLES OF COMMONSENSE ALGORITHMS

EXAMPLE 1. Operation of a reverse-trap toilet (Figure 3)

As a first test of the theory, the reverse-trap toilet is a relatively demanding mechanism. It is a complex feedback mechanism which is the product of some rather sophisticated human problem-solving, and it is therefore interesting both in its own right and as a tangible manifestation of human-concocted causality and enablement. By one simple action, a complex sequence of tendencies is unleashed; the sequence not only stops itself, but restores the system to its initial state, and does something useful in the process. The English description of the schematic of Figure 4 is as follows. The trip handle is pushed down, one-shot causing the flush ball to be raised; this raising one-shot enables the tendency to float, in turn continually causing the float ball to remain raised. The float ball's being raised is synonymous with the flush valve being open, and this openness continuously enables the tendency of gravity to move water from the tank to the bowl beneath (as long as water remains in the tank, of course). This movement of water is synonymous with two other statechanges: a decrease of water height in the tank, and an increase of water height in the bowl. The increase of bowl water height thresholds when the water reaches waste channel lip level, at which time it begins providing continuous enablement for gravity to move the water into the waste channel; this movement thresholds when the channel fills, providing the beginning of continuous enablement of the tendency capillary action. This tendency, in turn, sustains the flow of water from the bowl to the waste channel, continually moving waste water into the sewer. This action ceases when the bowl becomes empty. Meanwhile, the tendency gravity is continually moving water from the tank to the bowl. This is synonymous with a decrease in tank water height, and this decrease thresholds at point X, synonymous with the fresh water supply valve opening; at this point PRESSURE's enabling states start to exist. The opening enables the tendency PRESSURE to move water from the fresh water line into the tank; this is synonymous with an increase in tank water height, but only providing that the flush valve is closed (this will have to wait for the movement of waste from tank to bowl to cease). When the tank water height finally begins to increase, this increase will threshold at point X again, this time being synchronous with the ball cock supply valve's being closed, stopping the fresh water flow and hence the tank water height increase. At this point, the system has become quiescent again. (Note: in the actual simulation which will be performed, flow rates, or more generally, rates of statechanges, will be incorporated.)
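The flush cycle described above can be caricatured as a tiny discrete simulation. This is a hedged sketch, not the paper's promised simulation (which will incorporate flow rates): the function name, unit step sizes, and threshold point are invented, and the bowl/waste-channel stages are collapsed into a single drain.

```python
# Hypothetical discrete sketch of the toilet's feedback cycle: gravity drains
# the tank while the flush valve is open; falling below point X opens the
# supply valve; refill is gated on the flush valve being closed, and
# thresholds again at X, closing the supply valve (quiescence).
def flush_cycle(tank=10, point_x=10, steps=100):
    bowl, flush_open, supply_open = 0, True, False
    for _ in range(steps):
        if flush_open and tank > 0:          # continuous causality: gravity moves water
            tank -= 1; bowl += 1
        if tank < point_x:                   # decrease thresholds at point X
            supply_open = True               # ... opening the supply valve
        if tank == 0:
            flush_open = False               # flush valve closes when the tank empties
        if supply_open and not flush_open:   # gated causality: refill gated on closed valve
            tank += 1
        if tank >= point_x:                  # increase thresholds at X again
            supply_open = False              # ... closing the supply valve
        if not flush_open and not supply_open:
            break                            # system quiescent
    return tank, flush_open, supply_open

# The cycle drains, refills, and returns to quiescence with a full tank:
# flush_cycle() -> (10, False, False)
```

The point of the sketch is structural: every `if` corresponds to one of the link types (continuous causality, gating, threshold), mirroring how the CSA schematic decomposes the mechanism.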
EXAMPLE 2. Sawing a board in half to decrease its length (Figure 5)

Figure 5 is a bare-bones representation of a purposive human process: sawing a board in two using a handsaw. This CSA illustrates the concepts of motivating dynamic antagonism, original intent and byproduct with respect to a goal. The schematic of Figure 5 is only a fragment of the larger algorithm; many enabling states and byproducts, as well as their compensatory actions, have been omitted. In this CSA, the act of sawing for the purpose of decreasing the board's length produces, among others, the byproduct of the board's moving. Since a stationary board is a gate condition on the flow of causality from the sawing action to the statechange in cut depth, the two states joined by the motivating dynamic antagonism link form an antagonistic pair, indicating in advance of actual execution that it will be necessary to perform a compensatory action: hold the wood down. If we were to illustrate more of this algorithm in depth, it might be found that holding the wood down would require more hands than were available. This would provide another dynamic antagonism which would motivate the engagement of another compensatory action, such as "call for help," "go to a vise," etc.

EXAMPLE 3. Vicious cycles (Figure 6)

Consider tendencies such as fire and forgetfulness. Both roughly follow the paradigm: a tendency has state S as a continuous enablement, and produces the same state as continuous causality. Once started, such a system is self-sustaining. In the case of fire, a one-shot causing action causes a statechange in temperature which thresholds at the point of the material's combustion temperature; this enables the tendency to burn, which in turn produces heat as a continual byproduct, hence causing a vicious cycle. In forgetting, the tendency to forget X is enabled by not referencing X for periods of time; but as X grows more forgotten, it becomes less referenceable. Here, dynamic antagonism lies at the root of the vicious cycle.
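The vicious-cycle paradigm (a tendency whose continuous enablement S is also its continuous causal product) can be sketched directly. The numbers and function name below are invented for illustration; only the loop structure follows the paper.

```python
# Hypothetical sketch of the fire vicious cycle: burning both requires
# heat (continuous enablement: S) and produces heat (continuous causality: S),
# so once the combustion threshold is crossed the cycle sustains itself.
def fire(temp, combustion_temp=451, steps=5):
    history = []
    for _ in range(steps):
        burning = temp >= combustion_temp  # continuous enablement: S must hold
        if burning:
            temp += 10                     # continual byproduct heat re-produces S
        history.append(burning)
    return temp, history

# Below threshold the cycle never starts; at threshold it is self-sustaining.
assert fire(400)[1] == [False] * 5
assert fire(451)[1] == [True] * 5
```

Forgetting has the same shape with the inequality reversed: not referencing X enables forgetting, and forgetting makes X less referenceable.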
EXAMPLE 4. Description (synthesis) of a computer algorithm (Figure 7)

Suppose the goal is to compute the average of a table of numbers, TABLE(1),...,TABLE(n).

A future report will describe progress toward implementing the ideas in this paper.</td></tr></table>", |
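Since Figure 7 is not reproduced here, the following is only one plausible CSA reading of the averaging task, sketched in Python: each loop iteration is the repeated action of REPETITION UNTIL THRESHOLD (the index creeping toward n is the statechange that eventually thresholds), and the final quotient is a GOAL-REALIZATION COUPLING with the original want.

```python
def table_average(table):
    """Average TABLE(1),...,TABLE(n), structured to expose the CSA links."""
    total, i, n = 0, 0, len(table)
    while i < n:            # TYPE 23: repeat until the threshold state (i == n) holds
        total += table[i]   # repeated action, accumulating a statechange in 'total'
        i += 1
    return total / n        # TYPE 17: this state realizes the original goal

# e.g. table_average([2, 4, 6]) == 4.0
```

The interest for the theory is not the arithmetic but that the same link vocabulary used for toilets and handsaws also describes how a program functions or was constructed.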
| "num": null, |
| "text": "State S or statechange SC is a causal byproduct of action A, relative to goal state Sg or SCg.", |
| "type_str": "table" |
| } |
| } |
| } |
| } |