| { |
| "paper_id": "P91-1044", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:03:17.633462Z" |
| }, |
| "title": "Action representation for NL instructions", |
| "authors": [ |
| { |
| "first": "Barbara", |
| "middle": [], |
| "last": "Di Eugenio", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Pennsylvania", |
| "location": { |
| "settlement": "Philadelphia", |
| "region": "PA" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "", |
| "pdf_parse": { |
| "paper_id": "P91-1044", |
| "_pdf_hash": "", |
| "abstract": [], |
| "body_text": [ |
| { |
| "text": "The need to represent actions arises in many different areas of investigation, such as philosophy [5] , semantics [10] , and planning. In the first two areas, representations are generally developed without any computational concerns. The third area sees action representation mainly as functional to the more general task of reaching a certain goal: actions have often been represented by a predicate with some arguments, such as move (John, block1, room1, room2) , augmented with a description of its effects and of what has to be true in the world for the action to be executable [8] . Temporal relations between actions [1] , and the generation relation [12] , [2] have also been explored. However, if we ever want to be able to give instructions in NL to active agents, such as robots and animated figures, we should start looking at the characteristics of action descriptions in NL, and devising formalisms that should be able to represent these characteristics, at least in principle. NL action descriptions are complex, and so are the inferences the agent interpreting them is expected to draw.", |
| "cite_spans": [ |
| { |
| "start": 98, |
| "end": 101, |
| "text": "[5]", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 114, |
| "end": 118, |
| "text": "[10]", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 436, |
| "end": 442, |
| "text": "(John,", |
| "ref_id": null |
| }, |
| { |
| "start": 443, |
| "end": 450, |
| "text": "block1,", |
| "ref_id": null |
| }, |
| { |
| "start": 451, |
| "end": 457, |
| "text": "room1,", |
| "ref_id": null |
| }, |
| { |
| "start": 458, |
| "end": 464, |
| "text": "room2)", |
| "ref_id": null |
| }, |
| { |
| "start": 583, |
| "end": 586, |
| "text": "[8]", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 624, |
| "end": 627, |
| "text": "[1]", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 658, |
| "end": 662, |
| "text": "[12]", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 665, |
| "end": 668, |
| "text": "[2]", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "As far as the complexity of action descriptions goes, consider:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Ex. 1 Using a paint roller or brush, apply paste to the wall, starting at the ceiling line and pasting down a few feet and covering an area a few inches wider than the width of the fabric.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The basic description apply paste to the wall is augmented with the instrument to be used and with direction and extent modifiers. The richness of the possible modifications argues against representing actions as predicates having a fixed number of arguments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Among the many complex inferences that an agent interpreting instructions is assumed to be able to draw, one type is of particular interest to me, namely, the interaction between the intentional description of an action - which I'll call the goal or the why - and its executable counterpart - the how¹. Consider:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In both a) and b), the action to be executed is \"place a plank between two ladders\". However, Ex. 2.b would be correctly interpreted by placing the plank anywhere between the two ladders: this shows that in a) the agent must be inferring the proper position for the plank from the expressed why \"to create a simple scaffold\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ex. 2 a) Place a plank between two ladders to create a simple scaffold. b) Place a plank between two ladders.", |
| "sec_num": null |
| }, |
| { |
| "text": "My concern is with representations that allow specification of both how's and why's, and with reasoning that allows inferences such as the above to be made. In the rest of the paper, I will argue that a hybrid representation formalism is best suited for the knowledge I need to represent.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ex. 2 a) Place a plank between two ladders to create a simple scaffold. b) Place a plank between two ladders.", |
| "sec_num": null |
| }, |
| { |
| "text": "As I have argued elsewhere based on analysis of naturally occurring data [14] , [7] , actions -action types, to be precise -must be part of the underlying ontology of the representation formalism; partial action descriptions must be taken as basic; not only must the usual participants in an action such as agent or patient be represented, but also means, manner, direction, extent etc. Given these basic assumptions, it seems that knowledge about actions falls into the following two categories:", |
| "cite_spans": [ |
| { |
| "start": 73, |
| "end": 77, |
| "text": "[14]", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 80, |
| "end": 83, |
| "text": "[7]", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A hybrid action representation formalism", |
| "sec_num": "2" |
| }, |
| { |
| "text": "1. Terminological knowledge about an action-type: its participants and its relation to other action-types that it either specializes or abstracts - e.g. slice specializes cut, loosen a screw carefully specializes loosen a screw. 2. Non-terminological knowledge, which must include information about relations between action-types: temporal, generation, enablement, and testing, where by testing I refer to the relation between two actions, one of which is a test on the outcome or execution of the other.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A hybrid action representation formalism", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The generation relation was introduced by Goldman in [9] , and then used in planning by [1] , [12] , [2] : it is particularly interesting with respect to the representation of how's and why's, because it appears to be the relation holding between an intentional description of an action and its executable counterpart -see [12] . This knowledge can be seen as common-sense planning knowledge, which includes facts such as to loosen a screw, you have to turn it counterclockwise, but not recipes to achieve a certain goal [2] , such as how to assemble a piece of furniture.", |
| "cite_spans": [ |
| { |
| "start": 53, |
| "end": 56, |
| "text": "[9]", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 88, |
| "end": 91, |
| "text": "[1]", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 94, |
| "end": 98, |
| "text": "[12]", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 101, |
| "end": 104, |
| "text": "[2]", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 323, |
| "end": 327, |
| "text": "[12]", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 521, |
| "end": 524, |
| "text": "[2]", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A hybrid action representation formalism", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The distinction between terminological and nonterminological knowledge was put forward in the past as the basis of hybrid KR systems such as those that stemmed from the KL-ONE formalism, for example KRYPTON [3] , KL-TWO [13] , and more recently CLASSIC [4] . Such systems provide an assertional part, or A-Box, used to assert facts or beliefs, and a terminological part, or T-Box, that accounts for the meaning of the complex terms used in these assertions.", |
| "cite_spans": [ |
| { |
| "start": 207, |
| "end": 210, |
| "text": "[3]", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 220, |
| "end": 224, |
| "text": "[13]", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 253, |
| "end": 256, |
| "text": "[4]", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A hybrid action representation formalism", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In the past, however, terms defined in the T-Box have typically been taken to correspond to noun phrases in Natural Language, while verbs are mapped onto the predicates used in the assertions stored in the A-Box. What I am proposing here is that, to represent action-types, verb phrases too have to map to concepts in the T-Box. I am advocating a 1:1 mapping between verbs and action-type names. This is a reasonable position, given that the entities in the underlying ontology come from NL.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A hybrid action representation formalism", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The knowledge I am encoding in the T-Box is at the linguistic level: an action description is composed of a verb, i.e. an action-type name, its arguments and possibly, some modifiers. The A-Box contains the non-terminological knowledge delineated above.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A hybrid action representation formalism", |
| "sec_num": "2" |
| }, |
| { |
| "text": "I have started using CLASSIC to represent actions: it is clear that I need to tailor it to my needs, because it has limited assertional capacities. I also want to explore the feasibility of adopting techniques similar to those used in CLASP [6] to represent what I called common-sense planning knowledge: CLASP builds on top of CLASSIC to represent actions, plans and scenarios. However, in CLASP actions are still traditionally seen as STRIPS-like operators, with pre- and post-conditions: as I hope to have shown, there is much more to action descriptions than that.", |
| "cite_spans": [ |
| { |
| "start": 241, |
| "end": 244, |
| "text": "[6]", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A hybrid action representation formalism", |
| "sec_num": "2" |
| }, |
| { |
| "text": "¹ What executable means is debatable: see for example [12], p. 63ff.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Towards a general theory of action and time", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Allen", |
| "suffix": "" |
| } |
| ], |
| "year": 1984, |
| "venue": "Artificial Intelligence", |
| "volume": "23", |
| "issue": "", |
| "pages": "123--154", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Allen. Towards a general theory of action and time. Artificial Intelligence, 23:123-154, 1984.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Modelling act-type relations in collaborative activity", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Balkanski", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. Balkanski. Modelling act-type relations in collaborative activity. Technical Report TR-23-90, Center for Research in Computing Technology, Harvard University, 1990.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "KRYPTON: A Functional Approach to Knowledge Representation", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Brachman", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Fikes", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Levesque", |
| "suffix": "" |
| } |
| ], |
| "year": 1983, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Brachman, R. Fikes, and H. Levesque. KRYPTON: A Functional Approach to Knowledge Representation. Technical Report FLAIR 16, Fairchild Laboratories for Artificial Intelligence, Palo Alto, California, 1983.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Living with CLASSIC: when and how to use a KL-ONE-like language", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Brachman", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "McGuinness", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Patel-Schneider", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [ |
| "Alperin" |
| ], |
| "last": "Resnick", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Borgida", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Brachman, D. McGuinness, P. Patel-Schneider, L. Alperin Resnick, and A. Borgida. Living with CLASSIC: when and how to use a KL-ONE-like language. In J. Sowa, editor, Principles of Semantic Networks, Morgan Kaufmann Publishers, Inc., 1990.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Essays on Actions and Events", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Davidson", |
| "suffix": "" |
| } |
| ], |
| "year": 1982, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Davidson. Essays on Actions and Events. Oxford University Press, 1982.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Plan-Based Terminological Reasoning", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Devanbu", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Litman", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "To appear in Proceedings of KR 91", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Devanbu and D. Litman. Plan-Based Terminological Reasoning. 1991. To appear in Proceedings of KR 91, Boston.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A language for representing action descriptions", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Di Eugenio", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Di Eugenio. A language for representing action descriptions. Preliminary Thesis Proposal, University of Pennsylvania, 1990. Manuscript.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "A new approach to the application of theorem proving to problem solving", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Fikes", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Nilsson", |
| "suffix": "" |
| } |
| ], |
| "year": 1971, |
| "venue": "Artificial Intelligence", |
| "volume": "2", |
| "issue": "", |
| "pages": "189--208", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Fikes and N. Nilsson. A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2:189-208, 1971.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A Theory of Human Action", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Goldman", |
| "suffix": "" |
| } |
| ], |
| "year": 1970, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Goldman. A Theory of Human Action. Princeton University Press, 1970.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Semantics and Cognition. Current Studies in Linguistics Series", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Jackendoff", |
| "suffix": "" |
| } |
| ], |
| "year": 1983, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Jackendoff. Semantics and Cognition. Current Studies in Linguistics Series, The MIT Press, 1983.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Inferring domain plans in question-answering", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Pollack", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Pollack. Inferring domain plans in question-answering. PhD thesis, University of Pennsylvania, 1986.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "The Restricted Language Architecture of a Hybrid Representation System", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Vilain", |
| "suffix": "" |
| } |
| ], |
| "year": 1985, |
| "venue": "IJCAI-85", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Vilain. The Restricted Language Architecture of a Hybrid Representation System. In IJCAI-85, 1985.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Free Adjuncts in Natural Language Instructions", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Webber", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [ |
| "Di" |
| ], |
| "last": "Eugenio", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Proceedings Thirteenth International Conference on Computational Linguistics, COLING 90", |
| "volume": "", |
| "issue": "", |
| "pages": "395--400", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Webber and B. Di Eugenio. Free Adjuncts in Natural Language Instructions. In Proceedings Thirteenth International Conference on Computational Linguistics, COLING 90, pages 395-400, 1990.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "text": "Because effects may occur during the performance of an action, the basic aspectual profile of the action-type [11] should also be included. Clearly, this knowledge is not terminological; in Ex. 3 \"Turn the screw counterclockwise but don't loosen it completely\", the modifier \"not ... completely\" does not affect the fact that \"don't loosen it completely\" is a loosening action: only its default culmination condition is affected.", |
| "html": null |
| } |
| } |
| } |
| } |