{
"paper_id": "P92-1009",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:12:07.097093Z"
},
"title": "",
"authors": [
{
"first": "Nancy",
"middle": [],
"last": "Green",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Delaware Newark",
"location": {
"postCode": "19716",
"settlement": "Delaware",
"country": "USA"
}
},
"email": "green@cis.udel.edu@carberry~cis.udel.edu"
},
{
"first": "Sandra",
"middle": [],
"last": "Carberry",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Delaware Newark",
"location": {
"postCode": "19716",
"settlement": "Delaware",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we present algorithms for the interpretation and generation of a kind of particularized conversational implicature occurring in certain indirect replies. Our algorithms make use of discourse expectations, discourse plans, and discourse relations. The algorithms calculate implicatures of discourse units of one or more sentences. Our approach has several advantages. First, by taking discourse relations into account, it can capture a variety of implicatures not handled before. Second, by treating implicatures of discourse units which may consist of more than one sentence, it avoids the limitations of a sentence-at-a-time approach. Third, by making use of properties of discourse which have been used in models of other discourse phenomena, our approach can be integrated with those models. Also, our model permits the same information to be used both in interpretation and generation.",
"pdf_parse": {
"paper_id": "P92-1009",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we present algorithms for the interpretation and generation of a kind of particularized conversational implicature occurring in certain indirect replies. Our algorithms make use of discourse expectations, discourse plans, and discourse relations. The algorithms calculate implicatures of discourse units of one or more sentences. Our approach has several advantages. First, by taking discourse relations into account, it can capture a variety of implicatures not handled before. Second, by treating implicatures of discourse units which may consist of more than one sentence, it avoids the limitations of a sentence-at-a-time approach. Third, by making use of properties of discourse which have been used in models of other discourse phenomena, our approach can be integrated with those models. Also, our model permits the same information to be used both in interpretation and generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In this paper we present algorithms for the interpretation and generation of a certain kind of conversational implicature occurring in the following type of conversational exchange. One participant (Q) makes an illocutionary-level request 2 to be informed if p; the addressee (A), whose reply may consist of more than one sentence, conversationally implicates one of these replies: p, \"-p, that there is support for p, or that there is support for \"-p. For example, in (1), assuming Q's utterance has been interpreted as a request to be informed if A went shopping, and given certain mutual beliefs e.g., that A's car breaking down would normally e sufficient to prevent A from going shopping, and i We wish to thank Kathy McCoy for her comments on this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "~i.e., using Austin's (Austin, 1962) distinction between locutionary and il]ocutionary force, Q's utterance is intended to function as a request (although it need not have the grammatical form of a question) that A's reply is coherent and cooperative), A's reply is intended to convey, in part, a 'no'.",
"cite_spans": [
{
"start": 22,
"end": 36,
"text": "(Austin, 1962)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) Q:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Did you go shopping? A: a. My car~s not running.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "b. The timing belt broke.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Such indirect replies satisfy the conditions proposed by Grice and others (Grice, 1975; Hirschberg, 1985; Sadock, 1978) for being classified as particularized conversational implicatures. First, A's reply does not entail (in virtue of its conventional meaning) that A did not go shopping. Second, the putative implicature can be cancelled; for example, it can be denied without the result sounding inconsistent, as can be seen by considering the addition of (2) to the end of A's reply in (1.)",
"cite_spans": [
{
"start": 57,
"end": 87,
"text": "Grice and others (Grice, 1975;",
"ref_id": null
},
{
"start": 88,
"end": 105,
"text": "Hirschberg, 1985;",
"ref_id": "BIBREF10"
},
{
"start": 106,
"end": 119,
"text": "Sadock, 1978)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) A: So I took the bus to the mall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Third, it is reinforceable; A's reply in (1) could have been preceded by an explicit \"no\" without destroying coherency or sounding redundant. Fourth, the putative implicature is nondetachable; the same reply would have been conveyed by an alternative realization of (la) and (lb) (assuming that the alternative did not convey a Manner-based implicature). Fifth, Q and A must mutually believe that, given the assumption that A's reply is cooperative, and given certain shared background information, Q can and will infer that by A's reply, A meant 'no'. This paper presents algorithms for calculating such an inference from an indirect response and for generating an indirect response intended to carry such an inference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our algorithms are based upon three notions from discourse research: discourse expectations, discourse plans, and implicit relational propositions in discourse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "2.1"
},
{
"text": "At certain points in a coherent conversation, the participants share certain expectations (Reichman, 1984; Carberry, 1990) about what kind of utterance is appropriate. In the type of exchange we are studying, at the point after Q's contribution, the participants share the beliefs that Q has requested to be informed if p and that the request was appropriate; hence, they share the discourse expectation that for A to be cooperative, he must now say as much as he can truthfully say in regard to the truth of p. (For convenience, we shall refer to this expectation as Answer-YNQ(p).) A discourse plan operator 3 (Lambert & Carberry, 1991 ) is a representation of a normal or conventional way of accomplishing certain communicative goals. Alternatively, a discourse plan operator could be considered as a defeasihle rule expressing the typical (intended) effect(s) of a sequence of illocutionary acts in a context in which certain applicability conditions hold. These discourse plan operators are mutually known by the conversational participants, and can be used by a speaker to construct a plan for achieving his communicative goals. We provide a set of discourse plan operators which can be used by A as part of a plan for fulfilling Answer-YNQ(p). (Mann ~z Thompson, 1983; Mann & Thompson, 1987) have described how the structure of a written text can be analyzed in terms of certain implicit relational propositions that may plausibly be attributed to the writer to preserve the assumption of textual coherency. 4 The role of discourse relations in our approach is motivated by the observation that direct replies may occur as part of a discourse unit conveying a relational proposition. For example, in (3), (b) is provided as the (most salient) obstacle to the action (going shopping) denied by (a); 3in Pollack's terminology, a recipe-for-action (Pollack, 1988; Grosz & Sidner, 1988 )~ 4Although they did not study dialogue, they suggested that it can be analyzed similarly. 
Also note that the relational predicates which we define are similar but not necessarily identical to theirs.",
"cite_spans": [
{
"start": 90,
"end": 106,
"text": "(Reichman, 1984;",
"ref_id": "BIBREF27"
},
{
"start": 107,
"end": 122,
"text": "Carberry, 1990)",
"ref_id": "BIBREF2"
},
{
"start": 612,
"end": 637,
"text": "(Lambert & Carberry, 1991",
"ref_id": "BIBREF15"
},
{
"start": 1251,
"end": 1275,
"text": "(Mann ~z Thompson, 1983;",
"ref_id": null
},
{
"start": 1276,
"end": 1298,
"text": "Mann & Thompson, 1987)",
"ref_id": "BIBREF20"
},
{
"start": 1852,
"end": 1867,
"text": "(Pollack, 1988;",
"ref_id": "BIBREF25"
},
{
"start": 1868,
"end": 1888,
"text": "Grosz & Sidner, 1988",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "2.1"
},
{
"text": "Note that given appropriate context, the (b) replies in (3) through (5)would be sufficient to conversationally implicate the corresponding direct replies. This, we claim, is by virtue of the recognition of the relational proposition that would be conveyed by use of the direct reply and the (b) sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mann and Thompson",
"sec_num": null
},
{
"text": "Our strategy, then, is to generate/interpret A's contribution using a set of discourse plan operators having the following properties: (1) if the applicability conditions hold, then executing the body would generate a sequence of utterances intended to implicitly convey a relational proposition R(p, q);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mann and Thompson",
"sec_num": null
},
{
"text": "(2) the applicability conditions include the condition that R(p, q) is plausible in the discourse context;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mann and Thompson",
"sec_num": null
},
{
"text": "(3) one of the goals is that Q believe that p, where p is the content of the direct reply; and (4) the step of the body which realizes the direct reply can be omitted under certain conditions. Thus, whenever the direct reply is omitted, it is nevertheless implicated as long as the intended relational proposition can be recognized. Note that property (2) requires a judgment that some relational proposition is plausible. Such judgments will be described using defeasible inference rules. The next section describes our discourse relation inference rules and discourse plan operators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mann and Thompson",
"sec_num": null
},
{
"text": "A typical reason for the failure of an agent's attempt to achieve a domain goal is that the agent's domain plan encountered an obstacle. Thus, we give the rule in (6) for inferring a plausible discourse relation of Obstacle. s (iii) B is a proposition that a) a normal applicability condition of T did not hold, or b) a normal precondition of T failed, or c) a normal step of T failed, or d) the agent did not want to achieve a normal goal of T, then plausible(Obstacle(B,A)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Plan Operators and Discourse Relation Inference Rules",
"sec_num": null
},
{
"text": "In (6) and in the rules to follow, 'coherentlyrelated(A,B)' means that the propositions A and B are assumed to be coherently related in the discourse. The terminology in clause (iii) is that of the extended STRIPS planning formalism (Fikes 5For simplicity of exposition, (6) and the discourse relation inference rules to follow are stated in terms of the past; we plan to extend their coverage of times. & Nilsson, 1971; Allen, 1979; Carberry, 1990; Litman & Allen, 1987) .",
"cite_spans": [
{
"start": 404,
"end": 420,
"text": "& Nilsson, 1971;",
"ref_id": "BIBREF4"
},
{
"start": 421,
"end": 433,
"text": "Allen, 1979;",
"ref_id": "BIBREF0"
},
{
"start": 434,
"end": 449,
"text": "Carberry, 1990;",
"ref_id": "BIBREF2"
},
{
"start": 450,
"end": 471,
"text": "Litman & Allen, 1987)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Plan Operators and Discourse Relation Inference Rules",
"sec_num": null
},
{
"text": "Examples of A and B satisfying each of the conditions in (6.iii) are given in (7a) -(7d), respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Plan Operators and Discourse Relation Inference Rules",
"sec_num": null
},
{
"text": "[A]I didn't go shopping. The discourse plan operator given in (8) describes a standard way of performing a denial (exemplified in (3)) that uses the discourse relation of Obstacle given in (6). In 8, as in (6), A is a proposition that an action of type T was not performed. (optional) S inform H that A 2) TelI(S,H,B) Goals:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(7)",
"sec_num": null
},
{
"text": "1) H believe that A 2) H believe that Obstacle(B,A)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(7)",
"sec_num": null
},
{
"text": "In (8) (and in the discourse plan operators to follow) the\" formalism described above is used; 'S' and 'H' denote speaker and hearer, respectively; 'BMB' is the one-sided mutual belief s operator (Clark & Marshall, 1981) ; 'inform' denotes an illocutionary act of informing; 'believe' is Hintikka's (Hintikka, 1962 ) belief operator; 'TelI(S,H,B)' is a subgoal that can be achieved in a number of ways (to be discussed shortly), including just by S informing H that B; and steps of the body are not ordered. (Note that to use these operators for generation of direct replies, we must provide a method to determine a suitable ordering of the steps. Also, although it is sufficient for interpretation to specify that step 1 is optional, for generation, more information is required to decide whether it can or should be omitted; e.g., it should not be omitted if S believes that H might believe that some relation besides Obstacle is plausible in the context. 7 These are areas which we are currently investigating; for related research, see section 3.)",
"cite_spans": [
{
"start": 196,
"end": 220,
"text": "(Clark & Marshall, 1981)",
"ref_id": "BIBREF3"
},
{
"start": 299,
"end": 314,
"text": "(Hintikka, 1962",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "(7)",
"sec_num": null
},
{
"text": "Next, consider that a speaker may wish to inform the hearer of an aspect of the plan by which she accomplished a goal, if she believes that H may not be aware of that aspect. Thus, we give the rule in (9) for inferring a plausible discourse relation of Elaboration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(7)",
"sec_num": null
},
{
"text": "e'S BMB p' is to be read as 'S believes that it is mutually believed between S and H that p'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(7)",
"sec_num": null
},
{
"text": "ZA related question, which has been studied by others (Joshi, Webber ~ Weischedel, 1984a; Joshi, Webber & Weischedel, 1984b) , is in what situations is a speaker required to supply step 2 to avoid misleading the hearer?",
"cite_spans": [
{
"start": 54,
"end": 89,
"text": "(Joshi, Webber ~ Weischedel, 1984a;",
"ref_id": "BIBREF13"
},
{
"start": 90,
"end": 124,
"text": "Joshi, Webber & Weischedel, 1984b)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "(7)",
"sec_num": null
},
{
"text": "(9) If (i) (ii)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(7)",
"sec_num": null
},
{
"text": "coherently-related(A,B), and A is a proposition that an agent performed some action of act type T, and (iii) B is a proposition that describes information believed to be new to H about a) the satisfaction of a normal applicability condition of T such that its satisfaction is not believed likely by H, or b) the satisfaction of a normal precondition of T such that its satisfaction is not believed likely by H, or c) the success of a normal step of T, or d) the achievement of a normal goal of T, then plausible(Elaboration(B,A)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(7)",
"sec_num": null
},
{
"text": "Examples of A and B satisfying each of the conditions in (9.iii) are given in (10a) -(10d), respectively. The discourse plan operator given in (11) describes a standard way of performing an affirmation (exemplified in (4)) that uses the discourse relation of Elaboration. Finally, note that a speaker may concede a failure to achieve a certain goal while seeking credit for the partial success of a plan to achieve that goal. For example, the [B] utterances in (10) can be used following (12) (or aIone, in the right context) to concede failure. Thus, the rule we give in (13)for inferring a plausible discourse relation of Concession is similar (but not identical) to (9).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(7)",
"sec_num": null
},
{
"text": "If (i) (ii)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(13)",
"sec_num": null
},
{
"text": "coherently-related(A,B), and A is a proposition that an agent failed to do an action of act type T, and (iii) B is a proposition that describes a) the satisfaction of a normal applicability condition of T, or b) the satisfaction of a normal precondition of T, or c) the success of a normal step of T, A discourse plan operator, Deny (with Concession), can be given to describe another standard way of performing a denial (exemplified in (5)). This operator is similar to the one given in (8), except with Concession in the place of Obstacle.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(13)",
"sec_num": null
},
{
"text": "An interesting implication of the discourse plan operators for Affirm (with Elaboration) and Deny (with Concession) is that, in cases where the speaker chooses not to perform the optional step (i.e., chooses to omit the direct reply), it requires that the intended discourse relation be inferred in order to correctly interpret the indirect reply, since either an affirmation or denial could be realized with the same utterance. (Although (9) and (13) contain some features that differentiate Elaboration and Concession, other factors, such as intonation, will be considered in future research.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(13)",
"sec_num": null
},
{
"text": "The next two discourse relations (described in (14) and (16)) may be part of plan operators for conveying a 'yes' similar to Affirm (with Elaboration).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(13)",
"sec_num": null
},
{
"text": "If (i) coherently-related(A,B), and (ii) A is a proposition that an agent performed an action X, and (iii) B is a proposition that normally implies that the agent has a goal G, and (iv) X is a type of action occurring as a normal part of a plan to achieve G, then plausible( Motivate-Volitional-Action(B,A)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(14)",
"sec_num": null
},
{
"text": "15) shows tile use of Motivate-Volitional-Action MVA) in an indirect (affirmative) reply.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(14)",
"sec_num": null
},
{
"text": "(15) Q: Did you close the window? A: I was cold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(14)",
"sec_num": null
},
{
"text": "(16) If (i) coherently-related(A,B), and (ii) A is a proposition that an event E occurred, and (iii) B is a proposition that an event F occurred, and (iv) it is not believed that F followed E, and (v) F-type events normally cause E-type events, then plausible(Cause-Non-Volitional (B,A) ).",
"cite_spans": [],
"ref_spans": [
{
"start": 281,
"end": 286,
"text": "(B,A)",
"ref_id": null
}
],
"eq_spans": [],
"section": "(14)",
"sec_num": null
},
{
"text": "/17) shows the use of Cause-Non-Volitional (CNV) m an indirect (affirmative) reply.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(14)",
"sec_num": null
},
{
"text": "(17) Q: Did you wake up very early?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(14)",
"sec_num": null
},
{
"text": "A: The neighbor's dog was barking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(14)",
"sec_num": null
},
{
"text": "The discourse relation described in (18) may be part of a plan operator similar to Deny (with Obstacle) for conveying a 'no'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(14)",
"sec_num": null
},
{
"text": "If (i) coherently-related(A,B), and (ii) A is a proposition that an event E did not occur, and (iii) B is a proposition that an action F was performed, and (iv) F-type actions are normally performed as a way of preventing E-type events, then plausible (Prevent(B,A) ).",
"cite_spans": [],
"ref_spans": [
{
"start": 252,
"end": 265,
"text": "(Prevent(B,A)",
"ref_id": null
}
],
"eq_spans": [],
"section": "(18)",
"sec_num": null
},
{
"text": "(19) showsthe use of Preventin an indirect denial. The discourse relation described in (20) can be part of a plan operator similar to the others described above except that one of the speaker's goals is, rather than affirming or denying p, to provide support for the belief that p. (ii) B is a proposition that describes a typical result of the situation described in proposition A, then plausible(Evidence (B,A) ).",
"cite_spans": [],
"ref_spans": [
{
"start": 407,
"end": 412,
"text": "(B,A)",
"ref_id": null
}
],
"eq_spans": [],
"section": "(18)",
"sec_num": null
},
{
"text": "Assuming an appropriate context, (21) is an .example of use of this relation to convey support, Le., to convey that it is likely that someone is home. A similar rule could be defined for a relation used to convey support against a belief.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(18)",
"sec_num": null
},
{
"text": "Consider the similar dialogues in (22) and (23). First, note that although the order of the sentences realizing A's reply varies in (22) and (23), A's overall discourse purpose in both is to convey a 'yes'. Second, note that it is necessary to have a rule so that if A's reply consists solely of (22a) (=23c), an implicated 'yes' is derived; and if It consists solely of (22b) (=23a), an implicated 'no'. In existing sentence-at-a-time models of calculating implicatures (Gazdar, 1979; Hirschberg, 1985) , processing (22a) would result in an implicated 'yes' being added to the context, which would successfully block the addition of an implicated 'no' on processing (22b). However, processing (23a) would result in a putatively implicated 'no\" bein S added to the context (incorrectly attributing a fleeting intention of A to convey a 'no'); then, on processing (23c) the conflicting but intended 'yes' would be blocked by context, giving an incorrect result. Thus, a sentence-at-a-time model must predict when (23c) should override (23a). Also, in that model, processing (23) requires \"extra effort\", a nonmonotonic revision of belief not needed to handle (22); yet (23) seems more like (22) than a case in which a speaker actually changes her mind.",
"cite_spans": [
{
"start": 471,
"end": 485,
"text": "(Gazdar, 1979;",
"ref_id": "BIBREF5"
},
{
"start": 486,
"end": 503,
"text": "Hirschberg, 1985)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implicatures of Discourse Units",
"sec_num": "2.3"
},
{
"text": "In our model, since implicatures correspond to goals of inferred or constructed hierarchical plans, we avoid this problem. (22A) and (23A) both correspond to step 2 of Affirm (with Elaboration), TelI(S,H,B); several different discourse plan operators can be used to construct a plan for this Tell action. For example, one operator for Tell(S,H,B) is given below in (24); the operator represents that in telling H that B, where B describes an agent's volitional action, a speaker may provide motivation for the agent's action. Motivate-Volitional-Action(q,p)) Body (unordered):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implicatures of Discourse Units",
"sec_num": "2.3"
},
{
"text": "1) Tell(S,H,q)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implicatures of Discourse Units",
"sec_num": "2.3"
},
{
"text": "2) S inform H that p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implicatures of Discourse Units",
"sec_num": "2.3"
},
{
"text": "I) H believe that p 2) H believe that Motivate-Volitional-Action(q,p) (We are currently investigating, in generation, when to use an operator such as (24). For example, a speaker might want to use (24) in case he thinks that the hearer might doubt the truth of B unless he knows of the motivation.) Thus, (22a)/(23c) corresponds to step 2 of (24); (22b) -(22c), as well as (23a) -(23b), correspond to step 1. Another operator for Tell(S,H,p) could represent that in telling H that p, a speaker may provide the cause of an event; i.e., the operator would be like (24) but with Cause-Non-Volitional as the discourse relation. This operator could be used to decompose (22b)-(22c)/(23a)-(23b). The structure proposed for (22A)/(23A) is illustrated in Figure 1 . s Linear precedence in the tree does not necessarily represent narrative order; one way of ordering the two nodes directly dominated by TeII(MVA) gives (22A), another gives (23A). (Narrative order in the generation of indirect replies is an area we are currently investigating also; for related research, see section 3.) Note that Deny (with Obstacle) can not be used to generate/interpret (22A) or (23A) since its body can not be expanded to account for b22a)/(23c). Thus, the correct implicatures can e derived without attributing spurious intentions to A, and without requiring cancellation of spurious implicatures.",
"cite_spans": [],
"ref_spans": [
{
"start": 747,
"end": 755,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Goals:",
"sec_num": null
},
{
"text": "8To use the terminology of (Moore & Paris, 1989; Moore & Paris, 1988) , the labelled arcs represent satellites, and the unlabelled arcs nucleii. However, note that in their model, a nucleus can not be optional. This differs from our approach, in that we have shown that direct replies are optional in contexts such as those described by plan operators such as Affirm (with Elaboration).",
"cite_spans": [
{
"start": 27,
"end": 48,
"text": "(Moore & Paris, 1989;",
"ref_id": "BIBREF22"
},
{
"start": 49,
"end": 69,
"text": "Moore & Paris, 1988)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Goals:",
"sec_num": null
},
{
"text": "9Determiuing this requires that the end of the relevant discourse unit be marked/recognlzed by cue phrases, intonation, or shift of focus; we plan to investigate this problem. I. Select discourse plan operator: Select from the Ans,er-YHQ(p) plan operators all those for ,hich a) the applicability conditions hold, and b) the goals include S's goals. 2. If more than one operator was selected in step I, then choose one. Also, determine step ordering and whether it is necessary to include optional steps. (We are currently investigating how these choices are determined.) 3. Construct a plan from the chosen operator and execute it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Goals:",
"sec_num": null
},
{
"text": "1\u00b0We plan to implement an inference mechanism for the discourse relation inference rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Goals:",
"sec_num": null
},
{
"text": "11 Note that A's goals depend, in part, on the illocutionarylevel representation of Q's request. We assume that an analysis, such as provided in (Perrault & Allen, 1980) , is available.",
"cite_spans": [
{
"start": 145,
"end": 169,
"text": "(Perrault & Allen, 1980)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Goals:",
"sec_num": null
},
{
"text": "(26) Interpretation of indirect reply:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Goals:",
"sec_num": null
},
{
"text": "I. Infer discourse plan: Select from the Ansver-YNQ(p) plan operators all those for ,hich a) the second step of the body matches S's contribution, and b) the applicability conditions hold, and \u00a2) it is mutually believed that the goals are consistent with S's goals. 2. If more than one operator was selected in step I, then choose one. (We are currently investigatin E what factors are involved in this choice. Of course, the utterance may be ambiguous.) 3. Ascribe to S the goal(s) of the chosen plan operator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Goals:",
"sec_num": null
},
{
"text": "Most previous work in computational or formal linguistics on particularized conversational implicature (Green, 1990; Horacek, 1991; Joshi, Webber & Weischedel, 1984a; .]oshi, Webber Weischedel, 1984b; Reiter, 1990; Whiner & Maida, 1991) has treated other kinds of implicature than we consider here. ttirschberg (Hirschberg, 1985) provided licensing rules making use of mutual beliefs about salient partial orderings of entities in the discourse context to calculate the scalar implicatures of an utterance. Our model is similar to Hirschberg's in that both rely on the representation of aspects of context to generate implicatures, and our discourse plan operators are roughly analogous in function to her licensing rules. However, her model makes no use of discourse relations. Therefore, it does not handle several kinds of indirect replies which we treat. For example, although A in (27) could be analyzed as scalar implicating a 'no' in some contexts, Hirschberg's model could not account for the use of A in other contexts as an elaboration (of how A managed to read chapter 1) intended to convey a 'yes'. 12 Furthermore, Hirschberg provided no computational method for determining the salient partially ordered set in a context. Also, in her model, implicatures are calculated one sentence at a time, which has the potential problems described above. Lascarides, Asher, and Oberlander (Lascarides & Asher, 1991; Lascarides & Oberlander, 1992) described the interpretation and generation of temporal implicatures. Although that type of implicature (being Manner-based) is somewhat different from what we are studying, we have adopted their technique of providing defeasible inference rules for inferring discourse relations.",
"cite_spans": [
{
"start": 103,
"end": 116,
"text": "(Green, 1990;",
"ref_id": "BIBREF6"
},
{
"start": 117,
"end": 131,
"text": "Horacek, 1991;",
"ref_id": null
},
{
"start": 132,
"end": 166,
"text": "Joshi, Webber & Weischedel, 1984a;",
"ref_id": "BIBREF13"
},
{
"start": 167,
"end": 200,
"text": ".]oshi, Webber Weischedel, 1984b;",
"ref_id": null
},
{
"start": 201,
"end": 214,
"text": "Reiter, 1990;",
"ref_id": "BIBREF28"
},
{
"start": 215,
"end": 236,
"text": "Whiner & Maida, 1991)",
"ref_id": null
},
{
"start": 311,
"end": 329,
"text": "(Hirschberg, 1985)",
"ref_id": "BIBREF10"
},
{
"start": 1357,
"end": 1417,
"text": "Lascarides, Asher, and Oberlander (Lascarides & Asher, 1991;",
"ref_id": null
},
{
"start": 1418,
"end": 1448,
"text": "Lascarides & Oberlander, 1992)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Past Research",
"sec_num": "3"
},
{
"text": "In philosophy, Thomason (Thomason, 1990) suggested that discourse expectations play a role in some implicatures. McCafferty (McCafferty, 1987) argued that interpreting certain implicated replies requires domain plan reconstruction. However, he did not provide a computational method for interpreting implicatures. Also, his proposed technique cannot handle many types of indirect replies. For example, it cannot account for the implicated negative replies in (1) and (5), since their interpretation involves reconstructing domain plans that were not executed successfully; it cannot account for the implicated affirmative reply in (17), in which no reasoning about domain plans is involved; and it cannot account for implicated replies conveying support for or against a belief, as in (21). Lastly, his approach cannot handle implicatures conveyed by discourse units containing more than one sentence.",
"cite_spans": [
{
"start": 24,
"end": 40,
"text": "(Thomason, 1990)",
"ref_id": "BIBREF30"
},
{
"start": 113,
"end": 142,
"text": "McCafferty (McCafferty, 1987)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Past Research",
"sec_num": "3"
},
{
"text": "Finally, note that our approach of including rhetorical goals in discourse plans is modelled on the work of Hovy (Hovy, 1988) and Moore and Paris (Moore & Paris, 1989; Moore & Paris, 1988), who used rhetorical plans to generate coherent text.",
"cite_spans": [
{
"start": 113,
"end": 125,
"text": "(Hovy, 1988)",
"ref_id": "BIBREF12"
},
{
"start": 130,
"end": 167,
"text": "Moore and Paris (Moore & Paris, 1989;",
"ref_id": "BIBREF22"
},
{
"start": 168,
"end": 188,
"text": "Moore & Paris, 1988)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Past Research",
"sec_num": "3"
},
{
"text": "12 The two intended interpretations are marked by different intonations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Past Research",
"sec_num": "3"
},
{
"text": "We have provided algorithms for the interpretation/generation of a type of reply involving a highly context-dependent conversational implicature. Our algorithms make use of discourse expectations, discourse plans, and discourse relations. The algorithms calculate implicatures of discourse units of one or more sentences. Our approach has several advantages. First, by taking discourse relations into account, it can capture a variety of implicatures not handled before. Second, by treating implicatures of discourse units which may consist of more than one sentence, it avoids the limitations of a sentence-at-a-time approach. Third, by making use of properties of discourse which have been used in models of other discourse phenomena, our approach can be integrated with those models. Also, our model permits the same information to be used both in interpretation and in generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
},
{
"text": "Our current and anticipated research includes: refining and implementing our algorithms (including developing an inference mechanism for the discourse relation rules); extending our model to other types of implicatures; and investigating the integration of our model into general interpretation and generation frameworks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Plan-Based Approach to Speech Act Recognition",
"authors": [
{
"first": "James",
"middle": [
"F"
],
"last": "Allen",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allen, James F. (1979). A Plan-Based Approach to Speech Act Recognition. PhD thesis, University of Toronto, Toronto, Ontario, Canada.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "How To Do Things With Words",
"authors": [
{
"first": "J",
"middle": [
"L"
],
"last": "Austin",
"suffix": ""
}
],
"year": 1962,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Austin, J. L. (1962). How To Do Things With Words. Cambridge, Massachusetts: Harvard University Press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Plan Recognition in Natural Language Dialogue. Cambridge, Massachusetts",
"authors": [
{
"first": "Sandra",
"middle": [],
"last": "Carberry",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carberry, Sandra (1990). Plan Recognition in Nat- ural Language Dialogue. Cambridge, Mas- sachusetts: MIT Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Definite reference and mutual knowledge",
"authors": [
{
"first": "H",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Marshall",
"suffix": ""
}
],
"year": 1981,
"venue": "Elements of discourse understanding",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clark, H. & Marshall, C. (1981). Definite refer- ence and mutual knowledge. In A. K. Joshi, B. Webber, & I. Sag (Eds.), Elements of dis- course understanding. Cambridge: Cambridge University Press.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Strips: A new approach to the application of theorem proving to problem solving",
"authors": [
{
"first": "R",
"middle": [
"E"
],
"last": "Fikes",
"suffix": ""
},
{
"first": "N",
"middle": [
"J"
],
"last": "Nilsson",
"suffix": ""
}
],
"year": 1971,
"venue": "Artificial Intelligence",
"volume": "2",
"issue": "",
"pages": "189--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fikes, R. E. & Nilsson, N. J. (1971). Strips: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2, 189-208.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Pragmatics: Implicature, Presupposition, and Logical Form",
"authors": [
{
"first": "G",
"middle": [],
"last": "Gazdar",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gazdar, G. (1979). Pragmatics: Implicature, Pre- supposition, and Logical Form. New York: Academic Press.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Normal state implicature",
"authors": [
{
"first": "Nancy",
"middle": [
"L"
],
"last": "Green",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the 28th Annual Meeting, Pittsburgh. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Green, Nancy L. (1990). Normal state impli- cature. In Proceedings of the 28th Annual Meeting, Pittsburgh. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Logic and conversation",
"authors": [
{
"first": "H",
"middle": [],
"last": "Grice",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Paul",
"suffix": ""
}
],
"year": 1975,
"venue": "Syntax and Semantics III: Speech Acts",
"volume": "",
"issue": "",
"pages": "41--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grice, H. Paul (1975). Logic and conversation. In Cole, P. & Morgan, J. L. (Eds.), Syntax and Semantics III: Speech Acts, (pp. 41-58)., New York. Academic Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Plans for discourse",
"authors": [
{
"first": "Barbara",
"middle": [
"&"
],
"last": "Grosz",
"suffix": ""
},
{
"first": "Candace",
"middle": [],
"last": "Sidner",
"suffix": ""
}
],
"year": 1988,
"venue": "Intentions in Communication",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grosz, Barbara & Sidner, Candace (1988). Plans for discourse. In P. Cohen, J. Morgan, M. Pollack (Eds.), Intentions in Communica- tion. MIT Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Knowledge and Belief",
"authors": [
{
"first": "J",
"middle": [],
"last": "Hintikka",
"suffix": ""
}
],
"year": 1962,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hintikka, J. (1962). Knowledge and Belief. Ithaca: Cornell University Press.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A Theory of Scalar Implicature",
"authors": [
{
"first": "Julia",
"middle": [
"B"
],
"last": "Hirschberg",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hirschberg, Julia B. (1985). A Theory of Scalar Implicature. PhD thesis, University of Penn- sylvania.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Exploiting conversational implicature for generating concise explanations",
"authors": [
{
"first": "Helmut",
"middle": [],
"last": "Horacek",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings. European Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Horacek, Helmut (1991). Exploiting conversa- tional implicature for generating concise expla- nations. In Proceedings. European Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Planning coherent multisentential text",
"authors": [
{
"first": "Eduard",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the 26th Annual Meeting",
"volume": "",
"issue": "",
"pages": "163--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hovy, Eduard H. (1988). Planning coherent multi- sentential text. In Proceedings of the 26th An- nual Meeting, (pp. 163-169). Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Living up to expectations: Computing expert responses",
"authors": [
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 1984,
"venue": "Proceedings of the Fourth National Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "169--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshi, Aravind, Webber, Bonnie, & Weischedel, Ralph (1984a). Living up to expectations: Computing expert responses. In Proceedings of the Fourth National Conference on Artificial Intelligence, (pp. 169-175)., Austin, Texas.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Preventing false inferences",
"authors": [
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 1984,
"venue": "Proceedings of Coling84",
"volume": "",
"issue": "",
"pages": "134--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshi, Aravind, Webber, Bonnie, & Weischedel, Ralph (1984b). Preventing false inferences. In Proceedings of Coling84, (pp. 134-138), Stanford University, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A tripartite plan-based model of dialogue",
"authors": [
{
"first": "Lynn",
"middle": [
"&"
],
"last": "Lambert",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "Carberry",
"suffix": ""
}
],
"year": 1991,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "47--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lambert, Lynn & Carberry, Sandra (1991). A tri- partite plan-based model of dialogue. In Pro- ceedings of the 29th Annual Meeting, (pp. 47- 54). Association for Computational Linguis- tics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Discourse relations and defeasible knowledge",
"authors": [
{
"first": "Alex",
"middle": [
"&"
],
"last": "Lascarides",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
}
],
"year": 1991,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "55--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lascarides, Alex & Asher, Nicholas (1991). Dis- course relations and defeasible knowledge. In Proceedings of the 29th Annual Meeting, (pp. 55-62). Association for Computational Lin- guistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Temporal coherence and defeasible knowledge. Theoretical Linguistics",
"authors": [
{
"first": "Alex",
"middle": [
"&"
],
"last": "Lascarides",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Oberlander",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lascarides, Alex & Oberlander, Jon (1992). Tem- poral coherence and defeasible knowledge. Theoretical Linguistics, 18.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A plan recognition model for subdialogues in conversation",
"authors": [
{
"first": "Diane",
"middle": [
"&"
],
"last": "Litman",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
}
],
"year": 1987,
"venue": "Cognitive Science",
"volume": "11",
"issue": "",
"pages": "163--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Litman, Diane & Allen, James (1987). A plan recognition model for subdialogues in conver- sation. Cognitive Science, 11, 163-200.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Relational propositions in discourse",
"authors": [
{
"first": "William",
"middle": [
"C"
],
"last": "Mann",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"A"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mann, William C. & Thompson, Sandra A. (1983). Relational propositions in discourse. Technical Report ISI/RR-83-115, ISI/USC.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Rhetorical structure theory: Toward a functional theory of text organization",
"authors": [
{
"first": "William",
"middle": [
"C"
],
"last": "Mann",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"A"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 1987,
"venue": "Text",
"volume": "8",
"issue": "3",
"pages": "167--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mann, William C. & Thompson, Sandra A. (1987). Rhetorical structure theory: Toward a func- tional theory of text organization. Text, 8(3), 167-182.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Reasoning about Implicature: a Plan-Based Approach",
"authors": [
{
"first": "Andrew",
"middle": [
"S"
],
"last": "Mccafferty",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McCafferty, Andrew S. (1987). Reasoning about Im- plicature: a Plan-Based Approach. PhD thesis, University of Pittsburgh.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Planning text for advisory dialogues",
"authors": [
{
"first": "Johanna",
"middle": [
"D"
],
"last": "Moore",
"suffix": ""
},
{
"first": "Cecile",
"middle": [],
"last": "Paris",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the 27th Annual Meeting",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moore, Johanna D. & Paris, Cecile (1989). Plan- ning text for advisory dialogues. In Proceed- ings of the 27th Annual Meeting, University of British Columbia, Vancouver. Association of Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Constructing coherent text using rhetorical relations",
"authors": [
{
"first": "Johanna",
"middle": [
"D"
],
"last": "Moore",
"suffix": ""
},
{
"first": "Cecile",
"middle": [
"L"
],
"last": "Paris",
"suffix": ""
}
],
"year": 1988,
"venue": "Proc. 10th Annual Conference. Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moore, Johanna D. & Paris, Cecile L. (1988). Con- structing coherent text using rhetorical rela- tions. In Proc. 10th Annual Conference. Cog- nitive Science Society.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A plan-based analysis of indirect speech acts",
"authors": [
{
"first": "Raymond",
"middle": [
"&"
],
"last": "Perrault",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
}
],
"year": 1980,
"venue": "American Journal of Computational Linguistics",
"volume": "6",
"issue": "3-4",
"pages": "167--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Perrault, Raymond & Allen, James (1980). A plan-based analysis of indirect speech acts. American Journal of Computational Linguis- tics, 6(3-4), 167-182.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Plans as complex mental attitudes",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Pollack",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pollack, Martha (1988). Plans as complex men- tal attitudes. In P. Cohen, J. Morgan, &",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Intentions in Communication",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Pollack (Eds.), Intentions in Communica- tion. MIT Press.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Extended personmachine interface",
"authors": [
{
"first": "Rachel",
"middle": [],
"last": "Reichman",
"suffix": ""
}
],
"year": 1984,
"venue": "Artificial Intelligence",
"volume": "22",
"issue": "",
"pages": "157--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reichman, Rachel (1984). Extended person- machine interface. Artificial Intelligence, 22, 157-218.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The computational complexity of avoiding conversational implicatures",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
}
],
"year": 1990,
"venue": "Pittsburgh. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "97--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reiter, Ehud (1990). The computational complex- ity of avoiding conversational implicatures. In Proceedings of the 28th Annual Meeting, (pp. 97-104)., Pittsburgh. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "On testing for conversational implicature",
"authors": [
{
"first": "Jerrold",
"middle": [
"M"
],
"last": "Sadock",
"suffix": ""
}
],
"year": 1978,
"venue": "Syntax and Semantics",
"volume": "",
"issue": "",
"pages": "281--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sadock, Jerrold M. (1978). On testing for conversa- tional implicature. In Cole, P. & Morgan, J. L. (Eds.), Syntax and Semantics, (pp. 281-297)., New York. Academic Press.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Accommodation, meaning, and implicature: Interdisciplinary foundations for pragmatics",
"authors": [
{
"first": "Richmond",
"middle": [
"H"
],
"last": "Thomason",
"suffix": ""
}
],
"year": 1990,
"venue": "Intentions in Communication. Cambridge, Massachusetts",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomason, Richmond H. (1990). Accommoda- tion, meaning, and implicature: Interdisci- plinary foundations for pragmatics. In P. Co- hen, J. Morgan, & M. Pollack (Eds.), In- tentions in Communication. Cambridge, Mas- sachusetts: MIT Press.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Good and bad news in formalizing generalized implicatures",
"authors": [
{
"first": "Jacques",
"middle": [
"&"
],
"last": "Wainer",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Maida",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the Sixteenth Annual Meeting of the Berkeley Linguistics Society",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wainer, Jacques & Maida, Anthony (1991). Good and bad news in formalizing generalized impli- catures. In Proceedings of the Sixteenth An- nual Meeting of the Berkeley Linguistics Soci- ety, (pp. 66-71)., Berkeley, California.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "car's not running. in (4), as an elaboration of the action (going shopping) conveyed by (a);",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "a. [B] The stores were closed. b. [B] My car wasn't running. c. [B] My car broke down on the way. d. [B] I didn't want to buy anything.",
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"uris": null,
"text": "12) [A]I didn't go shopping today, but",
"num": null,
"type_str": "figure"
},
"FIGREF6": {
"uris": null,
"text": "or d) the achievement of a normal goal of T, and (iv) the achievement of the plan's component in B may bring credit to the agent, then plausible(Concession(B,A)).",
"num": null,
"type_str": "figure"
},
"FIGREF7": {
"uris": null,
"text": "19) Q: Did you catch the flu? A: I got a flu shot.",
"num": null,
"type_str": "figure"
},
"FIGREF9": {
"uris": null,
"text": "upstairs lights are on.",
"num": null,
"type_str": "figure"
},
"FIGREF10": {
"uris": null,
"text": ". My car's not running. b. The timing belt broke. c. (So) I had to take the bus.",
"num": null,
"type_str": "figure"
},
"FIGREF12": {
"uris": null,
"text": "algorithms are given in (25) and (26), respectively. They presuppose that the plausible discourse relation is available. 10 The generation algorithm assumes as given an illocutionary-level representation of A's communicative goals. 11 (25) Generation of indirect reply:",
"num": null,
"type_str": "figure"
}
}
}
}