{
"paper_id": "W94-0322",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:47:04.295824Z"
},
"title": "Generating Indirect Answers to Yes-No Questions",
"authors": [
{
"first": "Nancy",
"middle": [],
"last": "Green",
"suffix": "",
"affiliation": {},
"email": "green@udel.edu"
},
{
"first": "Sandra",
"middle": [],
"last": "Carberry",
"suffix": "",
"affiliation": {},
"email": "carberry@udel.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "An indirect answer to a Yes-No question conversationally implicates the speaker's evaluation of the truth of the questioned proposition. We present the approach to generation used in our implemented system for generating and interpreting indirect answers to Yes-No questions in English. Generation of a discourse plan is performed in two phases: content planning and plan pruning. During content planning, stimulus conditions are used to trigger speaker goals to include appropriate extra information with the direct answer. Plan pruning determines what parts of this full response do not need to be stated explicitly, resulting, in appropriate discourse contexts, in the generation of an indirect answer.",
"pdf_parse": {
"paper_id": "W94-0322",
"_pdf_hash": "",
"abstract": [
{
"text": "An indirect answer to a Yes-No question conversationally implicates the speaker's evaluation of the truth of the questioned proposition. We present the approach to generation used in our implemented system for generating and interpreting indirect answers to Yes-No questions in English. Generation of a discourse plan is performed in two phases: content planning and plan pruning. During content planning, stimulus conditions are used to trigger speaker goals to include appropriate extra information with the direct answer. Plan pruning determines what parts of this full response do not need to be stated explicitly, resulting, in appropriate discourse contexts, in the generation of an indirect answer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The timing belt is broken.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "According to one study of spoken English [Ste84] , 13 percent of responses to Yes-No questions were indirect answers. Thus, the ability to interpret indirect answers is required for robust dialogue systems. Furthermore, there are good reasons for generating indirect answers in a dialogue system. First, a direct yes or no alone may be misleading if extra information is needed to qualify the answer. Second, an indirect answer may contribute to a more efficient dialogue. For example, in addition to providing the requested information, it may anticipate a follow-up question from Q, or it may allow R to respond immediately without asking for clarification of the question. That is, the increased efficiency is more the result of avoiding the follow-up question or clarification subdialogue than the result of omitting the direct answer. Third, an indirect answer may be preferable for politeness considerations, as in (1).",
"cite_spans": [
{
"start": 41,
"end": 48,
"text": "[Ste84]",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "We have developed a computational model for the interpretation and generation of indirect answers to Yes-No questions in English. More precisely, by a Yes-No question we mean one or more utterances used as a request by Q (the questioner) that R (the responder) convey R's evaluation of the truth of a proposition p. An indirect answer implicitly conveys via one or more utterances R's evaluation of the truth of the questioned proposition p, i.e. that p is true, that p is false, that there is some truth to p, that p may be true, or that p may be false. Our model presupposes that Q's question has been understood by R as intended by Q, that Q's request was appropriate, and that Q and R are engaged in a cooperative goal-directed dialogue. Both the interpretation and generation components have been implemented in Common Lisp on a Sun SPARCstation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "A language user's pragmatic knowledge of how language is used to answer Yes-No questions in English can be used to constrain the problem of generating and interpreting indirect answers. This knowledge is encoded in our model as a set of domain-independent discourse plan operators and a set of coherence rules, described in section 2. (Our main sources of data were previous studies [Hir85, Ste84], transcripts of naturally occurring two-person dialogue [Ame92], and constructed examples.) Our system is reversible in that the same pragmatic knowledge is used by the generation and interpretation modules. Generation is modeled as discourse plan construction and interpretation as discourse plan inference. The output of generation is a discourse plan which can be realized by a tactical generation component [McK85], and the output of interpretation is the speaker's inferred discourse plan. We found that while the above pragmatic knowledge is sufficient for interpretation, it is not sufficient for the problem of content planning during generation.",
"cite_spans": [
{
"start": 655,
"end": 662,
"text": "[McK85]",
"ref_id": null
},
{
"start": 902,
"end": 909,
"text": "[Hir85,",
"ref_id": "BIBREF8"
},
{
"start": 910,
"end": 916,
"text": "Ste84]",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "7th International Generation Workshop, Kennebunkport, Maine, June 21-24, 1994. This paper describes our approach to generation, including our solution to this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Generation of a discourse plan, described in section 4, is performed in two phases: content planning and plan pruning. During content planning, stimulus conditions, described in section 3, are used to trigger new speaker goals to include appropriate extra information with the direct answer. Plan pruning determines what parts of this full response do not need to be stated explicitly. In appropriate discourse contexts, a plan for an indirect answer is generated. In other contexts, i.e. those in which the direct answer must be given explicitly, a plan for a direct answer with appropriate extra information is generated. Note that, according to the study mentioned earlier [Ste84] , 85 percent of direct answers are accompanied by such information. Thus, it is important to model this type of response as well. Section 5 discusses other factors in generating indirect answers, and section 6 surveys related work.",
"cite_spans": [
{
"start": 676,
"end": 683,
"text": "[Ste84]",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "A language user's knowledge of how to answer a Yes-No question appropriately can be described by a set of discourse plan operators representing full answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pragmatic Knowledge",
"sec_num": "2."
},
{
"text": "A full answer consists of a direct answer (which we call the nucleus) and, possibly, extra relevant information (satellites). An indirect answer can be modeled as the result of R's providing one or more satellites without expressing the nucleus explicitly. Due to coherence constraints holding between nucleus and satellite, Q may infer R's discourse plan by inferring which coherence relation between an indirect answer and a candidate direct answer is plausible. The coherence relations are similar to the subject-matter relations of Rhetorical Structure Theory (RST) [MT87]. (The terms nucleus and satellite have been borrowed from RST. Note that according to RST, a property of the nucleus is that its removal results in incoherence. However, in our model, a direct answer may be removed without causing incoherence, provided that it is inferrable from the rest of the response.) The discourse plan operators used to generate and interpret R's response in (1) are given in Figures 1 and 2. Figure 1 depicts the top-level operator for constructing a No answer. To explain our notation, s and h are constants denoting speaker (R) and hearer (Q), respectively. Symbols prefixed with \"?\" denote propositional variables. The variable in the header of Answer-no will be instantiated with the questioned proposition, e.g. in (1) that R is going shopping. Applicability conditions are necessary conditions for appropriate use of the operator. For example, for a No answer to be appropriate, R must believe that the questioned proposition p is false. Also, an answer is not appropriate unless s and h share the expectation, which is denoted as (discourse-expectation (informif s h p)). Primary goals describe the intended effects of the operator. We use (BMB h s p) to denote that h believes it to be mutually believed with s that p [CM81].",
"cite_spans": [
{
"start": 112,
"end": 124,
"text": "(satellites)",
"ref_id": null
},
{
"start": 571,
"end": 577,
"text": "[MT87]",
"ref_id": "BIBREF20"
},
{
"start": 1830,
"end": 1837,
"text": "[CM81 ]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 692,
"end": 700,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pragmatic Knowledge",
"sec_num": "2."
},
{
"text": "In general, the nucleus and satellites of a discourse plan operator describe primitive or non-primitive communicative acts. Our plan formalism allows zero, one, or more occurrences of a satellite in a full answer. The expected (but not required) order of nucleus and satellites is the order they are listed in an operator. (inform s h p) denotes the primitive act of s informing h that p. The satellites in Figure 1 refer to non-primitive acts, described by discourse plan operators which we have defined (one for each coherence relation used in a full answer). For example, Use-obstacle, a satellite of Answer-no, is defined in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 407,
"end": 415,
"text": "Figure 1",
"ref_id": null
},
{
"start": 629,
"end": 637,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Pragmatic Knowledge",
"sec_num": "2."
},
{
"text": "To explain the additional notation in Figure 2, (cr-obstacle q p) denotes that the coherence relation cr-obstacle holds between q and p. In summary, we provide our system with a set of discourse plan operators representing shared domain-independent knowledge of the informational content of a full answer. (Operators whose names begin with Answer are top-level operators.) Table 1 shows the satellites of each top-level discourse plan operator. In addition, we provide a set of coherence rules defining the plausibility of the coherence relations generated by the satellite discourse plan operators.",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 46,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 338,
"end": 345,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Pragmatic Knowledge",
"sec_num": "2."
},
{
"text": "Applicability conditions prevent inappropriate use of a discourse plan operator. However, they do not model a speaker's motivation for choosing to provide extra information. A full answer might provide too much information if every satellite whose applicability conditions held were included in the full answer. On the other hand, at the time when he is asked a question, R may not have the primary goals of a potential satellite operator. Thus, a goal-driven approach to selecting satellites would provide insufficient information. To overcome these limitations, we have incorporated stimulus conditions into the discourse plan operators in our model. (Although cr-obstacle is not one of the original relations of RST, it is similar to other causal relations defined in RST.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stimulus Conditions",
"sec_num": "3."
},
{
"text": "(In addition to RST subject-matter relations, we have identified the following coherence relations in our corpus: cr-obstacle, cr-possible-cause, cr-possible-obstacle, and cr-usually.) Playing a key role in content determination, stimulus conditions describe conditions motivating a speaker to include a satellite during plan construction. They can be thought of as triggers which give rise to new speaker goals. In order for a satellite to be included, all of its applicability conditions and at least one of its stimulus conditions must be true. While stimulus conditions may be derivative of deeper principles of cooperativity [Gri75a] or politeness [BL78], they provide a level of precompiled knowledge which reduces the amount of reasoning required for content determination.",
"cite_spans": [
{
"start": 635,
"end": 643,
"text": "[Gri75a]",
"ref_id": "BIBREF4"
},
{
"start": 658,
"end": 664,
"text": "[BL78]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stimulus Conditions",
"sec_num": "3."
},
{
"text": "Our methodology for identifying stimulus conditions was to survey linguistic studies (described below), as well as to analyze the possible motivation of the speaker in the examples in our corpus. According to Stenström [Ste84], the typical motivation for providing extra information is to answer an implicit wh-question. Other reasons cited are to provide an explanation justifying a negative answer, to qualify the answer, or \"social reasons\". Levinson [Lev83] claims that the presence of an explanation is a structural characteristic of dispreferred responses, such as refusals and unexpected answers. Brown and Levinson [BL78] show how various uses of language are motivated by politeness. For example, use of conversational implicature is one strategy for a speaker to avoid doing a face-threatening act, and as we discuss in section 6, an indirect answer conversationally implicates the direct answer. Hirschberg [Hir85] claims that an indirect scalar response, rather than a Yes or No answer, is necessary in certain cases to avoid misleading the hearer.",
"cite_spans": [
{
"start": 219,
"end": 226,
"text": "[Ste84]",
"ref_id": "BIBREF22"
},
{
"start": 455,
"end": 462,
"text": "[Lev83]",
"ref_id": "BIBREF13"
},
{
"start": 624,
"end": 630,
"text": "[BL78]",
"ref_id": "BIBREF2"
},
{
"start": 921,
"end": 928,
"text": "[Hir85]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stimulus Conditions",
"sec_num": "3."
},
{
"text": "In the rest of this section, we describe each of the stimulus conditions used in our system. For each stimulus condition, we give one or more axioms defining sufficient conditions for it. Axioms are encoded as Horn clauses, where symbols prefixed with \"?\" are variables, all variables are implicitly universally quantified, and the antecedent conditions are implicitly conjoined. The head, i.e. the consequent, of an axiom is a stimulus condition appearing in one or more of our discourse plan operators. Table 2 summarizes which stimulus conditions appear in which operators. For example, the stimulus condition (explanation-indicated s h ?p ?q) occurs in the discourse plan operator shown in Figure 2. Keep in mind that additional constraints on ?p and ?q arise from the applicability conditions of the operator containing the stimulus condition.",
"cite_spans": [],
"ref_spans": [
{
"start": 505,
"end": 512,
"text": "Table 2",
"ref_id": null
},
{
"start": 694,
"end": 702,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Stimulus Conditions",
"sec_num": "3."
},
{
"text": "This stimulus condition appears in all of the operators. (Brown and Levinson claim that another such strategy is to give the direct answer last, i.e. in our terminology, to give the satellite(s) before the nucleus.) Note that this stimulus condition may contribute to greater dialogue efficiency by anticipating a follow-up request for an explanation. Formally, the condition is defined by the following axiom.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions Explanation-indicated",
"sec_num": "3.1"
},
{
"text": "This may be glossed as: s is motivated to give h an explanation q for p, if s suspects that h suspects that p is not true, unless it is the case that s has reason to believe that h will accept p on s's authority. (wbel agent p), which we usually gloss as agent suspects that p, denotes that agent has some confidence in the belief that p. Note that R may acquire the suspicion that Q doubts that p is true by means of syntactic clues from the form of the Yes-No question. The next stimulus condition represents a different kind of motivation. A Yes-No question may be interpreted as a prerequest [Lev83] for a request, i.e. as an utterance used as a preface to another request. Prerequests are often used to check whether a related request is likely to succeed, or to avoid having to make the other request directly. Thus, a negative answer to a Yes-No question used as a prerequest may be interpreted as a refusal. To soften the refusal, the speaker may give an explanation of the negative answer, as illustrated in (1). Formally, the condition is defined by the following axiom. In (3), R has interpreted the question in (a) as a prerequest for the wh-question shown in (b). Thus, (d) not only answers the question in (a) but also the anticipated wh-question in (b). Similarly in (4), R may interpret the question in (a) as a prerequest for the wh-question in (b), and so gives (d) to provide an answer to both (a) and (b). Formally, the condition is defined by the following axiom. This may be glossed as: s is motivated to provide h with q, if s suspects that it would be helpful for h to know the referent of a term t in q. The antecedent would hold whenever obstacle detection techniques [AP80] determine that h's not knowing the referent of t is an obstacle to an inferred plan of h's. However, note that not all helpful responses, in the sense described in [AP80], can be used as indirect answers. For example, even if the clerk (R) at the music store believes that Q's not knowing the closing time could be an obstacle to Q's buying a recording, the response (6), given instead of (5c), would not convey (5b) since it cannot be coherently related to (5b).",
"cite_spans": [
{
"start": 594,
"end": 601,
"text": "[Lev83]",
"ref_id": "BIBREF13"
},
{
"start": 1695,
"end": 1701,
"text": "[AP80]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "((explanation-indicated s h ?p ?q) <-(wbel s (wbel h (not ?p))) (unless (wbel s (accepts-authority h s))))",
"sec_num": null
},
{
"text": "We close at 5:30 tonight.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "R:",
"sec_num": "6."
},
{
"text": "This stimulus condition appears in Use-elaboration, as illustrated by (7). On the strict interpretation of (9a), Q is asking whether R has gotten all of the letters, but on a looser interpretation, Q is asking if R has gotten any of the letters. Then, if R has gotten some but not all of the letters, a Yes would be untruthful. However, if Q is speaking loosely, then a No might lead Q to erroneously conclude that R has not gotten any of the letters. R's answer circumvents this problem by conveying the extent to which the questioned proposition (on the strict interpretation) is true. Formally, the condition is defined by the following axioms. Two expressions are expression alternatives if they are the same except for having comparable components ei and ej, respectively. Further, Hirschberg claims that in a particular discourse context, there may be a partial ordering of values which the discourse participants mutually believe to be salient, and that the ranking of ei and ej in this ordering can be used to describe the ranking of pi and pj. In the above example, (9b) is a realization of the highest true expression alternative to the questioned proposition p, i.e. the proposition that R has gotten all the letters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clarify-concept-indicated",
"sec_num": null
},
{
"text": "This stimulus condition appears in Use-contrast, as illustrated by (10). 10.a. Q: Did you manage to read that section I gave you? b. R: I've read the first couple of pages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appeasement-indicated",
"sec_num": null
},
{
"text": "In (10), R conveys that there is some (though not much) truth to the questioned proposition in an effort to soften his answer. Note that more than one stimulus condition may motivate R to include the same satellite. E.g., in (10), R may have been motivated also by clarify-extent-indicated, which was described above. However, it is possible to provide a context for (9) where appeasement-indicated holds but not clarify-extent-indicated, or a context where the converse is true. Formally, the condition is defined by the following axiom. Among the heuristics of rational agency that might lead to beliefs about h's desires are: 1) if an agent wants you to perform an action A, then your failure to perform A may be undesirable to the agent, and 2) if an agent wants you to do A, then it is desirable to the agent that you perform a part of A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appeasement-indicated",
"sec_num": null
},
{
"text": "The inputs to generation in our model consist of 1) a set of discourse plan operators augmented with stimulus conditions, 2) a set of coherence rules, 3) R's beliefs, and 4) the discourse expectation that R will provide R's evaluation of the truth of the questioned proposition p. The output of the generation algorithm is a discourse plan which can be realized by a tactical generation component [McK85]. We assume that when answer generation begins, the speaker's only goal is to satisfy the above discourse expectation. Our answer generation algorithm has two phases. In the first phase, content planning, the generator creates a discourse plan for a full answer, i.e., a direct answer and extra appropriate information, e.g. (1c) given explicitly, followed by (1d) and (1e). In the second phase, plan pruning, the generator determines which propositions of the planned full answer do not need to be explicitly stated. For example, given an appropriate model of R's beliefs, our system generates a plan for asserting only the proposition conveyed in (1e) as an answer to (1a) - (1b). Note that an advantage of our approach is that, even when it is not possible to omit the direct answer, a full answer is generated.",
"cite_spans": [
{
"start": 397,
"end": 404,
"text": "[McK85]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generation Algorithm",
"sec_num": "4."
},
{
"text": "Content planning is performed by top-down expansion of an answer discourse plan operator. The process begins by instantiating each of the operators with the questioned proposition until one is found such that its applicability conditions hold. Next, this (top-level) answer operator is expanded. A discourse plan operator is expanded by deciding which of its satellites to include in the full answer and expanding each of them (recursively). A satellite (e.g. Use-obstacle in Figure 2 ) is selected if, for some instantiation of its existential variable, all of the applicability conditions and at least one of the stimulus conditions of the instantiated operator hold.",
"cite_spans": [],
"ref_spans": [
{
"start": 476,
"end": 484,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Content planning",
"sec_num": "4.1"
},
{
"text": "Note that testing of applicability conditions and stimulus conditions (including searching for an appropriate instantiation of the existential variable) is handled by a theorem prover. (The plan that is output by our algorithm specifies an ordering of discourse acts based upon the ordering of coherence relations specified in the discourse plan operators. However, to model a speaker who has more than the above initial goal, reordering may be required.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content planning",
"sec_num": "4.1"
},
{
"text": "Our model of R's beliefs, represented as a set of Horn clauses, includes 1) general world knowledge presumably shared with Q, 2) knowledge about the preceding discourse, and 3) R's beliefs (including \"weaker beliefs\" encoded using the wbel operator) about Q's beliefs. Much of the shared world knowledge needed to evaluate our coherence rules consists of knowledge from domain plan operators. (The tactical component must choose an appropriate referring expression to refer to R's car's timing belt, depending on whether (1d) is omitted.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content planning",
"sec_num": "4.1"
},
{
"text": "The output of the content planning phase, an expanded discourse plan representing a full answer, is the input to the plan pruning phase of generation. The expanded plan is represented as a tree of discourse acts. The goal of this phase is to make the response more concise, i.e., to determine which of the planned acts can be omitted while still allowing Q to infer the full discourse plan. To do this, the generator considers each of the acts in the frontier of the tree from right to left. (This ensures that a satellite is considered before its related nucleus.) The generator creates a trial plan consisting of the original plan minus the nodes pruned so far and minus the current node. Then, using the interpretation module, the generator simulates Q's interpretation of a response containing the information that would be given explicitly according to the trial plan. If Q could infer the full plan (as the most preferred interpretation), then the current node can be pruned. Otherwise, it is left in the plan and the next node is considered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Plan pruning",
"sec_num": "4.2"
},
{
"text": "For example, consider Figure 3 as we illustrate the possible effect of pruning on a full discourse plan. The leaf nodes, representing discourse acts, are numbered 1 - 8. Arcs labelled N and S lead to a nucleus or satellite, respectively. Node 8 corresponds to the direct answer. Plan pruning would process the nodes in order from 1 to 8. The maximal set of nodes which could be pruned in Figure 3 is the set containing 2, 3, 4, 7, and 8. That is, nodes 2 - 4 might be inferrable from 1, node 7 from 5 or 6, and node 8 from 4 or 7. In the event that it is determined that no node can be pruned, the full plan would be output. Note that our interpretation algorithm (described in [GC94]) performs a subprocess, hypothesis generation, to recognize missing propositions other than the direct answer, i.e. the propositions at nodes 2, 3, 4, and 7.",
"cite_spans": [],
"ref_spans": [
{
"start": 22,
"end": 30,
"text": "Figure 3",
"ref_id": "FIGREF7"
},
{
"start": 389,
"end": 397,
"text": "Figure 3",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Plan pruning",
"sec_num": "4.2"
},
{
"text": "This section gives an example of how the discourse plan shown in Figure 4 would be generated, corresponding to the answer shown in (11), where only (11d) is explicitly stated. (Conciseness is not the only possible motive for omitting the direct answer. As mentioned in section 3, an indirect answer may be used to avoid performing a face-threatening act.) In phase one, the generator creates a discourse plan for a full answer. First, it must decide which top-level discourse plan operator to expand. Each of the top-level operators would be instantiated with the questioned proposition, (occur (go-party s) past). The top-level operators would be instantiated and tested until one was found whose applicability conditions held, namely,",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 74,
"text": "Figure 4",
"ref_id": "FIGREF9"
}
],
"eq_spans": [],
"section": "Example",
"sec_num": "4.3"
},
{
"text": "Answer-no. Next, the generator must decide which of the satellites of Answer-no to include. So, it would search for all propositions q such that, when the existential variable ?q of a satellite is instantiated with q, then all of the satellite's applicability conditions and at least one of its stimulus conditions hold. In this example, the proposition (not (in-state (can-sit sitter) past)) would be found to satisfy the applicability conditions and the explanation-indicated stimulus condition of Use-obstacle. (Note that this proposition might be chosen because the model of R's beliefs includes a domain plan operator for going to a party, with a precondition that the agent's sitter can baby sit.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "4.3"
},
{
"text": "Next, this instantiation of Use-obstacle must be expanded, i.e., the generator must decide which of the satellites of Use-obstacle to include. By a procedure similar to that described above, it would decide to include (another) Use-obstacle, where the existential variable is instantiated with the proposition (in-state (sick sitter) past). (Note that this proposition might be chosen because the model of R's beliefs includes the shared belief that being sick typically renders a sitter unfit to perform his or her duties.) The stimulus condition explanation-indicated might be the motivation for including this satellite. (For example, suppose it is shared knowledge that R's baby sitter is rarely unavailable.) If further attempts to expand the plan were unsuccessful, then the full plan would contain the discourse acts shown in Figure 4, e.g. (inform s h (in-state (sick sitter) past)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "4.3"
},
{
"text": "Without undergoing plan-pruning, the full plan would be realized as a direct answer (11b) and extra information (11c) - (11d).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "4.3"
},
{
"text": "In phase two, plan-pruning, the generator's overall goal is to make the answer as concise as possible. Therefore, it would consider the primitive acts specified in the full plan in the order (d), (c), (b). Since (d) is not inferrable from any other act, it would be automatically retained in the plan. Next, a trial plan with (c) pruned from it would be considered. Interpretation of the response consisting of (b) and (d) is simulated. Since the full plan could be inferred, (c) would be pruned. Next, another trial plan would be considered, where, in addition to (c), (b) has been pruned. Since the full plan could be inferred from the resulting trial response consisting of just (d), (b) would be pruned too. Thus, the output of phase two would be the plan shown in Figure 4 with (b) and (c) marked as pruned.",
"cite_spans": [],
"ref_spans": [
{
"start": 769,
"end": 777,
"text": "Figure 4",
"ref_id": "FIGREF9"
}
],
"eq_spans": [],
"section": "Example",
"sec_num": "4.3"
},
{
"text": "The primary focus of our research has been on 1) use of pragmatic knowledge which is common to both the interpretation and generation of indirect answers, i.e. the so-called reversible knowledge, and 2) identifying stimulus conditions. In this section, we give a brief description of some other factors in generating indirect answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other factors in generation",
"sec_num": "5."
},
{
"text": "First, note that we made the simplifying assumption that R's only initial goal is to answer the question (accurately, efficiently, and politely). However, it is possible for R to construct an indirect answer which satisfies multiple initial goals of R. For example, R might have decided to give the answer in (12c) not only as an explanation, but also as background for R's request in (12d). Second, stylistic goals may affect the generation of indirect answers. In addition to affecting syntactic and lexical aspects of discourse [Hovg0, MG91, DH93], stylistic goals may affect which and how much information is given. For example, elaboration can be used to make an answer more lively, as shown in (13). stairs is true, in order to avoid the same type of confusion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other factors in generation",
"sec_num": "5."
},
{
"text": "16.a. Q: I don't think you've been upstairs yet. b. R: Um only just to the ioo.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other factors in generation",
"sec_num": "5."
},
{
"text": "Our work differs from most previous work in cooperative response generation in that, in our model, the information given in an indirect answer conversationally implicates [Gri75a] the direct answer. I-Iirschberg [Hir85] , who provided general rules for identifying when a scalar conversational implicature is licensed, claimed that speakers may give indirect responses to Yes-No questions in order to block potential incorrect scalar implicatures of a simple Yes or No. She implemented a system which determines whether the Yes/No 2\u00b0 licenses any unwanted scalar implicatures, and if so, proposes alternative scalar responses which do not. This type of response is similar, in our model, to a speaker's use of Use-contrast motivated by clarifyextent-indicated, as illustrated in (9). (As noted earlier, our coherence rules for the relation cr-contrast as well as our axioms for clarify-extent-indicated make use of the notion of a salient partial ordering which was elucidated by Hirsehberg.) However, Hirschberg's model does not account for quite a variety of types of indirect answers which can be generated using the other operators in our model, nor for other motives for using Use-contrast.",
"cite_spans": [
{
"start": 171,
"end": 179,
"text": "[Gri75a]",
"ref_id": "BIBREF4"
},
{
"start": 212,
"end": 219,
"text": "[Hir85]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related research",
"sec_num": "6."
},
{
"text": "Rhetorical or coherence relations [Gri75b, Hal76, MT87] have been used in several text-generation systems to aid in ordering parts of a text (e.g. [Hov88] ) as well as in content-planning (e.g. [McK85, MP93] ). The discourse plan operators based on coherence relations in our model (i.e. the operators used as satellites of top-level operators)play a similar role in contentplanning of an indirect answer. When coherence relations are used for content-planning, it is necessary to constrain the information which thereby may be selected. McKeown [McK85] uses discourse focus constraints. In [Moo89] , plan selection heuristics are used that maximize the heater's presumed familiarity with the concepts in the text, prefer general-purpose to less general operators, and minimize verbosity. Maybury's [May92] system uses \"desirable\" preconditions, preconditions that are not inecessary preconditions, to prioritize alternative operators. In contrast to the above, by the use of stimulus conditions, our model is able to trigger new, opportunistic speaker goals.",
"cite_spans": [
{
"start": 34,
"end": 42,
"text": "[Gri75b,",
"ref_id": "BIBREF5"
},
{
"start": 43,
"end": 49,
"text": "Hal76,",
"ref_id": null
},
{
"start": 50,
"end": 55,
"text": "MT87]",
"ref_id": "BIBREF20"
},
{
"start": 147,
"end": 154,
"text": "[Hov88]",
"ref_id": "BIBREF10"
},
{
"start": 194,
"end": 201,
"text": "[McK85,",
"ref_id": null
},
{
"start": 202,
"end": 207,
"text": "MP93]",
"ref_id": "BIBREF19"
},
{
"start": 538,
"end": 553,
"text": "McKeown [McK85]",
"ref_id": null
},
{
"start": 591,
"end": 598,
"text": "[Moo89]",
"ref_id": "BIBREF17"
},
{
"start": 799,
"end": 806,
"text": "[May92]",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related research",
"sec_num": "6."
},
{
"text": "Moore and Pollack [MP92] show the need to distinguish the intentional and informational structure of discourse, where the latter is characterized by the 2\u00b0Hirschberg did not addresss other possible types of direct answers, represented by the Answer-Hedged, Answer-Maybe, and Answer-Maybe-Not operators in our model. sort of relations classified as subject-matter relations in RST. Note that in our interpretation component [GC94], informational relations (i.e. plausible coherence relations holding between indirect answers and candidate direct answers) are used to infer the speaker's goal to convey a particular answer. Moore and Paris [MP93] argue that it is necessary for generation systems to represent not only the speaker's top-level intentional goal, but also the intentional subgoals that a speaker hoped to achieve by use of a rhetorical relation so that, if a subgoal is not achieved, then an alternative rhetorical means can be tried. In our model, the operators used as satellites of top-level answer discourse plan operators are based on RST's subject-matter relations. The primary goals of these operators are similar to the effect fields of the corresponding RST relation definitions. As Moore and Paris argue, goals based on these relation definitions are rhetorical goals, not intentional goals. However, in our model, it is possible to determine the intentional subgoals by checking which stimulus conditions motivated the speaker to include a particular satellite operator. For example, if explanation-indicated motivates R to include a Use-cause satellite, then one can view R as having the implicit intentional goal of giving an explanation. Thus, our model provides the information necessary to recover from the possible failure of (all or part of) an indirect answer.",
"cite_spans": [
{
"start": 18,
"end": 24,
"text": "[MP92]",
"ref_id": "BIBREF18"
},
{
"start": 638,
"end": 644,
"text": "[MP93]",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related research",
"sec_num": "6."
},
{
"text": "Finally, our use of simulated interpretation during plan pruning has some similarity to previous work. In Horacek's approach to generating concise explanations [I-Ior91], a set of propositions representing the full explanation i s pruned by eliminating propositions which can be derived from the remaining ones by contextual rules. Jameson and Wahlster [JW82] use an anticipation feedback loop algorithm to generate elliptical utterances.",
"cite_spans": [
{
"start": 353,
"end": 359,
"text": "[JW82]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related research",
"sec_num": "6."
},
{
"text": "An indirect answer to a Yes-No question conversationally implicates the speaker's evaluation of the truth of a questioned proposition. The generation of indirect answers is important if a dialogue system is to respond efficiently, accurately, and politely. This paper has presented the approach to generation used in our implemented system for generating and interpreting indirect answers to Yes-No questions in English. Ours is the first system to generate a wide range of types of indirect answers (as well as full answers). The system is reversible in that the same pragmatic knowledge is used in generation and interpretation. Generation is performed in two phases: content planning and plan pruning. During content planning, stimulus conditions are used to trigger speaker goals to include appropriate extra information with the direct answer. Plan pruning determines what parts of this full response do not need to be stated explicitly -resulting in, in appropriate discourse contexts, the generation of an indirect 7th International Generation Workshop \u2022 Kennebunkport, Maine \u2022 June [21] [22] [23] [24] 1994 answer.",
"cite_spans": [
{
"start": 1090,
"end": 1094,
"text": "[21]",
"ref_id": null
},
{
"start": 1095,
"end": 1099,
"text": "[22]",
"ref_id": null
},
{
"start": 1100,
"end": 1104,
"text": "[23]",
"ref_id": null
},
{
"start": 1105,
"end": 1109,
"text": "[24]",
"ref_id": null
},
{
"start": 1110,
"end": 1114,
"text": "1994",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Analyzing intention in utterances",
"authors": [
{
"first": "James",
"middle": [
"F"
],
"last": "Allen",
"suffix": ""
},
{
"first": "C",
"middle": [
"Raymond"
],
"last": "Perrault",
"suffix": ""
}
],
"year": 1980,
"venue": "Artificial Intelligence",
"volume": "15",
"issue": "",
"pages": "143--178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James F. Allen and C. Raymond Perrault. An- alyzing intention in utterances. Artificial In- telligence, 15:143-178, 1980.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "American Express tapes. Transcripts of audiotape conversations made at SRI International",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "American Express tapes. Transcripts of au- diotape conversations made at SRI Interna- tional, Menlo Park, California. Prepared by Jacqueline Kowto under the direction of Patti Price.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Universals in language usage: Politeness phenomena",
"authors": [
{
"first": "Penelope",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Levinson",
"suffix": ""
}
],
"year": 1978,
"venue": "Questions and politeness: Strategies in social interaction",
"volume": "",
"issue": "",
"pages": "56--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Penelope Brown and Stephen Levinson. Uni- versals in language usage: Politeness phe- nomena. In Esther N. Goody, editor, Ques- tions and politeness: Strategies in social in- teraction, pages 56-289. Cambridge University Press, Cambridge, 1978.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Chrysanne DiMarco and Graeme Hirst. A computational theory of goal-directed style in syntax",
"authors": [
{
"first": "H",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Marshall",
"suffix": ""
}
],
"year": 1981,
"venue": "Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics",
"volume": "19",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Clark and C. Marshall. Definite refer- ence and mutual knowledge. In A. K. Joshi, B. Webber, and I. Sag, editors, Elements of discourse understanding. Cambridge Univer- sity Press, Cambridge, 1981. Chrysanne DiMarco and Graeme Hirst. A computational theory of goal-directed style in syntax. Computational Linguistics, 19(3), 1993. Nancy Green and Sandra Carberry. A hybrid reasoning model for indirect answers. To ap- pear in Proceedings of the 32nd Annual Meet- ing of the Association for Computational Lin- guistics, 1994.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Logic and conversation",
"authors": [
{
"first": "H",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Grice",
"suffix": ""
}
],
"year": 1975,
"venue": "Syntax and Semantics lII: Speech Acts",
"volume": "",
"issue": "",
"pages": "41--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Paul Grice. Logic and conversation. In P. Cole and J. L. Morgan, editors, Syntax and Semantics lII: Speech Acts, pages 41-58, New York, 1975. Academic Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The Thread of Discourse. Mouton",
"authors": [
{
"first": "J",
"middle": [
"E"
],
"last": "Grimes",
"suffix": ""
}
],
"year": 1975,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. E. Grimes. The Thread of Discourse. Mou- ton, The Hague, 1975.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Linguistic and Pragmatic Constraints on Utterance Interpretation",
"authors": [
{
"first": "",
"middle": [],
"last": "Elizabeth Ann Hinkelman",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elizabeth Ann Hinkelman. Linguistic and Pragmatic Constraints on Utterance Interpre- tation. PhD thesis, University of Rochester, 1989.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Theory of Scalar Implicature",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Bell Hirschberg",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Bell Hirschberg. A Theory of Scalar Im- plicature. PhD thesis, University of Pennsyl- vania, 1985.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Exploiting conversational implicature for generating concise explanations",
"authors": [
{
"first": "Helmut",
"middle": [],
"last": "Horacek",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings. European Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helmut Horacek. Exploiting conversational implicature for generating concise explana- tions. In Proceedings. European Association for Computational Linguistics, 1991.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Planning coherent multisentential text",
"authors": [
{
"first": "Eduard",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "163--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard H. Hovy. Planning coherent multisen- tential text. In Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics, pages 163-169, 1988.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Pragmatics and natural language generation",
"authors": [
{
"first": "Eduard",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
}
],
"year": 1990,
"venue": "Artificial Intelligence",
"volume": "43",
"issue": "",
"pages": "153--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard H. Hovy. Pragmatics and natural language generation. Artificial Intelligence, 43:153-197, 1990.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "User modelling in anaphora generation: ellipsis and definite description",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Jameson",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Wahlster",
"suffix": ""
}
],
"year": 1982,
"venue": "Proceedings of the European Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony Jameson and Wolfgang Wahlster. User modelling in anaphora generation: ellip- sis and definite description. In Proceedings of the European Conference on Artificial Intelli- gence, 1982.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Pragmatics. Cambridge University Press",
"authors": [
{
"first": "S",
"middle": [],
"last": "Levinson",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Levinson. Pragmatics. Cambridge Univer- sity Press, Cambridge, 1983.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Communicative acts for explanation generation",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Maybury",
"suffix": ""
}
],
"year": 1992,
"venue": "International Journal of Man-Machine Studies",
"volume": "37",
"issue": "",
"pages": "135--172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark T Maybury. Communicative acts for explanation generation. International Journal of Man-Machine Studies, 37:135-172, 1992.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Text Generation",
"authors": [
{
"first": "Kathleen",
"middle": [
"R"
],
"last": "Mckeown",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathleen R. McKeown. Text Generation. Cambridge University Press, 1985.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A computational rhetoric for syntactic aspects of text",
"authors": [
{
"first": "Marzena",
"middle": [],
"last": "Makuta-Giluk",
"suffix": ""
}
],
"year": 1991,
"venue": "Master's thesis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marzena Makuta-Giluk. A computational rhetoric for syntactic aspects of text. Technical Report CS-91-56, Master's thesis, Department of Computer Science, University of Waterloo, Waterloo, Ontario, Canada, December 1991.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A Reactive Approach to Explanation in Expert and Advice-Giving Systems",
"authors": [
{
"first": "Johanna",
"middle": [
"D"
],
"last": "Moore",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johanna D. Moore. A Reactive Approach to Explanation in Expert and Advice-Giving Sys- tems. PhD thesis, University of California at Los Angeles, 1989.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A problem for rst: The need for multi-level discourse analysis",
"authors": [
{
"first": "Johanna",
"middle": [
"D"
],
"last": "Moore",
"suffix": ""
},
{
"first": "Martha",
"middle": [
"E"
],
"last": "Pollack",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational Linguistics",
"volume": "18",
"issue": "4",
"pages": "537--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johanna D. Moore and Martha E. Pollack. A problem for rst: The need for multi-level discourse analysis. Computational Linguistics, 18(4):537-544, December 1992.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Planning text for advisory dialogues: Capturing intentional and rhetorical information",
"authors": [
{
"first": "Johanna",
"middle": [
"D"
],
"last": "Moore",
"suffix": ""
},
{
"first": "Cecile",
"middle": [],
"last": "Paris",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "4",
"pages": "651--694",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johanna D. Moore and Cecile Paris. Plan- ning text for advisory dialogues: Capturing in- tentional and rhetorical information. Compu- tational Linguistics, 19(4):651-694, December 1993.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Rhetorical structure theory: Toward a functional theory of text organization",
"authors": [
{
"first": "W",
"middle": [
"C"
],
"last": "Mann",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 1987,
"venue": "Text",
"volume": "8",
"issue": "3",
"pages": "167--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. C. Mann and S. A. Thompson. Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3):167-182, 1987.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A plan-based analysis of indirect speech acts",
"authors": [
{
"first": "R",
"middle": [],
"last": "Perrault",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Allen",
"suffix": ""
}
],
"year": 1980,
"venue": "American Journal of Computational Linguistics",
"volume": "6",
"issue": "3-4",
"pages": "167--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Perrault and J. Allen. A plan-based analy- sis of indirect speech acts. American Journal of Computational Linguistics, 6(3-4):167-182, 1980.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Questions and responses in english conversation",
"authors": [
{
"first": "Anna-Brita",
"middle": [],
"last": "Stenstrsm",
"suffix": ""
}
],
"year": 1984,
"venue": "Lund Studies in English 68. CWK Gleerup, MaimS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna-Brita StenstrSm. Questions and re- sponses in english conversation. In Claes Schaar and Jan Svartvik, editors, Lund Studies in English 68. CWK Gleerup, MaimS, Sweden, 1984.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Imagine a discourse context for (1) in which R's use i of just (ld) is intended to convey a No, i.e., that R is not going shopping tonight. (By convention, square brackets indicate that the enclosed text was not explicitly stated.) The part of R's response consisting of (ld) -(le) is what we 'call an indirect answer to a Yes-No question, and if (lc) had been uttered, (lc) would have been called a direct answer. l.a. O: I need a ride to the mall. b. Are you going shopping tonight? c. R: [no] d. My car's not running. e.",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "Discourse plan operator for Obstacle that s will provide s's evaluation of the truth of p,",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "International Generation Workshop \u2022 Kennebunkport, Maine \u2022 June 21-24Stimulus conditions of discourse plan operators ators for providing causal explanations. For example in (2), 6 which indirectly conveys a No, R gives an explanation of why R won't get a car. 2.a. q: actually you'll probably get a car won't you as soon as you get there b. R: can't drive",
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"uris": null,
"text": "Although this stimulus condition appears in some of the same causal operators as Explanation-indicated, ~Stenstr6m's (110)",
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"uris": null,
"text": "in 7. Q: Do you have a pet? R: We have a turtle. In (7), R was motivated to elaborate on the type of pet they have, since turtles are not prototypical pets. Formally, the condition is defined by the following axiom. ((Clarify-concept-indicated s h ?p ?q) <-(concept ?p ?c) (has-atypical-insgance ?q ?c))This may be glossed as, s is motivated to clarify p to h with q, if p has a concept c, and q provides an atypical instance of c. Stereotypical knowledge is used to evaluate the second antecedent.",
"num": null,
"type_str": "figure"
},
"FIGREF6": {
"uris": null,
"text": "1\u00b0 Hirschberg's (177).Clarify-condition-indicatedThis stimulus condition appears in the operator Use-condition, as illustrated by (8). 11 8.a. Q: Um let me can I make the reservation and change it by tomorrow b. R:[yes] c.if it's still available.In(8), a truthful Yes answer depends on the truth of (c). Formally, the stimulus condition is defined by the following axioms.((Clarify-condition-indicated s h ?p ?q) <-(ignorant s ?q)) ((Clarify-condition-indicated s h ?p ?q) <-(wbel s (not ?q)))These may be glossed as, s is motivated to clarify a condition q for p to h 1) if s doesn't know if q holds, or 2) if s suspects that q does not hold.Clarify-extent-indicatedThis stimulus condition appears in Use-contrast, as illustrated by (9). 12 9.a. Q: Have you gotten the letters yet? b. R: I've gotten the letter from X.",
"num": null,
"type_str": "figure"
},
"FIGREF7": {
"uris": null,
"text": "Example of full discourse plan before pruning 11.a. Q: You went to the party, didn't you7 b. R: [no] c.[The baby sitter could not sit.]d.The baby sitter was sick.",
"num": null,
"type_str": "figure"
},
"FIGREF8": {
"uris": null,
"text": "Figure 4. (The acts in the plan are labelled (b) -(d) corresponding to the part of (11) they represent.) Note that if this plan were output 7th International Generation Workshop \u2022 Kennebunkport, Maine \u2022 June 21-24, 1994 (Answer-no s h (occur (go-party s) past)): Nucleus: b. (inform s h (not (occur (go-party s) past))) Satellites: (Use-obstacle s h (not (occur (go-party s) past)))",
"num": null,
"type_str": "figure"
},
"FIGREF9": {
"uris": null,
"text": "Discourse plan for (11)",
"num": null,
"type_str": "figure"
},
"TABREF1": {
"content": "<table><tr><td>: Satellites of top-level operators</td></tr><tr><td>cr-obstacle holds between q and p.S Thus, the first ap-</td></tr><tr><td>plicability condition can be glossed as requiring that s</td></tr><tr><td>believe that the coherence relation holds. In the sec-</td></tr><tr><td>ond applicability condition, (Plausible (cr-obstacle q</td></tr><tr><td>p)) denotes that_, given what s believes to be mutually</td></tr><tr><td>believed with h, the coherence relation (cr-obstacle q</td></tr><tr><td>p) is plausible. In our model, this sort of condition is</td></tr><tr><td>evaluated using a set of coherence rules based on the</td></tr><tr><td>relation definitions of RST.</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": ""
},
"TABREF3": {
"content": "<table><tr><td colspan=\"2\">Substitute-indicated</td></tr><tr><td colspan=\"2\">This condition appears in Use-contrast, illustrated</td></tr><tr><td>by (5).</td><td/></tr><tr><td colspan=\"2\">S.a.Q: Do you have Verdi's Otello or Aida?</td></tr><tr><td colspan=\"2\">b.R: [no]</td></tr><tr><td>c.</td><td>We have Rigoletto.</td></tr><tr><td colspan=\"2\">Although Q may not have intended to use (5a) as a</td></tr><tr><td colspan=\"2\">prerequest for the qu#stion What Verdi operas do you</td></tr><tr><td colspan=\"2\">have.C, R suspects that the answer to this wh-question</td></tr><tr><td colspan=\"2\">might be helpful to Q. Formally, the condition is de-</td></tr><tr><td colspan=\"2\">fined by the following axiom.</td></tr><tr><td/><td>((Answer-re:f-indicated s h ?p ?q)</td></tr><tr><td/><td>&lt;-</td></tr><tr><td/><td>(wbel s (want h (knowref h ?t ?q)))</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": "This may be glossed as, s is motivated to provide h with q, if s suspects that h wants to know the referent Tin analyses of discourse based on speech act theory, e.g.[PA80], a question such as (lb) might be described as an indirect request. However, since it may be responded to appropriately by a Yes or No, it is included in the class of Yes-No questions addressed in our model. SAmerican Express tape 1 9StenstrSm's (102). Note that a No answer may be conversationally implicated by use of (4d) alone.7th International Generation Workshop \u2022 Kennebunkport, Maine \u2022 June 21-24, 1994 of a term t in q. As in Excuse-indicated, techniques for interpreting indirect speech acts can be used to determine if the antecedent holds."
},
"TABREF4": {
"content": "<table><tr><td>holds, and</td></tr><tr><td>((clarify-extent-indicated s h</td></tr><tr><td>(some-truth ?p) ?q)</td></tr><tr><td>&lt;-</td></tr><tr><td>(wbel s (ignorant h ?q))</td></tr><tr><td>(believe s (highest-true-exp ?q ?p)))</td></tr><tr><td>((clarify-extent-indicated s h (not ?p) ?q)</td></tr><tr><td>&lt;-</td></tr><tr><td>(wbel s (ignorant h ?q))</td></tr><tr><td>(believe s (highest-true-exp ?q ?p)))</td></tr><tr><td>These may be glossed as, s is motivated to clarify to h</td></tr><tr><td>the extent q to which p is true, or the alternative q to</td></tr><tr><td>p which is true, if s suspects that h does not know if q</td></tr><tr><td>n American Express tape 10ab</td></tr><tr><td>a2 Hirschberg's (59)</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": "s believes that q is the highest expression alternative to p that does hold. According to I-Iirschberg[Hir85] (following Gazdar), sentences Pi and pj (representing the propositional content of two utterances)"
},
"TABREF5": {
"content": "<table><tr><td>((appeasement-indicated s h (not ?p) ?q)</td></tr><tr><td>&lt;-</td></tr><tr><td>(wbel s (undesirable h (not ?p)))</td></tr><tr><td>(wbel s (desirable h ?q)))</td></tr><tr><td>((appeasement-indicated s h</td></tr><tr><td>(some-truth ?p) ?q)</td></tr><tr><td>&lt;-</td></tr><tr><td>(wbel s (undesirable h (not ?p)))</td></tr><tr><td>(wbel s (desirable h ?q)))</td></tr><tr><td>This may be glossed as, s is motivated to appease h</td></tr><tr><td>with q for p not holding or only being partly true, if s</td></tr><tr><td>suspects that (not p) is undesirable to h but that q is</td></tr><tr><td>desirable to h. The antecedents to this rule would be</td></tr><tr><td>evaluated using heuristic rules and stereotypical and</td></tr><tr><td>specific knowledge about h's desires. For example, two</td></tr><tr><td>13Recall that additional constraints on p and q arise from</td></tr><tr><td>the applicability conditions of operators containing this</td></tr><tr><td>stimulus condition, namely</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": "Use-contrastin this case. Thus, another constraint is that it is plausible that cr-contrast holds. Note that cr-contrast also is defined in terms of such an ordering."
},
"TABREF6": {
"content": "<table><tr><td colspan=\"2\">b. R: [no]</td></tr><tr><td>c.</td><td>I have to get my brakes fixed.</td></tr><tr><td>d.</td><td>Can you recommend a garage near</td></tr><tr><td/><td>c ampus ?</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": "12.a. q: Are you going to the lecture?"
}
}
}
}