{
"paper_id": "W15-0106",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:01:39.547712Z"
},
"title": "Clarifying Intentions in Dialogue: A Corpus Study *",
"authors": [
{
"first": "Julian",
"middle": [
"J"
],
"last": "Schl\u00f6der",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Amsterdam",
"location": {}
},
"email": "julian.schloeder@gmail.com"
},
{
"first": "Raquel",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Amsterdam",
"location": {}
},
"email": "raquel.fernandez@uva.nl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "As part of our ongoing work on grounding in dialogue, we present a corpus-based investigation of intention-level clarification requests. We propose to refine existing theories of grounding by considering two distinct types of intention-related conversational problems: intention recognition and intention adoption. This distinction is backed up by an annotation experiment conducted on a corpus assembled with a novel method for automatically retrieving potential requests for clarification.",
"pdf_parse": {
"paper_id": "W15-0106",
"_pdf_hash": "",
"abstract": [
{
"text": "As part of our ongoing work on grounding in dialogue, we present a corpus-based investigation of intention-level clarification requests. We propose to refine existing theories of grounding by considering two distinct types of intention-related conversational problems: intention recognition and intention adoption. This distinction is backed up by an annotation experiment conducted on a corpus assembled with a novel method for automatically retrieving potential requests for clarification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Dialogue is commonly modelled as a joint activity where the interlocutors are not merely making individual moves, but actively collaborate. A central coordination device is the common ground of the dialogue participants, the information they mutually take for granted (Stalnaker, 1978) . This common ground is changed and expanded over the course of a conversation in a process called grounding (Clark, 1996) . We are interested in the mechanisms used to establish agreement, i.e., in the conversational means to establish a belief as joint. To investigate this issue, in this paper we examine cases where grounding (partially) fails, as indicated by the presence of clarification requests (CRs). In contrast to previous work (i.a., Gabsdil, 2003; Purver, 2004; Rodr\u00edguez and Schlangen, 2004) , which has mostly focused on CRs triggered by acoustic and semantic understanding problems, we are particularly concerned with problems related to intention recognition (going beyond semantic interpretation) and intention adoption (i.e., mutual agreement). The following examples, from the AMI Meeting Corpus (Carletta, 2007) , are cases in point:",
"cite_spans": [
{
"start": 268,
"end": 285,
"text": "(Stalnaker, 1978)",
"ref_id": "BIBREF16"
},
{
"start": 395,
"end": 408,
"text": "(Clark, 1996)",
"ref_id": "BIBREF5"
},
{
"start": 733,
"end": 747,
"text": "Gabsdil, 2003;",
"ref_id": "BIBREF7"
},
{
"start": 748,
"end": 761,
"text": "Purver, 2004;",
"ref_id": "BIBREF10"
},
{
"start": 762,
"end": 792,
"text": "Rodr\u00edguez and Schlangen, 2004)",
"ref_id": "BIBREF13"
},
{
"start": 1103,
"end": 1119,
"text": "(Carletta, 2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) A: I think that's all. B: Meeting's over?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) A: Just uh do that quickly. B: How do you do it?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(3) A: I'd say two. B: Why?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In these examples, it cannot be said that B has fully grounded A's proposal, but also not that B rejects A's utterance. Rather, B asks a question that is conducive to the grounding process. In (1), B has apparently understood A's utterance, but is unsure as to whether A's intention was to conclude the session. We therefore consider CRs like B's question in (1) as related to intention recognition. In contrast, in (2) and (3), B displays unwillingness or inability (but no outright refusal) to ground A's proposal, and requests further information she needs to establish common ground, i.e., to adopt A's intention as joint. Requests for instructions have also been related to clarification in Benotti's (2009) work on multiagent planning. In this paper, we present a corpus-based investigation of intention-level clarification, part of an ongoing project that aims to analyse the grounding process beyond semantic interpretation. In the next section, we introduce some theoretical observations and refine existing theories of grounding (Clark, 1996; Allwood, 1995) by distinguishing between intention recognition and intention adoption. We then present a systematic heuristic to retrieve potential clarification requests from dialogue corpora and discuss the results of a small-scale annotation experiment. 1 We end with pointers for future work. ",
"cite_spans": [
{
"start": 696,
"end": 712,
"text": "Benotti's (2009)",
"ref_id": "BIBREF1"
},
{
"start": 1039,
"end": 1052,
"text": "(Clark, 1996;",
"ref_id": "BIBREF5"
},
{
"start": 1053,
"end": 1067,
"text": "Allwood, 1995)",
"ref_id": "BIBREF0"
},
{
"start": 1310,
"end": 1311,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As extensively discussed by Hulstijn and Maudet (2006) , the intentional level we are interested in is commonly denoted with the term uptake. In particular, in Clark's (1996) stratification of the grounding process into four distinct levels (see Table 1 for our take on it), the fourth level, \"proposal and consideration (uptake),\" is related to the speaker's intentions. When discussing joint projects at level 4, Clark introduces the notion of joint construals: the determination and consideration of speaker meaning, including the intended illocutionary force (Clark, 1996, pp. 212-213) . However, he also points out that uptake may fail due to unwillingness or inability: \"when respondents are unwilling or unable to comply with the project as proposed, they can decline to take it up\" (Clark, 1996, p. 204) . We contend that this difference between construal and compliance-between intention recognition and intention adoption-has been obscured in the literature so far. 2 For example, in their annotation scheme for CRs, Rodr\u00edguez and Schlangen (2004) reproduce the underspecification in labelling their level 4 CRs as \"recognising or evaluating speaker intention.\" Since we, with Clark (1996) , consider such intentional categories to be part of the grounding hierarchy, we expect problems on an intentional level to be evinced in much the same way as other conversational mishaps: in particular by CRs aimed at fixing these different types of conversational trouble. When studying the CRs annotated as intention related in the corpus of Rodr\u00edguez and Schlangen (2004) we indeed find examples related to recognition and others which aim at adoption: 3 (4) K: okay, again from the top I: from the very top? K: no, well, [. . . ] (5) K: for me that is in fact below this I: why below? K: yes, it belongs there, all okay.",
"cite_spans": [
{
"start": 28,
"end": 54,
"text": "Hulstijn and Maudet (2006)",
"ref_id": "BIBREF8"
},
{
"start": 160,
"end": 174,
"text": "Clark's (1996)",
"ref_id": "BIBREF5"
},
{
"start": 563,
"end": 589,
"text": "(Clark, 1996, pp. 212-213)",
"ref_id": null
},
{
"start": 790,
"end": 811,
"text": "(Clark, 1996, p. 204)",
"ref_id": null
},
{
"start": 1027,
"end": 1057,
"text": "Rodr\u00edguez and Schlangen (2004)",
"ref_id": "BIBREF13"
},
{
"start": 1187,
"end": 1199,
"text": "Clark (1996)",
"ref_id": "BIBREF5"
},
{
"start": 1545,
"end": 1575,
"text": "Rodr\u00edguez and Schlangen (2004)",
"ref_id": "BIBREF13"
},
{
"start": 1726,
"end": 1734,
"text": "[. . . ]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 246,
"end": 253,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Theoretical Observations",
"sec_num": "2"
},
{
"text": "In (4), speaker I has evidently not fully understood what K's question is, despite having successfully parsed and understood the propositional content of K's utterance. On the other hand, I displays no such problem in (5), but rather some reluctance to adopt K's assertion as common ground. We consider (4) to be a clarification question related to intention recognition whereas the one in (5) relates to intention adoption. A particularly striking class of intention recognition CRs is speech act determination questions as in the following example: 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Observations",
"sec_num": "2"
},
{
"text": "(6) A: And we're going to discuss [. . . ] who's gonna do what and just clarify B: Are you asking me whether I wanna be in there?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Observations",
"sec_num": "2"
},
{
"text": "Our hypothesis is that the classes of clarification requests related to intention recognition and intention adoption, respectively, are distinct and discernible. In particular, we propose to improve upon Clark's (1996) hierarchy by splitting his uptake-level into two, separating recognition from adoption. Table 1 shows our amended hierarchy and constructed examples for clarification requests evincing failure at a certain level. To test this hypothesis, we have surveyed existing corpora of CRs and assembled a novel corpus of intention-related CRs to check if annotators could reasonably discern the two classes.",
"cite_spans": [
{
"start": 204,
"end": 218,
"text": "Clark's (1996)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 307,
"end": 314,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Theoretical Observations",
"sec_num": "2"
},
{
"text": "3 Corpus Study",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Observations",
"sec_num": "2"
},
{
"text": "Our work builds on previous corpus studies of CRs (Purver et al., 2003; Rodr\u00edguez and Schlangen, 2004; Rieser and Moore, 2005) . However, existent studies are not perfectly suited for investigating grounding at the level of intentions. 5 Firstly, the annotation scheme of Purver et al. (2003; , which the authors apply to a section of the BNC (Burnard, 2000) , makes use of semantic categories that cannot easily be mapped to the intention-level distinctions introduced in the previous section. Secondly, while the schemes employed by Rodr\u00edguez and Schlangen (2004) and Rieser and Moore (2005) (both based on Schlangen, 2004) do include a category for intention-level CRs, the corpora they annotate-the Bielefeld Corpus and the Carnegie Mellon Communicator Corpus, respectively-are highly task-oriented and hence the intentions of the interlocutors are to a large degree presupposed: the participants intend to fulfil the task. Finally, in all cases, the focus of the authors did not lie with intentional clarification and therefore they might have left out questions in their annotations that are interesting to us, in particular more complex intention adoption CRs (which may not have been considered CRs to begin with, given the lack of well established theoretical distinctions discussed in the previous section). For our study, we have chosen to extract questions from the AMI Meeting Corpus (Carletta, 2007) , a collection of dialogues amongst four participants role-playing a design team for a TV remote control. The dialogues are loosely task-and goal-oriented, but the conversation is mostly unconstrained. Due to this setting, we expect a larger amount of discussion and decision making, which should give rise to more intention-level CRs. In addition, the rich annotations distributed with the AMI Corpus enabled us to apply a sophisticated heuristic to automatically extract potential CRs, which we describe next.",
"cite_spans": [
{
"start": 50,
"end": 71,
"text": "(Purver et al., 2003;",
"ref_id": "BIBREF11"
},
{
"start": 72,
"end": 102,
"text": "Rodr\u00edguez and Schlangen, 2004;",
"ref_id": "BIBREF13"
},
{
"start": 103,
"end": 126,
"text": "Rieser and Moore, 2005)",
"ref_id": "BIBREF12"
},
{
"start": 272,
"end": 292,
"text": "Purver et al. (2003;",
"ref_id": "BIBREF11"
},
{
"start": 343,
"end": 358,
"text": "(Burnard, 2000)",
"ref_id": "BIBREF3"
},
{
"start": 535,
"end": 565,
"text": "Rodr\u00edguez and Schlangen (2004)",
"ref_id": "BIBREF13"
},
{
"start": 570,
"end": 593,
"text": "Rieser and Moore (2005)",
"ref_id": "BIBREF12"
},
{
"start": 609,
"end": 625,
"text": "Schlangen, 2004)",
"ref_id": "BIBREF15"
},
{
"start": 1397,
"end": 1413,
"text": "(Carletta, 2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Studies",
"sec_num": "3.1"
},
{
"text": "The AMI Corpus is annotated with dialogue acts, including a class of 'Elicit-*' acts denoting different kinds of information requests/questions, but without specifically distinguishing CRs. However, the corpus is also annotated with relations between utterances, loosely called adjacency pair annotation, 6 which indicates whether or not an utterance is considered a direct reply to another one. We utilise observations on the sequential nature of CRs (\"other-initiated repair\") in group settings made by Schegloff (2000) to assemble a set of possible clarification requests as follows. Take all utterances Q where: a. Q is turn-initial and annotated as an 'Elicit-' type of dialogue act, spoken by a speaker B.",
"cite_spans": [
{
"start": 505,
"end": 521,
"text": "Schegloff (2000)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.2"
},
{
"text": "b. Q is the second part of an adjacency pair; the first part (the source) is spoken by another speaker A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.2"
},
{
"text": "c. Q is the first part of another adjacency pair; the second part (the answer) is spoken by A as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.2"
},
{
"text": "This heuristic is based on the intuition that CRs are proper questions (i.e., utterances that demand an answer) with a backward-looking function (i.e., related to an earlier source utterance) that are typically answered by the speaker of the source. We expect this heuristic to have a sufficiently high recall to be quantitatively applicable, but are aware that it cannot find each and every CR. 7 There are 338 utterances Q in the AMI Corpus satisfying the criteria above. We note that the annotation manual for the AMI Corpus states that CRs are usually annotated as 'Elicit-' acts, but that some very simple CRs (e.g., 'huh?') can instead be tagged as 'Comment-about-Understanding (und).' However, this class also contains some backchannel utterances: positive comments about understanding. If we apply the same heuristic to the utterances annotated as 'und,' we find 195 additional possible CRs. We confirmed that our heuristic successfully separates CRs from backchannels, and that these CRs are indeed related to levels 1-3 of Clark's (1996) hierarchy. However, these utterances are not the primary subject of our study. We henceforth refer to CRs on levels 1-3 collectively as low-level.",
"cite_spans": [
{
"start": 396,
"end": 397,
"text": "7",
"ref_id": null
},
{
"start": 1033,
"end": 1047,
"text": "Clark's (1996)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.2"
},
{
"text": "As indicated above, we are primarily interested in the 338 possible CRs annotated as 'Elicit-' dialogue acts and therefore included only these in our annotation. Since our main interest is in intention-level CRs and our primary ambition is the investigation of intention adoption vs. intention recognition, we used the following simple annotation scheme: Each question found by our heuristic is annotated as one of {not,low,int-rec,int-ad,ambig}, where the categories are defined as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Procedure",
"sec_num": "3.3"
},
{
"text": "\u2022 not CR. Select this category if you are sure that the question is not a clarification request. That is, if it does not serve to better the asker's understanding of the previous highlighted utterance. For instance, if the question requests novel information, moving the dialogue forward.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Procedure",
"sec_num": "3.3"
},
{
"text": "\u2022 low CR. Select this category if the question indicates that the asker has not fully understood the semantic / propositional content of the previous highlighted utterance. This includes, for example, word meaning problems, acoustic problems, or reference resolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Procedure",
"sec_num": "3.3"
},
{
"text": "\u2022 intention recognition CR. Select this category if the question indicates semantic understanding, but that the CR utterer has not fully understood (or is trying to guess) the speaker's goal/intention (the intended function of the previous highlighted utterance). The prototypical case is speech act determination.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Procedure",
"sec_num": "3.3"
},
{
"text": "\u2022 intention adoption CR. Select this category if the question indicates the CR utterer has understood/recognised the speaker's main goal (their intention), but does not yet accept it because they want/need more information or hold incompatible beliefs. For instance, the CR utterer may ask about the reason behind the speaker's utterance before accepting it, or request information needed to carry out the speaker's proposal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Procedure",
"sec_num": "3.3"
},
{
"text": "\u2022 ambiguous. Sometimes it may not be possible to decide what function a CR has precisely, maybe due to a lack of context. In those cases, annotate the question as ambiguous. We instructed our annotators to follow a decision tree where they first decide whether a question is clearly not a CR, and only otherwise consider the different categories of CRs. This is because in a pilot study we found that the distinction between 'not CR' and 'intention adoption CR' was difficult for some annotators. To reduce the confusion, we defined the 'not CR' class as only clear-cut cases of not-CR questions, at the risk of incurring a higher amount of ambiguity when the decision tree bottoms out, i.e., when a question that was not definitely not a CR could not be matched to a CR-category after all. Our annotation scheme only refines one dimension (namely, 'source') of the multi-dimensional schemes applied by Rodr\u00edguez and Schlangen (2004) and Rieser and Moore (2005) . Since our main ambition in this work is to establish the two levels of intentionality, we leave a fuller annotation with further dimensions-such as syntactic categories like Schlangen's (2004) 'form'-for future work.",
"cite_spans": [
{
"start": 903,
"end": 933,
"text": "Rodr\u00edguez and Schlangen (2004)",
"ref_id": "BIBREF13"
},
{
"start": 938,
"end": 961,
"text": "Rieser and Moore (2005)",
"ref_id": "BIBREF12"
},
{
"start": 1138,
"end": 1156,
"text": "Schlangen's (2004)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Procedure",
"sec_num": "3.3"
},
{
"text": "Nevertheless, this is a difficult annotation task: Annotators can only play the role of overhearer and therefore have a more indirect access to the intentions of the interlocutors. In addition, CRs in particular can be fragmented and ambiguous. Therefore, annotators were shown a substantial dialogue excerpt starting 10 utterances before the source and ending with either the 10th utterance after the answer to the CR or with the CR-asker's next reply (the follow-up). We found that answer and follow-up are particularly helpful in determining the function of a CR: the answer gives hints towards the speaker's interpretation of the CR, and the follow-up can show whether the asker agrees with that construal. 8 In the full study, the corpus was annotated by 2 expert annotators, since we deemed the task to be too complex and fine-grained for na\u00efve annotators. One third of the corpus was annotated by both annotators, the remaining two thirds by one annotator each. To create a gold-standard on the overlapping segment, the annotators discussed the utterances where their initial judgement differed and mutually agreed on the appropriate annotation. ",
"cite_spans": [
{
"start": 711,
"end": 712,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Procedure",
"sec_num": "3.3"
},
{
"text": "In the five-way classification task described above, our annotators had an agreement (Cohen's \u03ba, 1960) of \u03ba = 0.76 on the overlapping third of the corpus; 9 of \u03ba = 0.85 in the boolean task of determining whether an utterance is a CR; and of \u03ba = 0.82 in the boolean task of retrieving intention-related CRs from all other questions. The distribution of categories is shown in Table 2 . In order to compare our distribution to previous work, we have also recorded the distribution we obtain when dropping the items annotated as 'not CR' and adding the questions annotated as 'Comment-about-Understanding (und)' as low-level CRs. Then the total number of CRs in our corpus is 443.",
"cite_spans": [
{
"start": 85,
"end": 102,
"text": "(Cohen's \u03ba, 1960)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 375,
"end": 382,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.4"
},
{
"text": "The AMI Corpus contains about 42,000 turns, so we found that roughly 1.1% of turns receive clarification according to our heuristic. Previous studies have indicated a higher number: Purver (2004) reports about 4% and Rodr\u00edguez and Schlangen (2004) about 5.8%. Rodr\u00edguez and Schlangen (2004) themselves conjecture that their corpus might contain an unusually high amount of CRs due to the setting (an instructor guiding a builder). For comparison, we have manually extracted CRs from a 2500-turn subset of the AMI Corpus: We found 52 CRs in that segment, indicating that about 2% of turns prompt a CR. It is to be expected that our heuristic misses some CRs, e.g., ones that do not receive an answer, and its coverage is dependent on how systematic the adjacency pair annotation in the AMI Corpus is.",
"cite_spans": [
{
"start": 182,
"end": 195,
"text": "Purver (2004)",
"ref_id": "BIBREF10"
},
{
"start": 217,
"end": 247,
"text": "Rodr\u00edguez and Schlangen (2004)",
"ref_id": "BIBREF13"
},
{
"start": 260,
"end": 290,
"text": "Rodr\u00edguez and Schlangen (2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.4"
},
{
"text": "While our heuristic only retrieves an estimated 50% of CRs, 10 the distribution of classes we found is comparable to the results described by Rodr\u00edguez and Schlangen (2004) and Rieser and Moore (2005) : They report 63.5% and 75%, respectively, of low-level CRs and 22.2% / 20% on intention-level. Rodr\u00edguez and Schlangen (2004) mark the remaining 14.3% as ambiguous, whereas Rieser and Moore (2005) report 5% \"other/several\" and do not mention an ambiguity class. 11 By and large, this is comparable to the distribution we found. We have low ambiguity (9%) compared to Rodr\u00edguez and Schlangen (2004) because we conflated different categories of lower-level CRs into one 'low CR' category. As we had hoped, we find a larger amount (29%) of intention-level CRs than the previous studies. We take the similarity in distributions as tacitly confirming the viability of our heuristic for quantitative evaluation.",
"cite_spans": [
{
"start": 142,
"end": 172,
"text": "Rodr\u00edguez and Schlangen (2004)",
"ref_id": "BIBREF13"
},
{
"start": 177,
"end": 200,
"text": "Rieser and Moore (2005)",
"ref_id": "BIBREF12"
},
{
"start": 297,
"end": 327,
"text": "Rodr\u00edguez and Schlangen (2004)",
"ref_id": "BIBREF13"
},
{
"start": 375,
"end": 398,
"text": "Rieser and Moore (2005)",
"ref_id": "BIBREF12"
},
{
"start": 569,
"end": 599,
"text": "Rodr\u00edguez and Schlangen (2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.4"
},
{
"text": "We have theoretically motivated a distinction within grounding hierarchies between intention recognition and intention adoption and have created a novel corpus of intention-level CRs to investigate its tenability. Our corpus is not only novel in its contents, but also in its construction: unlike previous studies, we have developed and applied a suitable heuristic that exploits rich existing annotations to automatically find possible clarification requests. A small-scale annotation experiment on our corpus showed that the theoretical distinction we propose is viable. Our immediate next step in this project is a deeper investigation into the form and problem sources of the intention-level CRs in our corpus, including a more fine-grained annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "While DIT++ (Bunt, 2012) stratifies the grounding hierarchy into \"attention / perception / interpretation / evaluation / execution,\" it is similarly underspecified: To us, evaluation (e.g., checking an asserted proposition for consistency) relates to intention adoption, whereas (semantic) understanding and (pragmatic) intention retrieval (e.g., recognising on level 4.1 that an indicative was intended as an inform act and hence requires a consistency check on level 4.2) are again distinct categories. 3 We thank the authors for providing us with their annotated corpus; in the dialogues, I is explaining to K how to assemble a paper airplane. We had the German-language examples translated to English by a native speaker of German. 4 Retrieved from the British National Corpus (BNC) (Burnard, 2000) using SCoRE (Purver, 2001).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We have carefully studied the annotated data described in Purver et al. (2003) and Rodr\u00edguez and Schlangen (2004), which was kindly provided to us by the authors upon request. 6 See http://mmm.idiap.ch/private/ami/annotation/dialogue acts manual 1.0.pdf. 7 In particular, previous work indicates that some CRs are simply not answered; Rodr\u00edguez and Schlangen (2004) report 8.7% unanswered CRs in their corpus. Our heuristic does not find these.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Rodr\u00edguez and Schlangen (2004) include the CR asker's 'happiness' (as evinced by the follow-up) in their annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Rodr\u00edguez and Schlangen (2004) report \u03ba = 0.7 in the task of determining the level of understanding that the CR addresses. However, their categorisation is different from ours. In particular, they do not include a 'not CR' category. 10 We surveyed the CRs not found by our heuristic and attribute this mostly to the adjacency pair annotation; however, in addition to CRs that are not answered at all, there are also CRs that are answered by a different person than the source speaker. 11 Their category \"ambiguity\" refers to a class of CRs dubbed \"ambiguity refinement\" and not to uncertainty in the annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An activity based approach to pragmatics",
"authors": [
{
"first": "J",
"middle": [],
"last": "Allwood",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allwood, J. (1995). An activity based approach to pragmatics. Gothenburg papers in theoretical linguistics (76), 1-38.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Clarification potential of instructions",
"authors": [
{
"first": "L",
"middle": [],
"last": "Benotti",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGdial)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benotti, L. (2009). Clarification potential of instructions. In Proceedings of the 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGdial).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The semantics of feedback",
"authors": [
{
"first": "H",
"middle": [],
"last": "Bunt",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 16th SemDial Workshop on the Semantics and Pragmatics of Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bunt, H. (2012). The semantics of feedback. In Proceedings of the 16th SemDial Workshop on the Semantics and Pragmatics of Dialogue (SeineDial).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Reference Guide for the British National Corpus (World Edition)",
"authors": [
{
"first": "L",
"middle": [],
"last": "Burnard",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burnard, L. (2000). Reference Guide for the British National Corpus (World Edition). Oxford University Computing Services.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unleashing the killer corpus: experiences in creating the multi-everything AMI Meeting Corpus",
"authors": [
{
"first": "J",
"middle": [],
"last": "Carletta",
"suffix": ""
}
],
"year": 2007,
"venue": "Language Resources and Evaluation",
"volume": "41",
"issue": "2",
"pages": "181--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carletta, J. (2007). Unleashing the killer corpus: experiences in creating the multi-everything AMI Meeting Corpus. Language Resources and Evaluation 41(2), 181-190.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Using language",
"authors": [
{
"first": "H",
"middle": [
"H"
],
"last": "Clark",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clark, H. H. (1996). Using language. Cambridge University Press.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A coefficient of agreement for nominal scales",
"authors": [
{
"first": "J",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1960,
"venue": "Educational and Psychological Measurement",
"volume": "20",
"issue": "1",
"pages": "37--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20(1), 37-46.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Clarification in spoken dialogue systems",
"authors": [
{
"first": "M",
"middle": [],
"last": "Gabsdil",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 AAAI Spring Symposium. Workshop on Natural Language Generation in Spoken and Written Dialogue",
"volume": "",
"issue": "",
"pages": "28--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabsdil, M. (2003). Clarification in spoken dialogue systems. In Proceedings of the 2003 AAAI Spring Symposium. Workshop on Natural Language Generation in Spoken and Written Dialogue, Stanford, CA, pp. 28-35.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Uptake and joint action",
"authors": [
{
"first": "J",
"middle": [],
"last": "Hulstijn",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Maudet",
"suffix": ""
}
],
"year": 2006,
"venue": "Cognitive Systems Research",
"volume": "7",
"issue": "2-3",
"pages": "175--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hulstijn, J. and N. Maudet (2006, June). Uptake and joint action. Cognitive Systems Research 7(2-3), 175-191.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "SCoRE: A tool for searching the BNC",
"authors": [
{
"first": "M",
"middle": [],
"last": "Purver",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Purver, M. (2001, October). SCoRE: A tool for searching the BNC. Technical Report TR-01-07, De- partment of Computer Science, King's College London.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The Theory and Use of Clarification Requests in Dialogue",
"authors": [
{
"first": "M",
"middle": [],
"last": "Purver",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Purver, M. (2004). The Theory and Use of Clarification Requests in Dialogue. Ph. D. thesis, King's College, University of London.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "On the means for clarification in dialogue",
"authors": [
{
"first": "M",
"middle": [],
"last": "Purver",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ginzburg",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Healey",
"suffix": ""
}
],
"year": 2003,
"venue": "Current and new directions in discourse and dialogue",
"volume": "",
"issue": "",
"pages": "235--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Purver, M., J. Ginzburg, and P. Healey (2003). On the means for clarification in dialogue. In Current and new directions in discourse and dialogue, pp. 235-255. Springer.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Implications for generating clarification requests in task-oriented dialogues",
"authors": [
{
"first": "V",
"middle": [],
"last": "Rieser",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Moore",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rieser, V. and J. D. Moore (2005). Implications for generating clarification requests in task-oriented di- alogues. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Form, intonation and function of clarification requests in German task-oriented spoken dialogues",
"authors": [
{
"first": "K",
"middle": [
"J"
],
"last": "Rodr\u00edguez",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Schlangen",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 8th SemDial Workshop on the Semantics and Pragmatics of Dialogue (Catalog)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rodr\u00edguez, K. J. and D. Schlangen (2004). Form, intonation and function of clarification requests in Ger- man task-oriented spoken dialogues. In Proceedings of the 8th SemDial Workshop on the Semantics and Pragmatics of Dialogue (Catalog).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "When 'others' initiate repair",
"authors": [
{
"first": "E",
"middle": [
"A"
],
"last": "Schegloff",
"suffix": ""
}
],
"year": 2000,
"venue": "Applied Linguistics",
"volume": "21",
"issue": "2",
"pages": "205--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schegloff, E. A. (2000). When 'others' initiate repair. Applied Linguistics 21(2), 205-243.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Causes and strategies for requesting clarification in dialogue",
"authors": [
{
"first": "D",
"middle": [],
"last": "Schlangen",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 5th SIGdial Workshop on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schlangen, D. (2004). Causes and strategies for requesting clarification in dialogue. In Proceedings of the 5th SIGdial Workshop on Discourse and Dialogue.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Assertion",
"authors": [
{
"first": "R",
"middle": [],
"last": "Stalnaker",
"suffix": ""
}
],
"year": 1978,
"venue": "Pragmatics",
"volume": "9",
"issue": "",
"pages": "315--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stalnaker, R. (1978). Assertion. In P. Cole (Ed.), Pragmatics, Volume 9 of Syntax and Semantics, pp. 315-332. New York Academic Press.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"content": "<table/>",
"text": "Grounding hierarchy for speaker A and addressee B with refined uptake level.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF3": {
"content": "<table/>",
"text": "Distribution of clarification requests in our corpus with examples for each category.",
"type_str": "table",
"num": null,
"html": null
}
}
}
}