{
"paper_id": "W08-0127",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:38:15.769126Z"
},
"title": "An Evaluation Understudy for Dialogue Coherence Models",
"authors": [
{
"first": "Sudeep",
"middle": [],
"last": "Gandhe",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Southern California",
"location": {
"addrLine": "13274 Fiji way, Marina del Rey",
"postCode": "90292",
"region": "CA"
}
},
"email": "gandhe@ict.usc.edu"
},
{
"first": "David",
"middle": [],
"last": "Traum",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Southern California",
"location": {
"addrLine": "13274 Fiji way, Marina del Rey",
"postCode": "90292",
"region": "CA"
}
},
"email": "traum@ict.usc.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Evaluating a dialogue system is seen as a major challenge within the dialogue research community. Due to the very nature of the task, most of the evaluation methods need a substantial amount of human involvement. Following the tradition in machine translation, summarization and discourse coherence modeling, we introduce the idea of an evaluation understudy for dialogue coherence models. Following (Lapata, 2006), we use the information ordering task as a testbed for evaluating dialogue coherence models. This paper reports findings about the reliability of the information ordering task as applied to dialogues. We find that simple n-gram co-occurrence statistics similar in spirit to BLEU (Papineni et al., 2001) correlate very well with human judgments for dialogue coherence.",
"pdf_parse": {
"paper_id": "W08-0127",
"_pdf_hash": "",
"abstract": [
{
"text": "Evaluating a dialogue system is seen as a major challenge within the dialogue research community. Due to the very nature of the task, most of the evaluation methods need a substantial amount of human involvement. Following the tradition in machine translation, summarization and discourse coherence modeling, we introduce the idea of an evaluation understudy for dialogue coherence models. Following (Lapata, 2006), we use the information ordering task as a testbed for evaluating dialogue coherence models. This paper reports findings about the reliability of the information ordering task as applied to dialogues. We find that simple n-gram co-occurrence statistics similar in spirit to BLEU (Papineni et al., 2001) correlate very well with human judgments for dialogue coherence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In computer science or any other research field, simply building a system that accomplishes a certain goal is not enough. It needs to be thoroughly evaluated. One might want to evaluate the system just to see to what degree the goal is being accomplished or to compare two or more systems with one another. Evaluation can also lead to understanding the shortcomings of the system and the reasons for these. Finally the evaluation results can be used as feedback in improving the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The best way to evaluate a novel algorithm or a model for a system that is designed to aid humans in processing natural language would be to employ it in a real system and allow users to interact with it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The data collected by this process can then be used for evaluation. Sometimes this data needs further analysis, which may include annotations, collecting subjective judgments from humans, etc. Since human judgments tend to vary, we may need to employ multiple judges. These are some of the reasons why evaluation is time-consuming, costly and sometimes prohibitively expensive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Furthermore, if the system being developed contains a machine learning component, the problem of costly evaluation becomes even more serious. Machine learning components often optimize certain free parameters by using evaluation results on held-out data or by using n-fold cross-validation. Evaluation results can also help with feature selection. This need for repeated evaluation can preclude the use of data-driven machine learning components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For these reasons, using an automatic evaluation measure as an understudy is quickly becoming a common practice in natural language processing tasks. The general idea is to find an automatic evaluation metric that correlates very well with human judgments. This allows developers to use the automatic metric as a stand-in for human evaluation. Although it cannot replace the finesse of human evaluation, it can provide a crude idea of progress which can later be validated, e.g., BLEU (Papineni et al., 2001) for machine translation and ROUGE (Lin, 2004) for summarization.",
"cite_spans": [
{
"start": 479,
"end": 506,
"text": "BLEU (Papineni et al., 2001",
"ref_id": null
},
{
"start": 534,
"end": 551,
"text": "ROUGE (Lin, 2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, the discourse coherence modeling community has started using the information ordering task as a testbed for their discourse coherence models (Barzilay and Lapata, 2005; Soricut and Marcu, 2006). Lapata (2006) has proposed an automatic evaluation measure for the information ordering task. We propose to use the same task as a testbed for dialogue coherence modeling. We evaluate the reliability of the information ordering task as applied to dialogues and propose an evaluation understudy for dialogue coherence models.",
"cite_spans": [
{
"start": 155,
"end": 182,
"text": "(Barzilay and Lapata, 2005;",
"ref_id": "BIBREF1"
},
{
"start": 183,
"end": 207,
"text": "Soricut and Marcu, 2006)",
"ref_id": "BIBREF22"
},
{
"start": 210,
"end": 223,
"text": "Lapata (2006)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the next section, we look at related work in evaluation of dialogue systems. Section 3 summarizes the information ordering task and Lapata's (2006) findings. It is followed by the details of the experiments we carried out and our observations. We conclude with a summary and future work directions.",
"cite_spans": [
{
"start": 135,
"end": 150,
"text": "Lapata's (2006)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most of the work on evaluating dialogue systems focuses on human-machine communication geared towards a specific task. A variety of evaluation metrics can be reported for such task-oriented dialogue systems. Dialogue systems can be judged based on the performance of their components, like WER for ASR (Jurafsky and Martin, 2000), concept error rate or F-scores for NLU, understandability for speech synthesis, etc. Usually the core component, the dialogue model, which is responsible for keeping track of the dialogue progression and coming up with an appropriate response, is evaluated indirectly. Different dialogue models can be compared with each other by keeping the rest of the components fixed and then comparing the dialogue systems as a whole. Dialogue systems can report subjective measures such as user satisfaction scores and perceived task completion. SASSI (Hone and Graham, 2000) prescribes a set of questions used for eliciting such subjective assessments. The objective evaluation metrics can include dialogue efficiency and quality measures.",
"cite_spans": [
{
"start": 301,
"end": 328,
"text": "(Jurafsky and Martin, 2000)",
"ref_id": "BIBREF11"
},
{
"start": 870,
"end": 893,
"text": "(Hone and Graham, 2000)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "PARADISE (Walker et al., 2000) was an attempt at reducing the human involvement in evaluation. It builds a predictive model of user satisfaction as a linear combination of some objective measures and perceived task completion. Even then, the system needs to train on the data gathered from user surveys and on objective features retrieved from logs of dialogue runs. It still needs to run the actual dialogue system and collect objective features and perceived task completion to predict user satisfaction.",
"cite_spans": [
{
"start": 9,
"end": 30,
"text": "(Walker et al., 2000)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Other efforts in saving human involvement in evaluation include using simulated users for testing. This has become a popular tool for systems employing reinforcement learning (Williams and Young, 2006). Some of the methods involved in user simulation are as complex as building dialogue systems themselves (Schatzmann et al., 2007). User simulations also need to be evaluated, either for how closely they model human behavior (Georgila et al., 2006) or for how good a predictor they are of dialogue system performance (Williams, 2007).",
"cite_spans": [
{
"start": 176,
"end": 201,
"text": "Williams and Young, 2006)",
"ref_id": "BIBREF29"
},
{
"start": 307,
"end": 332,
"text": "(Schatzmann et al., 2007)",
"ref_id": "BIBREF21"
},
{
"start": 419,
"end": 442,
"text": "(Georgila et al., 2006)",
"ref_id": "BIBREF8"
},
{
"start": 510,
"end": 526,
"text": "(Williams, 2007)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Some researchers have proposed metrics for evaluating a dialogue model in a task-oriented system. Henderson et al. (2005) used the number of slots in a frame filled and/or confirmed. Roque et al. (2006) proposed hand-annotating information states in a dialogue to evaluate the accuracy of information state updates. Such measures make assumptions about the underlying dialogue model being used (e.g., form-based or information-state based).",
"cite_spans": [
{
"start": 98,
"end": 122,
"text": "(Henderson et al., 2005)",
"ref_id": "BIBREF9"
},
{
"start": 184,
"end": 203,
"text": "Roque et al. (2006)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We are more interested in evaluating types of dialogue systems that do not follow these task-based assumptions: systems designed to imitate human-human conversations. Such dialogue systems can range from chatbots like Alice (Wallace, 2003) and Eliza (Weizenbaum, 1966) to virtual humans used in simulation training (Traum et al., 2005). For such systems, the notion of task completion or efficiency is not well defined, and task-specific objective measures are hardly suitable. Most evaluations report subjective assessments of the appropriateness of responses. Traum et al. (2004) propose a coding scheme for response appropriateness and scoring functions for those categories. Gandhe et al. (2006) propose a scale for subjective assessment of appropriateness.",
"cite_spans": [
{
"start": 223,
"end": 238,
"text": "(Wallace, 2003)",
"ref_id": "BIBREF27"
},
{
"start": 247,
"end": 265,
"text": "(Weizenbaum, 1966)",
"ref_id": "BIBREF28"
},
{
"start": 312,
"end": 332,
"text": "(Traum et al., 2005)",
"ref_id": "BIBREF25"
},
{
"start": 560,
"end": 580,
"text": "Traum et. al. (2004)",
"ref_id": "BIBREF24"
},
{
"start": 678,
"end": 699,
"text": "Gandhe et. al. (2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The information ordering task consists of choosing a presentation sequence for a set of information-bearing elements. This task is well suited to text-to-text generation, as in single- or multi-document summarization (Barzilay et al., 2002). Recently there has been a lot of work in discourse coherence modeling (Lapata, 2003; Barzilay and Lapata, 2005; Soricut and Marcu, 2006) that has used information ordering to test the coherence models. The information-bearing elements here are sentences rather than high-level concepts. This frees the models from having to depend on a hard-to-get training corpus that has been hand-authored with concepts.",
"cite_spans": [
{
"start": 217,
"end": 240,
"text": "(Barzilay et al., 2002)",
"ref_id": "BIBREF2"
},
{
"start": 313,
"end": 327,
"text": "(Lapata, 2003;",
"ref_id": "BIBREF14"
},
{
"start": 328,
"end": 354,
"text": "Barzilay and Lapata, 2005;",
"ref_id": "BIBREF1"
},
{
"start": 355,
"end": 378,
"text": "Soricut and Marcu, 2006",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information Ordering",
"sec_num": "3"
},
{
"text": "Most of the dialogue models still work at the higher abstraction level of dialogue acts and intentions. But with an increasing number of dialogue systems finding use in non-traditional applications such as simulation training, games, etc., there is a need for dialogue models that do not depend on hand-authored corpora or rules. Recently, Gandhe and Traum (2007) proposed dialogue models that need neither annotations for dialogue acts and semantics nor hand-authored rules for information state updates or finite state machines.",
"cite_spans": [
{
"start": 340,
"end": 363,
"text": "Gandhe and Traum (2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information Ordering",
"sec_num": "3"
},
{
"text": "Such dialogue models focus primarily on generating an appropriate, coherent response given the dialogue history. In certain cases the generation of a response can be reduced to selection from a set of available responses. For such dialogue models, maintaining the information state can be considered a secondary goal. The element that is common to the information ordering task and the task of selecting the next most appropriate response is the ability to express a preference for one sequence of dialogue turns over another. We propose to use the information ordering task to test dialogue coherence models. Here the information-bearing units will be dialogue turns. 1 There are certain advantages offered by using information ordering as a task to evaluate dialogue coherence models. First, the task does not require a dialogue model to take part in conversations in an interactive manner. This obviates the need for real users engaging in dialogue with the system. Second, the task is agnostic about the underlying dialogue model: it can be a data-driven statistical model, information-state based, form based, or even a reinforcement learning system based on an MDP or POMDP. Third, there are simple objective measures available to evaluate the success of the information ordering task.",
"cite_spans": [
{
"start": 669,
"end": 670,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information Ordering",
"sec_num": "3"
},
{
"text": "Recently, Purandare and Litman (2008) have used this task for modeling dialogue coherence, but they only allow for a binary classification of sequences as either coherent or incoherent. For comparing different dialogue coherence models, we need the ability to make finer distinctions among sequences of information. Lapata (2003) proposed Kendall's \u03c4, a rank correlation measure, as one such candidate. A recent study shows that Kendall's \u03c4 correlates well with human judgment (Lapata, 2006): human judges can reliably provide coherence ratings for various permutations of text (Pearson's correlation for inter-rater agreement is 0.56), and Kendall's \u03c4 is a good indicator of human judgment (Pearson's correlation of Kendall's \u03c4 with human judgment is 0.45, p < 0.01).",
"cite_spans": [
{
"start": 10,
"end": 37,
"text": "Purandare and Litman (2008)",
"ref_id": "BIBREF19"
},
{
"start": 332,
"end": 345,
"text": "Lapata (2003)",
"ref_id": "BIBREF14"
},
{
"start": 501,
"end": 515,
"text": "(Lapata, 2006)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information Ordering",
"sec_num": "3"
},
{
"text": "Before adapting the information ordering task for dialogues, certain questions need to be answered. We need to validate that humans can reliably perform the task of information ordering and can judge the coherence for different sequences of dialogue turns. We also need to find which objective measures (like Kendall's \u03c4 ) correlate well with human judgments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Ordering",
"sec_num": "3"
},
{
"text": "One of the advantages of using information ordering as a testbed is that there are objective measures available to evaluate the performance of the information ordering task. Kendall's \u03c4 (Kendall, 1938), a rank correlation coefficient, is one such measure. Given a reference sequence of length n, Kendall's \u03c4 for an observed sequence can be defined as \u03c4 = (# concordant pairs \u2212 # discordant pairs) / (# total pairs). Each pair of elements in the observed sequence is marked either as concordant, appearing in the same order as in the reference sequence, or as discordant otherwise. The total number of pairs is C(n, 2) = n(n \u2212 1)/2. \u03c4 ranges from -1 to 1.",
"cite_spans": [
{
"start": 182,
"end": 197,
"text": "(Kendall, 1938)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Information Ordering",
"sec_num": "4"
},
{
"text": "Another possible measure can be defined as the fraction of n-grams from the reference sequence that are preserved in the observed sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Information Ordering",
"sec_num": "4"
},
{
"text": "b_n = (# n-grams preserved) / (# total n-grams). In this study we have used b_2, the fraction of bigrams, and b_3, the fraction of trigrams, preserved from the reference sequence. These values range from 0 to 1. Table 1 gives examples of observed sequences together with their b_2, b_3 and \u03c4 values: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] (b_2 = 1.00, b_3 = 1.00, \u03c4 = 1.00); [8, 9, 0, 1, 2, 3, 4, 5, 6, 7] (0.89, 0.75, 0.29); [4, 1, 0, 3, 2, 5, 8, 7, 6, 9] (0.00, 0.00, 0.60); [6, 9, 8, 5, 4, 7, 0, 3, 2, 1] (0.00, 0.00, -0.64); [2, 3, 0, 1, 4, 5, 8, 9, 6, 7] (0.56, 0.00, 0.64).",
"cite_spans": [],
"ref_spans": [
{
"start": 198,
"end": 205,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Evaluating Information Ordering",
"sec_num": "4"
},
{
"text": "For our experiments we used segments drawn from 9 dialogues. These dialogues were two-party human-human dialogues. To ensure the applicability of our results over different types of dialogue, we chose these 9 dialogues from different sources. Three were excerpts from role-play dialogues involving negotiations, originally collected for a simulation training scenario (Traum et al., 2005). Three are from SRI's Amex Travel Agent data, which are task-oriented dialogues about air travel planning (Bratt et al., 1995). The rest of the dialogues are scripts from popular television shows. Fig 6 shows an example from the air-travel domain. Each excerpt drawn was 10 turns long, with turns strictly alternating between the two speakers.",
"cite_spans": [
{
"start": 382,
"end": 402,
"text": "(Traum et al., 2005)",
"ref_id": "BIBREF25"
},
{
"start": 509,
"end": 529,
"text": "(Bratt et al., 1995)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 601,
"end": 612,
"text": "Fig 6 shows",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5"
},
{
"text": "Following the experimental design of Lapata (2006), we created random permutations of these dialogue segments. We constrained the permutations so that they always start with the same speaker as the original dialogue and turns strictly alternate between the speakers. With these constraints there are still 5! \u00d7 5! = 14400 possible permutations per dialogue. We selected 3 random permutations for each of the 9 dialogues, giving a total of 27 dialogue permutations. They are arranged in 3 sets, each set containing a permutation of all 9 dialogues. We ensured that the permutations in a given set are not all particularly good or particularly bad; we used Kendall's \u03c4 to balance the permutations across each set as well as across each dialogue.",
"cite_spans": [
{
"start": 37,
"end": 51,
"text": "(Lapata, 2006)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5"
},
{
"text": "Unlike Lapata (2006), who chose to remove pronouns and discourse connectives, we decided not to do any pre-processing of the text, such as removing disfluencies or cohesive devices (anaphora, ellipsis, discourse connectives, etc.). One reason is that such pre-processing, if done manually, defeats the purpose of removing humans from the evaluation procedure. Moreover, it is very difficult to remove certain cohesive devices, such as discourse deixis, without affecting the coherence level of the original dialogues.",
"cite_spans": [
{
"start": 7,
"end": 20,
"text": "Lapata (2006)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5"
},
{
"text": "In our first experiment, we divided a total of 9 human judges among the 3 sets (3 judges per set). Each judge was presented with 9 dialogue permutations and asked to assign a single coherence rating to each dialogue permutation. The ratings were on a scale of 1 to 7, with 1 being very incoherent and 7 being perfectly coherent. We did not provide any additional instructions or examples of the scale, as we wanted to capture our judges' intuitive idea of coherence. Within each set the dialogue permutations were presented in random order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 1",
"sec_num": "6"
},
{
"text": "We compute the inter-rater agreement by using Pearson's correlation analysis. We correlate the ratings given by each judge with the average ratings given by the judges who were assigned the same set. For inter-rater agreement we report the average of 9 such correlations, which is 0.73 (std dev = 0.07). Artstein and Poesio (2008) have argued that Krippendorff's \u03b1 (Krippendorff, 2004) can be used for inter-rater agreement with interval scales like the one we have. In our case, the \u03b1 values for the three sets were 0.49, 0.58 and 0.64. These moderate values of \u03b1 indicate that the task of judging coherence is indeed difficult, especially when detailed instructions or examples of the scale are not given.",
"cite_spans": [
{
"start": 303,
"end": 329,
"text": "Artstein and Poesio (2008)",
"ref_id": "BIBREF0"
},
{
"start": 364,
"end": 384,
"text": "(Krippendorff, 2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 1",
"sec_num": "6"
},
{
"text": "In order to assess whether Kendall's \u03c4 can be used as an automatic measure of dialogue coherence, we perform a correlation analysis of \u03c4 values against the average ratings by human judges. The Pearson's correlation coefficient is 0.35 and is statistically not significant (P=0.07). Fig 1(a) shows the relationship between coherence judgments and \u03c4 values. This experiment fails to support the suitability of Kendall's \u03c4 as an evaluation understudy for dialogue coherence. We also analyzed the correlation of human judgments against simple n-gram statistics, specifically (b_2 + b_3)/2. Fig 1(b) shows the relationship between human judgments and the average of the fraction of bigrams and the fraction of trigrams preserved in the permutation. The Pearson's correlation coefficient is 0.62 and is statistically significant (P<0.01).",
"cite_spans": [],
"ref_spans": [
{
"start": 285,
"end": 293,
"text": "Fig 1(a)",
"ref_id": "FIGREF1"
},
{
"start": 524,
"end": 532,
"text": "Fig 1(b)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experiment 1",
"sec_num": "6"
},
{
"text": "Since human judges found it relatively hard to assign a single rating to a dialogue permutation, we decided to repeat experiment 1 with some modifications. In our second experiment we asked the judges to provide coherence ratings at every turn, based on the dialogue that preceded that turn. The dialogue permutations were presented to the judges through a web interface in an incremental fashion, turn by turn, as they rated each turn for coherence (see Fig 5 in the appendix for a screenshot of this interface). We used a scale from 1 to 5, with 1 being completely incoherent and 5 being perfectly coherent. 3 A total of 11 judges participated in this experiment, with the first set being judged by 5 judges and the remaining two sets by 3 judges each.",
"cite_spans": [],
"ref_spans": [
{
"start": 453,
"end": 461,
"text": "Fig 5 in",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment 2",
"sec_num": "7"
},
{
"text": "For the rest of the analysis, we use the average coherence rating from all turns as a coherence rating for the dialogue permutation. We performed the inter-rater agreement analysis as in experiment 1. The average of 11 correlations is 0.83 (std dev = 0.09). Although the correlation has improved, Krippendorff's \u03b1 values for the three sets are 0.49, 0.35, 0.63. This shows that coherence rating is still a hard task even when judged turn by turn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 2",
"sec_num": "7"
},
{
"text": "We assessed the relationship between the average coherence rating for dialogue permutations with Kendall's \u03c4 (see Fig 2(a) ). The Pearson's correlation coefficient is 0.33 and is statistically not significant (P=0.09). Fig 2(b) shows high correlation of average coherence ratings with the fraction of bigrams and trigrams that were preserved in permutation. The Pearson's correlation coefficient is 0.75 and is statistically significant (P<0.01).",
"cite_spans": [],
"ref_spans": [
{
"start": 114,
"end": 122,
"text": "Fig 2(a)",
"ref_id": "FIGREF3"
},
{
"start": 219,
"end": 227,
"text": "Fig 2(b)",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Experiment 2",
"sec_num": "7"
},
{
"text": "Results of both experiments suggest that (b_2 + b_3)/2 correlates very well with human judgments and can be used for evaluating information ordering as applied to dialogues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 2",
"sec_num": "7"
},
{
"text": "We wanted to know whether information ordering as applied to dialogues is a valid task. In this experiment we seek to establish an upper baseline for the task of information ordering in dialogues. We presented the dialogue permutations to our human judges and asked them to reorder the turns so that the resulting order is as coherent as possible. All 11 judges who participated in experiment 2 also participated in this experiment. They were presented with a drag-and-drop interface over the web that allowed them to reorder the dialogue permutations. The reordering was constrained to keep the first speaker the same as in the original dialogue, and the re-orderings had to have strictly alternating turns. We computed Kendall's \u03c4 and the fraction of bigrams and trigrams, (b_2 + b_3)/2, for these re-orderings. There were a total of 11 \u00d7 9 = 99 reordered dialogue permutations. Fig 3(a) and 3(b) show the frequency distributions of \u03c4 and (b_2 + b_3)/2 values respectively. Humans achieve high values on the reordering task: for Kendall's \u03c4, the mean over the reordered dialogues is 0.82 (std dev = 0.25), and for (b_2 + b_3)/2, the mean is 0.71 (std dev = 0.28). These values establish an upper baseline for the information ordering task, which can be compared against the random baseline. For \u03c4, random performance is 0.02 4 and",
"cite_spans": [],
"ref_spans": [
{
"start": 904,
"end": 912,
"text": "Fig 3(a)",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Experiment 3",
"sec_num": "8"
},
{
"text": "Results show that (b_2 + b_3)/2 correlates with human judgments of dialogue coherence better than Kendall's \u03c4 does. \u03c4 encodes long-distance relationships in orderings, whereas (b_2 + b_3)/2 only looks at local context. Fig 4 shows the relationship between these two measures. Notice that most of the orderings have \u03c4 values around zero (i.e. in the middle range for \u03c4), whereas the majority of orderings have a low value for (b_2 + b_3)/2. \u03c4 seems to overestimate coherence even in the absence of immediate local coherence (see the third entry in Table 1). It seems that local context is more important for dialogues than for discourse, which may follow from the fact that dialogues are produced by two speakers who must react to each other, while discourse can be planned by one speaker from the beginning. Traum and Allen (1994) point out that such social obligations to respond and to address the contributions of the other should be an important factor in building dialogue systems.",
"cite_spans": [
{
"start": 818,
"end": 840,
"text": "Traum and Allen (1994)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 225,
"end": 236,
"text": "Fig 4 shows",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "9"
},
{
"text": "The information ordering paradigm does not take into account the content of the information-bearing items, e.g. the fact that turns like \"yes\", \"I agree\", \"okay\" perform the same function and should be treated as replaceable. This may suggest a need to modify some of the objective measures to evaluate the information ordering specially for dialogue systems that involve more of such utterances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "9"
},
{
"text": "Human judges can find the optimal sequences with relatively high frequency, at least for short dialogues. It remains to be seen how this varies with longer dialogue lengths which may contain sub-dialogues that can be arranged independently of each other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "9"
},
{
"text": "Evaluating dialogue systems has always been a major challenge in dialogue systems research. The core component of dialogue systems, the dialogue model, has usually been only indirectly evaluated. Such evaluations involve too much human effort and are a bottleneck for the use of data-driven machine learning models for dialogue coherence. The information ordering task, widely used in discourse coherence modeling, can be adopted as a testbed for evaluating dialogue coherence models as well. Here we have shown that simple n-gram statistics that are sensitive to local features correlate well with human judgments for coherence and can be used as an evaluation understudy for dialogue coherence models. As with any evaluation understudy, one must be careful while using it as the correlation with human judgments is not perfect and may be inaccurate in some cases -it can not completely replace the need for full evaluation with human judges in all cases (see (Callison-Burch et al., 2006) for a critique of BLUE along these lines).",
"cite_spans": [
{
"start": 961,
"end": 990,
"text": "(Callison-Burch et al., 2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "10"
},
{
"text": "In the future, we would like to perform more experiments with larger data sets and different types of dialogues. It will also be interesting to see the role cohesive devices play in coherence ratings. We would like to see if there are any other measures or certain modifications to the current ones that correlate better with human judgments. We also plan to employ this evaluation metric as feedback in building dialogue coherence models as is done in machine translation (Och, 2003) . well have you noticed that there's been an awful lot of fighting in the area recently Doctor yes yes i have we're very busy we've had many more casual+ casualties many more patients than than uh usual in the last month but uh what what is this about relocating our clinic have have uh you been instructed to move us Captain no but uh we just have some concerns about the increase in fighting xx Doctor are you suggesting that we relocate the clinic because we had no plans we uh we uh we're located here and we've been uh we are located where the patients need us Captain yeah but yeah actually it is a suggestion that you would be a lot safer if you moved away from this area we can put you in an area where there's n+ no insurgents and we have the area completely under control with our troops Doctor i see captain is this a is this a suggestion from your commander Captain i'm uh the company commander Figure 6 : Examples of the dialogues used to elicit human judgments for coherence",
"cite_spans": [
{
"start": 473,
"end": 484,
"text": "(Och, 2003)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 1392,
"end": 1400,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "10"
},
{
"text": "These can also be at the utterance level, but for this paper we will use dialogue turns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For more on the relationship between b 2 , b 3 and \u03c4 see row 3,4 of table 1 and figure 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We believe this is a less complex task than experiment 1 and hence a narrower scale is used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Theoretically this should be zero. The slight positive bias is the result of the constraints imposed on the re-orderingslike only allowing the permutations that have the correct starting speaker.for (b 2 + b 3 ) /2 it is 0.11. 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This value is calculated by considering all 14400 permutations as equally likely.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
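The random baselines in footnotes 4 and 5 can be checked by brute force: with ten strictly alternating turns and the first speaker fixed, each speaker's five turns may be permuted only among that speaker's slots, giving 5! \u00d7 5! = 14400 orderings. The sketch below (our illustration, not the authors' code) enumerates all of them and averages both measures against the identity reference.

```python
from itertools import permutations

def tau(order):
    # Kendall's tau against the identity reference [0, 1, ..., n-1].
    n = len(order)
    pairs = n * (n - 1) // 2
    concordant = sum(order[i] < order[j] for i in range(n) for j in range(i + 1, n))
    return (2 * concordant - pairs) / pairs

def b23(order):
    # Average fraction of reference bigrams/trigrams preserved; in the
    # identity reference, the turn after turn k is simply turn k + 1.
    n = len(order)
    b2 = sum(order[i + 1] == order[i] + 1 for i in range(n - 1)) / (n - 1)
    b3 = sum(order[i + 1] == order[i] + 1 and order[i + 2] == order[i] + 2
             for i in range(n - 2)) / (n - 2)
    return (b2 + b3) / 2

reference = list(range(10))  # even turns by speaker A, odd turns by speaker B
taus, bs = [], []
for speaker_a in permutations(reference[0::2]):   # A's turns over even slots
    for speaker_b in permutations(reference[1::2]):  # B's turns over odd slots
        order = [t for ab in zip(speaker_a, speaker_b) for t in ab]
        taus.append(tau(order))
        bs.append(b23(order))

mean_tau = sum(taus) / len(taus)  # ~0.022, reported as 0.02
mean_b23 = sum(bs) / len(bs)      # ~0.111, reported as 0.11
```

The enumeration reproduces the paper's figures: the positive mean \u03c4 comes entirely from the alternation constraint (within each speaker the orderings are uniform), and the mean of (b 2 + b 3 ) /2 works out to exactly 1/9 under these constraints.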
],
"back_matter": [
{
"text": "The effort described here has been sponsored by the U.S. Army Research, Development, and Engineering Command (RDE-COM). Statements and opinions expressed do not necessarily reflect the position or the policy of the United States Government, and no official endorsement should be inferred. We would like to thank Radu Soricut, Ron Artstein, and the anonymous SIGdial reviewers for helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Inter-coder agreement for computational linguistics",
"authors": [
{
"first": "Ron",
"middle": [],
"last": "Artstein",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2008,
"venue": "To appear in Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. In To appear in Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Modeling local coherence: An entity-based approach",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. ACL-05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Mirella Lapata. 2005. Modeling local coherence: An entity-based approach. In Proc. ACL-05.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Inferring strategies for sentence ordering in multidocument summarization",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Noemie",
"middle": [],
"last": "Elhadad",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2002,
"venue": "JAIR",
"volume": "17",
"issue": "",
"pages": "35--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay, Noemie Elhadad, and Kathleen McKe- own. 2002. Inferring strategies for sentence ordering in multidocument summarization. JAIR, 17:35-55.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The sri telephone-based atis system",
"authors": [
{
"first": "Harry",
"middle": [],
"last": "Bratt",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Dowding",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Hunicke-Smith",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the Spoken Language Systems Technology Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harry Bratt, John Dowding, and Kate Hunicke-Smith. 1995. The sri telephone-based atis system. In Pro- ceedings of the Spoken Language Systems Technology Workshop, January.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "proceedings of EACL-2006",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Callison",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Burch",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. In proceedings of EACL-2006.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "User modeling for spoken dialogue system evaluation",
"authors": [
{
"first": "Wieland",
"middle": [],
"last": "Eckert",
"suffix": ""
},
{
"first": "Esther",
"middle": [],
"last": "Levin",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Pieraccini",
"suffix": ""
}
],
"year": 1997,
"venue": "Automatic Speech Recognition and Understanding",
"volume": "",
"issue": "",
"pages": "80--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wieland Eckert, Esther Levin, and Roberto Pieraccini. 1997. User modeling for spoken dialogue system eval- uation. In Automatic Speech Recognition and Under- standing, pages 80-87, Dec.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Creating spoken dialogue characters from corpora without annotations",
"authors": [
{
"first": "Sudeep",
"middle": [],
"last": "Gandhe",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Traum",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of Interspeech-07",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sudeep Gandhe and David Traum. 2007. Creating spo- ken dialogue characters from corpora without annota- tions. In Proceedings of Interspeech-07.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improving question-answering with linking dialogues",
"authors": [
{
"first": "Sudeep",
"middle": [],
"last": "Gandhe",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Gordon",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Traum",
"suffix": ""
}
],
"year": 2006,
"venue": "International Conference on Intelligent User Interfaces (IUI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sudeep Gandhe, Andrew Gordon, and David Traum. 2006. Improving question-answering with linking di- alogues. In International Conference on Intelligent User Interfaces (IUI), January.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "User simulation for spoken dialogue systems: Learning and evaluation",
"authors": [
{
"first": "Kalliroi",
"middle": [],
"last": "Georgila",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Lemon",
"suffix": ""
}
],
"year": 2006,
"venue": "proceedings of Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kalliroi Georgila, James Henderson, and Oliver Lemon. 2006. User simulation for spoken dialogue systems: Learning and evaluation. In proceedings of Inter- speech.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Hybrid reinforcement/supervised learning for dialogue policies from communicator data",
"authors": [
{
"first": "James",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Lemon",
"suffix": ""
},
{
"first": "Kallirroi",
"middle": [],
"last": "Georgila",
"suffix": ""
}
],
"year": 2005,
"venue": "proceedings of IJCAI workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Henderson, Oliver Lemon, and Kallirroi Georgila. 2005. Hybrid reinforcement/supervised learning for dialogue policies from communicator data. In pro- ceedings of IJCAI workshop.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Natural Language Engineering: Special Issue on Best Practice in Spoken Dialogue Systems",
"authors": [
{
"first": "Kate",
"middle": [
"S"
],
"last": "Hone",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Graham",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kate S. Hone and Robert Graham. 2000. Towards a tool for the subjective assessment of speech system inter- faces (SASSI). Natural Language Engineering: Spe- cial Issue on Best Practice in Spoken Dialogue Sys- tems.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "SPEECH and LANGUAGE PROCESSING: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Jurafsky and James H. Martin. 2000. SPEECH and LANGUAGE PROCESSING: An Introduction to Natural Language Processing, Computational Lin- guistics, and Speech Recognition. Prentice-Hall.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A new measure of rank correlation",
"authors": [
{
"first": "Maurice",
"middle": [
"G"
],
"last": "Kendall",
"suffix": ""
}
],
"year": 1938,
"venue": "Biometrika",
"volume": "30",
"issue": "",
"pages": "81--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maurice G. Kendall. 1938. A new measure of rank cor- relation. Biometrika, 30:81-93.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Content Analysis, An Introduction to Its Methodology 2nd Edition",
"authors": [
{
"first": "Klaus",
"middle": [],
"last": "Krippendorff",
"suffix": ""
}
],
"year": 2004,
"venue": "Sage Publications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klaus Krippendorff. 2004. Content Analysis, An Intro- duction to Its Methodology 2nd Edition. Sage Publi- cations.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Probabilistic text structuring: Experiments with sentence ordering",
"authors": [
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mirella Lapata. 2003. Probabilistic text structuring: Ex- periments with sentence ordering. In Proceedings of the 41st Annual Meeting of the Association for Com- putational Linguistics, Sapporo, Japan.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Automatic evaluation of information ordering",
"authors": [
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Linguistics",
"volume": "32",
"issue": "4",
"pages": "471--484",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mirella Lapata. 2006. Automatic evaluation of informa- tion ordering. Computational Linguistics, 32(4):471- 484.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning dialogue strategies within the markov decision process framework",
"authors": [
{
"first": "Esther",
"middle": [],
"last": "Levin",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Pieraccini",
"suffix": ""
},
{
"first": "Wieland",
"middle": [],
"last": "Eckert",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Workshop on Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "72--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Esther Levin, Roberto Pieraccini, and Wieland Eckert. 1997. Learning dialogue strategies within the markov decision process framework. In Automatic Speech Recognition and Understanding, pages 72-79, Dec. Chin-Yew Lin. 2004. ROUGE: a package for automatic evaluation of summaries. In Proceedings of the Work- shop on Text Summarization Branches Out.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Minimum error rate training for statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "ACL 2003: Proc. of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2003. Minimum error rate training for statistical machine translation. In In ACL 2003: Proc. of the 41st Annual Meeting of the Association for Com- putational Linguistics, July.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [
"A"
],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2001,
"venue": "Technical Report RC22176 (W0109-022), IBM Research Division",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore A. Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2001. BLEU: a method for automatic evaluation of machine translation. In Technical Re- port RC22176 (W0109-022), IBM Research Division, September.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Analyzing dialog coherence using transition patterns in lexical and semantic features",
"authors": [
{
"first": "Amruta",
"middle": [],
"last": "Purandare",
"suffix": ""
},
{
"first": "Diane",
"middle": [],
"last": "Litman",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings 21st International FLAIRS Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amruta Purandare and Diane Litman. 2008. Analyz- ing dialog coherence using transition patterns in lexi- cal and semantic features. In Proceedings 21st Inter- national FLAIRS Conference, May.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Evaluation of an information state-based dialogue manager",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Roque",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Ai",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Traum",
"suffix": ""
}
],
"year": 2006,
"venue": "Brandial 2006: The 10th Workshop on the Semantics and Pragmatics of Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonio Roque, Hua Ai, and David Traum. 2006. Evalu- ation of an information state-based dialogue manager. In Brandial 2006: The 10th Workshop on the Seman- tics and Pragmatics of Dialogue.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Agenda-based user simulation for bootstrapping a pomdp dialogue system",
"authors": [
{
"first": "Jost",
"middle": [],
"last": "Schatzmann",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Karl",
"middle": [],
"last": "Weilhammer",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2007,
"venue": "proceedings of HLT/NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jost Schatzmann, Blaise Thomson, Karl Weilhammer, Hui Ye, and Steve Young. 2007. Agenda-based user simulation for bootstrapping a pomdp dialogue sys- tem. In proceedings of HLT/NAACL, Rochester, NY.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Discourse generation using utility-trained coherence models",
"authors": [
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. ACL-06",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Soricut and Daniel Marcu. 2006. Discourse gener- ation using utility-trained coherence models. In Proc. ACL-06.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Discourse obligations in dialogue processing",
"authors": [
{
"first": "David",
"middle": [
"R"
],
"last": "Traum",
"suffix": ""
},
{
"first": "James",
"middle": [
"F"
],
"last": "Allen",
"suffix": ""
}
],
"year": 1994,
"venue": "proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics (ACL-94)",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David R. Traum and James F. Allen. 1994. Discourse obligations in dialogue processing. In proceedings of the 32nd Annual Meeting of the Association for Com- putational Linguistics (ACL-94), pages 1-8.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Evaluation of multi-party virtual reality dialogue interaction",
"authors": [
{
"first": "David",
"middle": [
"R"
],
"last": "Traum",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Robinson",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Stephan",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of Fourth International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "1699--1702",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David R. Traum, Susan Robinson, and Jens Stephan. 2004. Evaluation of multi-party virtual reality dia- logue interaction. In In Proceedings of Fourth Interna- tional Conference on Language Resources and Evalu- ation (LREC), pages 1699-1702.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Virtual humans for non-team interaction training",
"authors": [
{
"first": "David",
"middle": [],
"last": "Traum",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Swartout",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Gratch",
"suffix": ""
},
{
"first": "Stacy",
"middle": [],
"last": "Marsella",
"suffix": ""
}
],
"year": 2005,
"venue": "AAMAS-05 Workshop on Creating Bonds with Humanoids",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Traum, William Swartout, Jonathan Gratch, and Stacy Marsella. 2005. Virtual humans for non-team interaction training. In AAMAS-05 Workshop on Cre- ating Bonds with Humanoids, July.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Towards developing general models of usability with PARADISE",
"authors": [
{
"first": "M",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Kamm",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Litman",
"suffix": ""
}
],
"year": 2000,
"venue": "Natural Language Engineering: Special Issue on Best Practice in Spoken Dialogue Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Walker, C. Kamm, and D. Litman. 2000. Towards de- veloping general models of usability with PARADISE. Natural Language Engineering: Special Issue on Best Practice in Spoken Dialogue Systems.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Be Your Own Botmaster",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Wallace",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Wallace. 2003. Be Your Own Botmaster, 2nd Edition. ALICE A. I. Foundation.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Eliza-a computer program for the study of natural language communication between man and machine",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Weizenbaum",
"suffix": ""
}
],
"year": 1966,
"venue": "Communications of the ACM",
"volume": "9",
"issue": "1",
"pages": "36--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Weizenbaum. 1966. Eliza-a computer program for the study of natural language communication be- tween man and machine. Communications of the ACM, 9(1):36-45, January.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Partially observable markov decision processes for spoken dialog systems",
"authors": [
{
"first": "Jason",
"middle": [
"D"
],
"last": "Williams",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2006,
"venue": "Computer Speech and Language",
"volume": "21",
"issue": "",
"pages": "393--422",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason D. Williams and Steve Young. 2006. Partially ob- servable markov decision processes for spoken dialog systems. Computer Speech and Language, 21:393- 422.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A method for evaluating and comparing user simulations: The cramer-von mises divergence",
"authors": [
{
"first": "Jason",
"middle": [
"D"
],
"last": "Williams",
"suffix": ""
}
],
"year": 2007,
"venue": "IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason D. Williams. 2007. A method for evaluating and comparing user simulations: The cramer-von mises di- vergence. In IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "(a) Kendall's \u03c4 does not correlate well with human judgments for dialogue coherence. (b) Fraction of bigram & trigram counts correlate well with human judgments for dialogue coherence."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Experiment 1 -single coherence rating per permutation of Kendall's \u03c4 as an evaluation understudy."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "(a) Kendall's \u03c4 does not correlate well with human judgments for dialogue coherence. (b) Fraction of bigram & trigram counts correlate well with human judgments for dialogue coherence."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Experiment 2 -turn-by-turn coherence rating"
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "(a) Histogram of Kendall's \u03c4 for reordered sequences (b) Histogram of fraction of bigrams & trigrams values for reordered sequences"
},
"FIGREF5": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Experiment 3 -upper baseline for information ordering task (human performance)"
},
"TABREF0": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Examples of observed sequences and their respective b 2 , b 3 & \u03c4 values. Here the reference sequence is [0,1,2,3,4,5,6,7,8,9]. respective b 2 , b 3 and \u03c4 values. Notice how \u03c4 allows for long-distance relationships whereas b 2 , b 3 are sensitive to local features only. 2"
},
"TABREF1": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>User</td><td>ok</td></tr><tr><td>Agent</td><td>yeah that's United flight four seventy</td></tr><tr><td>User</td><td>that's the one</td></tr><tr><td>Doctor</td><td>hello i'm doctor perez</td></tr><tr><td/><td>how can i help you</td></tr><tr><td>Captain</td><td>uh well i'm with uh the local</td></tr><tr><td/><td>i'm i</td></tr></table>",
"text": "AAA at American Express may I help you? User yeah this is BBB BBB I need to make some travel arrangements Agent ok and what do you need to do? User ok on June sixth from San Jose to Denver, United Agent leaving at what time? User I believe there's one leaving at eleven o'clock in the morning Agent leaves at eleven a.m. and arrives Denver at two twenty p.m. out of San Jose 'm the commander of the local company and uh i'd like to talk to you about some options you have for relocating your clinic Doctor uh we're not uh planning to relocate the clinic captain what uh what is this about Captain"
}
}
}
}