{
"paper_id": "2005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:49:12.921550Z"
},
"title": "A corpus for studying addressing behavior in multi-party dialogues",
"authors": [
{
"first": "Natasa",
"middle": [],
"last": "Jovanovic",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Twente",
"location": {
"postBox": "PO Box 217",
"settlement": "Enschede",
"country": "the Netherlands"
}
},
"email": "natasa@ewi.utwente.nl"
},
{
"first": "Rieks",
"middle": [],
"last": "Op Den Akker",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Twente",
"location": {
"postBox": "PO Box 217",
"settlement": "Enschede",
"country": "the Netherlands"
}
},
"email": ""
},
{
"first": "Anton",
"middle": [],
"last": "Nijholt",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Twente",
"location": {
"postBox": "PO Box 217",
"settlement": "Enschede",
"country": "the Netherlands"
}
},
"email": "a.nijholt@ewi.utwente.nl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes a multi-modal corpus of hand-annotated meeting dialogues that was designed for studying addressing behavior in face-to-face conversations. The corpus contains annotated dialogue acts, addressees, adjacency pairs and gaze direction. First, we describe the corpus design where we present the annotation schema, annotation tools and annotation process itself. Then, we analyze the reproducibility and stability of the annotation schema.",
"pdf_parse": {
"paper_id": "2005",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes a multi-modal corpus of hand-annotated meeting dialogues that was designed for studying addressing behavior in face-to-face conversations. The corpus contains annotated dialogue acts, addressees, adjacency pairs and gaze direction. First, we describe the corpus design where we present the annotation schema, annotation tools and annotation process itself. Then, we analyze the reproducibility and stability of the annotation schema.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Current tendencies in modeling humancomputer as well as human-human interactions are moving from a two-party model to a multiparty model. One of the issues that becomes salient in interactions involving more than two parties is addressing (Goffman, 1981; Clark and Carlson, 1992; Traum, 2003) .",
"cite_spans": [
{
"start": 239,
"end": 254,
"text": "(Goffman, 1981;",
"ref_id": "BIBREF8"
},
{
"start": 255,
"end": 279,
"text": "Clark and Carlson, 1992;",
"ref_id": "BIBREF4"
},
{
"start": 280,
"end": 292,
"text": "Traum, 2003)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Addressing is carried out through various communication channels, e.g. speech, gesture or gaze. Existing corpora, such as ICSI and ISL meeting corpora-currently widely used to study linguistic phenomena in natural meetings (Janin et al., 2004; Burger and Sloane, 2004) , are limited to audio data only. To explore interaction patterns in addressing behavior and to develop statistical models for automatic addressee prediction, we need a corpus that contains video data as well.",
"cite_spans": [
{
"start": 223,
"end": 243,
"text": "(Janin et al., 2004;",
"ref_id": "BIBREF10"
},
{
"start": 244,
"end": 268,
"text": "Burger and Sloane, 2004)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we describe a new multi-modal corpus of hand-annotated meeting dialogues, designed for studying addressing behavior in small face-to-face conversations. The meetings were recorded in the IDIAP meeting room in the research program of the European M4 1 and AMI 2 projects. The recordings are available through the media file server 3 . Currently, the corpus contains hand-annotated dialogue acts, adjacency pairs, addressees and gaze directions of meeting participants. This paper reports the reliability of the overall annotation schema as well as a detailed analysis of detected sources of unreliability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The overall annotation schema is presented in Section 3. Annotation tools used in the creation of the corpus and the annotation process itself are described in Section 4 and Section 5 respectively. In Section 6 we give an overview of the applied reliability tests and measures as well as methods for detecting sources of unreliability. The results of these tests and methods applied on dialogue act annotation, addressee annotation, adjacency pairs annotation and gaze annotation are given in Section 7 and Section 8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The corpus consists of 12 meetings recorded at the IDIAP smart meeting room (Moore, 2002) . The room is equipped with fully synchronized multi-channel audio and video recording devices. Of the 12 meetings, 10 were recorded within the scope of the M4 project. The meetings are scripted as to which actions the participants will undertake, but not as to what they will say. Although the meetings are inappropriate for research into richer meeting analysis due to their constrained nature, they allow us to examine observable patterns of addressing behavior in small group discussions. More natural, scenariobased, meetings are being recording at IDIAP in the scope of the AMI project. One of the new pilot meetings is included in our corpus. The meeting involves a group focused on the design of a TV remote control. The last meeting in our corpus is one of a series of meetings recorded for the exploration of argumentative structures in meeting dialogues.",
"cite_spans": [
{
"start": 76,
"end": 89,
"text": "(Moore, 2002)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Meeting data",
"sec_num": "2"
},
{
"text": "There are 23 participants in the corpus. Each meeting consists of 4 participants. The total amount of recorded data is approximately 75 minutes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meeting data",
"sec_num": "2"
},
{
"text": "In two-person dialogues, it is usually obvious to the non-speaking participant who is the one being addressed by the current speaker. In a multiparty case, the speaker has not only the responsibility to make his speech understandable for the listeners, but also to make clear to whom he is addressing his speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation scheme",
"sec_num": "3"
},
{
"text": "Analysis of the mechanisms that people use in identifying their addressees leads to a model of a conversation that describes the features that play a role in these mechanisms. Our annotation schema is based on the model presented in (Jovanovic and op den Akker, 2004) . The features described in the model are of three types: verbal, nonverbal and contextual. For example, utterances that contain the proper name of a conversational participant may be addressed to that participant. Also speaker gaze behavior may be a feature that gives a hint to the in-tended addressee. The history of the conversation is important as well, since most of the utterances that are related to the previous discourse are addressed to one of the recent speakers.",
"cite_spans": [
{
"start": 233,
"end": 267,
"text": "(Jovanovic and op den Akker, 2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation scheme",
"sec_num": "3"
},
{
"text": "Although the model contains a rich set of features that are relevant for observers to identify the participants the speaker is talking to, currently, the scheme contains only annotations of dialogue acts, adjacency pairs, addressees and gaze direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation scheme",
"sec_num": "3"
},
{
"text": "Annotation of dialogue acts involves two types of activities: marking of dialogue acts segment boundaries and marking of dialogue acts themselves.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue acts",
"sec_num": "3.1"
},
{
"text": "Utterances within speech transcripts, also known as prosodic utterances, are segmented in advance using prosody, pause and syntactical information. In our schema, a dialogue act segment may contain a part of a prosodic utterance, a whole prosodic utterance, or several contiguous prosodic utterances of the same speaker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue acts",
"sec_num": "3.1"
},
{
"text": "Our dialogue act tag set is based on the MRDA (Meeting Recorder Dialogue Act) set . Each functional utterance in MRDA is marked with a label, made up of one or more tags from the set. The analysis of the MRDA tag set presented in (Clark and Popescu-Belis, 2004) shows that the number of possible labels reaches several millions. For that reason, the usage of the complete set may lead to a low quality of manual annotations.",
"cite_spans": [
{
"start": 230,
"end": 261,
"text": "(Clark and Popescu-Belis, 2004)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue acts",
"sec_num": "3.1"
},
{
"text": "In our dialogue act annotation scheme each utterance is marked as U nlabeled or with exactly one tag from the tag set that represents the most specific utterance function. For addressee identification, it is less important whether an utterance is a suggestion in the form of a question or in the form of a statement. More important is that the speaker suggests to the addressee to perform an action, informing all other participants about that suggestion. Our dialogue act tag set as well as the mapping between our tag set and the MRDA set is shown in Adjacency pairs (APs) are paired utterances such as question-answer or statementagreement. The paired utterances are produced by different speakers. Utterances in an adjacency pair are ordered with the first part (Apart, the initiative) and the second part (B-part, the response). In multi-party conversations, adjacency pairs do not impose a strict adjacency requirement, since a speaker has more opportunities to insert utterances between two elements of an adjacency pair. For example, a suggestion can be followed by agreements or disagreements from multiple speakers. In our scheme, adjacency pairs are labelled at a separate level from dialogue acts. Labelling of adjacency pairs consists of marking dialogue acts that occur as their A-part and B-part. If a dialogue act is an A-part with several B-parts, for each of these B-parts, a new adjacency pair is created.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue acts",
"sec_num": "3.1"
},
{
"text": "In a group discussion, many of the speaker's utterances are addressed to the group as a whole. However, the speaker may show by verbal or non-verbal behavior that he intends to affect one selected participant or a subgroup of participants in particular, that he expects that participant or that subgroup to react on what he says. In this case, the selected participant or the subgroup is the addressee of the dialogue act performed by the speaker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Addressees",
"sec_num": "3.3"
},
{
"text": "Given that each meeting in the corpus consists of four participants, the addressee tag set contains the following values:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Addressees",
"sec_num": "3.3"
},
{
"text": "\u2022 a single participant: P x \u2022 a subgroup of participants: P x , P y \u2022 the whole audience: P x , P y , P z \u2022 Unknown, where x, y, z \u2208 {0, 1, 2, 3}; P x denotes the speaker at channel x. The Unknown tag is used when the annotator cannot determine to whom the dialogue act is addressed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Addressees",
"sec_num": "3.3"
},
{
"text": "Annotation of gaze direction involves two types of activities: labeling the changes in the gazed targets and labeling the gazed targets themselves.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaze direction",
"sec_num": "3.4"
},
{
"text": "For addressee identification, the only targets of interest are meeting participants. Therefore, the tag set contains tags that are linked to each participant (P x ) where x \u2208 {0, 1, 2, 3} and the N oT arget tag that is used when the speaker does not look at any of the participants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaze direction",
"sec_num": "3.4"
},
{
"text": "The corpus was created using two annotation tools developed at the University of Twente: the DACoder (Dialogue Act Coder) and the CSL (Continuous Signal Labeling) tools (Reidsma et al., 2005) . The DACoder supports annotation of dialogue acts, addressees and any kind of relations between dialogue acts. The CSL tool supports labeling of time-aligned annotation layers directly related to the signal files. Any annotation layer that consists of simple labeling of non-overlapping segments of the time line can be coded using this tool (e.g. gaze directions, postures and emotions).",
"cite_spans": [
{
"start": 169,
"end": 191,
"text": "(Reidsma et al., 2005)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation tools",
"sec_num": "4"
},
{
"text": "The tools were built using NXT (NITE XML Toolkit) (Carletta et al., 2003) . NXT uses a stand-off XML data storage format which consists of several inter-related xml-files. The structure and location of the files are represented in a \"metadata\" file. The NXT stand-off XML format enables the capture and efficient manipulation of complex hierarchical structures across different modalities.",
"cite_spans": [
{
"start": 50,
"end": 73,
"text": "(Carletta et al., 2003)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation tools",
"sec_num": "4"
},
{
"text": "Six trained annotators were involved in the corpus creation. They were divided into two groups: the DA (Dialogue Act) group and the VL (Video Labeling) group. The DA group, involving 4 annotators, annotated dialogue acts, addressees and adjacency pairs. The VL group, involving 2 annotators, annotated gaze direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation procedure",
"sec_num": "5"
},
{
"text": "The corpus was divided into two sets of 6 meetings. The DA group was divided into 2 subgroups of 2 annotators: the B&E group and the M&R group. Each of these subgroups annotated exactly one set of meeting data. Each annotator in the VL group annotated one set of meeting data. Additionally, two meetings were annotated by both annotators in the VL group in order to test reliability of gaze annotation. In summary, each meeting in the corpus was annotated with dialogue acts, addressees and adjacency pairs by exactly two annotators, and with participants' gaze directions by at most two annotators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation procedure",
"sec_num": "5"
},
{
"text": "The annotators performed their tasks following different procedures. Two annotators from the DA group annotated dialogue acts, addressee and adjacency pairs separately, whereas the others annotated dialogue acts and addressees in one pass and adjacency pairs in the other pass. One annotator from the VL group annotated gaze direction in real-time, while the other annotator annotated gaze direction offline. For the DA group, labeling time of 5 minutes of meeting data averaged about two and a half hours. Real-time labeling of gaze direction for four meeting participants averaged about 20 minutes for 5 minutes of meeting data, whereas off-line annotation averaged about 5 hours for the same amount of data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation procedure",
"sec_num": "5"
},
{
"text": "In order to obtain valid research results, data on which they are based must be reliable. We have performed two reliability tests proposed by Krippendorff in (Krippendorff, 1980) : stability (intra-annotator reliability) and reproducibility (inter-annotator reliability). Stability is the degree to which an annotator's judgments remain unchanged over time. It is measured by giving the same annotator a set of data to annotate twice, at different times. Reproducibility is the degree to which different annotators can produce the same annotation. It is measured by giving several annotators the same data to annotate independently, following the same coding instructions.",
"cite_spans": [
{
"start": 158,
"end": 178,
"text": "(Krippendorff, 1980)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability",
"sec_num": "6"
},
{
"text": "Reliability is a function of agreement achieved among annotators. In the dialogue and discourse processing community, the Kappa agreement coefficient (\u03ba) has been adopted as a standard (Cohen, 1960; Carletta, 1996) . In recent years, there have been some discussions about the usage of Kappa as an appropriate reliability metric. The main problem when employing Kappa is that it actually depends on marginal distributions. As shown in (Krippendorff, 2004) , Kappa expected disagreement is a function of the individual coder preferences for the categories, and not of the proportions of categories in the data.",
"cite_spans": [
{
"start": 185,
"end": 198,
"text": "(Cohen, 1960;",
"ref_id": "BIBREF6"
},
{
"start": 199,
"end": 214,
"text": "Carletta, 1996)",
"ref_id": "BIBREF3"
},
{
"start": 435,
"end": 455,
"text": "(Krippendorff, 2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kappa vs. Alpha",
"sec_num": "6.1"
},
{
"text": "An agreement coefficient that does not have this inadequacy is Krippendorff's Alpha (\u03b1) (Krippendorff, 1980) . Since Alpha measures properties of the data and not coders' preferences, it is easily interpretable compared to Kappa. When a sample size is large and coders agree on their use of categories, \u03ba = \u03b1 (Krippendorff, 2004).",
"cite_spans": [
{
"start": 88,
"end": 108,
"text": "(Krippendorff, 1980)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kappa vs. Alpha",
"sec_num": "6.1"
},
{
"text": "To estimate reliability of dialogue act, addressee and gaze annotation, we applied both agreement coefficients. The obtained Kappa and Alpha values were identical. Therefore, in the following sections we report only Kappa values. In contrast to dialogue act and addressee annotation, adjacency pairs annotation cannot be considered as a simple labeling of annotation units with categories. Therefore, we developed our own approach that represents annotated APs in a form of categorical labeling and measures agreement on APs annotation using Alpha.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kappa vs. Alpha",
"sec_num": "6.1"
},
{
"text": "For the evaluation of Alpha and Kappa values, we used the Krippendorff's scale that has been adopted as standard in the discourse and dialogue processing community (Krippendorff, 1980) . According to that scale, any variable with an agreement coefficient below .67 is disregarded as unreliable, between .67 and .8 allows drawing tentative conclusions and above .80 allows drawing definite conclusions.",
"cite_spans": [
{
"start": 164,
"end": 184,
"text": "(Krippendorff, 1980)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kappa vs. Alpha",
"sec_num": "6.1"
},
{
"text": "Detecting causes of disagreement may be of great use to obtain reliable data or to improve data reliability. A source of unreliability can be a coding unit, a category, a subset of categories or an annotator (Krippendorff, 1980) . Even if a category is well defined annotators may still have different interpretations of the category. Furthermore, annotators may show a correlated disagreement. For example, annotator A 1 uses category C 1 whenever annotator A 2 use category C 2 .",
"cite_spans": [
{
"start": 208,
"end": 228,
"text": "(Krippendorff, 1980)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Detecting sources of unreliability",
"sec_num": "6.2"
},
{
"text": "To identify which categories are sources of unreliability we measured single-category reliability (Krippendorff, 1980) . Single-category reliability assesses the extent to which one category is confused with all other categories in a set. It is estimated by grouping the remaining categories into one category and measuring the agreement among annotators regarding the assignment of units to these two categories. A low agreement can be the result of an ambiguous definition of the category or of the coders inability to interpret the meaning of the category.",
"cite_spans": [
{
"start": 98,
"end": 118,
"text": "(Krippendorff, 1980)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Detecting sources of unreliability",
"sec_num": "6.2"
},
{
"text": "In this section we present inter-annotator reliability of the annotation schema applied on the M4 meeting data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-annotator reliability",
"sec_num": "7"
},
{
"text": "We first measured agreements among annotators on how they segmented dialogues into dialogue act segments. Then, we tested reliability of dialogue act classification on those segments for which annotators agreed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability of dialogue acts annotation",
"sec_num": "7.1"
},
{
"text": "In the discourse and dialogue community, several approaches have been proposed for assessing segmentation reliability using various metrics: percent agreement (Carletta et al., 1997; , precision and recall (Passonneau and Litman, 1997), and \u03ba (Carletta et al., 1997; Hirschberg and Nakatani, 1996) .",
"cite_spans": [
{
"start": 159,
"end": 182,
"text": "(Carletta et al., 1997;",
"ref_id": "BIBREF1"
},
{
"start": 243,
"end": 266,
"text": "(Carletta et al., 1997;",
"ref_id": "BIBREF1"
},
{
"start": 267,
"end": 297,
"text": "Hirschberg and Nakatani, 1996)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation reliability",
"sec_num": "7.1.1"
},
{
"text": "Since there is no standardized technique to estimate segmentation agreement, we developed our own approach based on percent agreement. We defined four types of segmentation agreement:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation reliability",
"sec_num": "7.1.1"
},
{
"text": "\u2022 Perfect agreement (PA)-Annotators completely agree on the segment boundaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation reliability",
"sec_num": "7.1.1"
},
{
"text": "\u2022 Contiguous segments of the same type (ST)-A segment of one annotator is divided into several segments of the same type by the other annotator. Segments are of the same type if they are marked with the same dialogue act tag and the same addressee tag. An additional constraint is that segments are not labeled as parts of APs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation reliability",
"sec_num": "7.1.1"
},
{
"text": "\u2022 Unlabeled-DA (UDA)-A segment of one annotator is divided into two segments by the other annotator where one of those segments is marked as U nlabled and the other one with a dialogue act tag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation reliability",
"sec_num": "7.1.1"
},
{
"text": "\u2022 Conjunction-Floor(CF)-Two adjacent segments differ only in a conjunction or a floor mechanism at the end of the first segment. The following example shows the segmentation agreement of this type: 1. I can do that-but I need your help 2. I can do that but-I need your help",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation reliability",
"sec_num": "7.1.1"
},
{
"text": "The approach takes one annotator's segmentation as a reference (R) and compares it with the other annotator's segmentation (C) segment by segment. As a result, it gives a new segmentation (C ) that represents the modification of (C) to match the reference segmentation (R) according to identified types of agreement. In addition to measuring segmentation agreement, the modified segmentation (C ) is used for assessing reliability of dialogue act classification, addressee classification and adjacency pairs annotation. Most of the segmentation disagreements are of the following three types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation reliability",
"sec_num": "7.1.1"
},
{
"text": "First, while one annotator labeled a segment with the Acknowledgment tag, the other one included the segment in the dialogue act that follows. Second, while one annotator marked a segment with one of the response tags, the other annotator split the segment into a response and a statement that has a supportive function such as explanation, elaboration or clarification. Third, while one annotator split a segment into two or more segments labeled with the same dialogue act tag but different addressee tags, the other annotator marked it as one segment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation reliability",
"sec_num": "7.1.1"
},
{
"text": "Reliability of dialogue act classification is measured over those dialogue act segments for which both annotators agreed on their boundaries. Since the number of agreed segments for each R-C pair is different, we calculated reliability of dialogue act classification for each pair. The results are shown in Table 3 Table 3 : Inter-annotator agreement on DA classification",
"cite_spans": [],
"ref_spans": [
{
"start": 307,
"end": 314,
"text": "Table 3",
"ref_id": null
},
{
"start": 315,
"end": 322,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Reliability of dialogue act classification",
"sec_num": "7.1.2"
},
{
"text": "We applied a single-category reliability test for each dialogue act tag to assess the extent to which one dialogue tag was confused with the other tags in the set. Annotators in the B&E group used different ranges of categories. For that reason, Kappa values of the categories that are used by only one annotator are zero. Negative Kappa values for Understanding check and Follow-me categories indicate that annotator agreement is below the chance: in all cases where one annotator identifies one of these two categories, the other annotator does not. The results show an unacceptably low agreement on Assessment/ Appreciation and Understanding check categories in both groups. The Assessment/Appreciation category was merely confused with Positive response and Statement categories. The Understanding check category was mostly confused with Information request and Statement categories. Annotators in the M&R group reached a lower agreement on the responses tags than annotators in the B&E group. The responses tags were mostly confused with the Statement tag. Additionally, annotators in the M&R group had a little more difficulty distinguishing Positive response from Assessment/Appreciation and Acknowledgement. The low Kappa value for the Influencing-listener-actions category in the B&R group is a result of the confusion with the Statement category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability of dialogue act classification",
"sec_num": "7.1.2"
},
{
"text": "As for dialogue act classification, reliability of addressee annotation is measured over those dialogue act segments for which both annotators agreed on their boundaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability of addressee annotation",
"sec_num": "7.2"
},
{
"text": "The Kappa values for addressee annotation are shown in Table 5 : Inter-annotator agreement on addressee annotation notators in the B&E group reached good agreement on addressee annotation, whereas annotators in the M&R group reached an acceptable level of agreement that allows drawing tentative conclusions from data. We measured single-category reliability using the Kappa test for one R-C pair in each group. Addressee values that consist of three participants such as p 0 , p 1 , p 3 or p 1 , p 2 , p 3 were grouped into one category that represents the whole audience (ALLP ). Annotators in the B&E group reached a good agreement (\u03ba \u2265 0.80; N = 369) on all categories representing a single participant. Agreement on ALLP was \u03ba = 0.77. Annotators in the M&R group reached a lower agreement on each category than annotators in the B&E group. They had a little more difficulty distinguishing ALLP (\u03ba = 0.63; N = 366) as well as p 3 (\u03ba = 0.59; N = 366) from a remaining set of categories. For all other categories representing a single participant Kappa was 0.71 \u2264 \u03ba < 0.80. There were only a few instances in the data labeled with categories that represent a subgroup addressing. In both DA groups, annotators failed to agree on those categories. Annotators had problems distinguishing subgroup addressing from addressing the group as a whole.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Reliability of addressee annotation",
"sec_num": "7.2"
},
{
"text": "According to our schema for annotation of adjacency pairs, each dialogue act can be marked as a B-part of at most one and as an A-part of an arbitrary number of adjacency pairs. The sets of adjacency pairs produced by two annotators may differ in several ways. First, the annotators may disagree on dialogue acts that are marked as A-parts of adjacency pairs. Secondly, they may assign a different number of B-parts as well as different B-parts themselves to the same Apart.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability of adjacency pairs annotation",
"sec_num": "7.3"
},
{
"text": "Since there seems to be no standard associated metric for agreement on APs annotation in the literature, we developed a new approach that resembles a method for measuring reliability of co-reference annotation proposed in (Passonneau, 2004) . The key of the approach is to represent annotated data as a form of categorical labeling in order to apply standard reliability metrics.",
"cite_spans": [
{
"start": 222,
"end": 240,
"text": "(Passonneau, 2004)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability of adjacency pairs annotation",
"sec_num": "7.3"
},
{
"text": "Adjacency pairs annotation can be seen as assigning to each dialogue act a context that represents the relations that the dialogue act has with surrounding dialogue acts. To encode the contexts of dialogue acts, we define a set of classes that contain related dialogue acts. For each Apart, all its B-parts are collected in one class. Therefore, a class is characterized with its Apart and a set of B-parts (b-set): a, bset(a) where bset(a) = {b|(a, b) \u2208 AP }. A dialogue act can belong to at most two classes: a class containing the dialogue act as an A-part (Aclass) and a class containing the dialogue act as a B-part (B-class). Thus, the complete context of a dialogue act is encoded with an AP label that is compounded of its A-class and B-class",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability of adjacency pairs annotation",
"sec_num": "7.3"
},
{
"text": "(L = A \u2212 class|B \u2212 class).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability of adjacency pairs annotation",
"sec_num": "7.3"
},
{
"text": "Given a list of dialogue acts DA = [da 1 , . . . , da n ], a class can be represented in two different ways: with fixed or relative position of the dialogue acts. The former encodes each dialog act in the class with the index of the dialog acts in the list. The latter encodes the dialogue Figure 1 : A graphical representation of the context of dialogue act 45. The label that encodes this context is < 0, {2} > | < \u22122, {1, 2} > acts in the class with relative positions to the dialogue act representing the A-part of the class. In this paper, we use the approach with relative positions because it significantly decreases the number of possible classes. In our encoding, each class of the labeled dialogue act da i (Aclass and B-class) has the form \u2212n, O , where n is an offset of the labeled DA da i from the A-part of the class and O is a set of offsets of the dialogue acts in the b-set from the A-part of the class. Note that for the A-class, n is always 0 since the labeled dialogue act is the A-part of the class. For the B-class, n is always a positive because the labeled dialogue act is in the b-set and the A-part always precedes dialogue acts in the b-set. Thus, \u2212n refers to the dialogue act that is the A-part of the class. In the case where the labeled dialogue act is not an A-part or a Bpart of an adjacency pair, one or both of the A-class and the B-class can be empty ( 0, {} ).",
"cite_spans": [],
"ref_spans": [
{
"start": 290,
"end": 298,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Reliability of adjacency pairs annotation",
"sec_num": "7.3"
},
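The relative-offset encoding described above can be sketched in Python. This is a minimal illustration, not the authors' tooling: the function name `ap_label` and the representation of adjacency pairs as a list of `(a, b)` index pairs are assumptions made here.

```python
def ap_label(i, pairs):
    """Encode the AP context of dialogue act i as (A-class, B-class).

    `pairs` is a list of (a, b) adjacency pairs given as indices into the
    dialogue act list. Each class has the form (-n, offsets): n is the
    offset of act i from the A-part of the class, and `offsets` holds the
    b-set members' offsets from that A-part. An empty class is
    (0, frozenset()).
    """
    empty = (0, frozenset())
    # A-class: act i is the A-part (n = 0); collect its B-parts' offsets.
    b_set = frozenset(b - i for (a, b) in pairs if a == i)
    a_class = (0, b_set) if b_set else empty
    # B-class: per the schema, act i is a B-part of at most one pair.
    a_parts = [a for (a, b) in pairs if b == i]
    if a_parts:
        a = a_parts[0]
        b_class = (-(i - a), frozenset(b - a for (x, b) in pairs if x == a))
    else:
        b_class = empty
    return a_class, b_class
```

On the paper's example (pairs 43-44, 43-45, 45-47), `ap_label(45, ...)` yields the label ⟨0, {2}⟩ | ⟨−2, {1, 2}⟩ shown in Figure 1.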
{
"text": "The proposed encoding makes patterns of disagreements between annotators directly visible. For example, (1) if one annotator marks the dialogue act 43 as an A-part of two adjacency pairs with B-parts 44 and 45 respectively, and the dialogue act 45 as an A-part of an adjacency pair with the B-part 47, and (2) the other annotator marks the dialogue act 44 as an A-part of an adjacency pair with the B-part 45 and the dialogue act 45 as an A-part of two adjacency pairs with B-parts 46 and 47 respectively, then the dialogue acts will be labeled as presented in Table 6 . Figure 1 illustrates the relation between the context of the dialogue act 45 and the AP label that encodes this context.",
"cite_spans": [],
"ref_spans": [
{
"start": 561,
"end": 569,
"text": "Table 6",
"ref_id": "TABREF9"
},
{
"start": 572,
"end": 580,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Reliability of adjacency pairs annotation",
"sec_num": "7.3"
},
{
"text": "Agreement on APs annotation is measured over those dialogue acts for which annotators agreed on their boundaries. For computing agreement between annotators we use Krippendorff's \u03b1 measure. This measure allows the us- age of an appropriate user defined distance metric on the AP labels. For nominal categories, the usual \u03b1 distance metric (\u03b4) is a binary function: \u03b4 = 1 if categories are equal, otherwise \u03b4 = 0. We need to use a more refined distance metric, one that is sensitive for partial agreement of annotators on the context they assign to a dialogue act. The agreement on the contexts is translated to agreements on the corresponding A-classes and B-classes. When annotators disagree, their disagreement should be penalized based on the difference between classes. The intuition is that similarity of two classes with the same A-part depends on the number of elements in the intersection as well as on the number of elements in the union of their b-sets. Therefore, we define a distance metric \u03b4 that uses the following similarity measure on sets 4 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability of adjacency pairs annotation",
"sec_num": "7.3"
},
{
"text": "sim(c 1 , c 2 ) = 2|c 1 \u2229 c 2 | |c 1 | + |c 2 |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability of adjacency pairs annotation",
"sec_num": "7.3"
},
{
"text": "The distance metric (\u03b4 ) between the corresponding A-classes (or B-classes) of two APs label is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability of adjacency pairs annotation",
"sec_num": "7.3"
},
{
"text": "\u03b4 ( \u2212n 1 , O 1 , \u2212n 2 , O 2 ) = 1, n 1 = n 2 \u03b4 ( \u2212n, O 1 , \u2212n, O 2 ) = 1 \u2212 sim(O 1 , O 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability of adjacency pairs annotation",
"sec_num": "7.3"
},
{
"text": "The distance between two AP labels, L 2 = A 1 |B 1 and L 2 = A 2 |B 2 , is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability of adjacency pairs annotation",
"sec_num": "7.3"
},
{
"text": "\u03b4 \u03bb (L 1 , L 2 ) = \u03bb \u2022 \u03b4 (A 1 , A 2 ) + (1 \u2212 \u03bb)\u03b4 (B 1 , B 2 )),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability of adjacency pairs annotation",
"sec_num": "7.3"
},
{
"text": "where \u03bb \u2208 [0, 1] is a factor that determines the relative contribution of the distance between the corresponding classes the labels consist of. Applying \u03b4 0.5 to the data of exactly one R-C pair in each group gave the following results: M-R: \u03b1 = 0.71 (N = 260), B-E: \u03b1 = 0.83 (N = 322). The most frequently occurring disagreement is when one annotator marks a dialogue act with the empty label, the other annotator with a non-empty one. If annotators agreed that a dialogue act is an A-part of an adjacency pair, they mostly agreed, either partially or fully, on the B-set of this dialogue act. In most cases, the confusion between (1) an AP label with both Aclass and B-class non-empty and (2) an AP label with one of the classes empty is related to the disagreement on the DA tags assigned by annotators. This concerns the confusion between (i) Statement and Assessment/Appreciation tags (ii) Statement and Response tags (iii) Understanding check and Information Request tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability of adjacency pairs annotation",
"sec_num": "7.3"
},
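The Dice similarity and the two distance functions above can be sketched as follows. The function names are illustrative; classes use the ⟨−n, O⟩ representation from the encoding section, with O as a Python set.

```python
def dice(c1, c2):
    """Dice coefficient between two sets; taken as 1.0 for two empty sets."""
    if not c1 and not c2:
        return 1.0
    return 2 * len(c1 & c2) / (len(c1) + len(c2))

def class_dist(k1, k2):
    """delta': 1 if the A-parts differ, else 1 - sim of the b-sets."""
    (n1, o1), (n2, o2) = k1, k2
    if n1 != n2:
        return 1.0
    return 1.0 - dice(o1, o2)

def label_dist(l1, l2, lam=0.5):
    """delta_lambda over AP labels L = (A-class, B-class)."""
    (a1, b1), (a2, b2) = l1, l2
    return lam * class_dist(a1, a2) + (1 - lam) * class_dist(b1, b2)
```

For instance, two labels that share the A-class but whose B-classes point at different A-parts are at distance 0.5 under λ = 0.5, since only the B-class term contributes.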
{
"text": "To evaluate reliability of gaze annotation, we first measured annotators agreement on marking the changes in gazed targets. Then, we measured agreement on labeling of time segments with gazed targets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability of gaze annotation",
"sec_num": "7.4"
},
{
"text": "Marking the changes in gazed targets results in a segmentation of the time-line into nonoverlapping, continuous segments that cover the whole input. In other words, the start time of a segment coincidences with the end time of the segment that precedes. A segment boundary indicates a change in gazed target.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability of gaze annotation",
"sec_num": "7.4"
},
{
"text": "The segmentation agreement is measured over all locations where any of the annotators marked a segment boundary. The number of locations where both annotators agree to some tolerance level is averaged over the total number of locations marked as a boundary. A tolerance level is defined to adjust the difference in whether a change is marked at the moment when the speaker starts changing the gaze direction or at the moment when the new target has been reached. It also adjusts the difference in the reaction of the annotators to the observed changes. Empirical analysis of the data shows that two points of the time-line can be considered equal with a tolerance level of 0.85 s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability of gaze annotation",
"sec_num": "7.4"
},
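The boundary-matching step can be sketched as follows. The greedy one-to-one matching strategy is an assumption made here, since the text does not specify how boundaries are paired within the tolerance window.

```python
def boundary_agreement(times1, times2, tol=0.85):
    """Fraction of all marked boundary locations on which both annotators
    agree, counting two boundaries as equal if they lie within `tol`
    seconds of each other (greedy one-to-one matching)."""
    unmatched = sorted(times2)
    agreed = 0
    for t in sorted(times1):
        for u in unmatched:
            if abs(t - u) <= tol:
                unmatched.remove(u)  # each boundary matches at most once
                agreed += 1
                break
    # Union of marked locations: agreed pairs count as one location.
    total = len(times1) + len(times2) - agreed
    return agreed / total if total else 1.0
```

With boundaries at 1.0 s and 5.0 s from one coder and 1.5 s and 9.0 s from the other, only the first pair matches, giving an agreement of 1/3 over the three distinct locations.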
{
"text": "The agreement on locations where any coder marked a segment boundary is 80.40% (N = 939). Annotators mostly disagreed on marking the cases when a participant briefly changes the gaze direction and then looks again at the previous target. Annotators reached very good agreement on gaze labelling (\u03ba = 0.95) measured over those segments where boundaries were agreed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability of gaze annotation",
"sec_num": "7.4"
},
{
"text": "8 Intra-annotator reliability Intra-annotator reliability measures whether the results of a single annotator remain consistent over time. We assessed intra-annotator reliability of dialogue act and addressee annotation. One meeting from each data subset has been annotated twice by each annotator in the DA group over a period of three months. The results presented in Table 7 show that agreement on dialogue act annotation was good for each annotator indicating intra-annotator consistency in applying the dialogue act schema. Furthermore, the results show that annotator R had a little more difficulty with addressee annotation than other annotators that reached good agreement. ",
"cite_spans": [],
"ref_spans": [
{
"start": 369,
"end": 376,
"text": "Table 7",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Reliability of gaze annotation",
"sec_num": "7.4"
},
{
"text": "We presented a multi-modal corpus of handannotated meeting dialogues that is designed for studying addressing behavior in face-to-face conversations involving four participants. The corpus contains dialogue acts, addressees, adjacency pairs and gaze directions of meeting participants. Annotators involved in the corpus design were able to reproduce the gaze labeling reliably. The annotations of dialogue acts and addresses were somewhat less reliable but still acceptable. Since there are only few instances of subgroup addressing in the data and annotators failed to agree on them, the corpus cannot be used for exploring the patterns in addressing behavior when a subgroup is addressed. In this paper, we have also presented a new approach for measuring reliability of adjacency pairs annotation. The key of the approach is to represent AP annotated data as a form of categorical labelling in order to apply standard reliability metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "Apart from addressing, the corpus can be exploited for studying other interesting aspects of conversations involving more than two participants. The NXT stand-off XML format enables an easy extension of the corpus with new annotation layers of different modalities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "http://www.m4project.org 2 http://www.amiproject.org 3 http://mmm.idiap.ch",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Known as Dice coefficient, see(Manning and Schutze, 1999)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was partly supported by the European Union 6th FWP IST Integrated Project AMI (Augmented Multi-party Interaction, FP6-506811, publication). We would like to thank Dennis Reidsma, Dennis Hofs, Lynn Packwood and annotators that were involved in the corpus development. We are grateful to Klaus Krippendorff for useful discussions about reliability metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The isl meeting corpus: Categorical features of communicative group interactions",
"authors": [
{
"first": "S",
"middle": [],
"last": "Burger",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Sloane",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of ICASSP 2004 Meeting Recognition Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Burger and Z. Sloane. 2004. The isl meeting cor- pus: Categorical features of communicative group interactions. In Proc. of ICASSP 2004 Meeting Recognition Workshop.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The reliability of a dialogue structure coding scheme",
"authors": [
{
"first": "J",
"middle": [],
"last": "Carletta",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Isard",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Isard",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Kowtko",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Doherty-Sneddon",
"suffix": ""
},
{
"first": "A",
"middle": [
"H"
],
"last": "Anderson",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "1",
"pages": "13--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Carletta, A. Isard, S. Isard, J.C. Kowtko, G. Doherty-Sneddon, and A.H. Anderson. 1997. The reliability of a dialogue structure coding scheme. Computational Linguistics, 23(1):13-31.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The NITE XML toolkit: flexible annotation for multi-modal language data",
"authors": [
{
"first": "J",
"middle": [],
"last": "Carletta",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Evert",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Heid",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kilgour",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Robertson",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Voormann",
"suffix": ""
}
],
"year": 2003,
"venue": "Behavior Research Methods, Instruments, and Computers",
"volume": "35",
"issue": "3",
"pages": "353--363",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Carletta, S. Evert, U. Heid, J. Kilgour, J. Robert- son, and H. Voormann. 2003. The NITE XML toolkit: flexible annotation for multi-modal lan- guage data. Behavior Research Methods, Instru- ments, and Computers, 35(3):353-363.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Assessing agreement on classification tasks: the kappa statistic",
"authors": [
{
"first": "J",
"middle": [],
"last": "Carletta",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "2",
"pages": "249--254",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Carletta. 1996. Assessing agreement on classifi- cation tasks: the kappa statistic. Computational Linguistics, 22(2):249-254.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Hearers and speech acts. Arenas of Language Use",
"authors": [
{
"first": "H",
"middle": [
"H"
],
"last": "Clark",
"suffix": ""
},
{
"first": "T",
"middle": [
"B"
],
"last": "Carlson",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H.H. Clark and T.B. Carlson. 1992. Hearers and speech acts. Arenas of Language Use (H.H.Clark ed.).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Multi-level dialogue act tags",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Popescu-Belis",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of 5th SIGdial Workshop on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Clark and Andrei Popescu-Belis. 2004. Multi-level dialogue act tags. In Proc. of 5th SIG- dial Workshop on Discourse and Dialogue.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A coefficient of agreement for nominal scales. Educational and Psychological Measurement",
"authors": [
{
"first": "J",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1960,
"venue": "",
"volume": "20",
"issue": "",
"pages": "37--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Cohen. 1960. A coefficient of agreement for nom- inal scales. Educational and Psychological Mea- surement, 20:37-46.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Meeting recorder project: Dialogue act labeling guide",
"authors": [
{
"first": "R",
"middle": [],
"last": "Dhillon",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bhagat",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Carvey",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shriberg",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Dhillon, S. Bhagat, H. Carvey, and E. Shriberg. 2004. Meeting recorder project: Dialogue act la- beling guide. Technical Report TR-04-002, ICSI Speech Group, Berkeley, USA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Footing. In Forms of Talk",
"authors": [
{
"first": "Erving",
"middle": [],
"last": "Goffman",
"suffix": ""
}
],
"year": 1981,
"venue": "",
"volume": "",
"issue": "",
"pages": "124--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erving Goffman. 1981. Footing. In Forms of Talk, pages 124-159. University of Pennsylvania Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A prosodic analysis of discourse segments in direction-giving monologues",
"authors": [
{
"first": "J",
"middle": [],
"last": "Hirschberg",
"suffix": ""
},
{
"first": "C",
"middle": [
"H"
],
"last": "Nakatani",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. of the 34th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Hirschberg and C.H. Nakatani. 1996. A prosodic analysis of discourse segments in direction-giving monologues. In In Proc. of the 34th Annual Meet- ing of the Association for Computational Linguis- tics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The icsi meeting project: Resources and research",
"authors": [
{
"first": "A",
"middle": [],
"last": "Janin",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ang",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bhagat",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Dhillon",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Edwards",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Macias-Guarasa",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Morgan",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Peskin",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Wooters",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Wrede",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of ICASSP 2004 Meeting Recognition Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Janin, J. Ang, S. Bhagat, R. Dhillon, J. Ed- wards, J. Macias-Guarasa, N. Morgan, B. Pe- skin, E. Shriberg, A. Stolcke, C. Wooters, and B. Wrede. 2004. The icsi meeting project: Re- sources and research. In Proc. of ICASSP 2004 Meeting Recognition Workshop.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Towards automatic addressee identification in multi-party dialogues",
"authors": [
{
"first": "N",
"middle": [],
"last": "Jovanovic",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Op Den",
"suffix": ""
},
{
"first": "Akker",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of 5th SIGdial Workshop on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Jovanovic and R. op den Akker. 2004. Towards automatic addressee identification in multi-party dialogues. In Proc. of 5th SIGdial Workshop on Discourse and Dialogue.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Content analysis: An introduction to its methodology",
"authors": [
{
"first": "K",
"middle": [],
"last": "Krippendorff",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Krippendorff. 1980. Content analysis: An intro- duction to its methodology. Sage Publications.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Reliability in content analysis: Some common misconceptions and recommendations",
"authors": [
{
"first": "K",
"middle": [],
"last": "Krippendorff",
"suffix": ""
}
],
"year": 2004,
"venue": "Human Communication Research",
"volume": "30",
"issue": "3",
"pages": "411--433",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Krippendorff. 2004. Reliability in content anal- ysis: Some common misconceptions and recom- mendations. Human Communication Research, 30(3):411-433.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Foundations of Statistical Natural Language Processing",
"authors": [
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Schutze",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C.D. Manning and H. Schutze. 1999. Foundations of Statistical Natural Language Processing. The MIT Press.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The idiap smart meeting room",
"authors": [
{
"first": "D",
"middle": [],
"last": "Moore",
"suffix": ""
}
],
"year": 2002,
"venue": "IDIAP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Moore. 2002. The idiap smart meeting room. Technical Report IDIAP-COM-07, IDIAP.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Discourse segmentation by human and automated means",
"authors": [
{
"first": "R",
"middle": [],
"last": "Passonneau",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Litman",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "1",
"pages": "103--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Passonneau and D. Litman. 1997. Discourse seg- mentation by human and automated means. Com- putational Linguistics, 23(1):103-139.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Computing reliability for coreference annotation",
"authors": [
{
"first": "R",
"middle": [
"J"
],
"last": "Passonneau",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R.J. Passonneau. 2004. Computing reliability for coreference annotation. In Proc. of LREC.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A presentation of a set of new annotation tools based on the nxt api",
"authors": [
{
"first": "D",
"middle": [],
"last": "Reidsma",
"suffix": ""
},
{
"first": "D",
"middle": [
"H W"
],
"last": "Hofs",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Jovanovic",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. Measuring Behavior",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Reidsma, D.H.W. Hofs, and N. Jovanovic. 2005. A presentation of a set of new annotation tools based on the nxt api. In In Proc. Measuring Be- havior.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The icsi meeting recorder dialog act (mrda) corpus",
"authors": [
{
"first": "E",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Dhillon",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bhagat",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ang",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Carvey",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of 5th SIGdial Workshop on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Shriberg, R. Dhillon, S. Bhagat, J.Ang, and H. Carvey. 2004. The icsi meeting recorder di- alog act (mrda) corpus. In Proc. of 5th SIGdial Workshop on Discourse and Dialogue.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Issues in multi-party dialogues",
"authors": [
{
"first": "David",
"middle": [],
"last": "Traum",
"suffix": ""
}
],
"year": 2003,
"venue": "Advances in Agent Communication In (F. Dignum",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Traum. 2003. Issues in multi-party dia- logues. Advances in Agent Communication In (F. Dignum, ed.).",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>DA tag set</td><td>MRDA</td></tr><tr><td>Statements</td><td/></tr><tr><td>s Statement</td><td>s Statement</td></tr><tr><td>Questions</td><td/></tr><tr><td>q Information-Request</td><td>Wh-question, Y/N question,</td></tr><tr><td/><td>OR-question, Or Clause Af-</td></tr><tr><td/><td>ter Y/N question</td></tr><tr><td>qo Open-ended Question</td><td>Open-ended questions</td></tr><tr><td>qh Rhetorical Question</td><td>Rhetorical Questions</td></tr><tr><td>Backchannels and Ack.</td><td/></tr><tr><td>bk Acknowledgement</td><td>Acknowledgment,Backchannel</td></tr><tr><td>ba Assessment/Appreciation</td><td>Assessment/Appreciation</td></tr><tr><td>Responses</td><td/></tr><tr><td>rp Positive response</td><td>(Partial)Accept, Affirmative</td></tr><tr><td/><td>Answer</td></tr><tr><td>rn Negative response</td><td>(Partial)Reject, Dispreferred</td></tr><tr><td/><td>and Negative Answer</td></tr><tr><td>ru Uncertain response</td><td>Maybe , No Knowledge</td></tr><tr><td>Action Motivators</td><td/></tr><tr><td>al Influencing-listeners-action</td><td>Command, Suggestion</td></tr><tr><td>as Committing-speaker-action</td><td>Commitment, Suggestion</td></tr><tr><td>Checks</td><td/></tr><tr><td>f \"Follow Me\"</td><td>\"Follow Me\"</td></tr><tr><td>br Repetition Request</td><td>Repetition Request</td></tr><tr><td>bu Understanding Check</td><td>Understanding Check</td></tr><tr><td>Politeness Mechanisms</td><td/></tr><tr><td>fa Apology</td><td>Apology</td></tr><tr><td>ft Thanks</td><td>Thanks</td></tr><tr><td>fo Other polite</td><td>Downplayer,Sympathy, Wel-</td></tr><tr><td/><td>come</td></tr></table>",
"num": null
},
"TABREF1": {
"html": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>: Dialogue act tag set</td></tr><tr><td>3.2 Adjacency pairs</td></tr></table>",
"num": null
},
"TABREF2": {
"html": null,
"type_str": "table",
"text": "shows overall segmentation results for each annotation group.",
"content": "<table><tr><td/><td colspan=\"3\">Agreement types</td><td/><td/><td/></tr><tr><td>R-C</td><td>PA ST</td><td colspan=\"5\">UDA CFM Agree Total Agree</td></tr><tr><td/><td/><td/><td/><td/><td/><td>%</td></tr><tr><td>B-E</td><td>326 22</td><td>16</td><td>2</td><td>366</td><td>406</td><td>90.15</td></tr><tr><td>E-B</td><td>326 32</td><td>17</td><td>2</td><td>377</td><td>411</td><td>91.73</td></tr><tr><td>M-R</td><td>317 29</td><td>10</td><td>2</td><td>358</td><td>419</td><td>85.44</td></tr><tr><td>R-M</td><td>317 33</td><td>15</td><td>2</td><td>367</td><td>426</td><td>86.14</td></tr></table>",
"num": null
},
"TABREF3": {
"html": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>: Segmentation agreement (R-C pair:</td></tr><tr><td>Reference annotator (R)-Comparison annota-</td></tr><tr><td>tor)</td></tr></table>",
"num": null
},
"TABREF5": {
"html": null,
"type_str": "table",
"text": "shows the results of performing the Kappa tests for only one R-C pair in each DA group.",
"content": "<table><tr><td>Category</td><td>B-E</td><td>M-R</td></tr><tr><td>Statement</td><td>0.82</td><td>0.72</td></tr><tr><td>Acknowledgment</td><td>0.87</td><td>0.75</td></tr><tr><td>Assessment/Appreiciation</td><td>0.32</td><td>0.39</td></tr><tr><td>Information-Request</td><td>0.70</td><td>0.84</td></tr><tr><td>Open-ended question</td><td>0.74</td><td>0.84</td></tr><tr><td>Repetition request</td><td>1.00</td><td>1.00</td></tr><tr><td>Rhetorical questions</td><td>0.00</td><td>0.66</td></tr><tr><td>Influencing-listeners-actions</td><td>0.58</td><td>0.70</td></tr><tr><td>Committing-speaker-actions</td><td>0.86</td><td>0.74</td></tr><tr><td>Positive response</td><td>0.70</td><td>0.52</td></tr><tr><td>Uncertain response</td><td>0.80</td><td>0.50</td></tr><tr><td>Negative response</td><td>0.67</td><td>0.61</td></tr><tr><td>Understanding check</td><td>0.32</td><td>-0.01</td></tr><tr><td>Other polite</td><td>0.00</td><td>-</td></tr><tr><td>Thanks</td><td>1.00</td><td>1.00</td></tr><tr><td>Follow me</td><td>-</td><td>-0.003</td></tr></table>",
"num": null
},
"TABREF6": {
"html": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>: Single-category reliability for DA tags</td></tr><tr><td>(Kappa values)</td></tr></table>",
"num": null
},
"TABREF7": {
"html": null,
"type_str": "table",
"text": "The results show that an-",
"content": "<table><tr><td>Group</td><td>R-C pair</td><td>N</td><td>\u03ba</td></tr><tr><td>M&amp;R</td><td>M-R R-M</td><td>358 367</td><td>0.68 0.70</td></tr><tr><td>B&amp;E</td><td>B-E E-B</td><td>366 377</td><td>0.79 0.81</td></tr></table>",
"num": null
},
"TABREF9": {
"html": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>: An example of adjacency pairs annota-</td></tr><tr><td>tion (C 1 and C 2 : original AP annotations; C 1 (1)</td></tr><tr><td>and C 2 (1): AP labels)</td></tr></table>",
"num": null
},
"TABREF11": {
"html": null,
"type_str": "table",
"text": "Intra-annotator agreement",
"content": "<table/>",
"num": null
}
}
}
}