{
"paper_id": "H01-1001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:31:18.144176Z"
},
"title": "Activity detection for information access to oral communication",
"authors": [
{
"first": "Klaus",
"middle": [],
"last": "Ries",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Interactive Systems Labs, Carnegie Mellon University",
"location": {
"postCode": "15213",
"settlement": "Pittsburgh",
"region": "PA",
"country": "USA"
}
},
"email": "ries|ahw@cs.cmu.edu"
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Interactive Systems Labs, Carnegie Mellon University",
"location": {
"postCode": "15213",
"settlement": "Pittsburgh",
"region": "PA",
"country": "USA"
}
},
"email": "ries|ahw@cs.cmu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Oral communication is ubiquitous and carries important information yet it is also time consuming to document. Given the development of storage media and networks one could just record and store a conversation for documentation. The question is, however, how an interesting information piece would be found in a large database. Traditional information retrieval techniques use a histogram of keywords as the document representation but oral communication may offer additional indices such as the time and place of the rejoinder and the attendance. An alternative index could be the activity such as discussing, planning, informing, story-telling, etc. This paper addresses the problem of the automatic detection of those activities in meeting situation and everyday rejoinders. Several extensions of this basic idea are being discussed and/or evaluated: Similar to activities one can define subsets of larger database and detect those automatically which is shown on a large database of TV shows. Emotions and other indices such as the dominance distribution of speakers might be available on the surface and could be used directly. Despite the small size of the databases used some results about the effectiveness of these indices can be obtained.",
"pdf_parse": {
"paper_id": "H01-1001",
"_pdf_hash": "",
"abstract": [
{
"text": "Oral communication is ubiquitous and carries important information yet it is also time consuming to document. Given the development of storage media and networks one could just record and store a conversation for documentation. The question is, however, how an interesting information piece would be found in a large database. Traditional information retrieval techniques use a histogram of keywords as the document representation but oral communication may offer additional indices such as the time and place of the rejoinder and the attendance. An alternative index could be the activity such as discussing, planning, informing, story-telling, etc. This paper addresses the problem of the automatic detection of those activities in meeting situation and everyday rejoinders. Several extensions of this basic idea are being discussed and/or evaluated: Similar to activities one can define subsets of larger database and detect those automatically which is shown on a large database of TV shows. Emotions and other indices such as the dominance distribution of speakers might be available on the surface and could be used directly. Despite the small size of the databases used some results about the effectiveness of these indices can be obtained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Information access to oral communication is becoming an interesting research area since recording, storing and transmitting large amounts of audio (and video) data is feasible today. While written information is often available electronically (especially since it is typically entered on computers) oral communication is usually only documented by constructing a new document in written form such as a transcript (court proceedings) or minutes (meetings). Oral communications are therefore a large untapped resource, especially if no corresponding written documents are available and the cost of documentation using traditional techniques is considered high: Tutorial introductions by a senior staff member might be worthwhile to attend by many newcomers, office meetings may contain informations relevant for others and should be reproducable, informal and formal group meetings may be interesting but not fully documented. In essence the written form is already a reinterpretation of the original rejoinder. Such a reinterpretation are used to Reinterpretation is a time consuming, expensive and optional step and written documentation is combining reinterpretation and documentation step in one 1 . If however reinterpretation is not necessary or unwanted a system which is producing audiovisual records is superior. If reinterpretation is wanted or needed a system using audiovisual records may be used to improve the reinterpretation by adding all audiovisual data and the option to go back to the unaltered original. Whether reinterpretation is done or not it is crucial to be able to navigate effectively within an audiovisual document and to find a specific document. Figure 1 : Information access hierarchy: Oral communications take place in very different formats and the first step in the search is to determine the database (or sub-database) of the rejoinder. The next step is to find the specific rejoinder. 
Since rejoinders can be very long the rejoinder has to segmented and a segment has to be selected.",
"cite_spans": [],
"ref_spans": [
{
"start": 1676,
"end": 1684,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "While keywords are commonly used in information access to written information the use of other indices such as style is still uncommon (but see Kessler et al. (1997); van Bretan et al. (1998) ). Oral communication is richer than written communication since it is an interactive real time accomplishment between participants, may involve speech gestures such as the display of emotion and is situated in space and time. Bahktin (1986) characterizes a conversation by topic, situation and style. Information access to oral communication can therefore make use of indices that pertain to the oral nature of the discourse (Fig. 2) . Indices other than topic (represented by keywords) increase in importance since browsing audio documents is cumbersome which makes the common interactive retrieval strategy \"query, browse, reformulate\" less effective. Finally the topic may not be known at all or may not be that relevant for the query formulation, for example if one just wants to be reminded what was being discussed last time a person was met. Activities are suggested as an alternative index and are a description of the type of interaction. It is common to use \"action-verbs\" such as story-telling, discussing, planning, informing, etc. to describe activities 2 . Items similar to activities have been shown to be directly retrievable from autobiographic memory (Herrmann, 1993) and are therefore indices that are available to participants of the conversation. Other indices may be very effective but not available: The frequency of the word \"I\" in the conversation, the histogram of word lengths or the histogram of pitch per participant.",
"cite_spans": [
{
"start": 144,
"end": 166,
"text": "Kessler et al. (1997);",
"ref_id": "BIBREF7"
},
{
"start": 167,
"end": 191,
"text": "van Bretan et al. (1998)",
"ref_id": "BIBREF16"
},
{
"start": 419,
"end": 433,
"text": "Bahktin (1986)",
"ref_id": "BIBREF0"
},
{
"start": 1362,
"end": 1378,
"text": "(Herrmann, 1993)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 618,
"end": 626,
"text": "(Fig. 2)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "In Fig. 1 the information access hierarchy is being introduced which allows to understand the problem of information access to oral communication at different levels. In Ries (1999) we have shown that the detection of general di- 2 The definition of activities such as planning may vary vastly across general dialogue genres, for example compare a military combat situation with a mother child interaction. However it is often possible to develop activities and dialogue typologies for a specific dialogue genre. The related problem of general typologies of dialogues is still far from being settled and action-verbs are just one potential categorization (Fritz and Hundschnur, 1994) . Bahktin (1986) describes a discourse along the three major properties style, situation and topic. Current information retrieval systems focus on the topical aspect which might be crucial in written documents. Furthermore, since throughout text analysis is still a hard problem, information retrieval has mostly used keywords to characterize topic. Many features that could be extracted are therefore ignored in a traditional keyword based approach.",
"cite_spans": [
{
"start": 170,
"end": 181,
"text": "Ries (1999)",
"ref_id": "BIBREF12"
},
{
"start": 230,
"end": 231,
"text": "2",
"ref_id": null
},
{
"start": 655,
"end": 683,
"text": "(Fritz and Hundschnur, 1994)",
"ref_id": "BIBREF5"
},
{
"start": 686,
"end": 700,
"text": "Bahktin (1986)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 3,
"end": 9,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "alogue genre (database level in Fig. 1 ) can be done with high accuracy if a number of different example types have been annotated; in we have shown that it is hard but not impossible to distinguish activities in personal phone calls (segment level in Fig. 1 ) . In this paper we will address activities in meetings and other types of dialogues and show that these activities can be distinguished using certain features and a neural network based classifier (Sec. 2, segment level in Fig. 1 ). The concept of information retrieval assessment using information theoretic measures is applied to this task (Sec. 3). Additionally we will introduce a level somewhat below the database level in Fig. 1 that we call \"sub-genre\" and we have collected a large database of TV-shows that are automatically classified for their showtype (Sec. 4). We also explore whether there are other indices similar to activities that could be used and we are presenting results on emotions in meetings (Sec. 5).",
"cite_spans": [],
"ref_spans": [
{
"start": 32,
"end": 38,
"text": "Fig. 1",
"ref_id": null
},
{
"start": 252,
"end": 258,
"text": "Fig. 1",
"ref_id": null
},
{
"start": 484,
"end": 490,
"text": "Fig. 1",
"ref_id": null
},
{
"start": 689,
"end": 695,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "We are interested in the detection of activities that are described by action verbs and have annotated those in two databases:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ACTIVITY DETECTION",
"sec_num": "2."
},
{
"text": "meetings have been collected at Interactive Systems Labs at CMU (Waibel et al., 1998 ) and a subset of 8 meetings has been annotated. Most of the meetings are by the data annotation group itself and are fairly informal in style. The participants are often well acquainted and meet each other a lot besides their meetings.",
"cite_spans": [
{
"start": 64,
"end": 84,
"text": "(Waibel et al., 1998",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ACTIVITY DETECTION",
"sec_num": "2."
},
{
"text": "Santa Barbara (SBC) is a corpus released by the LDC and 7 out of 12 rejoinders have been annotated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ACTIVITY DETECTION",
"sec_num": "2."
},
{
"text": "The annotator has been instructed to segment the rejoinders into units that are coherent with respect to their topic Activity SBC Meeting Discussion 35 58 Information 25 23 Story-telling 24 10 Planning 7 19 Undetermined 5 8 Advising 5 17 Not meeting 3 2 Interrogation 2 1 Evaluation 1 0 Introduction 0 1 Closing 0 1 Table 1 : Distribution of activity types: Both databases contain a lot of discussing, informing and story-telling activities however the meeting data contains a lot more planning and advising.",
"cite_spans": [],
"ref_spans": [
{
"start": 130,
"end": 358,
"text": "Meeting Discussion 35 58 Information 25 23 Story-telling 24 10 Planning 7 19 Undetermined 5 8 Advising 5 17 Not meeting 3 2 Interrogation 2 1 Evaluation 1 0 Introduction 0 1 Closing 0 1 Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "ACTIVITY DETECTION",
"sec_num": "2."
},
{
"text": "and activity and annotate them with an activity which follows the intuitive definition of the action-verb such as discussing, planning, etc. Additionally an activity annotation manual containing more specific instructions has been available Thym\u00e9-Gobbel et al., 2001) 3 . The list of tags and the distribution can be seen in Tab Table 2 : Intercoder agreement for activities: The meeting dialogues and Santa Barbara corpus have been annotated by a semi-naive coder and the first author of the paper. The \u03ba-coefficient is determined as in Carletta et al. (1997) and mutual information measures how much one label \"informs\" the other (see Sec. 3). For CallHome Spanish 3 dialogues were coded for activities by two coders and the result seems to indicate that the task was easier.",
"cite_spans": [
{
"start": 241,
"end": 267,
"text": "Thym\u00e9-Gobbel et al., 2001)",
"ref_id": "BIBREF15"
},
{
"start": 538,
"end": 560,
"text": "Carletta et al. (1997)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 325,
"end": 328,
"text": "Tab",
"ref_id": null
},
{
"start": 329,
"end": 336,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "ACTIVITY DETECTION",
"sec_num": "2."
},
{
"text": "Both datasets have been annotated not only by a seminaive annotator but also by the first author of the paper. The results for \u03ba-statistics (Carletta et al., 1997) and mutual information between the coders can be seen in Tab. 2. The intercoder agreement would be considered moderate but compares approximately to Carletta et al. (1997) agreement on transactions (\u03ba = 0.59), especially for the interactive activities and CallHome Spanish.",
"cite_spans": [
{
"start": 140,
"end": 163,
"text": "(Carletta et al., 1997)",
"ref_id": "BIBREF3"
},
{
"start": 313,
"end": 335,
"text": "Carletta et al. (1997)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ACTIVITY DETECTION",
"sec_num": "2."
},
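A minimal sketch of how the Tab. 2 figures can be computed from two coders' label sequences; the function name and input format are our own, not from the paper:

```python
from collections import Counter
from math import log2

def kappa_and_mi(labels_a, labels_b):
    """Cohen's kappa and mutual information (in bits) between two coders."""
    n = len(labels_a)
    joint = Counter(zip(labels_a, labels_b))
    pa, pb = Counter(labels_a), Counter(labels_b)
    # kappa: observed agreement corrected for chance agreement
    p_obs = sum(c for (x, y), c in joint.items() if x == y) / n
    p_chance = sum(pa[k] * pb[k] for k in pa) / n ** 2
    kappa = (p_obs - p_chance) / (1 - p_chance)
    # MI(A;B) = sum_xy p(x,y) log2 [ p(x,y) / (p(x) p(y)) ]
    mi = sum((c / n) * log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
             for (x, y), c in joint.items())
    return kappa, mi
```

Mutual information here measures, as in Sec. 3, how many bits one coder's labels "inform" the other's.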
{
"text": "For classification a neural network was trained that uses the softmax function as its output and KL-divergence as the error function. The network connects the input directly to the output units. Hidden units have not been used since they did not yield improvements on this task. The network was trained using RPROP with momentum (Riedmiller and Braun, 1993) and corresponds to an exponential model (Nigam et al., 1999) . The momentum term can be interpreted as a Gaussian prior with zero mean on the network weights. It is the same architecture that we used previously for the detection of activities on CallHome Spanish. Although some feature sets could be trained using the iterative scaling algorithm if no hidden units are being used the training times weren't high enough to justify the use of the less flexible iterative scaling algorithm. dominance is described as the distribution of the speaker dominance in a conversation. The distribution is represented as a histogram and speaker dominance is measured as the average dominance of the dialogue acts (Linell et al., 1988) of each speaker. The dialogue acts are detected and the dominance is a numeric value assigned for each dialogue act type. Dialogue act types that restrict the options of the conversation partners have high dominance (questions), dialogue acts that signal understanding (backchannels) carry low dominance.",
"cite_spans": [
{
"start": 398,
"end": 418,
"text": "(Nigam et al., 1999)",
"ref_id": "BIBREF9"
},
{
"start": 1060,
"end": 1081,
"text": "(Linell et al., 1988)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ACTIVITY DETECTION",
"sec_num": "2."
},
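The classifier described above (softmax output, KL-divergence error, inputs connected directly to the outputs) amounts to a linear exponential model trained with cross-entropy. A minimal numpy sketch under that reading; plain gradient descent with an L2 penalty stands in for RPROP with its Gaussian-prior momentum term, and all names are our own:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_exponential_model(X, y, n_classes, lr=0.5, l2=1e-3, epochs=500):
    """Linear softmax classifier: input units connected directly to
    softmax outputs, cross-entropy (KL-divergence) error, L2 prior."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]               # one-hot targets
    for _ in range(epochs):
        P = softmax(X @ W)
        grad = X.T @ (P - Y) / n + l2 * W  # cross-entropy gradient + prior
        W -= lr * grad
    return W

def predict(W, X):
    return np.argmax(X @ W, axis=1)
```

With no hidden layer the loss is convex, which is why simple first-order training suffices on a task of this size.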
{
"text": "First author The activities used for classification are those of the semi-naive coder. The \"first author\" column describes the \"accuracy\" of the first author with respect to the naive coder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ACTIVITY DETECTION",
"sec_num": "2."
},
{
"text": "The detection of interactive activities works fairly well using the dominance feature on SBC which is also natural since the relative dominance of speakers should describe what kind of interaction is exhibited. The dialogue act distribution on the other hand works fairly well on the more homogeneous meeting database were there is a better chance to see generalizations from more specific dialogue based information. Overall the combination of more than one feature is really important since word level, Wordnet and stylistic information, while sometimes successful, seem to be able to improve the result while they don't provide good features by themselves. The meeting data is also more difficult which might be due to its informal style.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ACTIVITY DETECTION",
"sec_num": "2."
},
{
"text": "Assuming a probabilistic information retrieval model a query r -in our example an activity -predicts a document d with the probability q(d|r) = q(r|d)q (d) q (r) . Let p(d, r) be the real probability mass distribution of these quantities. The probability mass function q(r|d) is estimated on a separate training set by a neural network based classifier 6 . The quantity we are interested in is the reduction in expected coding length of the document using the neural network based detector 7 :",
"cite_spans": [
{
"start": 152,
"end": 155,
"text": "(d)",
"ref_id": null
},
{
"start": 158,
"end": 161,
"text": "(r)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INFORMATION ACCESS ASSESSMENT",
"sec_num": "3."
},
{
"text": "\u2212Eplog q(D) q(D|R) \u2248 H(R) \u2212 Ep log 1 q(R|D)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INFORMATION ACCESS ASSESSMENT",
"sec_num": "3."
},
{
"text": "The two expectations correspond exactly to the measures in Tab. 5, the first represents the baseline, the second the one for the respective classifier. In more standard information theoretic notation this quantity may be written as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INFORMATION ACCESS ASSESSMENT",
"sec_num": "3."
},
{
"text": "H(R) \u2212 (Hp(R|D) + D(p(r|d)||q(r|d)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INFORMATION ACCESS ASSESSMENT",
"sec_num": "3."
},
{
"text": "This equivalence is not extremely useful though since the quantities in parenthesis can't be estimated separately. For the small meeting database and SBC however no entropy reductions could be obtained. On the larger databases, on the other hand, entropy reductions could be obtained (\u2248 0.5bit on the CallHome Spanish database , \u2248 1bit for the sub-database detection problem in Sec. 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INFORMATION ACCESS ASSESSMENT",
"sec_num": "3."
},
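The reduction in expected coding length compared here, H(R) - E_p[-log2 q(R|D)], can be estimated from held-out labels and the classifier's posteriors; a minimal sketch, with the function name and input format assumed:

```python
from collections import Counter
from math import log2

def entropy_reduction(true_labels, posteriors):
    """Estimate H(R) - E_p[-log2 q(R|D)] in bits.
    posteriors[i] maps each activity label r to the classifier's q(r|d_i)."""
    n = len(true_labels)
    prior = Counter(true_labels)
    h_r = -sum((c / n) * log2(c / n) for c in prior.values())  # baseline H(R)
    # average code length of the true label under the classifier
    code_len = -sum(log2(posteriors[i][r])
                    for i, r in enumerate(true_labels)) / n
    return h_r - code_len
```

A positive value means the classifier shortens the description of the query relative to the baseline entropy; a classifier no better than the prior gives zero or less.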
{
"text": "6 All quantities involving the neural net q(r|d) have been determined using a round robin approach such that network is trained on a separate training set. 7 Since estimating q(d) is simple we may assume that q(d) \u2248",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INFORMATION ACCESS ASSESSMENT",
"sec_num": "3."
},
{
"text": "r p(d, r).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INFORMATION ACCESS ASSESSMENT",
"sec_num": "3."
},
{
"text": "Another option is to assume that the labels of one coder are part of D. If the query by the other coder is R we are interested in the reduction of the document entropy given the query. If we furthermore assume that H(R|D) = H(R|R ) where R is the activity label embedded in D:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INFORMATION ACCESS ASSESSMENT",
"sec_num": "3."
},
{
"text": "H(D) \u2212 H(D|R) = H(R) \u2212 H(R|D) = M I(R, R )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INFORMATION ACCESS ASSESSMENT",
"sec_num": "3."
},
{
"text": "Tab. 2 shows that the labels of the semi-naive coder and the first author only inform each other by 0.25 \u2212 0.65 bits. However, since all constraints are important to apply, it might be important to include manual annotations to be matched by a query or in a graphical presentation of the output results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INFORMATION ACCESS ASSESSMENT",
"sec_num": "3."
},
{
"text": "Another interesting question to consider is whether the activity is correlated with the rejoinder or not. This question is important since a correlation of the activity with the rejoinder would mean that the indexing performance of activities needs to be compared to other indices that apply to rejoinders such as attendance, time and place (for results on the correlation with rejoinders see Waibel et al. (2001) ). The correlation can be measured using the mutual information between the activity and the meeting identity. The mutual information is moderate for SBC (\u2248 0.67 bit) and much lower for the meetings (\u2248 0.20 bit). This also corresponds to our intuition since some of the rejoinders in SBC belong to very distinct dialogue genre while the meeting database is homogeneous. The conclusion is that activities are useful for navigation in a rejoinder if the database is homogeneous and they might be useful for finding conversations in a more heterogeneous database. ",
"cite_spans": [
{
"start": 393,
"end": 413,
"text": "Waibel et al. (2001)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INFORMATION ACCESS ASSESSMENT",
"sec_num": "3."
},
{
"text": "We set up an environment for TV shows that records the subtitles with timestamps continuously from one TV channel and the channel was switched every other day. At the same time the TV program was downloaded from http: //tv.yahoo.com/ to obtain programming information including the genre of the show. Yahoo assigns primary and secondary show types and unless the combination of primary/secondary show-type is frequent enough the primary showtype is used (Tab. 4). The TV show database has the advantage that we were able to collect a large and varied database with little effort. The same classifier as in Sec. 2 has been used however dialogue acts have not been detected since the data contains a lot of noise, is not necessarily conversational and speaker identities can't be determined easily. Detection results for TV shows can be seen in Tab. 5. It may be noted that adding a lot of keywords does improve the detection result but not so much the entropy. It may therefore be assume that there is a limited dependence between topic and genre which isn't really a surprise since there are many shows with weekly sequels and there may be some true repeats. Table 5 : Show type detection: Using the neural network described in Sec. 2 the show type was detected.",
"cite_spans": [],
"ref_spans": [
{
"start": 1159,
"end": 1166,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "DETECTION OF SUB-DATABASES",
"sec_num": "4."
},
{
"text": "If there is a number in the word column the word feature is being used. The number indicates how many word/part of speech pairs are in the vocabulary additionally to the parts of speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DETECTION OF SUB-DATABASES",
"sec_num": "4."
},
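The primary/secondary show-type back-off described in this section can be sketched as follows; the frequency threshold is an assumed parameter, since the paper does not give the exact cutoff:

```python
from collections import Counter

def show_type_labels(shows, min_count=20):
    """shows: list of (primary, secondary) Yahoo show types.
    Keep the primary/secondary combination only when it is frequent
    enough in the data; otherwise back off to the primary type alone."""
    combos = Counter(shows)
    return [f"{p}/{s}" if s and combos[(p, s)] >= min_count else p
            for p, s in shows]
```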
{
"text": "Emotions are displayed in a variety of gestures, some of which are oral and may be detected via automated methods from the audio channel (Polzin, 1999) . Using only verbal information the emotions happy, excited and neutral can be detected on the meeting database with 88.1% accuracy while always picking neutral yields 83.6%. This result can be improved to 88.6% by adding pitch and power information.",
"cite_spans": [
{
"start": 137,
"end": 151,
"text": "(Polzin, 1999)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "EMOTION AND DOMINANCE",
"sec_num": "5."
},
{
"text": "While these experiments were conducted at the utterance level emotions can be extended to topical segments. For that purpose the emotions of the individual utterances are entered in a histogram over the segment and the vectors are clustered automatically. The resulting clusters roughly correspond to a \"neutral\", \"a little happy\" and \"somewhat excited\" segment. Using the classifier for emotions on the word level the segment can be classified automatically into categories with a 83.3% accuracy while the baseline is 68.9%. The entropy reduction by automatically detected emotional activities is \u2248 0.3bit 8 . A similar attempt can be made for dominance (Linell et al., 1988) distributions: Dominance is easy to understand for the user of an information access system and it can be determined automatically with high accuracy.",
"cite_spans": [
{
"start": 655,
"end": 676,
"text": "(Linell et al., 1988)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "EMOTION AND DOMINANCE",
"sec_num": "5."
},
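The segment-level procedure above (a histogram of per-utterance emotions per segment, then automatic clustering of the vectors) can be sketched as follows; plain k-means is an assumption, as the paper does not name the clustering algorithm:

```python
import numpy as np

def segment_emotion_histograms(segments, emotions=("neutral", "happy", "excited")):
    """Turn per-utterance emotion labels into one normalised histogram per segment."""
    idx = {e: i for i, e in enumerate(emotions)}
    H = np.zeros((len(segments), len(emotions)))
    for s, utts in enumerate(segments):
        for e in utts:
            H[s, idx[e]] += 1
        H[s] /= max(len(utts), 1)
    return H

def kmeans(H, k=3, iters=50, seed=0):
    """Plain k-means over the histogram vectors."""
    rng = np.random.default_rng(seed)
    C = H[rng.choice(len(H), size=k, replace=False)]  # initial centroids
    for _ in range(iters):
        dist = ((H[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        assign = dist.argmin(1)
        for j in range(k):
            if (assign == j).any():
                C[j] = H[assign == j].mean(0)
    return assign, C
```

Each resulting cluster can then be inspected and named after its dominant emotion mix, in the spirit of the \"neutral\", \"a little happy\" and \"somewhat excited\" clusters above.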
{
"text": "8 A similar classification result for emotions on the utterance level has been obtained by just using the laughter vs. non-laughter tokens of the transcript as the input. This may indicate that (a) the index should really be the amount of laughter in the conversational segment and that (b) emotions might not be displayed very overtly in meetings. These results however would require a wider sampling of meeting types to be generally acceptable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EMOTION AND DOMINANCE",
"sec_num": "5."
},
{
"text": "It has been shown that activities can be detected and that they may be efficient indices for access to oral communication. Overall it is easy to make high level distinctions with automated methods while fine-grained distinctions are even hard to make for humans -on the other hand automatic methods are still able to model some aspect of it (Fig. 3) . To obtain an reduction in entropy a relatively large database such as CallHome Spanish is required (120 dialogues). Alternatives to activities might be emotional and dominance distributions that are easier to detect and that may be natural to understand for users. If activities are only used for local navigation support within a rejoinder one could also visualize by displaying the dialogue act patterns for each channel on a time line.",
"cite_spans": [],
"ref_spans": [
{
"start": 341,
"end": 349,
"text": "(Fig. 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "CONCLUSION AND FUTURE WORK",
"sec_num": "6."
},
{
"text": "The author has also observed that topic clusters and activities are largely independent in the meeting domain resulting in orthogonal indices. Since activities have intuitions for naive users and they may be remembered it can be assumed that users would be able to make use of these constraints. Ongoing work includes the use of speaker activity for dialogue segmentation and further assessment of features for information access. Overall the methods presented here and the ongoing work are improving the ability to index oral communication. It should be noted that some of the techniques presented lend themselves to implementations that don't require (full) speech recognition: Speaker identification and dialogue act identification may be done without an LVCSR system which would allow to lower the computational requirements as well as to a more robust system. Figure 3 : Detection accuracy summary: The detection of high-level genre as exemplified by the differentiation of corpora can be done with high accuracy using simple features (Ries, 1999) . Similar it was fairly easy to discriminate between male and female speakers on Switchboard (Ries, 1999) . Discriminating between sub-genre such as TV-show types (Sec. 4) can be done with reasonable accuracy. However it is a lot harder to discriminate between activities within one conversation for personal phone calls (CallHome) or for general rejoinders (Santa) and meetings (Sec. 2).",
"cite_spans": [
{
"start": 1040,
"end": 1052,
"text": "(Ries, 1999)",
"ref_id": "BIBREF12"
},
{
"start": 1146,
"end": 1158,
"text": "(Ries, 1999)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 865,
"end": 873,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "CONCLUSION AND FUTURE WORK",
"sec_num": "6."
},
{
"text": "The most important exception is the literal courtroom transcript, however one could argue that even transcripts are reinterpretations since they do not contain a number of informations present in the audio channel such as emotions, hesitations, the use of slang and certain types of hetereglossia, accents and so forth. This is specifically true if transcription machines are used which restrict the transcriber to standard orthography.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In contrast toThym\u00e9-Gobbel et al., 2001) the \"consoling\" activity has been eliminated and an \"informing\" activity has been introduced for segments where one or more than one member of the rejoinder give information to the others. Additionally an \"introducing\" activity was added to account for a introduction of people or topics at the beginning of meetings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Klaus Zechner trained an English part of speech tagger tagger on Switchboard that has been used. The tagger uses the code byBrill (1994).5 The model was trained to be very portable and therefore the following choices were taken: (a) the dialogue model is context-independent and (b) only the part of speech are taken as the input to the model plus the 50 most likely word/part of speech types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Speech Genres and other late Essays, chapter Speech Genres",
"authors": [
{
"first": "M",
"middle": [
"M"
],
"last": "Bahktin",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. M. Bahktin. Speech Genres and other late Essays, chap- ter Speech Genres. University of Texas Press, Austin, 1986.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Variation across speech and writing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Biber",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Biber. Variation across speech and writing. Cambridge University Press, 1988.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A report on recent progress in transformation based error-driven learning",
"authors": [
{
"first": "E",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1994,
"venue": "DARPA Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Brill. A report on recent progress in transformation based error-driven learning. In DARPA Workshop, 1994.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The reliability of a dialogue structure coding scheme",
"authors": [
{
"first": "J",
"middle": [],
"last": "Carletta",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Isard",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Isard",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Kowtko",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Doherty-Sneddon",
"suffix": ""
},
{
"first": "A",
"middle": [
"H"
],
"last": "Anderson",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "1",
"pages": "13--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Carletta, A. Isard, S. Isard, J. C. Kowtko, G. Doherty- Sneddon, and A. H. Anderson. The reliability of a dia- logue structure coding scheme. Computational Linguis- tics, 23(1):13-31, March 1997.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "WordNet -An Electronic Lexical Database",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Fellbaum, editor. WordNet -An Electronic Lexical Database. MIT press, 1998.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Handbuch der Dialoganalyse",
"authors": [
{
"first": "G",
"middle": [],
"last": "Fritz",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Hundschnur",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Fritz and F. Hundschnur. Handbuch der Dialoganalyse. Niemeyer, Tuebingen, 1994.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Autobiographical memory and the validity of retrospective reports, chapter The validity of retrospective reports as a function of the directness of retrieval processes",
"authors": [
{
"first": "D",
"middle": [
"J"
],
"last": "Herrmann",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "21--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. J. Herrmann. Autobiographical memory and the validity of retrospective reports, chapter The validity of retrospec- tive reports as a function of the directness of retrieval processes, pages 21-31. Springer, 1993.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic detection of genre",
"authors": [
{
"first": "B",
"middle": [],
"last": "Kessler",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Nunberg",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and the 8th Meeting of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "32--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Kessler, G. Nunberg, and H. Sch\u00fctze. Automatic detec- tion of genre. In Proceedings of the 35th Annual Meet- ing of the Association for Computational Linguistics and the 8th Meeting of the European Chapter of the Associ- ation for Computational Linguistics, pages 32-38. Mor- gan Kaufmann Publishers, San Francisco CA, 1997. URL http://xxx.lanl.gov/abs/cmp-lg/9707002.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Interactional dominance in dyadic communication: a presentation of initiative-response analysis",
"authors": [
{
"first": "P",
"middle": [],
"last": "Linell",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Gustavsson",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Juvonen",
"suffix": ""
}
],
"year": 1988,
"venue": "Linguistics",
"volume": "26",
"issue": "",
"pages": "415--442",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Linell, L. Gustavsson, and P. Juvonen. Interactional dominance in dyadic communication: a presentation of initiative-response analysis. Linguistics, 26:415-442, 1988.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Using maximum entropy for text classification",
"authors": [
{
"first": "K",
"middle": [],
"last": "Nigam",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the IJCAI-99 Workshop on Machine Learning for Information Filtering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Nigam, J. Lafferty, and A. McCallum. Using maxi- mum entropy for text classification. In Proceedings of the IJCAI-99 Workshop on Machine Learning for Infor- mation Filtering, 1999. URL http://www.cs.cmu.edu/ lafferty/.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Detecting Verbal and Non-Verbal Cues in the Communication of Emotion",
"authors": [
{
"first": "T",
"middle": [],
"last": "Polzin",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Polzin. Detecting Verbal and Non-Verbal Cues in the Communication of Emotion. PhD thesis, Carnegie Mellon University, November 1999.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A direct adaptive method for faster backpropagation learning: The RPROP algorithm",
"authors": [
{
"first": "M",
"middle": [],
"last": "Riedmiller",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Braun",
"suffix": ""
}
],
"year": 1993,
"venue": "Proc. of the IEEE Int. Conf. on Neural Networks",
"volume": "",
"issue": "",
"pages": "586--591",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Riedmiller and H. Braun. A direct adaptive method for faster backpropagation learning: The RPROP algorithm. In Proc. of the IEEE Int. Conf. on Neural Networks, pages 586-591, 1993.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Towards the detection and description of textual meaning indicators in spontaneous conversations",
"authors": [
{
"first": "K",
"middle": [],
"last": "Ries",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the Eurospeech",
"volume": "3",
"issue": "",
"pages": "1415--1418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Ries. Towards the detection and description of textual meaning indicators in spontaneous conversations. In Pro- ceedings of the Eurospeech, volume 3, pages 1415-1418, Budapest, Hungary, September 1999.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Shallow discourse genre annotation in callhome spanish",
"authors": [
{
"first": "K",
"middle": [],
"last": "Ries",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Levin",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Valle",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lavie",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the International Conference on Language Resources and Evaluation (LREC-2000)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Ries, L. Levin, L. Valle, A. Lavie, and A. Waibel. Shallow discourse genre annotation in callhome spanish. In Proceecings of the International Conference on Lan- guage Ressources and Evaluation (LREC-2000), Athens, Greece, May 2000.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Dialogue act modeling for automatic tagging and recognition of conversational speech",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Ries",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Coccaro",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "C",
"middle": [
"V"
],
"last": "Ess-Dykema",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Meteer",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Stolcke, K. Ries, N. Coccaro, E. Shriberg, R. Bates, D. Jurafsky, P. Taylor, R. Martin, C. V. Ess-Dykema, and M. Meteer. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 26(3), September 2000.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Dialogue act, dialogue game, and activity tagging manual for spanish conversational speech",
"authors": [
{
"first": "A",
"middle": [],
"last": "Thym\u00e9-Gobbel",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Levin",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Ries",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Valle",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Thym\u00e9-Gobbel, L. Levin, K. Ries, and L. Valle. Dia- logue act, dialogue game, and activity tagging manual for spanish conversational speech. Technical report, Carnegie Mellon University, 2001. in preperation.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Genres defined for a purpose, fast clustering, and an iterative information retrieval interface",
"authors": [
{
"first": "J",
"middle": [],
"last": "Van Bretan",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Dewe",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hallberg",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Karlgren",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wolkert",
"suffix": ""
}
],
"year": 1998,
"venue": "Eighth DE-LOS Workshop on User Interfaces in Digital Libraries L\u00e5ngholmen",
"volume": "",
"issue": "",
"pages": "60--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "van Bretan, J. Dewe, A. Hallberg, J. Karlgren, and N. Wolk- ert. Genres defined for a purpose, fast clustering, and an iterative information retrieval interface. In Eighth DE- LOS Workshop on User Interfaces in Digital Libraries L\u00e5ngholmen, pages 60-66, October 1998.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Meeting browser: Tracking and summarising meetings",
"authors": [
{
"first": "A",
"middle": [],
"last": "Waibel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Bett",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Finke",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the DARPA Broadcast News Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Waibel, M. Bett, and M. Finke. Meeting browser: Track- ing and summarising meetings. In Proceedings of the DARPA Broadcast News Workshop, 1998.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Advances in automatic meeting record creation and access",
"authors": [
{
"first": "A",
"middle": [],
"last": "Waibel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Bett",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Metze",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Ries",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Schaaf",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Schultz",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Soltau",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Zechner",
"suffix": ""
}
],
"year": 2001,
"venue": "ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Waibel, M. Bett, F. Metze, K. Ries, T. Schaaf, T. Schultz, H. Soltau, H. Yu, and K. Zechner. Advances in automatic meeting record creation and access. In ICASSP, Salt Lake City, Utah, USA, 2001. to appear.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "extract and condense information \u2022 add or delete information \u2022 change the meaning \u2022 cite the rejoinder \u2022 relate rejoinders to each other",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Bahktin's characterization of dialogue:",
"type_str": "figure"
},
"TABREF3": {
"num": null,
"html": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td>: Activity detection: Activities are detected</td></tr><tr><td>on the Santa Barbara Corpus (SBC) and the meet-</td></tr><tr><td>ing database (meet) either without clustering the</td></tr><tr><td>activities (all) or clustering them according to their</td></tr><tr><td>interactivity (interactive) (see Sec. 2 for details).</td></tr></table>"
},
"TABREF4": {
"num": null,
"html": null,
"text": "The features used for classification are: words -- the 50 most frequent word/part-of-speech pairs are used directly; all other pairs are replaced by their part of speech. Stylistic features -- adapted from Biber (1988), containing mostly syntactic constructions and some word classes.",
"type_str": "table",
"content": "<table><tr><td>Wordnet a total of 40 verb and noun classes (so called lex-</td></tr><tr><td>icographers classes (Fellbaum, 1998)) are defined and</td></tr><tr><td>a word is replaced by the most frequent class over all</td></tr><tr><td>possible meanings of the word.</td></tr><tr><td>dialogue acts such as statements, questions, backchannels,</td></tr><tr><td>. . . are detected using a language model based detec-</td></tr><tr><td>tor trained on Switchboard similar to Stolcke et al.</td></tr><tr><td>(2000) 5</td></tr></table>"
},
"TABREF6": {
"num": null,
"html": null,
"text": "TV show types: The distribution of show types in a large database of TV shows (1067 shows) that has been recorded over the period of a couple of months until April 2000 in Pittsburgh, PA",
"type_str": "table",
"content": "<table/>"
}
}
}
}