| { |
| "paper_id": "E17-1031", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T10:50:57.622973Z" |
| }, |
| "title": "Joint, Incremental Disfluency Detection and Utterance Segmentation from Speech", |
| "authors": [ |
| { |
| "first": "Julian", |
| "middle": [], |
| "last": "Hough", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Bielefeld University", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Schlangen", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Bielefeld University", |
| "location": {} |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We present the joint task of incremental disfluency detection and utterance segmentation and a simple deep learning system which performs it on transcripts and ASR results. We show how the constraints of the two tasks interact. Our joint-task system outperforms the equivalent individual task systems, provides competitive results and is suitable for future use in conversation agents in the psychiatric domain.", |
| "pdf_parse": { |
| "paper_id": "E17-1031", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We present the joint task of incremental disfluency detection and utterance segmentation and a simple deep learning system which performs it on transcripts and ASR results. We show how the constraints of the two tasks interact. Our joint-task system outperforms the equivalent individual task systems, provides competitive results and is suitable for future use in conversation agents in the psychiatric domain.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Artificial conversational systems promise to be a valuable addition to the existing set of psychiatric health care delivery solutions. As artificial systems, they can ensure that interview protocols are followed, and, perhaps surprisingly, due to being \"just a computer\", even seem to increase their interlocutors' willingness to disclose (Lucas et al., 2014) . Interactions with such conversational agents have been shown to contain interpretable markers of psychological distress, such as rate of filled pauses, speaking rate, and various temporal, utterance and turn-related interactional features (DeVault et al., 2013) . Filled pauses and disfluencies in general have also been shown to predict outcomes to psychiatric treatment (Howes et al., 2012; McCabe et al., 2013) .", |
| "cite_spans": [ |
| { |
| "start": 339, |
| "end": 359, |
| "text": "(Lucas et al., 2014)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 601, |
| "end": 623, |
| "text": "(DeVault et al., 2013)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 734, |
| "end": 754, |
| "text": "(Howes et al., 2012;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 755, |
| "end": 775, |
| "text": "McCabe et al., 2013)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Currently, these systems are only used to elicit material that is then analysed offline. For offline analysis of transcripts with gold standard utterance segmentation, much work exists on detecting disfluencies (Johnson and Charniak, 2004; Qian and Liu, 2013; Honnibal and Johnson, 2014) . To enable more cost-effective analysis, however, and possibly even let the interaction script itself be dependent on an analysis hypothesis, it would be better to be able to work directly off the speech sig-nal, and online (incrementally) . This is what we explore in this paper, presenting and evaluating a model that works with online, incremental speech recognition output to detect disfluencies with various degrees of fine-grainedness.", |
| "cite_spans": [ |
| { |
| "start": 211, |
| "end": 239, |
| "text": "(Johnson and Charniak, 2004;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 240, |
| "end": 259, |
| "text": "Qian and Liu, 2013;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 260, |
| "end": 287, |
| "text": "Honnibal and Johnson, 2014)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 513, |
| "end": 528, |
| "text": "(incrementally)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "As a second contribution, we combine incremental disfluency detection with another lowerlevel task that is important for responsive conversational systems, namely the detection of turntaking opportunities through detection of utterance boundaries. (See for example (Schlangen and Skantze, 2011) for arguments for incremental processing and responsive turn-taking in conversational systems, and (Schlangen, 2006; Atterer et al., 2008; Raux, 2008; Manuvinakurike et al., 2016, inter alia) for examples of incremental utterance segmentation). Besides both being relevant for interactive health assessment systems, these tasks also have an immanent connection, as the approach typically used for turn-end detection is simply waiting for a silence of a certain duration, and hence is mislead by intra-turn silent disfluencies. Similarly, without gold standard segmentation, disfluent restarts and repairs may be predicted at fluent utterance boundaries. We hence conjecture that the tasks can profitably be done jointly.", |
| "cite_spans": [ |
| { |
| "start": 265, |
| "end": 294, |
| "text": "(Schlangen and Skantze, 2011)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 394, |
| "end": 411, |
| "text": "(Schlangen, 2006;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 412, |
| "end": 433, |
| "text": "Atterer et al., 2008;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 434, |
| "end": 445, |
| "text": "Raux, 2008;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 446, |
| "end": 486, |
| "text": "Manuvinakurike et al., 2016, inter alia)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "As a separate task, there has been extensive work on utterance segmentation. Cuendet (2006) reports an NIST-SU utterance segmentation error rate result on the Switchboard corpus at 48.50, using a combination of lexical and acoustic features. Ang et al. (2005) report NIST-SU scores in the region of 34.35-45.92 on the ICSI Meeting Corpus. Mart\u00ednez-Hinarejos et al. (2015) report state-of-the-art dialogue act segmentation results on Switchboard at 23.0 NIST-SU, however this is not on the level of full dialogues, but on pre-segmented turn stretches. For the equivalent task of sentence boundary detection, Seeker et al. (2016) report an F-score of 0.7665 on Switchboard data, using a joint dependency parsing framework, and Xu et al. (2014) implement a deep learning architecture and report an 0.810 F-score and 35.9 NIST-SU error rate on broadcast news speech using prosodic and lexical features using a DNN for prosodic features, combined with a CRF classifier. However scaling this to spontaneous speech and the challenges of incrementality explained here, is yet to be tested.", |
| "cite_spans": [ |
| { |
| "start": 77, |
| "end": 91, |
| "text": "Cuendet (2006)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 242, |
| "end": 259, |
| "text": "Ang et al. (2005)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 339, |
| "end": 371, |
| "text": "Mart\u00ednez-Hinarejos et al. (2015)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 607, |
| "end": 627, |
| "text": "Seeker et al. (2016)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 725, |
| "end": 741, |
| "text": "Xu et al. (2014)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Strongly incremental approaches to the task are rare, however (Atterer et al., 2008) achieve a wordby-word F-score of 0.511 on predicting whether the current word is the end of the utterance (dialogue act) on Switchboard, and using ground-truth syntactic information indicating sentence structure information achieve 0.559.", |
| "cite_spans": [ |
| { |
| "start": 62, |
| "end": 84, |
| "text": "(Atterer et al., 2008)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Disfluency detection on pre-segmented utterances in the Switchboard corpus has also had a lot of attention, and has also reached high performance (Johnson and Charniak, 2004; Georgila, 2009; Qian and Liu, 2013; Honnibal and Johnson, 2014 ). On detection on Switchboard transcripts, Honnibal and Johnson (2014) achieve 0.841 reparandum word accuracy using a joint dependency parsing approach, and Hough and Purver (2014) in a strongly incrementally operating system without look-ahead achieve 0.779, using a pipeline of classifiers and language model features. The potentially live approaches tend to use acoustic information (Moniz et al., 2015) and do not perform on a comparable level to their transcription-based task analogues, nor achieve the same fine-grained analysis of disfluency structure, which is often needed to identify the disfluency type and compute its meaning.", |
| "cite_spans": [ |
| { |
| "start": 146, |
| "end": 174, |
| "text": "(Johnson and Charniak, 2004;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 175, |
| "end": 190, |
| "text": "Georgila, 2009;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 191, |
| "end": 210, |
| "text": "Qian and Liu, 2013;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 211, |
| "end": 237, |
| "text": "Honnibal and Johnson, 2014", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 282, |
| "end": 309, |
| "text": "Honnibal and Johnson (2014)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 625, |
| "end": 645, |
| "text": "(Moniz et al., 2015)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Live incremental approaches to both tasks have not been able to benefit from reliable ASR hypotheses arriving in a timely manner until recently. Now the arrival of improved performance, in terms of low Word Error Rate (WER) and better live performance properties is making this possible (Baumann et al., 2016) . In this paper we define a joint task in a live setting. After defining the task we present a simple deep learning system which simultaneously detects disfluencies and predicts up-coming utterance boundaries from incremental word hypotheses and derived information.", |
| "cite_spans": [ |
| { |
| "start": 287, |
| "end": 309, |
| "text": "(Baumann et al., 2016)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "3 The Tasks: Real-time disfluency prediction and utterance segmentation", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Disfluencies, in their fullest form as speech repairs, are typically assumed to have a tripartite reparandum-interregnum-repair structure (terms originally proposed by Shriberg (1994) ), as exhibited by the following example.", |
| "cite_spans": [ |
| { |
| "start": 168, |
| "end": 183, |
| "text": "Shriberg (1994)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental disfluency detection", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "John [ likes reparandum + { uh } interregnum loves ] repair Mary", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Incremental disfluency detection", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "If reparandum and repair are absent, the disfluency reduces to an isolated edit term. In the example given here, the interregnum is filled by a marked, lexicalised edit term, but more phrasal terms such as I mean and you know can also occur.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental disfluency detection", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The task of disfluency detection then is to recognise these elements and their structure, and the task of incremental disfluency detection adds the challenge of doing this in real-time, from \"left-toright\". In that latter setting, detection runs into the same problem as a human processor of such an utterance: Only by the time the interregnum is encountered, or possibly even only when the repair is seen, does it become clear that earlier material now is to be considered as \"to be repaired\" (reparandum). 1 Hence, the task cannot be set up as a straightforward sequence labelling task where the tags \"reparandum\", \"interregnum\" and \"repair\" are distributed left-to-right over words as indicated in the example above; in this example, it would unfairly require the prediction that \"likes\" is going to be repaired, at a point when no evidence is available for making it.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental disfluency detection", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We follow Hough and Schlangen (2015) and use a tag set that encodes the reparandum start only at a time when it can be guessed, namely at the onset of the actual repair. This is illustrated in Figure 1 in the \"disfluency (complex)\" row. Here, the word at the repair onset, \"to\", gets tagged as repair onset (rpS) and, at the same time, as repairing material beginning 5 tokens in the past (-5, yielding the complex label rpS-5). Additionally, we annotate all repair words (as rpMid, if the word is neither first nor last word of the repair, and together with the disfluency type, if it is the final word; here, the ", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 36, |
| "text": "Hough and Schlangen (2015)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 193, |
| "end": 201, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Incremental disfluency detection", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Utterance segmentation .w--w--w- -w--w- -w--w--w- -w- -w- -w--w.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental disfluency detection", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": ".w--w. Joint task (simple) .f--e--f--f--f--e--e--e--rpS--f--f--f. .f--f. Joint task (complex) .f--e--f--f--f--e--e--e--rpS\u22125--rpESub--f--f. .f--f. Figure 1 : An utterance with the traditional repair disfluency and segmentation annotation in-line (Shriberg, 1994; Meteer et al., 1995) and our incrementally-oriented tag schemes label is rpESub for substitution), 2 editing terms (e) and fluent material (f ) as well. From the complex tag set, we can reconstruct the disfluency structure as in (1) in a strongly incremental fashion. We also define a reduced tag set (shown in Figure 1 as \"disfluency (simple)\" that only tags fluent words, editing terms, and the repair onset.", |
| "cite_spans": [ |
| { |
| "start": 18, |
| "end": 26, |
| "text": "(simple)", |
| "ref_id": null |
| }, |
| { |
| "start": 84, |
| "end": 93, |
| "text": "(complex)", |
| "ref_id": null |
| }, |
| { |
| "start": 246, |
| "end": 262, |
| "text": "(Shriberg, 1994;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 263, |
| "end": 283, |
| "text": "Meteer et al., 1995)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 147, |
| "end": 155, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 574, |
| "end": 582, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Incremental disfluency detection", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We formulate incremental utterance segmentation as the judgement in real time as to when the current utterance is going to end, and so like (Schlangen, 2006; Atterer et al., 2008) , we move from purely reactive approach, signalled by silence, to prediction. To allow prediction to be possible we use four tags for classifying stretches of acoustic data (which can be the time spans of forced aligned gold standard words, or the word hypotheses timings provided by an ASR), which are equivalent to a BIES (Beginning, Inside, End and Single) scheme for utterances-see Table 1 . The tag set allows evidence from the prior context of the word (the acoustic and linguistic information preceding the word) to be used to predict whether this word continues a current utterance (the -prefix) or starts anew (the . prefix), and also permits the online prediction of whether the next word (or segment) will continue the current utterance (the -suffix) or the current word ends the utterance (the . suffix). From these utterance boundary predictions can be derived when -w. or .w. is predicted (i.e. \"will end utterance\"). The tag set is summarized in Table 1 and an example is in Fig. 1 , row \"utterance segmentation\".", |
| "cite_spans": [ |
| { |
| "start": 140, |
| "end": 157, |
| "text": "(Schlangen, 2006;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 158, |
| "end": 179, |
| "text": "Atterer et al., 2008)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 566, |
| "end": 573, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 1141, |
| "end": 1148, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 1170, |
| "end": 1176, |
| "text": "Fig. 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Incremental utterance segmentation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Studying the two phenomena in natural dialogue corpora, for example in terms of rich transcription mark-up in the SWBD annotation manual (Meteer et al., 1995) , there are several constraints:", |
| "cite_spans": [ |
| { |
| "start": 137, |
| "end": 158, |
| "text": "(Meteer et al., 1995)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Defining the joint task", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "-w--w. .w-.w. f 1 1 1 1 e 1 1 1 1 rpS 1 1 0 0 -w--w. .w-.w. f 1 1 1 1 e 1 1 1 1 rpS-[1-8] 1 0 0 0 rpMid 1 0 0 0 rpESub 1 1 0 0 rpEDel 1 1 0 0 rpS-[1-8]ESub 1 1 0 0 rpS-[1-8]EDel 1 1 0 0", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Defining the joint task", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Figure 2: The joint tag set for the task. 1= tag in set, simple (top) and complex (bottom).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Defining the joint task", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "C1 Repair onsets cannot begin an utterance (by definition of first position repairs needing a preceding reparandum). C2 Repairs must be completed within the utterance in which they begin. C3 Utterances can be interrupted or abandoned, but these are different to within-dialogue-act repairs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Defining the joint task", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Given these constraints, we can generate a joint tag set as a subset of the cross product of both tag schemes. The utterance segmentation tags in Table 1 are combined with the simple strongly incremental disfluency tags described in \u00a73.1. The joint set for both the simple and complex tasks is in Fig. 2 , where 1 indicates the tag is in the set and 0 otherwise. In the simple task, there are 10 tags. The joint set for the full task including disfluency structure detection has 53 possible tags (rather than the full cross product, which would be 92). In reality, in the training corpus, only 43 of these possible combinations were found, so this constituted our tag set in practice. See Fig. 1 (bottom 2 rows) for example sequences.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 297, |
| "end": 303, |
| "text": "Fig. 2", |
| "ref_id": null |
| }, |
| { |
| "start": 689, |
| "end": 695, |
| "text": "Fig. 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Defining the joint task", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Given the formulation of the joint task, we would like to ask the following questions of scalable, automatic approaches to it:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Research questions", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "-w-a word which continues the current utterance and whose following word will continue it -w.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Research questions", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "a word which continues the current utterance and is the last word of it .w-a word which is the beginning of an utterance and whose following word will continue it .w.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Research questions", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "a word constituting an entire utterance Table 1 : The tag set for the continuity of each word within a dialogue act Q1 Given the interaction between the two tasks, can a system which performs both jointly help improve equivalent systems doing the individual tasks? Q2 Given the incremental availability of word timings from state-of-the-art ASR, to what extent can word timing data help performance of either task? Q3 To what extent is it possible to achieve a good online accuracy vs. final accuracy trade-off in a live, incremental, system?", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 40, |
| "end": 47, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Research questions", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "To address these questions we use a combination of a deep learning architecture for sequence labelling and incremental decoding techniques which we will now explain.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Research questions", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Our systems consist of deep learning sequence models which consume incoming words and use word embeddings in addition to other features to predict disfluency and utterance segmentation labels for each word, in a strictly left-to-right, wordby-word fashion. We also use word timings as input to a separate classifier whose output is combined with that of the deep learning architecture in an incremental decoder. See Fig. 3 for the overall architecture. We describe the elements of the system below.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 416, |
| "end": 422, |
| "text": "Fig. 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "LSTMs and Incremental Decoding for Live Prediction", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In our systems we use the following input features:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Input Features", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 Words in a backwards window from the most recent word (transcribed or ASR) \u2022 Durations of words in the current window (from transcription or ASR word timings) \u2022 Part-Of-Speech (POS) tags for words in current window (either reference, or from an incremental CRF tagger)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Input Features", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For incremental ASR, we use the free trial version of IBM's Watson Speech-To-Text service. 3 The service provides good quality ASR on noisy Figure 3 : Schematic structure of the system. data-on our selected heldout data on Switchboard, the average WER is 26.5%. The Watson service, crucially for our task, does not filter out hesitation markers or disfluencies, which is rare for current web-based services (Baumann et al., 2016) . The service also outputs results incrementally, so silence-based end-pointing is not used. The service also returns word timings, which upon manual inspection were close enough to the reference timings to use as features in the live version of our system. In this paper, the durations are not features in the principal RNN but in an orthogonal logistic regression classifier-see \u00a74.3.", |
| "cite_spans": [ |
| { |
| "start": 91, |
| "end": 92, |
| "text": "3", |
| "ref_id": null |
| }, |
| { |
| "start": 407, |
| "end": 429, |
| "text": "(Baumann et al., 2016)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 140, |
| "end": 148, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Input Features", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For POS-tagging, we use the NLTK CRF tagger, which when trained on our training data and tested on our heldout data achieves 0.915 accuracy on all tags, which was sufficiently good for our purposes. Crucially, for the label UH, which is important evidence for an edit term, it achieves an F-score of 0.959.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Input Features", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We use two well-studied deep learning architectures for our sequence labelling task-the Elman Recurrent Neural Network (RNN) and the Long Short-Term Memory (LSTM) RNN. Architecturally the RNNs here reproduce approximately the identical set-up as described in (Mesnil et al., 2013; Hough and Schlangen, 2015) .", |
| "cite_spans": [ |
| { |
| "start": 259, |
| "end": 280, |
| "text": "(Mesnil et al., 2013;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 281, |
| "end": 307, |
| "text": "Hough and Schlangen, 2015)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Architectures", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Input and word embeddings Following (Mes-nil et al., 2013), we use 1-of-N, or 'one-hot', vectors as our raw input to the network, which provide unique indices to dense vectors in a word embedding matrix. The initial word embeddings were obtained from Switchboard data using the python implementation of word2vec in gensim, 4 using a skip-gram context model. The training data for the initial embeddings was cleaned of disfluencies, effecting a 'clean' language model (Johnson and Charniak, 2004) . These embeddings were then further updated as part of the objective function during the task-specific training itself. Instead of single word/POS inputs we use context windows which, like n-gram language models, are backwards from the current word. The internal representation of context windows of length n in the network is created through the ordered concatenation of the n corresponding word embedding vectors of size 50, resulting in an input to the network of dimension R 50n . We use n =2 in our experiments here. RNN architecture and activation functions In addition to the embedding layer, we use a (recurrent) hidden layer of 50 nodes and an output layer the size of our training tag sets (43 nodes for the complex task and 10 nodes for the simple task). The standard Elman RNN dynamics in the recurrent hidden layer at time t is as in (3), where the hidden layer h(t) is calculated as the Sigmoid function (2) of the addition of the weight matrix U applied via dot product to the current input vector x(t) and the weight matrix V applied via dot product to the stored previous value of the hidden layer at time t\u22121, i.e. h(t\u22121).", |
| "cite_spans": [ |
| { |
| "start": 467, |
| "end": 495, |
| "text": "(Johnson and Charniak, 2004)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Architectures", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "s(x) = 1 1 + e \u2212x", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Architectures", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "(2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Architectures", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "h(t) = s(U x(t) + V h(t\u22121))", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Architectures", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We use the standard softmax function for the node activation function of the output layer.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Architectures", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "At decoding time, the compression of the context into the hidden layer allows us to save the current state of the decode live compactly from ASR results as they become available to the network. In order to integrate the new incoming words and POS tags with the history, it is only necessary to store the current hidden layer activation h(t) (and the output softmax layer too, if that is being used by another process), and wait for new information to the input layer.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Architectures", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "LSTM unit In our LSTM, we include recurrent LSTM units that uses the input x(t), the hidden state activation h(t\u22121), and memory cell activation c(t\u22121) to compute the hidden state activation h(t) at time t. It uses a combination of a memory cell c and three types of gates: input gate i, forget gate f , and output gate o to decide if the input needs to be remembered (using the input gate), when the previous memory needs to be retained (forget gate), and when the memory content needs to be output (using the output gate). For each time step t the cell activations c(t) and h(t) are computed by the below steps, whereby the is element-wise multiplication.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Architectures", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "i(t) = s(W i x(t) + U i h(t\u22121) + V i c(t\u22121))", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Architectures", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "f (t) = s(W f x(t) + U f h(t\u22121) + V f c(t\u22121)) c(t) = f (t) c(t\u22121) + i(t) tanh(W c x(t) + U c h(t\u22121)) o(t) = s(W o x(t) + U o h(t\u22121) + V o c(t)) h(t) = o(t) tanh(c(t))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Architectures", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "While many more weight matrices need to be learned (all the W , U and V subscripted matrices), as with the standard RNN, at decoding time it is efficient to store the current decoding state in a compact way, as it is only neccessary to save the activation of the memory cell c(t) and the hidden layer h(t) to save the current state of the network. See Fig. 3 for the schematic overall disfluency detection architecture for the LSTM.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 352, |
| "end": 358, |
| "text": "Fig. 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Architectures", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Learning: error function and parameter update As is common for RNNs (De Mulder et al., 2015) we use negative log likelihood loss (NLL) as a cost function and use stochastic gradient descent over the parameters, including the embedding vectors, to minimize it. We use a batch size of 9 words, consistent with our repair tag scheme. Both networks use a learning rate of 0.005 and L2 regularisation on the parameters to be learned with a weight of 0.0001.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Architectures", |
| "sec_num": "4.2" |
| }, |
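A minimal sketch of the NLL-plus-L2 objective and an SGD update, using a linear softmax classifier as a stand-in for the full network (the learning rate and L2 weight match the paper's settings; everything else here is illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def nll_sgd_step(W, x, y, lr=0.005, l2=0.0001):
    """One SGD step minimizing NLL + L2 for a linear softmax classifier:
    loss = -log p(y|x) + (l2/2)*||W||^2, with the closed-form gradient."""
    p = softmax(W @ x)
    loss = -np.log(p[y]) + 0.5 * l2 * np.sum(W * W)
    grad = np.outer(p - np.eye(len(p))[y], x) + l2 * W  # d(loss)/dW
    return W - lr * grad, loss

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(5, 8))  # 5 tags, 8 input features
x, y = rng.normal(size=8), 2            # one (features, gold tag) example
losses = []
for _ in range(200):
    W, loss = nll_sgd_step(W, x, y)
    losses.append(loss)
```

Repeating the step drives the loss down on this single example; in the paper the same objective is minimized over mini-batches of 9 words.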
| { |
| "text": "Markov model For decoding optimization we use Viterbi decoding on the sequence of softmax output distributions from the network, in the spirit of (Guo et al., 2014). We use a hand-crafted Markov model to ensure that only legal tag sequences for the given tag set are output. In our joint task, this permits 'late' detection of an utterance boundary: even if the probabilities for a -w. tag and a following .w- or .w. tag are not individually the arg max, their combined probability may still yield the best sequence. Similarly, in the complex task, repairs where the evidence for a repair end tag is strong, but the repair onset tag was not the arg max, can be detected at the repair end. From an incremental perspective, Viterbi decoding carries the danger of output 'jitter'. We investigate how different output representations affect output prediction stability in our evaluation. Timing driven classifier As an addition to the decoding step, we experimented with an independent timing driven classifier which consumes the durations of the last three words and outputs the probability that the current word is a fluent continuation or the beginning of a new utterance. We train a logistic regression classifier on our training data. Combining this two-class probability with the probability of the relevant utterance segmentation tags during decoding boosted performance considerably.", |
| "cite_spans": [ |
| { |
| "start": 145, |
| "end": 163, |
| "text": "(Guo et al., 2014)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental decoding and timing driven classifier", |
| "sec_num": "4.3" |
| }, |
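Viterbi decoding over the network's softmax outputs with a hand-crafted legality matrix can be sketched as follows (the three-tag set and the forbidden transitions below are invented for illustration; they are not the paper's actual tag set):

```python
import numpy as np

def viterbi(probs, trans):
    """probs: (T, K) per-word softmax outputs; trans: (K, K) transition
    weights with 0 marking illegal tag bigrams. Returns the best legal
    tag path, which may differ from the per-word arg max."""
    T, K = probs.shape
    logp = np.log(probs + 1e-12)
    logt = np.log(trans + 1e-12)           # illegal transitions -> very low
    delta = logp[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logt + logp[t][None, :]
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0)
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):          # backtrace
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# toy tag set: 0 = fluent, 1 = repair onset, 2 = utterance boundary
trans = np.array([[1, 1, 1],
                  [1, 0, 1],   # forbid onset -> onset (illustrative)
                  [1, 1, 0]])  # forbid boundary -> boundary
probs = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.1, 0.2, 0.7]])
best = viterbi(probs, trans)
```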
| { |
| "text": "Accuracy On transcripts, we calculate repair onset detection accuracy F_rpS, where applicable reparandum word accuracy F_rm, and F1 accuracy for edit term words F_e, which includes interregna. For utterance segmentation we also use word-level F1 scores for utterance boundaries (end-of-utterance words), F_uttSeg. Carrying out the task live, on speech recognition hypotheses which may well not be identical to the annotated gold-standard transcription, requires time-based metrics of local accuracy in a time window (i.e. within this time window, has a disfluency/utterance boundary been detected, even if not on the identical words?); we therefore calculate the F1 score over 10-second windows of each speaker's channel. While this windowing can give higher scores on certain phenomena, it tends to follow the word-level F-score, so it is a good time-based indicator of accuracy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Criteria", |
| "sec_num": "5" |
| }, |
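One simplified reading of the window-based metric: bucket gold and hypothesised event times into 10-second windows and score F1 at the window level (the exact matching criterion used in the paper may differ):

```python
def windowed_f1(gold_times, hyp_times, window=10.0):
    """Window-level F1: a window counts as a true positive if both the
    gold standard and the hypothesis place at least one event (disfluency
    or utterance boundary) inside it, regardless of the exact words."""
    gold = {int(t // window) for t in gold_times}
    hyp = {int(t // window) for t in hyp_times}
    if not gold or not hyp:
        return 0.0
    tp = len(gold & hyp)
    if tp == 0:
        return 0.0
    prec, rec = tp / len(hyp), tp / len(gold)
    return 2 * prec * rec / (prec + rec)

# gold events at 1.2s, 14.8s, 33.0s; hypotheses at 2.0s, 15.5s, 41.0s:
# windows {0, 1, 3} vs {0, 1, 4} -> 2 true positives out of 3 each
f1 = windowed_f1([1.2, 14.8, 33.0], [2.0, 15.5, 41.0])
```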
| { |
| "text": "For utterance segmentation, for comparison to previous work we also use NIST-SU error rate (Ang et al., 2005) . NIST-SU is the ratio of the number of incorrect utterance boundary hypotheses (missed boundaries and false positives) made by a system to the number of reference boundaries.", |
| "cite_spans": [ |
| { |
| "start": 91, |
| "end": 109, |
| "text": "(Ang et al., 2005)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Criteria", |
| "sec_num": "5" |
| }, |
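The NIST-SU error rate is straightforward to compute once boundaries are aligned (here boundaries are given as word indices; note that the rate can exceed 1 when false positives are numerous):

```python
def nist_su(reference, hypothesis):
    """NIST-SU error rate: (missed boundaries + false positives) divided
    by the number of reference boundaries; boundaries as word indices."""
    ref, hyp = set(reference), set(hypothesis)
    misses = len(ref - hyp)       # reference boundaries the system missed
    false_pos = len(hyp - ref)    # hypothesised boundaries not in the gold
    return (misses + false_pos) / len(ref)

# one miss (7) and one false positive (8) against 3 reference boundaries
rate = nist_su(reference={3, 7, 12}, hypothesis={3, 8, 12})
```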
| { |
| "text": "For a coarser-grained metric which covers both tasks, and which is useful in our target domain of interactions in a clinical context (Howes et al., 2014), we look at the per-speaker correlation (Pearson's R) of the rpS : uttSeg ratio. This gives us the best approximation of how good the system is at estimating repair rate per utterance.", |
| "cite_spans": [ |
| { |
| "start": 135, |
| "end": 155, |
| "text": "(Howes et al., 2014)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Criteria", |
| "sec_num": "5" |
| }, |
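The per-speaker correlation can be computed as plain Pearson's R over gold and estimated rpS : uttSeg ratios (the values below are invented for illustration):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical per-speaker repair-per-utterance ratios, gold vs system
gold = [0.10, 0.25, 0.40, 0.15]
pred = [0.12, 0.22, 0.38, 0.18]
r = pearson_r(gold, pred)
```

A high R here means the system ranks speakers by repair rate much as the gold standard does, even if individual counts are imperfect.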
| { |
| "text": "Timeliness and diachronic metrics Crucially for the live nature of the system, we measure latency (i.e. how close to the actual time at which a disfluency or boundary event occurred is it predicted?) and also stability of the output over time (i.e. how much does the output change?). For latency we use Zwarts et al. (2010)'s time-to-detection metric: the average distance (in numbers of words) consumed before first detection of gold-standard repairs, measured from the repair onset word, TD_rpS. 5 We generalize this measure to the other tags of interest to give TD_e and TD_uttSeg and also, particularly crucially for the ASR results, report the metrics in terms of time in seconds. 6 For stability, incorporating insights from the evaluation of incremental processors by Baumann et al. (2011), we measure the edit overhead (EO) of the output labels: the percentage of unnecessary edits (insertions and deletions) required to arrive at the final labels output by the system.", |
| "cite_spans": [ |
| { |
| "start": 295, |
| "end": 315, |
| "text": "Zwarts et al. (2010)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 482, |
| "end": 483, |
| "text": "5", |
| "ref_id": null |
| }, |
| { |
| "start": 670, |
| "end": 671, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Criteria", |
| "sec_num": "5" |
| }, |
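Edit overhead can be sketched as follows, under one simplified reading of the metric: diff successive incremental label hypotheses, count insertions and deletions, and report the share that was unnecessary relative to building the final output directly:

```python
def edit_overhead(hypotheses):
    """EO over a sequence of incremental label-sequence hypotheses: the
    percentage of edits (insertions/deletions between successive
    hypotheses, past the common prefix) beyond the edits needed to emit
    the final labels directly (a simplified reading of Baumann et al.)."""
    total, prev = 0, []
    for hyp in hypotheses:
        common = 0  # length of the shared prefix with the previous output
        while common < min(len(prev), len(hyp)) and prev[common] == hyp[common]:
            common += 1
        total += (len(prev) - common) + (len(hyp) - common)  # dels + ins
        prev = hyp
    necessary = len(hypotheses[-1])  # building the final output directly
    return 100.0 * (total - necessary) / total

# the system flips word 2's label once ("rpS" -> "f") before settling
eo = edit_overhead([["f"], ["f", "rpS"], ["f", "f"], ["f", "f", "e"]])
```

A jitter-free incremental processor that only ever appends labels scores 0% EO; every revision adds to the overhead.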
| { |
| "text": "We experiment with the two joint output representations in Fig. 1 and implement an RNN and LSTM using Theano (Bergstra et al., 2010) as an extension to the code in Mesnil et al. (2013). We also run the three individual versions of the tasks with the tag sets shown in Fig. 1 for comparison. We also train a word-timing-driven classifier which adds information to the decoding step as explained above, to try to answer Q2. 7 Data We train on transcripts and test on both transcripts and ASR hypotheses. We use the standard Switchboard training data for disfluency detection (all conversation numbers beginning sw2*, sw3* in the Penn Treebank III release: 100k utterances, 650K words) and use the standard held-out data (PTB III files sw4[5-9]*: 6.4K utterances, 49K words) as our validation set. We test on the standard test data (PTB III files 4[0-1]*) with punctuation removed from all files. 8 5 Our measure is in fact one word earlier by default than Zwarts et al. (2010), as we take detection after the end of the repair onset word as the earliest possible detection point.", |
| "cite_spans": [ |
| { |
| "start": 107, |
| "end": 130, |
| "text": "(Bergstra et al., 2010)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 162, |
| "end": 182, |
| "text": "Mesnil et al. (2013)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 417, |
| "end": 418, |
| "text": "7", |
| "ref_id": null |
| }, |
| { |
| "start": 951, |
| "end": 971, |
| "text": "Zwarts et al. (2010)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 57, |
| "end": 63, |
| "text": "Fig. 1", |
| "ref_id": null |
| }, |
| { |
| "start": 263, |
| "end": 269, |
| "text": "Fig. 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Set-up", |
| "sec_num": "6" |
| }, |
| { |
| "text": "6 These measures only apply to repairs and utterance boundaries detected correctly.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Set-up", |
| "sec_num": "6" |
| }, |
| { |
| "text": "7 All experiments are reproducible. The code can be downloaded at https://github.com/dsg-bielefeld/deep_disfluency 8 We include partial words as these may in theory become available from the ASR in the live setting. Table 3 : Comparison of the joint vs. individual task performances. For the ASR results evaluation, we only select a subset of the held-out and test data whereby both channels achieved below 40% WER, to ensure good separation; this left us with 18 dialogues in the validation data and 17 dialogues for testing. We train all RNNs for a maximum of 50 epochs, halting early if there is no improvement on the best F_rm score on the transcript validation set for 10 epochs.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 217, |
| "end": 224, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Set-up", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Our dialogue-final accuracy results are in Table 2. On transcripts, our best per-word F_rpS reaches 0.720 and best F_e reaches 0.918. For utterance segmentation, per-word accuracy reaches 0.748 and the lowest NIST-SU error rate is 43.64. This is competitive with (Seeker et al., 2016)'s 0.767 F-score and outperforms (Cuendet, 2006) on the Switchboard data. The best rpS : uttSeg correlation per speaker reaches 0.92 (p<0.0001).", |
| "cite_spans": [ |
| { |
| "start": 263, |
| "end": 284, |
| "text": "(Seeker et al., 2016)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 319, |
| "end": 334, |
| "text": "(Cuendet, 2006)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 43, |
| "end": 50, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "In comparison to incremental approaches, we outperform (Atterer et al., 2008)'s 0.511 accuracy on end-of-utterance detection. Their work allows no prediction lag in a strictly incremental setting, so is at a disadvantage; however, our result of 0.748 on transcripts is reported alongside an average time-to-detection of 0.399 words, which suggests that, when predicted correctly, the uttSeg is on average detected with no latency.", |
| "cite_spans": [ |
| { |
| "start": 55, |
| "end": 77, |
| "text": "(Atterer et al., 2008)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "With the exception of one metric, the LSTM outperforms the RNN on transcripts. The systems using the timing model in general outperform those with lexical information only on the utterance segmentation metrics, whilst not having an impact on disfluency detection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "According to the window-based accuracies, on ASR results there is significant degradation in accuracy for repair onsets (best F_rpS = 0.557); however, utterance segmentation did not suffer the same loss, with the best system achieving 0.685 accuracy. The rpS : uttSeg Pearson's R correlation per speaker reaches 0.81 (p<0.0001), albeit in a system with otherwise poor performance; the second best achieved was 0.79 (p<0.0001).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "For disfluency detection, standard approaches use pre-segmented utterances to evaluate performance, so this result is difficult to compare. However, in the simple task, the repair onset prediction accuracy of 0.720 is respectable (comparable to (Georgila, 2009)), and is useful enough to allow realistic relative repair rates, in line with our motivation. The complex tagging system performs poorly on repairs compared to the literature; however, the lack of segmentation makes this a considerably harder task, in the same way as dialogue act tagging results are lower on unsegmented transcripts (Mart\u00ednez-Hinarejos et al., 2015). Edit term detection performs very well at 0.918, approaching the state-of-the-art on Switchboard reported at 0.938.", |
| "cite_spans": [ |
| { |
| "start": 244, |
| "end": 260, |
| "text": "(Georgila, 2009)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 596, |
| "end": 629, |
| "text": "(Mart\u00ednez-Hinarejos et al., 2015)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "The utility of a joint task As can be seen in Table 3 , the overall best performing systems on the individual tasks do not reach the results of the best performing combined system in any relevant metric. The disfluency-only systems were run ignoring all utterance boundary information, which puts this setting at a disadvantage compared to previous approaches; however, it is clear that on unsegmented data our posing of the task jointly is useful.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 46, |
| "end": 53, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Incrementality Incrementally, the differences between the architectures were negligible; results for the LSTM are in Table 4 . The latency for repair onset detection is very low: repairs are detected as little as 0.196 seconds after the onset word is finished (or, on transcripts, largely directly after the word has been consumed, as TTD_rpS(word) = 0.003). Utterance boundaries were detected just over a second after the end of the last word of the previous utterance. However, the fact that TTD_uttSeg on the word level reaches 0.283 suggests the time-based average is being weighed down by occasional long silences, which could be thresholded in future work. The EO measure of stability is severely affected by jittering ASR hypotheses, but given that its worst result is 21.46%, this is still a fairly stable incremental system.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 112, |
| "end": 119, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Error Analysis To explore the errors being made by the systems, and how the RNN and LSTM may differ in ability, we performed an error analysis on the simple versions with the timing models; see Fig. 4 . One can observe a boost in recall for various repair types in the LSTM, which performs better on repairs with longer reparanda. Characterizing repetitions as verbatim repeats, substitutions as the other repairs marked with a repair phase, and deletes as those without one, we see the LSTM outperforming the RNN on the rarer types. Whilst the problem is attenuated by the memory facility of the LSTM, our best system still suffers from the vanishing gradient problem when predicting longer repairs with reparanda over 3 words long. We also show that in uttSeg detection all systems falter on long-distance projections with coordinating conjunctions, which could potentially be dealt with more easily in a parsing framework or a hierarchical deep learning framework.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 193, |
| "end": 199, |
| "text": "Fig. 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We also investigated the uttSeg detection errors and see that the networks are generally not confusing disfluencies with boundaries. However, our best system incorrectly labelled 3.6% of the reference uttSegs as rpS (hence also affecting the precision of the rpS prediction); upon inspection these were largely abandoned utterances, which according to the constraint C3 we posited above are not marked as disfluencies in the same way intra-utterance repairs are in the reference. Due to the original annotation instructions of (Meteer et al., 1995) , these are segmented and not included in the traditional disfluency detection task. [Fig. 4(c) data: FP, predicted uttSeg for rpS 0.9 0.5; e 3.4 2.7; CC 5.1 3.6; subj 2.0 1.6; proper 0.9 0.6; it 0.5 0.4; grounding 0.7 0.4; other 6.9 2.7; FP all 20.4 12.5.] However,", |
| "cite_spans": [ |
| { |
| "start": 526, |
| "end": 547, |
| "text": "(Meteer et al., 1995)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Figure 4: Error analysis: (a) recall rates for rpS onsets of repairs with different reparandum lengths and (b) types, and (c) the source of errors in uttSeg detection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "intuitively these can be construed as a disfluency type, and in future we will treat them as a special type of uttSeg/disfluency hybrid. As can be seen in Fig. 4 (c), the other main sources of error are coordinating conjunctions (CC) such as 'and' and 'or', nominative subject-marked pronouns like 'I' and 'we' (subj), proper nouns, variants of 'it', and grounding utterances like 'yeah' and 'okay'. uttSeg detection in both systems achieved high precision but relatively low recall.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 155, |
| "end": 165, |
| "text": "Fig. 4 (c)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We have presented the joint task of incremental utterance segmentation and disfluency detection and shown a simple deep learning system which performs it on transcripts and ASR results. As regards the research questions posed in \u00a73.4: in answer to Q1, we showed that, all else being equal, a deep learning system performing both tasks jointly improves over equivalent systems doing the individual tasks. In answer to Q2, we showed that word timing information, both from transcripts and ASR results, helps the utterance segmentation and the joint task across all settings, whilst not aiding disfluency detection on its own. In response to Q3, we achieve a good online accuracy vs. final accuracy trade-off in a live, incremental system, though we still experience some time delays for utterance segmentation in our most accurate system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8" |
| }, |
| { |
| "text": "We conclude that our joint-task system for disfluency detection and utterance segmentation sets a new benchmark for the joint task on Switchboard data, and that, due to its incremental functioning on unsegmented data, including ASR result streams, it is suitable for live systems, such as conversation agents in the psychiatric domain. In future work we intend to optimize the inputs to our networks following this exploration, including using raw acoustic features, and to combine the task with language modelling and dialogue act tagging.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Looking at it from a different perspective, this problem has been called the continuation problem by Levelt (1983): the repair material can only be integrated with the previous material if it is identified as replacing the reparandum.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The other repair type is delete rpEDel. Verbatim reparandum-repair repetitions are subsumed by rpESub.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://radimrehurek.com/gensim/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank the EACL reviewers for their helpful comments. This work was supported by the Cluster of Excellence Cognitive Interaction Technology 'CITEC' (EXC 277) at Bielefeld University, funded by the German Research Foundation (DFG), and the DFG-funded DUEL project (grant SCHL 845/5-1).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Automatic dialog act segmentation and classification in multiparty meetings", |
| "authors": [ |
| { |
| "first": "Jeremy", |
| "middle": [], |
| "last": "Ang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Elizabeth", |
| "middle": [], |
| "last": "Shriberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "ICASSP (1)", |
| "volume": "", |
| "issue": "", |
| "pages": "1061--1064", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeremy Ang, Yang Liu, and Elizabeth Shriberg. 2005. Automatic dialog act segmentation and classifica- tion in multiparty meetings. In ICASSP (1), pages 1061-1064.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Towards incremental endof-utterance detection in dialogue systems", |
| "authors": [ |
| { |
| "first": "Michaela", |
| "middle": [], |
| "last": "Atterer", |
| "suffix": "" |
| }, |
| { |
| "first": "Timo", |
| "middle": [], |
| "last": "Baumann", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Schlangen", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "COLING (Posters)", |
| "volume": "", |
| "issue": "", |
| "pages": "11--14", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michaela Atterer, Timo Baumann, and David Schlangen. 2008. Towards incremental end- of-utterance detection in dialogue systems. In COLING (Posters), pages 11-14.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Evaluation and optimisation of incremental processors", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Baumann", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Bu\u00df", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Schlangen", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Dialogue & Discourse", |
| "volume": "2", |
| "issue": "1", |
| "pages": "113--141", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. Baumann, O. Bu\u00df, and D. Schlangen. 2011. Eval- uation and optimisation of incremental processors. Dialogue & Discourse, 2(1):113-141.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Recognising conversational speech: What an incremental asr should do for a dialogue system and how to get there", |
| "authors": [ |
| { |
| "first": "Timo", |
| "middle": [], |
| "last": "Baumann", |
| "suffix": "" |
| }, |
| { |
| "first": "Casey", |
| "middle": [], |
| "last": "Kennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Julian", |
| "middle": [], |
| "last": "Hough", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Schlangen", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "International Workshop on Dialogue Systems Technology (IWSDS) 2016. Universit\u00e4t", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Timo Baumann, Casey Kennington, Julian Hough, and David Schlangen. 2016. Recognising conversa- tional speech: What an incremental asr should do for a dialogue system and how to get there. In Inter- national Workshop on Dialogue Systems Technology (IWSDS) 2016. Universit\u00e4t Hamburg.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Theano: a cpu and gpu math expression compiler", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Bergstra", |
| "suffix": "" |
| }, |
| { |
| "first": "Olivier", |
| "middle": [], |
| "last": "Breuleux", |
| "suffix": "" |
| }, |
| { |
| "first": "Fr\u00e9d\u00e9ric", |
| "middle": [], |
| "last": "Bastien", |
| "suffix": "" |
| }, |
| { |
| "first": "Pascal", |
| "middle": [], |
| "last": "Lamblin", |
| "suffix": "" |
| }, |
| { |
| "first": "Razvan", |
| "middle": [], |
| "last": "Pascanu", |
| "suffix": "" |
| }, |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Desjardins", |
| "suffix": "" |
| }, |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Turian", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Warde-Farley", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the Python for scientific computing conference (SciPy)", |
| "volume": "4", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Bergstra, Olivier Breuleux, Fr\u00e9d\u00e9ric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Des- jardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. 2010. Theano: a cpu and gpu math expression compiler. In Proceedings of the Python for scientific computing conference (SciPy), volume 4, page 3. Austin, TX.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Model adaptation for sentence unit segmentation from speech", |
| "authors": [ |
| { |
| "first": "S\u00e9bastien", |
| "middle": [], |
| "last": "Cuendet", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "IDIAP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S\u00e9bastien Cuendet. 2006. Model adaptation for sen- tence unit segmentation from speech. Technical re- port, IDIAP.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A survey on the application of recurrent neural networks to statistical language modeling", |
| "authors": [ |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Wim De Mulder", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie-Francine", |
| "middle": [], |
| "last": "Bethard", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Moens", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Computer Speech & Language", |
| "volume": "30", |
| "issue": "1", |
| "pages": "61--98", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wim De Mulder, Steven Bethard, and Marie-Francine Moens. 2015. A survey on the application of recur- rent neural networks to statistical language model- ing. Computer Speech & Language, 30(1):61-98.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Verbal indicators of psychological distress in interactive dialogue with a virtual human", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Devault", |
| "suffix": "" |
| }, |
| { |
| "first": "Kallirroi", |
| "middle": [], |
| "last": "Georgila", |
| "suffix": "" |
| }, |
| { |
| "first": "Ron", |
| "middle": [], |
| "last": "Artstein", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of SigDial", |
| "volume": "", |
| "issue": "", |
| "pages": "193--202", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David DeVault, Kallirroi Georgila, and Ron Artstein. 2013. Verbal indicators of psychological distress in interactive dialogue with a virtual human. In Pro- ceedings of SigDial 2013, pages 193-202.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Using integer linear programming for detecting speech disfluencies", |
| "authors": [ |
| { |
| "first": "Kallirroi", |
| "middle": [], |
| "last": "Georgila", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "109--112", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kallirroi Georgila. 2009. Using integer linear pro- gramming for detecting speech disfluencies. In Pro- ceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Lin- guistics, Companion Volume: Short Papers, pages 109-112. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Joint semantic utterance classification and slot filling with recursive neural networks", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "Gokhan", |
| "middle": [], |
| "last": "Tur", |
| "suffix": "" |
| }, |
| { |
| "first": "Yih", |
| "middle": [], |
| "last": "Wen-Tau", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Zweig", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Spoken Language Technology Workshop (SLT)", |
| "volume": "", |
| "issue": "", |
| "pages": "554--559", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Guo, Gokhan Tur, Wen-tau Yih, and Geoffrey Zweig. 2014. Joint semantic utterance classifica- tion and slot filling with recursive neural networks. In Spoken Language Technology Workshop (SLT), 2014 IEEE, pages 554-559. IEEE.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Joint incremental disfluency detection and dependency parsing", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Honnibal", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Transactions of the Association of Computational Linugistics (TACL)", |
| "volume": "2", |
| "issue": "", |
| "pages": "131--142", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew Honnibal and Mark Johnson. 2014. Joint incremental disfluency detection and dependency parsing. Transactions of the Association of Com- putational Linugistics (TACL), 2:131-142.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Strongly incremental repair detection", |
| "authors": [ |
| { |
| "first": "Julian", |
| "middle": [], |
| "last": "Hough", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Purver", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "78--89", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Julian Hough and Matthew Purver. 2014. Strongly in- cremental repair detection. In Proceedings of the 2014 Conference on Empirical Methods in Natu- ral Language Processing (EMNLP), pages 78-89, Doha, Qatar, October. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Recurrent neural networks for incremental disfluency detection", |
| "authors": [ |
| { |
| "first": "Julian", |
| "middle": [], |
| "last": "Hough", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Schlangen", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of Interspeech", |
| "volume": "", |
| "issue": "", |
| "pages": "849--853", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Julian Hough and David Schlangen. 2015. Recur- rent neural networks for incremental disfluency de- tection. In Proceedings of Interspeech 2015, pages 849-853.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Helping the medicine go down: Repair and adherence in patient-clinician dialogues", |
| "authors": [ |
| { |
| "first": "Christine", |
| "middle": [], |
| "last": "Howes", |
| "suffix": "" |
| }, |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Purver", |
| "suffix": "" |
| }, |
| { |
| "first": "Rose", |
| "middle": [], |
| "last": "McCabe", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [ |
| "G", |
| "T" |
| ], |
| "last": "Healey", |
| "suffix": "" |
| }, |
| { |
| "first": "Mary", |
| "middle": [], |
| "last": "Lavelle", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of SemDial 2012 (SeineDial): The 16th Workshop on the Semantics and Pragmatics of Dialogue", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christine Howes, Matt Purver, Rose McCabe, Patrick GT Healey, and Mary Lavelle. 2012. Helping the medicine go down: Repair and adherence in patient-clinician dialogues. In Proceedings of SemDial 2012 (SeineDial): The 16th Workshop on the Semantics and Pragmatics of Dialogue, page 155.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Helping, I mean assessing psychiatric communication: An application of incremental self-repair detection", |
| "authors": [ |
| { |
| "first": "Christine", |
| "middle": [], |
| "last": "Howes", |
| "suffix": "" |
| }, |
| { |
| "first": "Julian", |
| "middle": [], |
| "last": "Hough", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Purver", |
| "suffix": "" |
| }, |
| { |
| "first": "Rose", |
| "middle": [], |
| "last": "McCabe", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 18th SemDial Workshop on the Semantics and Pragmatics of Dialogue (DialWatt)", |
| "volume": "", |
| "issue": "", |
| "pages": "80--89", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christine Howes, Julian Hough, Matthew Purver, and Rose McCabe. 2014. Helping, I mean assessing psychiatric communication: An application of incremental self-repair detection. In Proceedings of the 18th SemDial Workshop on the Semantics and Pragmatics of Dialogue (DialWatt), pages 80-89, Edinburgh, September.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "A TAGbased noisy-channel model of speech repairs", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "33--39", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Johnson and Eugene Charniak. 2004. A TAG-based noisy-channel model of speech repairs. In ACL, pages 33-39.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Monitoring and self-repair in speech", |
| "authors": [ |
| { |
| "first": "Willem", |
| "middle": [ |
| "J" |
| ], |
| "last": "Levelt", |
| "suffix": "" |
| } |
| ], |
| "year": 1983, |
| "venue": "Cognition", |
| "volume": "14", |
| "issue": "4", |
| "pages": "41--104", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Willem J. Levelt. 1983. Monitoring and self-repair in speech. Cognition, 14(4):41-104.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "It's only a computer: Virtual humans increase willingness to disclose", |
| "authors": [ |
| { |
| "first": "Gale", |
| "middle": [ |
| "M" |
| ], |
| "last": "Lucas", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Gratch", |
| "suffix": "" |
| }, |
| { |
| "first": "Aisha", |
| "middle": [], |
| "last": "King", |
| "suffix": "" |
| }, |
| { |
| "first": "Louis-Philippe", |
| "middle": [], |
| "last": "Morency", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Computers in Human Behavior", |
| "volume": "37", |
| "issue": "", |
| "pages": "94--100", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gale M. Lucas, Jonathan Gratch, Aisha King, and Louis-Philippe Morency. 2014. It's only a computer: Virtual humans increase willingness to disclose. Computers in Human Behavior, 37:94-100.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Toward Incremental Dialogue Act Segmentation in Fast-Paced Interactive Dialogue Systems", |
| "authors": [ |
| { |
| "first": "Ramesh", |
| "middle": [], |
| "last": "Manuvinakurike", |
| "suffix": "" |
| }, |
| { |
| "first": "Maike", |
| "middle": [], |
| "last": "Paetzel", |
| "suffix": "" |
| }, |
| { |
| "first": "Cheng", |
| "middle": [], |
| "last": "Qu", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Schlangen", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Devault", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 17th Annual SIGdial Meeting on Discourse and Dialogue. Forthcoming", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ramesh Manuvinakurike, Maike Paetzel, Cheng Qu, David Schlangen, and David DeVault. 2016. Toward Incremental Dialogue Act Segmentation in Fast-Paced Interactive Dialogue Systems. In Proceedings of the 17th Annual SIGdial Meeting on Discourse and Dialogue. Forthcoming.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Unsegmented dialogue act annotation and decoding with n-gram transducers", |
| "authors": [ |
| { |
| "first": "Carlos-D", |
| "middle": [], |
| "last": "Mart\u00ednez-Hinarejos", |
| "suffix": "" |
| }, |
| { |
| "first": "Jos\u00e9-Miguel", |
| "middle": [], |
| "last": "Bened\u00ed", |
| "suffix": "" |
| }, |
| { |
| "first": "Vicent", |
| "middle": [], |
| "last": "Tamarit", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing", |
| "volume": "23", |
| "issue": "", |
| "pages": "198--211", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carlos-D Mart\u00ednez-Hinarejos, Jos\u00e9-Miguel Bened\u00ed, and Vicent Tamarit. 2015. Unsegmented dialogue act annotation and decoding with n-gram transducers. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23(1):198-211.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Shared understanding in psychiatrist-patient communication: Association with treatment adherence in schizophrenia", |
| "authors": [ |
| { |
| "first": "Rosemarie", |
| "middle": [], |
| "last": "McCabe", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [ |
| "G", |
| "T" |
| ], |
| "last": "Healey", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Priebe", |
| "suffix": "" |
| }, |
| { |
| "first": "Mary", |
| "middle": [], |
| "last": "Lavelle", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Dodwell", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Laugharne", |
| "suffix": "" |
| }, |
| { |
| "first": "Amelia", |
| "middle": [], |
| "last": "Snell", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Bremner", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Patient education and counseling", |
| "volume": "93", |
| "issue": "1", |
| "pages": "73--79", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rosemarie McCabe, Patrick GT Healey, Stefan Priebe, Mary Lavelle, David Dodwell, Richard Laugharne, Amelia Snell, and Stephen Bremner. 2013. Shared understanding in psychiatrist-patient communication: Association with treatment adherence in schizophrenia. Patient education and counseling, 93(1):73-79.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Investigation of recurrent-neuralnetwork architectures and learning methods for spoken language understanding", |
| "authors": [ |
| { |
| "first": "Gr\u00e9goire", |
| "middle": [], |
| "last": "Mesnil", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaodong", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "INTERSPEECH", |
| "volume": "", |
| "issue": "", |
| "pages": "3771--3775", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gr\u00e9goire Mesnil, Xiaodong He, Li Deng, and Yoshua Bengio. 2013. Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding. In INTERSPEECH, pages 3771-3775.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Disfluency annotation stylebook for the switchboard corpus", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Meteer", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Taylor", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Macintyre", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Iyer", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Meteer, A. Taylor, R. MacIntyre, and R. Iyer. 1995. Disfluency annotation stylebook for the switchboard corpus. ms. Technical report, Department of Computer and Information Science, University of Pennsylvania.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Disfluency detection across domains", |
| "authors": [ |
| { |
| "first": "Helena", |
| "middle": [], |
| "last": "Moniz", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaime", |
| "middle": [], |
| "last": "Ferreira", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Batista", |
| "suffix": "" |
| }, |
| { |
| "first": "Isabel", |
| "middle": [], |
| "last": "Trancoso", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "The 6th Workshop on Disfluency in Spontaneous Speech (DiSS)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Helena Moniz, Jaime Ferreira, Fernando Batista, and Isabel Trancoso. 2015. Disfluency detection across domains. In The 6th Workshop on Disfluency in Spontaneous Speech (DiSS).", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Disfluency detection using multi-step stacked learning", |
| "authors": [ |
| { |
| "first": "Xian", |
| "middle": [], |
| "last": "Qian", |
| "suffix": "" |
| }, |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of NAACL-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "820--825", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xian Qian and Yang Liu. 2013. Disfluency detection using multi-step stacked learning. In Proceedings of NAACL-HLT, pages 820-825.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Flexible turn-taking for spoken dialog systems", |
| "authors": [ |
| { |
| "first": "Antoine", |
| "middle": [], |
| "last": "Raux", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Antoine Raux. 2008. Flexible turn-taking for spoken dialog systems. Ph.D. thesis, Carnegie Mellon University.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "A General, Abstract Model of Incremental Dialogue Processing", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Schlangen", |
| "suffix": "" |
| }, |
| { |
| "first": "Gabriel", |
| "middle": [], |
| "last": "Skantze", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Dialogue & Discourse", |
| "volume": "2", |
| "issue": "1", |
| "pages": "83--111", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Schlangen and Gabriel Skantze. 2011. A General, Abstract Model of Incremental Dialogue Processing. Dialogue & Discourse, 2(1):83-111.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "From reaction to prediction: Experiments with computational models of turn-taking", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Schlangen", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of Interspeech 2006, Panel on Prosody of Dialogue Acts and Turn-Taking", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Schlangen. 2006. From reaction to prediction: Experiments with computational models of turn-taking. In Proceedings of Interspeech 2006, Panel on Prosody of Dialogue Acts and Turn-Taking.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "How to train dependency parsers with inexact search for joint sentence boundary detection and parsing of entire documents", |
| "authors": [ |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "Bj\u00f6rkelund", |
| "suffix": "" |
| }, |
| { |
| "first": "Agnieszka", |
| "middle": [], |
| "last": "Falenska", |
| "suffix": "" |
| }, |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Seeker", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonas", |
| "middle": [], |
| "last": "Kuhn", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1923--1934", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anders Bj\u00f6rkelund, Agnieszka Falenska, Wolfgang Seeker, and Jonas Kuhn. 2016. How to train dependency parsers with inexact search for joint sentence boundary detection and parsing of entire documents. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1923-1934, Berlin. ACL.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Prosody-based automatic segmentation of speech into sentences and topics", |
| "authors": [ |
| { |
| "first": "Elizabeth", |
| "middle": [], |
| "last": "Shriberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Stolcke", |
| "suffix": "" |
| }, |
| { |
| "first": "Dilek", |
| "middle": [], |
| "last": "Hakkani-T\u00fcr", |
| "suffix": "" |
| }, |
| { |
| "first": "G\u00f6khan", |
| "middle": [], |
| "last": "T\u00fcr", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Speech communication", |
| "volume": "32", |
| "issue": "1", |
| "pages": "127--154", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Elizabeth Shriberg, Andreas Stolcke, Dilek Hakkani-T\u00fcr, and G\u00f6khan T\u00fcr. 2000. Prosody-based automatic segmentation of speech into sentences and topics. Speech communication, 32(1):127-154.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Preliminaries to a Theory of Speech Disfluencies", |
| "authors": [ |
| { |
| "first": "Elizabeth", |
| "middle": [], |
| "last": "Shriberg", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Elizabeth Shriberg. 1994. Preliminaries to a Theory of Speech Disfluencies. Ph.D. thesis, University of California, Berkeley.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "A deep neural network approach for sentence boundary detection in broadcast news", |
| "authors": [ |
| { |
| "first": "Chenglin", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Lei", |
| "middle": [], |
| "last": "Xie", |
| "suffix": "" |
| }, |
| { |
| "first": "Guangpu", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiong", |
| "middle": [], |
| "last": "Xiao", |
| "suffix": "" |
| }, |
| { |
| "first": "Engsiong", |
| "middle": [], |
| "last": "Chng", |
| "suffix": "" |
| }, |
| { |
| "first": "Haizhou", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of INTERSPEECH", |
| "volume": "", |
| "issue": "", |
| "pages": "2887--2891", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chenglin Xu, Lei Xie, Guangpu Huang, Xiong Xiao, Engsiong Chng, and Haizhou Li. 2014. A deep neural network approach for sentence boundary detection in broadcast news. In Proceedings of INTERSPEECH, pages 2887-2891.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Detecting speech repairs incrementally using a noisy channel approach", |
| "authors": [ |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Zwarts", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Dale", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1371--1378", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Simon Zwarts, Mark Johnson, and Robert Dale. 2010. Detecting speech repairs incrementally using a noisy channel approach. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 1371-1378, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF1": { |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "text": "https://www.ibm.com/watson/ developercloud/speech-to-text.html", |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>Eval.</td><td>System</td><td>Frps</td><td>Frps</td><td>Fe</td><td>Fe (per</td><td>FuttSeg</td><td>FuttSeg</td><td>NIST</td></tr><tr><td>Method</td><td/><td>(per</td><td>(per 10s</td><td>(per</td><td>10s</td><td>(per</td><td>(per 10s</td><td>SU</td></tr><tr><td/><td/><td>word)</td><td>window)</td><td>word)</td><td>window)</td><td>word)</td><td>window)</td><td>(word)</td></tr><tr><td/><td colspan=\"2\">LSTM (uttSeg only) -</td><td>-</td><td>-</td><td>-</td><td>0.727</td><td>0.679</td><td>46.17</td></tr><tr><td>Transcript</td><td>LSTM (disf only)</td><td>0.711</td><td>0.760</td><td>0.912</td><td>0.886</td><td>-</td><td>-</td><td>-</td></tr><tr><td/><td>LSTM (joint task)</td><td>0.719</td><td>0.764</td><td>0.918</td><td>0.889</td><td>0.748</td><td>0.707</td><td>43.64</td></tr><tr><td/><td colspan=\"2\">LSTM (uttSeg only) -</td><td>-</td><td>-</td><td>-</td><td>-</td><td>0.657</td><td>-</td></tr><tr><td>ASR</td><td>LSTM (disf only)</td><td>-</td><td>0.531</td><td>-</td><td>0.721</td><td>-</td><td>-</td><td>-</td></tr><tr><td/><td>LSTM (joint task)</td><td>-</td><td>0.551</td><td>-</td><td>0.727</td><td>-</td><td>0.685</td><td>-</td></tr></table>", |
| "text": "Non-incremental (dialogue-final) results on transcripts and ASR results.", |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "text": "Incremental results on transcripts and ASR results.", |
| "type_str": "table" |
| } |
| } |
| } |
| } |