{
"paper_id": "P15-1048",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:11:35.719603Z"
},
"title": "Efficient Disfluency Detection with Transition-based Parsing",
"authors": [
{
"first": "Shuangzhi",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology \u2021 Microsoft Research",
"location": {}
},
"email": ""
},
{
"first": "Dongdong",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology \u2021 Microsoft Research",
"location": {}
},
"email": "dozhang@microsoft.com"
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology \u2021 Microsoft Research",
"location": {}
},
"email": "mingzhou@microsoft.com"
},
{
"first": "Tiejun",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology \u2021 Microsoft Research",
"location": {}
},
"email": "tjzhao@hit.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatic speech recognition (ASR) outputs often contain various disfluencies. It is necessary to remove these disfluencies before processing downstream tasks. In this paper, an efficient disfluency detection approach based on right-to-left transitionbased parsing is proposed, which can efficiently identify disfluencies and keep ASR outputs grammatical. Our method exploits a global view to capture long-range dependencies for disfluency detection by integrating a rich set of syntactic and disfluency features with linear complexity. The experimental results show that our method outperforms state-of-the-art work and achieves a 85.1% f-score on the commonly used English Switchboard test set. We also apply our method to in-house annotated Chinese data and achieve a significantly higher f-score compared to the baseline of CRF-based approach.",
"pdf_parse": {
"paper_id": "P15-1048",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatic speech recognition (ASR) outputs often contain various disfluencies. It is necessary to remove these disfluencies before processing downstream tasks. In this paper, an efficient disfluency detection approach based on right-to-left transitionbased parsing is proposed, which can efficiently identify disfluencies and keep ASR outputs grammatical. Our method exploits a global view to capture long-range dependencies for disfluency detection by integrating a rich set of syntactic and disfluency features with linear complexity. The experimental results show that our method outperforms state-of-the-art work and achieves a 85.1% f-score on the commonly used English Switchboard test set. We also apply our method to in-house annotated Chinese data and achieve a significantly higher f-score compared to the baseline of CRF-based approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With the development of the mobile internet, speech inputs have become more and more popular in applications where automatic speech recognition (ASR) is the key component to convert speech into text. ASR outputs often contain various disfluencies which create barriers to subsequent text processing tasks like parsing, machine translation and summarization. Usually, disfluencies can be classified into uncompleted words, filled pauses (e.g. \"uh\", \"um\"), discourse markers (e.g. \"I mean\"), editing terms (e.g. \"you know\") and repairs. To identify and remove disfluencies, straightforward rules can be designed to tackle the former four classes of disfluencies since they often belong to a closed set. However, the repair type disfluency poses particularly more difficult problems as their form is more arbitrary. Typically, as shown in Figure 1 , a repair disfluency type consists of a reparandum (\"to Boston\") and a filled pause (\"um\"), followed by its repair (\"to Denver\"). This special structure of disfluency constraint, which exists in many languages such as English and Chinese, reflects the scenarios of spontaneous speech and conversation, where people often correct preceding words with following words when they find that the preceding words are wrong or improper. This procedure might be interrupted and inserted with filled pauses when people are thinking or hesitating. The challenges of detecting repair disfluencies are that reparandums vary in length, may occur everywhere, and are sometimes nested. There are many related works on disfluency detection, that mainly focus on detecting repair type of disfluencies. 
Straightforwardly, disfluency detection can be treated as a sequence labeling problem and solved by well-known machine learning algorithms such as conditional random fields (CRF) or max-margin Markov networks (M 3 N) (Liu et al., 2006; Georgila, 2009; Qian and Liu, 2013) , and prosodic features are also considered in (Kahn et al., 2005; Zhang et al., 2006) . These methods achieve good performance, but are not powerful enough to capture complicated disfluencies with longer spans or distances. Recently, syntax-based models such as transition-based parsers have been used for detecting disfluencies (Honnibal and Johnson, 2014; Rasooli and Tetreault, 2013) . These methods can jointly perform dependency parsing and disfluency detection. But in these methods, great efforts are made to distinguish normal words from disfluent words, as decisions cannot be made immediately from left to right, leading to inefficient implementation as well as performance loss.",
"cite_spans": [
{
"start": 1846,
"end": 1864,
"text": "(Liu et al., 2006;",
"ref_id": "BIBREF9"
},
{
"start": 1865,
"end": 1880,
"text": "Georgila, 2009;",
"ref_id": "BIBREF3"
},
{
"start": 1881,
"end": 1900,
"text": "Qian and Liu, 2013)",
"ref_id": "BIBREF14"
},
{
"start": 1947,
"end": 1966,
"text": "(Kahn et al., 2005;",
"ref_id": "BIBREF7"
},
{
"start": 1967,
"end": 1986,
"text": "Zhang et al., 2006)",
"ref_id": "BIBREF19"
},
{
"start": 2229,
"end": 2257,
"text": "(Honnibal and Johnson, 2014;",
"ref_id": "BIBREF4"
},
{
"start": 2258,
"end": 2286,
"text": "Rasooli and Tetreault, 2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 836,
"end": 844,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose detecting disfluencies using a right-to-left transition-based dependency parsing (R2L parsing), where the words are consumed from right to left to build the parsing tree based on which the current word is predicted to be either disfluent or normal. The proposed models cater to the disfluency constraint and integrate a rich set of features extracted from contexts of lexicons and partial syntactic tree structure, where the parsing model and disfluency predicting model are jointly calculated in a cascaded way. As shown in Figure 2 (b), while the parsing tree is being built, disfluency tags are predicted and attached to the disfluency nodes. Our models are quite efficient with linear complexity of 2 * N (N is the length of input). : An instance of the detection procedure where 'N' stands for a normal word and 'X' a disfluency word. Words with italic font are Reparandums. (a) is the L2R detecting procedure and (b) is the R2L procedure.",
"cite_spans": [],
"ref_spans": [
{
"start": 551,
"end": 559,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Intuitively, compared with previous syntaxbased work such as (Honnibal and Johnson, 2014) that uses left-to-right transition-based parsing (L2R parsing) model, our proposed approach simplifies disfluency detection by sequentially processing each word, without going back to modify the pre-built tree structure of disfluency words. As shown in Figure 2 (a), the L2R parsing based joint approach needs to cut the pre-built dependency link between \"did\" and \"he\" when \"was\" is identified as the repair of \"did\", which is never needed in our method as Figure 2(b). Furthermore, our method overcomes the deficiency issue in de-coding of L2R parsing based joint method, meaning the number of parsing transitions for each hypothesis path is not identical to 2 * N , which leads to the failure of performing optimal search during decoding. For example, the involvement of the extra cut operation in Figure 2 (a) destroys the competition scoring that accumulates over 2 * N transition actions among hypotheses in the standard transition-based parsing. Although the heuristic score, such as the normalization of transition count (Honnibal and Johnson, 2014) , can be introduced, the total scores of all hypotheses are still not statistically comparable from a global view.",
"cite_spans": [
{
"start": 61,
"end": 89,
"text": "(Honnibal and Johnson, 2014)",
"ref_id": "BIBREF4"
},
{
"start": 1119,
"end": 1147,
"text": "(Honnibal and Johnson, 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 343,
"end": 351,
"text": "Figure 2",
"ref_id": null
},
{
"start": 891,
"end": 899,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We conduct the experiments on English Switchboard corpus. The results show that our method can achieve a 85.1% f-score with a gain of 0.7 point over state-of-the-art M 3 N labeling model in (Qian and Liu, 2013) and a gain of 1 point over state-of-the-art joint model proposed in (Honnibal and Johnson, 2014) . We also apply our method on Chinese annotated data. As there is no available public data in Chinese, we annotate 25k Chinese sentences manually for training and testing. We achieve 71.2% f-score with 15 points gained compared to the CRF-based baseline, showing that our models are robust and language independent.",
"cite_spans": [
{
"start": 190,
"end": 210,
"text": "(Qian and Liu, 2013)",
"ref_id": "BIBREF14"
},
{
"start": 279,
"end": 307,
"text": "(Honnibal and Johnson, 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In a typical transition-based parsing, the Shift-Reduce decoding algorithm is applied and a queue and stack are maintained (Zhang and Clark, 2008) . The queue stores the stream of the input and the front of the queue is indexed as the current word. The stack stores the unfinished words which may be linked to the current word or a future word in the queue. When words in the queue are consumed in sequential order, a set of transition actions is applied to build a parsing tree. There are four kinds of transition actions in the parsing process (Zhang and Clark, 2008) , as described below.",
"cite_spans": [
{
"start": 123,
"end": 146,
"text": "(Zhang and Clark, 2008)",
"ref_id": "BIBREF17"
},
{
"start": 546,
"end": 569,
"text": "(Zhang and Clark, 2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based dependency parsing",
"sec_num": "2"
},
{
"text": "\u2022 Shift : Removes the front of the queue and pushes it to the stack.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based dependency parsing",
"sec_num": "2"
},
{
"text": "\u2022 Reduce : Pops the top of the stack.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based dependency parsing",
"sec_num": "2"
},
{
"text": "\u2022 LeftArc : Pops the top of the stack, and links the popped word to the front of the queue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based dependency parsing",
"sec_num": "2"
},
{
"text": "\u2022 RightArc : Links the front of the queue to the top of the stack and, removes the front of the queue and pushes it to the stack.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based dependency parsing",
"sec_num": "2"
},
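{
"text": "The four transition actions above can be sketched as a minimal parser loop. This is an illustrative sketch only, not the authors' implementation; all function and variable names here are invented for the example.

```python
from collections import deque

# Sketch of the four transition actions described above.
# heads maps a word index to the index of its head word.
def parse(actions, n):
    queue = deque(range(n))  # input stream; queue[0] is the current word
    stack = []               # unfinished words
    heads = {}               # child index -> head index
    for act in actions:
        if act == 'Shift':        # move the current word onto the stack
            stack.append(queue.popleft())
        elif act == 'Reduce':     # pop the finished top of the stack
            stack.pop()
        elif act == 'LeftArc':    # stack top depends on the current word
            heads[stack.pop()] = queue[0]
        elif act == 'RightArc':   # current word depends on the stack top,
            heads[queue[0]] = stack[-1]
            stack.append(queue.popleft())  # then it is pushed onto the stack
    return heads

# 'he was great': 6 actions = 2 * N for N = 3 words, as noted below
print(parse(['Shift', 'LeftArc', 'Shift', 'RightArc', 'Reduce', 'Reduce'], 3))
```

Since every word is pushed once and popped once, any complete derivation in this sketch uses exactly 2 * N actions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based dependency parsing",
"sec_num": "2"
},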
{
"text": "The choice of each transition action during parsing is scored by a generalized perceptron (Collins, 2002) which can be trained over a rich set of nonlocal features. In decoding, beam search is performed to search the optimal sequence of transition actions. As each word must be pushed to the stack once and popped off once, the number of actions needed to parse a sentence is always 2 * N , where N is the length of the sentence.",
"cite_spans": [
{
"start": 90,
"end": 105,
"text": "(Collins, 2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based dependency parsing",
"sec_num": "2"
},
{
"text": "Transition-based dependency parsing (Zhang and Clark, 2008) can be performed in either a leftto-right or a right-to-left way, both of which have a performance that is comparable as illustrated in Section 4. However, when they are applied to disfluency detection, their behaviors are very different due to the disfluency structure constraint. We prove that right-to-left transition-based parsing is more efficient than left-to-right transition-based parsing for disfluency detection.",
"cite_spans": [
{
"start": 36,
"end": 59,
"text": "(Zhang and Clark, 2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based dependency parsing",
"sec_num": "2"
},
{
"text": "3 Our method",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based dependency parsing",
"sec_num": "2"
},
{
"text": "Unlike previous joint methods (Honnibal and Johnson, 2014 ; Rasooli and Tetreault, 2013), we introduce dependency parsing into disfluency detection from theory. In the task of disfluency detection, we are given a stream of unstructured words from automatic speech recognition (ASR). We denote the word sequence with W n 1 := w 1 , w 2 ,w 3 ,...,w n , which is actually the inverse order of ASR words that should be w n , w n\u22121 ,w n\u22122 ,...,w 1 . The output of the task is a sequence of binary tags denoted as D n",
"cite_spans": [
{
"start": 30,
"end": 57,
"text": "(Honnibal and Johnson, 2014",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "1 = d 1 , d 2 ,d 3 ,...,d n , where each d i corresponds to w i , indicating whether w i is a dis- fluency word (X) or not (N). 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "Our task can be modeled as formula (1), which is to search the best sequence D * given the stream of words W n 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D * = argmax D P (D n 1 |W n 1 )",
"eq_num": "(1)"
}
],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "The dependency parsing tree is introduced into model (1) to guide detection. The rewritten formula is shown below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D * = argmax D T P (D n 1 , T |W n 1 )",
"eq_num": "(2)"
}
],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "We jointly optimize disfluency detection and parsing with form (3), rather than considering all possible parsing trees:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(D * , T * ) = argmax (D,T ) P (D n 1 , T |W n 1 )",
"eq_num": "(3)"
}
],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "As both the dependency tree and the disfluency tags are generated word by word, we decompose formula (3) into:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(D * , T * ) = argmax (D,T ) n i=1 P (d i , T i 1 |W i 1 , T i\u22121 1 )",
"eq_num": "(4)"
}
],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "where T i 1 is the partial tree after word w i is consumed, d i is the disfluency tag of w i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "We simplify the joint optimization in a cascaded way with two different forms (5) and 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(D * , T * ) = argmax (D,T ) n i=1 P (T i 1 |W i 1 , T i\u22121 1 ) \u00d7 P (d i |W i 1 , T i 1 ) (5) (D * , T * ) = argmax (D,T ) n i=1 P (d i |W i 1 , T i\u22121 1 ) \u00d7 P (T i 1 |W i 1 , T i\u22121 1 , d i )",
"eq_num": "(6)"
}
],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "Here, P(T_1^i | \u00b7) is the parsing model, and P(d_i | \u00b7) is the disfluency model used to predict the disfluency tags conditioned on the contexts of the partial trees that have been built.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "In (5), the parsing model is calculated first, followed by the calculation of the disfluency model. Inspired by (Zhang et al., 2013) , we associate the disfluency tags to the transition actions so that the calculation of P (d i |W i 1 , T i 1 ) can be omitted as d i can be inferred from the partial tree T i 1 . We then get",
"cite_spans": [
{
"start": 112,
"end": 132,
"text": "(Zhang et al., 2013)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(D * , T * ) = argmax (D,T ) n i=1 P (d i , T i 1 |W i 1 , T i\u22121 1 )",
"eq_num": "(7)"
}
],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "Where the parsing and disfluency detection are unified into one model. We refer to this model as the Unified Transition(UT) model. While in (6), the disfluency model is calculated first, followed by the calculation of the parsing model. We model P (d i |.) as a binary classifier to classify whether a word is disfluent or not. We refer to this model as the binary classifier transition (BCT) model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "In model (7), in addition to the standard 4 transition actions mentioned in Section 2, the UT model adds 2 new transition actions which extend the original Shift and RightArc transitions as shown below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unified transition-based model (UT)",
"sec_num": "3.2"
},
{
"text": "\u2022 Dis Shift: Performs what Shift does then marks the pushed word as disfluent. \u2022 Dis RightArc: Adds a virtual link from the front of the queue to the top of the stack which is similar to Right Arc, marking the front of the queue as disfluenct and pushing it to the stack. Figure 3 shows an example of how the UT model works. Given an input \"he did great was great\", the optimal parsing tree is predicted by the UT model. According to the parsing tree, we can get the disfluency tags \"N X X N N\" which have been attached to each word. To ensure the normal words are built grammatical in the parse tree, we apply a constraint to the UT model. UT model constraint: When a word is marked disfluent, all the words in its left and right subtrees will be marked disfluent and all the links of its descendent offsprings will be converted to virtual links, no matter what actions are applied to these words.",
"cite_spans": [],
"ref_spans": [
{
"start": 272,
"end": 280,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Unified transition-based model (UT)",
"sec_num": "3.2"
},
{
"text": "For example, the italic word \"great\" will be marked disfluent, no matter what actions are performed on it. Figure 3 : An example of UT model, where 'N' means the word is a fluent word and 'X' means it is disfluent. Words with italic font are Reparandums.",
"cite_spans": [],
"ref_spans": [
{
"start": 107,
"end": 115,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Unified transition-based model (UT)",
"sec_num": "3.2"
},
{
"text": "In model (6), we perform the binary classifier and the parsing model together by augmenting the Shift-Reduce algorithm with a binary classifier transition(BCT) action:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A binary classifier transition-based model (BCT)",
"sec_num": "3.3"
},
{
"text": "\u2022 BCT : Classifies whether the current word is disfluent or not. If it is, remove it from the queue, push it into the stack which is similar to Shift and then mark it as disfluent, otherwise the original transition actions will be used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A binary classifier transition-based model (BCT)",
"sec_num": "3.3"
},
{
"text": "It is noted that when BCT is performed, the next action must be Reduce. This constraint guarantees that any disfluent word will not have any descendent offspring. Figure 2(b) shows an example of the BCT model. When the partial tree \"great was\" is built, the next word \"did\" is obviously disfluent. Unlike UT model, the BCT will not link the word \"did\" to any word. Instead only a virtual link will add it to the virtual root.",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 174,
"text": "Figure 2(b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "A binary classifier transition-based model (BCT)",
"sec_num": "3.3"
},
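{
"text": "The BCT control flow described above (classify first; a disfluent word is shifted, marked, and immediately reduced so it never acquires descendants) can be sketched as follows. This is an illustrative sketch with invented names; the stub classifier stands in for the trained perceptron model.

```python
# Sketch of one BCT step: a binary classifier is consulted first; a word judged
# disfluent is shifted, marked 'X', and immediately reduced, so it never gets
# descendants. The classifier here is a stub standing in for the trained model.
def bct_step(queue, stack, tags, is_disfluent):
    word = queue[0]
    if is_disfluent(word):
        stack.append(queue.pop(0))  # BCT: shift and mark as disfluent
        tags[word] = 'X'
        stack.pop()                 # forced Reduce: no descendants allowed
        return 'BCT+Reduce'
    tags[word] = 'N'
    return 'standard'               # fall back to the usual transition actions

queue, stack, tags = ['did', 'was'], [], {}
print(bct_step(queue, stack, tags, lambda w: w == 'did'))
```

The forced Reduce after BCT is what keeps every hypothesis path at exactly 2 * N transitions, so beam-search scores remain directly comparable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A binary classifier transition-based model (BCT)",
"sec_num": "3.3"
},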
{
"text": "In practice, we use the same linear model for both models (6) and 7to score a parsing tree as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and decoding",
"sec_num": "3.4"
},
{
"text": "Score(T ) = action \u03c6(action) \u2022 \u03bb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and decoding",
"sec_num": "3.4"
},
{
"text": "Where \u03c6(action) is the feature vector extracted from partial hypothesis T for a certain action and \u03bb is the weight vector. \u03c6(action) \u2022 \u03bb calculates the score of a certain transition action. The score of a parsing tree T is the sum of action scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and decoding",
"sec_num": "3.4"
},
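{
"text": "The linear score above (a sum of sparse dot products, one per transition action) can be sketched as follows. The features and weights here are toy stand-ins invented for illustration, not the paper's feature templates.

```python
# Toy sketch of the linear model: Score(T) = sum over actions of phi(action) . lambda.
# Sparse features are dicts from feature name to value; weights is a sparse vector.
def phi(action, stack_top):
    return {'act=' + action: 1.0,
            'act=' + action + '|top=' + str(stack_top): 1.0}

def action_score(action, stack_top, weights):
    feats = phi(action, stack_top)
    return sum(weights.get(f, 0.0) * v for f, v in feats.items())

def tree_score(derivation, weights):
    # a derivation is a list of (action, stack_top) context pairs
    return sum(action_score(a, top, weights) for a, top in derivation)

weights = {'act=Shift': 0.5, 'act=RightArc': 1.0}
derivation = [('Shift', None), ('RightArc', 'he')]
print(tree_score(derivation, weights))
```

In the paper's setting, the weight vector is learned by the averaged perceptron and the derivation score drives beam search over transition sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and decoding",
"sec_num": "3.4"
},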
{
"text": "In addition to the basic features introduced in (Zhang and Nivre, 2011) that are defined over bag of words and POS-tags as well as tree-based context, our models also integrate three classes of new features combined with Brown cluster features (Brown et al., 1992 ) that relate to the rightto-left transition-based parsing procedure as detailed below. \u2022 N # (a 0..n , b): The count of words among a 0 .. a n that are on the right of the subtree rooted at b. Table 1 summarizes the features we use in the model computation, where w s denotes the top word of the stack, w 0 denotes the front word of the queue and w 0..2 denotes the top three words of the queue. Every p i corresponds to the POS-tag of w i and p 0..2 represents the POS-tags of w 0..2 . In addition, w i c means the Brown cluster of w i . With these symbols, several new feature templates are defined in Table 1 . Both our models have the same feature templates.",
"cite_spans": [
{
"start": 244,
"end": 263,
"text": "(Brown et al., 1992",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 458,
"end": 465,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 869,
"end": 876,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Training and decoding",
"sec_num": "3.4"
},
{
"text": "All templates in (Zhang and Nivre, 2011) New disfluency features Function unigrams (Zhang and Clark, 2008; Zhang and Nivre, 2011), we train our models by averaged perceptron (Collins, 2002) . In decoding, beam search is performed to get the optimal parsing tree as well as the tag sequence.",
"cite_spans": [
{
"start": 17,
"end": 40,
"text": "(Zhang and Nivre, 2011)",
"ref_id": "BIBREF18"
},
{
"start": 83,
"end": 106,
"text": "(Zhang and Clark, 2008;",
"ref_id": "BIBREF17"
},
{
"start": 107,
"end": 107,
"text": "",
"ref_id": null
},
{
"start": 175,
"end": 190,
"text": "(Collins, 2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Basic features",
"sec_num": null
},
{
"text": "\u03b4 I (w s , w 0 );\u03b4 I (p s , p 0 ); \u03b4 L (w 0 , w s );\u03b4 L (p 0 , p s ); \u03b4 R (w 0 , w s );\u03b4 R (p 0 , p s ); N I (w 0 , w s );N I (p 0 , p s ); N # (w 0..2 , w s );N # (p 0..2 , p s ); Function bigrams \u03b4 I (w s , w 0 )\u03b4 I (p s , p 0 ); \u03b4 L (w 0 , w s )\u03b4 L (p 0 , p s ); \u03b4 R (w 0 , w s )\u03b4 R (p 0 , p s ); N I (w 0 , w s )N I (p 0 , p s ); N # (w 0..2 , w s )N # (p 0..2 , p s ); \u03b4 I (w s , w 0 )w s c; \u03b4 I (w s , w 0 )w 0 c; Function trigrams w s w 0 \u03b4 I (w s , w 0 ); w s w 0 \u03b4 I (p s , p 0 );",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic features",
"sec_num": null
},
{
"text": "Our training data is the Switchboard portion of the English Penn Treebank (Marcus et al., 1993) corpus, which consists of telephone conversations about assigned topics. As not all the Switchboard data has syntactic bracketing, we only use the subcorpus of PAESED/MRG/SWBD. Following the experiment settings in (Charniak and Johnson, 2001) , the training subcorpus contains directories 2 and 3 in PAESED/MRG/SWBD and directory 4 is split into test and development sets. We use the Stanford dependency converter (De Marneffe et al., 2006) to get the dependency structure from the Switchboard corpus, as Honnibal and Johnson (2014) prove that Stanford converter is robust to the Switchboard data.",
"cite_spans": [
{
"start": 74,
"end": 95,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF10"
},
{
"start": 310,
"end": 338,
"text": "(Charniak and Johnson, 2001)",
"ref_id": "BIBREF1"
},
{
"start": 510,
"end": 536,
"text": "(De Marneffe et al., 2006)",
"ref_id": "BIBREF2"
},
{
"start": 601,
"end": 628,
"text": "Honnibal and Johnson (2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.1"
},
{
"text": "For our Chinese experiments, no public Chinese corpus is available. We annotate about 25k spoken sentences with only disfluency annotations according to the guideline proposed by Meteer et al. (1995) . In order to generate similar data format as English Switchboard corpus, we use Chinese dependency parsing trained on the Chinese Treebank corpus to parse the annotated data and use these parsed data for training and testing . For our Chinese experiment setting, we respectively select about 2k sentences for development and testing. The rest are used for training.",
"cite_spans": [
{
"start": 179,
"end": 199,
"text": "Meteer et al. (1995)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.1"
},
{
"text": "To train the UT model, we create data format adaptation by replacing the original Shift and RightArc of disfluent words with Dis Shift and Dis RightArc, since they are just extensions of Shift and RightArc. For the BCT model, disfluent words are directly depended to the root node and all their links and labels are removed. We then link all the fluent children of disfluent words to parents of disfluent words. We also remove partial words and punctuation from data to simulate speech recognizer results where such information is not available . Additionally, following Honnibal and Johnson (2014) , we remove all one token sentences as these sentences are trivial for disfluency detection, then lowercase the text and discard filled pauses like \"um\" and \"uh\".",
"cite_spans": [
{
"start": 571,
"end": 598,
"text": "Honnibal and Johnson (2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.1"
},
{
"text": "The evaluation metrics of disfluency detection are precision (Prec.), recall (Rec.) and f-score (F1). For parsing accuracy metrics, we use unlabeled attachment score (UAS) and labeled attachment score (LAS). For our primary comparison, we evaluate the widely used CRF labeling model, the state-of-the-art M 3 N model presented by Qian and Liu (2013) which has been commonly used as baseline in previous works and the state-of-the-art L2R parsing based joint model proposed by Honnibal and Johnson (2014).",
"cite_spans": [
{
"start": 330,
"end": 349,
"text": "Qian and Liu (2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.1"
},
{
"text": "The evaluation results of both disfluency detection and parsing accuracy are presented in (Qian and Liu, 2013) . H&J is the L2R parsing based joint model in (Honnibal and Johnson, 2014) . The results of M 3 N \u2020 come from the experiments with toolkit released by Qian and Liu (2013) on our pre-processed corpus.",
"cite_spans": [
{
"start": 90,
"end": 110,
"text": "(Qian and Liu, 2013)",
"ref_id": "BIBREF14"
},
{
"start": 157,
"end": 185,
"text": "(Honnibal and Johnson, 2014)",
"ref_id": "BIBREF4"
},
{
"start": 262,
"end": 281,
"text": "Qian and Liu (2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of disfluency detection on English Swtichboard corpus",
"sec_num": "4.2.1"
},
{
"text": "sults reported in (Qian and Liu, 2013) . The results of M 3 N \u2020 come from our experiments with the toolkit 2 released by Qian and Liu (2013) which uses our data set with the same pre-processing. It is comparable between our models and the L2R parsing based joint model presented by Honnibal and Johnson 2014, as we all conduct experiments on the same pre-processed data set. In order to compare parsing accuracy, we use the CRF and M 3 N \u2020 model to pre-process the test set by removing all the detected disfluencies, then evaluate the parsing performance on the processed set. From the table, our BCT model with new disfluency features achieves the best performance on disfluency detection as well as dependency parsing. The performance of the CRF model is low, because the local features are not powerful enough to capture long span disfluencies. Our main comparison is with the M 3 N \u2020 labeling model and the L2R parsing based model by Honnibal and Johnson (2014) . As illustrated in Table 2 , the BCT model outperforms the M 3 N \u2020 model (we got an accuracy of 84.4%, though 84.1% was reported in their paper) and the L2R parsing based model respectively by 0.7 point and 1 point on disfluency detection, which shows our method can efficiently tackle disfluencies. This is because our method can cater extremely well to the disfluency constraint and perform optimal search with identical transition counts over all hypotheses in beam search. Furthermore, our global syntactic and dis- 2 The toolkit is available at https://code.google.com/p/disfluency-detection/downloads. fluency features can help capture long-range dependencies for disfluency detection. However, the UT model does not perform as well as BCT. This is because the UT model suffers from the risk that normal words may be linked to disfluencies which may bring error propagation in decoding. 
In addition our models with only basic features respectively score about 3 points below the models adding new features, which shows that these features are important for disfluency detection. In comparing parsing accuracy, our BCT model outperforms all the other models, showing that this model is more robust on disfluent parsing.",
"cite_spans": [
{
"start": 18,
"end": 38,
"text": "(Qian and Liu, 2013)",
"ref_id": "BIBREF14"
},
{
"start": 121,
"end": 140,
"text": "Qian and Liu (2013)",
"ref_id": "BIBREF14"
},
{
"start": 938,
"end": 965,
"text": "Honnibal and Johnson (2014)",
"ref_id": "BIBREF4"
},
{
"start": 1487,
"end": 1488,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 986,
"end": 993,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Performance of disfluency detection on English Swtichboard corpus",
"sec_num": "4.2.1"
},
{
"text": "In this section, we further analyze the frequency of different part-of-speeches in disfluencies and test the performance on different part-of-speeches. Five classes of words take up more than 73% of all disfluencies as shown in Table 3 , which are pronouns (contain PRP and PRP$), verbs (contain VB,VBD,VBP,VBZ,VBN), determiners (contain DT), prepositions (contain IN) and conjunctions (contain CC). Obviously, these classes of words appear frequently in our communication.",
"cite_spans": [],
"ref_spans": [
{
"start": 228,
"end": 235,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance of disfluency detection on different part-of-speeches",
"sec_num": "4.2.2"
},
{
"text": "Pron. Verb Dete. Prep. Conj. Others Dist. 30.2% 14.7% 13% 8.7% 6.7% 26.7% Table 3 :",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 81,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance of disfluency detection on different part-of-speeches",
"sec_num": "4.2.2"
},
{
"text": "Distribution of different part-ofspeeches in disfluencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of disfluency detection on different part-of-speeches",
"sec_num": "4.2.2"
},
{
"text": "Conj.=conjunction; Dete.=determiner; Pron.=pronoun; Prep.= preposition. Table 4 illustrates the performance (f-score) on these classes of words. The results of L2R parsing based joint model in (Honnibal and Johnson, 2014) As shown in Table 4 , our BCT model outperforms all other models except that the performance on determiner is lower than M 3 N \u2020 , which shows that our algorithm can significantly tackle common disfluencies.",
"cite_spans": [
{
"start": 193,
"end": 221,
"text": "(Honnibal and Johnson, 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 72,
"end": 79,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 234,
"end": 241,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Performance of disfluency detection on different part-of-speeches",
"sec_num": "4.2.2"
},
{
"text": "Chinese annotated corpus",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of disfluency detection on",
"sec_num": "4.2.3"
},
{
"text": "In addition to English experiments, we also apply our method on Chinese annotated data. As there is no standard Chinese corpus, no Chinese experimental results are reported in (Honnibal and Johnson, 2014; Qian and Liu, 2013) . We only use the CRF-based labeling model with lexical and POStag features as baselines. 86.7% 59.5% 70.6% BCT(+new features) 85.5% 61% 71.2% Table 5 : Disfluency detection performance on Chinese annotated data.",
"cite_spans": [
{
"start": 176,
"end": 204,
"text": "(Honnibal and Johnson, 2014;",
"ref_id": "BIBREF4"
},
{
"start": 205,
"end": 224,
"text": "Qian and Liu, 2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 368,
"end": 375,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Performance of disfluency detection on",
"sec_num": "4.2.3"
},
{
"text": "Our models outperform the CRF model with bag of words and POS-tag features by more than 15 points on f-score which shows that our method is more effective. As shown latter in 4.2.4, the standard transition-based parsing is not robust in parsing disfluent text. There are a lot of parsing errors in Chinese training data. Even though we are still able to get promising results with less data and un-golden parsing annotations. We believe that if we were to have the golden Chinese syntactic annotations and more data, we would get much better results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of disfluency detection on",
"sec_num": "4.2.3"
},
{
"text": "parsing In order to show whether the advantage of the BCT model is caused by the disfluency constraint or the difference between R2L and L2R parsing models, in this section, we make a comparison between the original left-to-right transition-based parsing and right-to-left parsing. These experiments are performed with the Penn Treebank (PTB) Wall Street Journal (WSJ) corpus. We follow the standard approach to split the corpus as 2-21 for training, 22 for development and section 23 for testing (Mc-Donald et al., 2005) . The features for the two parsers are basic features in Table 1 . The POStagger model that we implement for a pre-process before parsing also uses structured perceptron for training and can achieve a competitive accuracy of 96.7%. The beam size for both POS-tagger and parsing is set to 5. The parsing accuracy on SWBD is lower than WSJ which means that the parsers are more robust on written text data. The performances of R2L and L2R parsing are comparable on both SWBD and WSJ test sets. This demonstrates that the effectiveness of our disfluency detection model mainly relies on catering to the disfluency constraint by using R2L parsing based approach, instead of the difference in parsing models between L2R and R2L parsings.",
"cite_spans": [
{
"start": 497,
"end": 521,
"text": "(Mc-Donald et al., 2005)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 579,
"end": 586,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Performance of transition-based",
"sec_num": "4.2.4"
},
{
"text": "In practice, disfluency detection has been extensively studied in both speech processing field and natural language processing field. Noisy channel models have been widely used in the past to detect disfluencies. proposed a TAG-based noisy channel model where the TAG model was used to find rough copies. Thereafter, a language model and MaxEnt reranker were added to the noisy channel model by . Following their framework, Zwarts and Johnson (2011) extended this model using minimal expected f-loss oriented nbest reranking with additional corpus for language model training.",
"cite_spans": [
{
"start": 424,
"end": 449,
"text": "Zwarts and Johnson (2011)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "5"
},
{
"text": "Recently, the max-margin markov networks (M 3 N) based model has achieved great improvement in this task. Qian and Liu (2013) presented a multi-step learning method using weighted M 3 N model for disfluency detection. They showed that M 3 N model outperformed many other labeling models such as CRF model. Following this work, Wang et al. (2014) used a beam-search decoder to combine multiple models such as M 3 N and language model, they achieved the highest f-score. However, direct comparison with their work is difficult as they utilized the whole SWBD data while we only use the subcorpus with syntactic annotation which is only half the SWBD corpus and they also used extra corpus for language model training.",
"cite_spans": [
{
"start": 106,
"end": 125,
"text": "Qian and Liu (2013)",
"ref_id": "BIBREF14"
},
{
"start": 327,
"end": 345,
"text": "Wang et al. (2014)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "5"
},
{
"text": "Additionally, syntax-based approaches have been proposed which concern parsing and disfluency detection together. Lease and Johnson (2006) involved disfluency detection in a PCFG parser to parse the input along with detecting disfluencies. Miller and Schuler (2008) used a right corner transform of syntax trees to produce a syntactic tree with speech repairs. But their performance was not as good as labeling models. There exist two methods published recently which are similar to ours. Rasooli and Tetreault (2013) designed a joint model for both disfluency detection and dependency parsing. They regarded the two tasks as a two step classifications. Honnibal and Johnson (2014) presented a new joint model by extending the original transition actions with a new \"Edit\" transition. They achieved the state-of-theart performance on both disfluency detection and parsing. But this model suffers from the problem that the number of transition actions is not identical for different hypotheses in decoding, leading to the failure of performing optimal search. In contrast, our novel right-to-left transition-based joint method caters to the disfluency constraint which can not only overcome the decoding deficiency in previous work but also achieve significantly higher performance on disfluency detection as well as dependency parsing.",
"cite_spans": [
{
"start": 114,
"end": 138,
"text": "Lease and Johnson (2006)",
"ref_id": "BIBREF8"
},
{
"start": 240,
"end": 265,
"text": "Miller and Schuler (2008)",
"ref_id": "BIBREF13"
},
{
"start": 654,
"end": 681,
"text": "Honnibal and Johnson (2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "5"
},
{
"text": "In this paper, we propose a novel approach for disfluency detection. Our models jointly perform parsing and disfluency detection from right to left by integrating a rich set of disfluency features which can yield parsing structure and difluency tags at the same time with linear complexity. The algorithm is easy to implement without complicated backtrack operations. Experiential results show that our approach outperforms the baselines on the English Switchboard corpus and experiments on the Chinese annotated corpus also show the language independent nature of our method. The state-of-the-art performance on disfluency detection and dependency parsing can benefit the downstream tasks of text processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "In future work, we will try to add new classes of features to further improve performance by capturing the property of disfluencies. We would also like to make an end-to-end MT test over transcribed speech texts with disfluencies removed based on the method proposed in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "We just use tag 'N' to represent a normal word, in practice normal words will not be tagged anything by default.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, pages 1-8. Association for Computational Linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are grateful to the anonymous reviewers for their insightful comments. We also thank Mu Li, Shujie Liu, Lei Cui and Nan Yang for the helpful discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Class-based n-gram models of natural language",
"authors": [
{
"first": "",
"middle": [],
"last": "Peter F Brown",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Desouza",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "Vincent J Della",
"middle": [],
"last": "Mercer",
"suffix": ""
},
{
"first": "Jenifer C",
"middle": [],
"last": "Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lai",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational linguistics",
"volume": "18",
"issue": "4",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai. 1992. Class-based n-gram models of natural language. Computational linguistics, 18(4):467-479.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Edit detection and parsing for transcribed speech",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak and Mark Johnson. 2001. Edit detec- tion and parsing for transcribed speech. In Proceed- ings of the second meeting of the North American Chapter of the Association for Computational Lin- guistics on Language technologies, pages 1-9. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Generating typed dependency parses from phrase structure parses",
"authors": [
{
"first": "Marie-Catherine De",
"middle": [],
"last": "Marneffe",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of LREC",
"volume": "6",
"issue": "",
"pages": "449--454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine De Marneffe, Bill MacCartney, Christopher D Manning, et al. 2006. Generat- ing typed dependency parses from phrase structure parses. In Proceedings of LREC, volume 6, pages 449-454.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Using integer linear programming for detecting speech disfluencies",
"authors": [
{
"first": "Kallirroi",
"middle": [],
"last": "Georgila",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers",
"volume": "",
"issue": "",
"pages": "109--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kallirroi Georgila. 2009. Using integer linear pro- gramming for detecting speech disfluencies. In Pro- ceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Lin- guistics, Companion Volume: Short Papers, pages 109-112. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Joint incremental disfluency detection and dependency parsing",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "131--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal and Mark Johnson. 2014. Joint incremental disfluency detection and dependency parsing. Transactions of the Association for Com- putational Linguistics, 2:131-142.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A tagbased noisy channel model of speech repairs",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, page 33. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson and Eugene Charniak. 2004. A tag- based noisy channel model of speech repairs. In Proceedings of the 42nd Annual Meeting on Asso- ciation for Computational Linguistics, page 33. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An improved model for recognizing disfluencies in conversational speech",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Lease",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of Rich Transcription Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson, Eugene Charniak, and Matthew Lease. 2004. An improved model for recognizing disflu- encies in conversational speech. In Proceedings of Rich Transcription Workshop.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Effective use of prosody in parsing conversational speech",
"authors": [
{
"first": "G",
"middle": [],
"last": "Jeremy",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Kahn",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Lease",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mari",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ostendorf",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the conference on human language technology and empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "233--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeremy G Kahn, Matthew Lease, Eugene Charniak, Mark Johnson, and Mari Ostendorf. 2005. Effective use of prosody in parsing conversational speech. In Proceedings of the conference on human language technology and empirical methods in natural lan- guage processing, pages 233-240. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Early deletion of fillers in processing conversational speech",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Lease",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers",
"volume": "",
"issue": "",
"pages": "73--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Lease and Mark Johnson. 2006. Early dele- tion of fillers in processing conversational speech. In Proceedings of the Human Language Technol- ogy Conference of the NAACL, Companion Volume: Short Papers, pages 73-76. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Enriching speech recognition with automatic detection of sentence boundaries and disfluencies. Audio, Speech, and Language Processing",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "Dustin",
"middle": [],
"last": "Hillard",
"suffix": ""
},
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Harper",
"suffix": ""
}
],
"year": 2006,
"venue": "IEEE Transactions on",
"volume": "14",
"issue": "5",
"pages": "1526--1540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Elizabeth Shriberg, Andreas Stolcke, Dustin Hillard, Mari Ostendorf, and Mary Harper. 2006. Enriching speech recognition with automatic detec- tion of sentence boundaries and disfluencies. Audio, Speech, and Language Processing, IEEE Transac- tions on, 14(5):1526-1540.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Building a large annotated corpus of english: The penn treebank",
"authors": [
{
"first": "P",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large anno- tated corpus of english: The penn treebank. Compu- tational linguistics, 19(2):313-330.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Online large-margin training of dependency parsers",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "91--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of de- pendency parsers. In Proceedings of the 43rd an- nual meeting on association for computational lin- guistics, pages 91-98. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Dysfluency annotation stylebook for the switchboard corpus",
"authors": [
{
"first": "W",
"middle": [],
"last": "Marie",
"suffix": ""
},
{
"first": "Ann",
"middle": [
"A"
],
"last": "Meteer",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "Rukmini",
"middle": [],
"last": "Macintyre",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Iyer",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie W Meteer, Ann A Taylor, Robert MacIntyre, and Rukmini Iyer. 1995. Dysfluency annotation stylebook for the switchboard corpus. University of Pennsylvania.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A unified syntactic model for parsing fluent and disfluent speech",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Schuler",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers",
"volume": "",
"issue": "",
"pages": "105--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Miller and William Schuler. 2008. A unified syn- tactic model for parsing fluent and disfluent speech. In Proceedings of the 46th Annual Meeting of the As- sociation for Computational Linguistics on Human Language Technologies: Short Papers, pages 105- 108. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Disfluency detection using multi-step stacked learning",
"authors": [
{
"first": "Xian",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2013,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "820--825",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xian Qian and Yang Liu. 2013. Disfluency detection using multi-step stacked learning. In HLT-NAACL, pages 820-825.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Joint parsing and disfluency detection in linear time",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Sadegh Rasooli",
"suffix": ""
},
{
"first": "Joel",
"middle": [
"R"
],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "124--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Sadegh Rasooli and Joel R Tetreault. 2013. Joint parsing and disfluency detection in lin- ear time. In EMNLP, pages 124-129.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A beam-search decoder for disfluency detection",
"authors": [
{
"first": "Xuancong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Khe Chai",
"middle": [],
"last": "Hwee Tou Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuancong Wang, Hwee Tou Ng, and Khe Chai Sim. 2014. A beam-search decoder for disfluency detec- tion. In Proc. of COLING.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A tale of two parsers: investigating and combining graphbased and transition-based dependency parsing using beam-search",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "562--571",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Stephen Clark. 2008. A tale of two parsers: investigating and combining graph- based and transition-based dependency parsing us- ing beam-search. In Proceedings of the Conference on Empirical Methods in Natural Language Pro- cessing, pages 562-571. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Transition-based dependency parsing with rich non-local features",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers",
"volume": "2",
"issue": "",
"pages": "188--193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies: short papers-Volume 2, pages 188-193. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A progressive feature selection algorithm for ultra large feature spaces",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Fuliang",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Feng",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "561--568",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Zhang, Fuliang Weng, and Zhe Feng. 2006. A pro- gressive feature selection algorithm for ultra large feature spaces. In Proceedings of the 21st Interna- tional Conference on Computational Linguistics and the 44th annual meeting of the Association for Com- putational Linguistics, pages 561-568. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Punctuation prediction with transition-based parsing",
"authors": [
{
"first": "Dongdong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shuangzhi",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL (1)",
"volume": "",
"issue": "",
"pages": "752--760",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dongdong Zhang, Shuangzhi Wu, Nan Yang, and Mu Li. 2013. Punctuation prediction with transition-based parsing. In ACL (1), pages 752- 760.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The impact of language models and loss functions on repair disfluency detection",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Zwarts",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "703--711",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Zwarts and Mark Johnson. 2011. The impact of language models and loss functions on repair disflu- ency detection. In Proceedings of the 49th Annual Meeting of the Association for Computational Lin- guistics: Human Language Technologies-Volume 1, pages 703-711. Association for Computational Lin- guistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "A typical example of repair type disfluency consists of FP (Filled Pause), RM (Reparandum), and RP (Repair). The preceding RM is corrected by the following RP.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Figure 2: An instance of the detection procedure where 'N' stands for a normal word and 'X' a disfluency word. Words with italic font are Reparandums. (a) is the L2R detecting procedure and (b) is the R2L procedure.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Simple repetition function \u2022 \u03b4 I (a, b): A logic function which indicates whether a and b are identical. Syntax-based repetition function \u2022 \u03b4 L (a, b): A logic function which indicates whether a is a left child of b. \u2022 \u03b4 R (a, b): A logic function which indicates whether a is a right child of b. Longest subtree similarity function \u2022 N I (a, b): The count of identical children on the left side of the root node between subtrees rooted at a and b.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"type_str": "table",
"text": "Feature templates designed for disfluency detection and dependency parsing.",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF1": {
"type_str": "table",
"text": "The accuracy of M 3 N directly refers to the re-",
"num": null,
"content": "<table><tr><td/><td colspan=\"5\">Disfluency detection accuracy Parsing accuracy</td></tr><tr><td>Method</td><td>Prec.</td><td>Rec.</td><td>F1</td><td>UAS</td><td>LAS</td></tr><tr><td>CRF(BOW)</td><td>81.2%</td><td>44.9%</td><td>57.8%</td><td>88.7%</td><td>84.7%</td></tr><tr><td>CRF(BOW+POS)</td><td>88.3%</td><td>62.2%</td><td>73.1%</td><td>89.2%</td><td>85.6%</td></tr><tr><td>M 3 N</td><td>N/A</td><td>N/A</td><td>84.1%</td><td>N/A</td><td>N/A</td></tr><tr><td>M 3 N \u2020</td><td>90.5%</td><td>79.1%</td><td>84.4%</td><td>91%</td><td>88.2%</td></tr><tr><td>H&amp;J</td><td>N/A</td><td>N/A</td><td>84.1%</td><td>90.5%</td><td>N/A</td></tr><tr><td>UT(basic features)</td><td>86%</td><td>72.5%</td><td>78.7%</td><td>91.9%</td><td>89.0%</td></tr><tr><td>UT(+new features)</td><td>88.8%</td><td>75.1%</td><td>81.3%</td><td>92.1%</td><td>89.4%</td></tr><tr><td>BCT(basic features)</td><td>88.2%</td><td>77.9%</td><td>82.7%</td><td>92.1%</td><td>89.3%</td></tr><tr><td>BCT(+new features)</td><td>90.3%</td><td>80.5%</td><td>85.1%</td><td>92.2%</td><td>89.6%</td></tr></table>",
"html": null
},
"TABREF2": {
"type_str": "table",
"text": "Disfluency detection and parsing accuracies on English Switchboard data. The accuracy of M 3 N refers to the result reported in",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF3": {
"type_str": "table",
"text": "are not listed because we cannot get such detailed data.",
"num": null,
"content": "<table><tr><td/><td>CRF (BOW)</td><td>CRF (BOW+POS)</td><td>M 3 N \u2020</td><td>UT (+feat.)</td><td>BCT (+feat.)</td></tr><tr><td>Pron.</td><td>73.9%</td><td>85.0%</td><td>92.0%</td><td>91.5%</td><td>93.8%</td></tr><tr><td>Verb</td><td>38.2%</td><td>64.8%</td><td>84.2%</td><td>82.3%</td><td>84.7%</td></tr><tr><td>Dete.</td><td>66.8%</td><td>80.0%</td><td>88.0%</td><td>83.7%</td><td>87.0%</td></tr><tr><td>Prep.</td><td>60.0%</td><td>71.5%</td><td>79.1%</td><td>76.1%</td><td>79.3%</td></tr><tr><td>Conj.</td><td>75.2%</td><td>82.2%</td><td>81.6%</td><td>79.5%</td><td>83.2%</td></tr><tr><td>Others</td><td>43.2%</td><td>61.0%</td><td>78.4%</td><td>72.3%</td><td>79.1%</td></tr></table>",
"html": null
},
"TABREF4": {
"type_str": "table",
"text": "Performance on different classes of words. Dete.=determiner; Pron.=pronoun; Conj.=conjunction; Prep.=preposition. feat.=new disfluency features.",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF5": {
"type_str": "table",
"text": "shows the results of Chinese disfluency detection.",
"num": null,
"content": "<table><tr><td>Model</td><td>Prec.</td><td>Rec.</td><td>F1</td></tr><tr><td>CRF(BOW)</td><td>89.5%</td><td>35.6%</td><td>50.9%</td></tr><tr><td>CRF(BOW+POS)</td><td>83.4%</td><td>41.6%</td><td>55.5%</td></tr><tr><td>UT(+new features)</td><td/><td/><td/></tr></table>",
"html": null
},
"TABREF6": {
"type_str": "table",
"text": "presents the results on WSJ test set and Switchboard (SWBD) test set.",
"num": null,
"content": "<table><tr><td>Data sets</td><td>Model</td><td>UAS</td><td>LAS</td></tr><tr><td rowspan=\"2\">WSJ</td><td>L2R Parsing</td><td>92.1%</td><td>89.8%</td></tr><tr><td>R2L Parsing</td><td>92.0%</td><td>89.6%</td></tr><tr><td rowspan=\"2\">SWBD</td><td>L2R Parsing</td><td>88.4%</td><td>84.4%</td></tr><tr><td>R2L Parsing</td><td>88.7%</td><td>84.8%</td></tr></table>",
"html": null
},
"TABREF7": {
"type_str": "table",
"text": "Performance of our parsers on different test sets.",
"num": null,
"content": "<table/>",
"html": null
}
}
}
}