{
"paper_id": "D12-1046",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:23:47.729998Z"
},
"title": "Joint Chinese Word Segmentation, POS Tagging and Parsing",
"authors": [
{
"first": "Xian",
"middle": [],
"last": "Qian",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Texas at Dallas",
"location": {}
},
"email": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Texas at Dallas",
"location": {}
},
"email": "yangl@hlt.utdallas.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we propose a novel decoding algorithm for discriminative joint Chinese word segmentation, part-of-speech (POS) tagging, and parsing. Previous work often used a pipeline method: Chinese word segmentation followed by POS tagging and parsing, which suffers from error propagation and cannot leverage information from later modules in earlier components. In our approach, we train the three individual models separately during training, and incorporate them in a unified framework during decoding. We extend the CYK parsing algorithm so that it can handle word segmentation and POS tagging features. As far as we know, this is the first work on joint Chinese word segmentation, POS tagging and parsing. Our experimental results on the Chinese Tree Bank 5 corpus show that our approach outperforms the state-of-the-art pipeline system.",
"pdf_parse": {
"paper_id": "D12-1046",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we propose a novel decoding algorithm for discriminative joint Chinese word segmentation, part-of-speech (POS) tagging, and parsing. Previous work often used a pipeline method: Chinese word segmentation followed by POS tagging and parsing, which suffers from error propagation and cannot leverage information from later modules in earlier components. In our approach, we train the three individual models separately during training, and incorporate them in a unified framework during decoding. We extend the CYK parsing algorithm so that it can handle word segmentation and POS tagging features. As far as we know, this is the first work on joint Chinese word segmentation, POS tagging and parsing. Our experimental results on the Chinese Tree Bank 5 corpus show that our approach outperforms the state-of-the-art pipeline system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "For Asian languages such as Japanese and Chinese that do not contain explicitly marked word boundaries, word segmentation is an important first step for many subsequent language processing tasks, such as POS tagging, parsing, semantic role labeling, and various applications. Previous studies of POS tagging and syntactic parsing on these languages sometimes assume that gold standard word segmentation is provided, which is not a realistic scenario. In a fully automatic system, a pipeline approach is often adopted, where raw sentences are first segmented into word sequences, and then POS tagging and parsing are performed. This kind of approach suffers from error propagation: for example, word segmentation errors will result in tagging and parsing errors. Additionally, early modules cannot use information from subsequent modules. Intuitively, a joint model that performs the three tasks together should help the system make the best decisions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a unified model for joint Chinese word segmentation, POS tagging, and parsing. Three sub-models are independently trained using state-of-the-art methods. We do not use a joint inference algorithm for training because of the high complexity caused by the large number of parameters. We use linear chain Conditional Random Fields (CRFs) (Lafferty et al., 2001) to train the word segmentation model and the POS tagging model, and the averaged perceptron (Collins, 2002) to learn the parsing model. During decoding, the parameters of each sub-model are scaled to represent its importance in the joint model. Our decoding algorithm is an extension of CYK parsing. Initially, the weights of all possible words together with their POS tags are calculated. When searching the parse tree, the word and POS tagging features are dynamically generated, and the transition information of POS tagging is considered in the span merge operation.",
"cite_spans": [
{
"start": 367,
"end": 390,
"text": "(Lafferty et al., 2001)",
"ref_id": "BIBREF12"
},
{
"start": 475,
"end": 490,
"text": "(Collins, 2002)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Experiments are conducted on the Chinese Tree Bank (CTB) 5 dataset, which is widely used for Chinese word segmentation, POS tagging and parsing. We compare our proposed joint model with the pipeline system, both built using the state-of-the-art sub-models. We also propose an evaluation metric to calculate the bracket scores for parsing in the face of word segmentation errors. Our experimental results show that the joint model significantly outperforms the pipeline method based on the state-of-the-art sub-models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There is very limited previous work on joint Chinese word segmentation, POS tagging, and parsing. Previous joint models mainly focus on the word segmentation and POS tagging task, such as the virtual nodes method (Qian et al., 2010), the cascaded linear model (Jiang et al., 2008a), the perceptron (Zhang and Clark, 2008), sub-word based stacked learning (Sun, 2011), and reranking (Jiang et al., 2008b). These joint models showed about a 0.2\u22121% F-score improvement over the pipeline method. Recently, joint tagging and dependency parsing has been studied as well (Li et al., 2011; Lee et al., 2011).",
"cite_spans": [
{
"start": 209,
"end": 228,
"text": "(Qian et al., 2010)",
"ref_id": "BIBREF16"
},
{
"start": 253,
"end": 274,
"text": "(Jiang et al., 2008a)",
"ref_id": "BIBREF7"
},
{
"start": 288,
"end": 311,
"text": "(Zhang and Clark, 2008)",
"ref_id": "BIBREF20"
},
{
"start": 346,
"end": 357,
"text": "(Sun, 2011)",
"ref_id": "BIBREF19"
},
{
"start": 370,
"end": 391,
"text": "(Jiang et al., 2008b)",
"ref_id": "BIBREF8"
},
{
"start": 553,
"end": 570,
"text": "(Li et al., 2011;",
"ref_id": "BIBREF14"
},
{
"start": 571,
"end": 588,
"text": "Lee et al., 2011)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Previous research has shown that word segmentation has a great impact on parsing accuracy in the pipeline method (Harper and Huang, 2009). In (Jiang et al., 2009), additional data was used to improve Chinese word segmentation, which resulted in significant improvement on the parsing task using the pipeline framework. Joint segmentation and parsing was also investigated for Arabic (Green and Manning, 2010). A study that is closely related to ours is (Goldberg and Tsarfaty, 2008), where a single generative model was proposed for joint morphological segmentation and syntactic parsing of Hebrew. Different from that work, we use a discriminative model, which benefits from a large number of features and handles unknown words more easily. Another main difference is that, besides segmentation and parsing, we also incorporate the POS tagging model into the CYK parsing framework.",
"cite_spans": [
{
"start": 114,
"end": 138,
"text": "(Harper and Huang, 2009)",
"ref_id": "BIBREF5"
},
{
"start": 144,
"end": 164,
"text": "(Jiang et al., 2009)",
"ref_id": "BIBREF9"
},
{
"start": 386,
"end": 411,
"text": "(Green and Manning, 2010)",
"ref_id": "BIBREF4"
},
{
"start": 457,
"end": 486,
"text": "(Goldberg and Tsarfaty, 2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "For a given Chinese sentence, our task is to generate the word sequence, its POS tag sequence, and the parse tree (constituent parsing). A joint model is expected to make better decisions than a pipeline approach; however, such a model would be very complex, and it is difficult to estimate its parameters. Therefore we do not perform joint inference for training. Instead, we develop three individual models independently during training and perform joint decoding using them. In this section, we first describe the three sub-models and then the joint decoding algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "Methods for Chinese word segmentation can be broadly categorized into character based and word based models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Segmentation Model",
"sec_num": "3.1"
},
{
"text": "Previous studies showed that character-based models are more effective at detecting out-of-vocabulary words, while word-based models are more accurate at predicting in-vocabulary words (Zhang et al., 2006). Here, we use an order-0 semi-Markov model (Sarawagi and Cohen, 2004) to take advantage of both approaches.",
"cite_spans": [
{
"start": 178,
"end": 198,
"text": "(Zhang et al., 2006)",
"ref_id": "BIBREF23"
},
{
"start": 240,
"end": 266,
"text": "(Sarawagi and Cohen, 2004)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Segmentation Model",
"sec_num": "3.1"
},
{
"text": "More specifically, given a sentence x = c_1, c_2, . . . , c_l (where c_i is the i-th Chinese character and l is the sentence length), the character-based model assigns each character a word boundary tag. Here we use the BCDIES tag set, which achieved the best official performance (Zhao and Kit, 2008): B, C, D, E denote the first, second, third, and last character of a multi-character word respectively, I denotes the other characters, and S denotes a single-character word. We use the same character-based feature templates as in the best official system, shown in Table 1 (1.1-1.3), including character unigram and bigram features, and transition features. Linear chain CRFs are used for training.",
"cite_spans": [
{
"start": 285,
"end": 305,
"text": "(Zhao and Kit, 2008)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 574,
"end": 581,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Word Segmentation Model",
"sec_num": "3.1"
},
{
"text": "Feature templates in the word-based model are shown in Table 1 (1.4-1.6), including word features, sub-word features, and character bigrams within words. The word feature is activated if a predicted word w is in the vocabulary (i.e., appears in the training data). Subword(w) is the longest in-vocabulary word within w. To use word features, we adopt a K-best reranking approach. The top K candidate segmentation results for each training sample are generated using the character-based model, and the gold segmentation is added if it is not in the candidate set. We use the Maximum Entropy (ME) model to learn the weights of the word features such that the probability of the gold candidate is maximal.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Word Segmentation Model",
"sec_num": "3.1"
},
{
"text": "A problem arises when combining the two models and using them in joint segmentation and parsing, since the linear chain used in the character-based model is incompatible with the CYK parsing model and the word-based model due to the transition information. Thus, we slightly modify the linear chain CRFs by fixing the weights of transition features during training and testing. That is, the weights of impossible transition features (e.g., B\u2192B) are set to \u2212\u221e, and the weights of the other transition features (e.g., E\u2192B) are set to 0. In this way, the transition features can be ignored in testing for two reasons. First, all illegal label assignments are prohibited in prediction, since their weights are \u2212\u221e; second, because the weights of legal transition features are 0, they do not affect the prediction at all. In the following, transition features are excluded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Segmentation Model",
"sec_num": "3.1"
},
{
"text": "Character Level Feature Templates: (1.1) c_{i\u22122}y_i, c_{i\u22121}y_i, c_i y_i, c_{i+1}y_i, c_{i+2}y_i; (1.2) c_{i\u22121}c_i y_i, c_i c_{i+1}y_i, c_{i\u22121}c_{i+1}y_i; (1.3) y_{i\u22121}y_i. Word Level Feature Templates: (1.4) word w; (1.5) subword(w); (1.6) character bigrams within w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Segmentation Model",
"sec_num": "3.1"
},
{
"text": "Now we can use an order-0 semi-Markov model as the hybrid model. We define the score of a word as the sum of the weights of all the features within the word. Formally, the score of a multi-character word w = c_i, . . . , c_j is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Segmentation Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "score_seg(x, i, j) = \u03b8_CRF \u00b7 f_CRF(x, y_i = B) + . . . + \u03b8_CRF \u00b7 f_CRF(x, y_j = E) + \u03b8_ME \u00b7 f_ME(x, i, j) \u2261 \u03b8_seg \u00b7 f_seg(x, i, j)",
"eq_num": "(1)"
}
],
"section": "Word Segmentation Model",
"sec_num": "3.1"
},
{
"text": "where f_CRF and f_ME are the feature vectors in the character- and word-based models respectively, and \u03b8_CRF, \u03b8_ME are their corresponding weight vectors. For simplicity, we denote \u03b8_seg = \u03b8_{CRF\u2295ME}, f_seg = f_{CRF\u2295ME}, where \u03b8_{CRF\u2295ME} means the concatenation of \u03b8_CRF and \u03b8_ME. Scores for single-character words are defined similarly. These word scores will be used in the joint segmentation and parsing task in Section 3.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Segmentation Model",
"sec_num": "3.1"
},
{
"text": "Though the syntax parsing model can directly predict POS tags itself, we choose instead to use an independent POS tagger, for two reasons. First, there is a large amount of data with labeled POS tags but no syntax annotations, such as the People's Daily corpus and the SIGHAN bakeoff corpora (Jin and Chen, 2008). Such data can only be used to train POS taggers, not the parsing model, and using a larger training set often results in a better POS tagger. Second, the state-of-the-art POS tagging systems are often trained with sequence labeling models, not parsing models. Table 2: Feature templates for POS tagging. w_i is the i-th word in the sentence, and t_i is its POS tag. For a word w, c_j(w) is its j-th character, c_{\u2212j}(w) is its j-th character from the end, and l(w) is its length.",
"cite_spans": [
{
"start": 296,
"end": 316,
"text": "(Jin and Chen, 2008)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 594,
"end": 601,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "POS Tagging Model",
"sec_num": "3.2"
},
{
"text": "(2.1) w_{i\u22122}t_i, w_{i\u22121}t_i, w_i t_i, w_{i+1}t_i, w_{i+2}t_i; (2.2) w_{i\u22122}w_{i\u22121}t_i, w_{i\u22121}w_i t_i, w_i w_{i+1}t_i, w_{i+1}w_{i+2}t_i, w_{i\u22121}w_{i+1}t_i; (2.3) c_1(w_i)t_i, c_2(w_i)t_i, c_3(w_i)t_i, c_{\u22122}(w_i)t_i, c_{\u22121}(w_i)t_i; (2.4) c_1(w_i)c_2(w_i)t_i, c_{\u22122}(w_i)c_{\u22121}(w_i)t_i; (2.5) l(w_i)t_i; (2.6) t_{i\u22121}t_i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS Tagging Model",
"sec_num": "3.2"
},
{
"text": "The POS tagging problem is to assign a POS tag t \u2208 T to each word in a sentence. We also use linear chain CRFs for POS tagging. The feature templates shown in Table 2 are the same as those in (Qian et al., 2010), which have been shown to be effective on the CTB corpus. Three feature sets are considered: (i) word level features, including surrounding word unigrams, bigrams, and word length; (ii) character level features, such as the first and last characters in the words; (iii) transition features.",
"cite_spans": [
{
"start": 188,
"end": 207,
"text": "(Qian et al., 2010)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 155,
"end": 162,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "POS Tagging Model",
"sec_num": "3.2"
},
{
"text": "We choose discriminative models for parsing since it is easy to handle unknown words by simply adding character level features. Online structured learning algorithms have been demonstrated to be effective for training, such as stochastic optimization (Finkel et al., 2008). In this study, we use the averaged perceptron algorithm for parameter estimation since it is easy to implement and has competitive performance.",
"cite_spans": [
{
"start": 246,
"end": 267,
"text": "(Finkel et al., 2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.3"
},
{
"text": "A Context Free Grammar (CFG) consists of (i) a set of terminals; (ii) a set of nonterminals {N_k}; (iii) a designated start symbol ROOT; and (iv) a set of rules {r = N_i \u2192 \u03b6_j}, where \u03b6_j is a sequence of terminals and nonterminals. In the parsing task, terminals are the words, and nonterminals are the POS tags and phrase types. In this paper, a nonterminal is called a state for short. A parse tree T of sentence x can be factorized into several one-level subtrees, each corresponding to a rule r.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.3"
},
{
"text": "In practice, binarization of the rules is necessary to obtain cubic parsing time; that is, the right hand side of each rule should contain no more than 2 states. We used right branching binarization, as illustrated in Figure 1. We did not use parent annotation, since we found it degraded the performance in our experiments (shown in Section 4). We used the same preprocessing step as (Harper and Huang, 2009), collapsing all the allowed nonterminal-yield unary chains to single unary rules. Therefore, all spans in the binarized trees contain no more than one unary rule. To facilitate decoding, we unify the form of spans so that each span contains exactly one unary rule. This is done by adding identity unary rules (N \u2192 N) to spans that have no unary rule. These identity unary rules will be removed in evaluation. Hence, there are two states for a span: the top state N and the bottom state N\u0304, which correspond to the left and right hand sides of the unary rule r_unary = N \u2192 N\u0304 respectively, as shown in Figure 2. Four feature sets are used: (i) bottom state features; (ii) top state features; (iii) unary rule features, which extract the transition information from bottom states to top states; and (iv) binary rule features. The score function for a sentence x with parse tree T is defined as:",
"cite_spans": [
{
"start": 382,
"end": 406,
"text": "(Harper and Huang, 2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 214,
"end": 222,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1000,
"end": 1008,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.3"
},
{
"text": "f_binary(i, j, k, x, r_binary = N\u0304_{i,j} \u2192 N_{i,k\u22121} + N_{k,j}), where N_{i,k\u22121} and N_{k,j} are the top states of the left and right child spans.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.3"
},
{
"text": "score(x, T) = \u2211_{N\u0304_{i,j} \u2208 T} \u03b8_bottom \u00b7 f_bottom(i, j, x, N\u0304_{i,j}) + \u2211_{N_{i,j} \u2208 T} \u03b8_top \u00b7 f_top(i, j, x, N_{i,j}) + \u2211_{r_unary_{i,j} \u2208 T} \u03b8_unary \u00b7 f_unary(i, j, x, r_unary_{i,j}) + \u2211_{r_binary_{i,j,k} \u2208 T} \u03b8_binary \u00b7 f_binary(i, j, x, r_binary_{i,j,k})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.3"
},
{
"text": "where \u03b8_bottom, \u03b8_top, \u03b8_unary, \u03b8_binary are the weight vectors of the four feature sets. Given the training corpus {(x_i, T\u0304_i)}, the learning task is to estimate the weight vectors so that for each sentence x_i, the gold standard tree T\u0304_i achieves the maximal score among all possible trees. The perceptron algorithm is guaranteed to find the solution if it exists.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.3"
},
{
"text": "The three models described above are separately trained to make parameter estimation feasible as well as to optimize each individual component. Table 3: Feature templates for parsing, where X can be a word, the first or last character of a word, the first or last character bigram of a word, or a POS tag. X_{l+a}/X_{r\u2212a} denotes the first/last a-th X in the span, while X_{l\u2212a}/X_{r+a} denotes the a-th X to the left/right of the span. X_m is the first X of the right child, and X_{m\u22121} is the last X of the left child. len, len_l, len_r denote the lengths of the span, left child and right child respectively. wl is the length of the word. ROOT/LEAF means the template can only generate features for the root/initial span.",
"cite_spans": [],
"ref_spans": [
{
"start": 150,
"end": 157,
"text": "Table 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Joint Decoding",
"sec_num": "3.4"
},
{
"text": "(3.1) Binary rule templates N\u0304 \u2192 N_l + N_r: X_l X_{m\u22121} X_r len_l len_r; X_l X_m X_r len_l len_r; X_l X_{m\u22121}; X_r word_{m\u22121} (ROOT); X_l + X_m; X_r word_m (ROOT). (3.2) Unary rule templates N \u2192 N\u0304. (3.3) Bottom state templates: X_l len; X_r len; X_{l\u22122} X_{l\u22121} X_{r+1} len; X_{l\u22121} X_{r+1} X_{r+2} len; wl_l wl_r X_l len; wl_l wl_r X_r len; X_l X_r wl_l len; X_l X_r wl_r len; word_l word_r X_l X_r len; word_l word_r X_l X_r; X_{l\u22121} X_l (LEAF); X_{l+1} X_l (LEAF); X_l word_l (LEAF); X_l wl_l (LEAF); X_{l+a} X_{r+b} len word_{l+a} word_{r+b}, \u22121 \u2264 a, b \u2264 1. (3.4) Top state templates: X_{l\u22121} X_l (LEAF); X_{l+1} X_l (LEAF); X_l word_l (LEAF); X_l wl_l (LEAF); X_{l+a} X_{r+b} len word_{l+a} word_{r+b}, \u22121 \u2264 a, b \u2264 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Decoding",
"sec_num": "3.4"
},
{
"text": "In testing, we perform joint decoding to combine information from the three models. The parameters of the word segmentation (\u03b8_seg), POS tagging (\u03b8_pos), and parsing models (\u03b8_parse = \u03b8_{bottom\u2295top\u2295unary\u2295binary}) are scaled by three positive hyper-parameters \u03b1, \u03b2, and \u03b3 respectively, which control their contributions to the joint model. If \u03b1 >> \u03b2 >> \u03b3, then the joint model is equivalent to a pipeline model, in which there is no feedback from downstream models to upstream ones. For well tuned hyper-parameters, we expect that segmentation and POS tagging results can be improved by parsing information. The hyper-parameters are tuned on development data. In the following sections, for simplicity we drop \u03b1, \u03b2, \u03b3, and just use \u03b8_seg, \u03b8_pos, \u03b8_parse to represent the scaled parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Decoding",
"sec_num": "3.4"
},
{
"text": "The basic idea of our decoding algorithm is to extend the CYK parsing algorithm so that it can deal with transition features in POS tagging and segmentation scores in word segmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Decoding",
"sec_num": "3.4"
},
{
"text": "The joint decoding algorithm is shown in Algorithm 1. Given a sentence x = c_1, . . . , c_l, Line 0 calculates the scores of all possible words in the sentence using Eq. (1). There are l(l + 1)/2 word candidates in total.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.4.1"
},
{
"text": "Surrounding words are important features for POS tagging and parsing; however, they are unavailable because segmentation is incomplete before parsing. Therefore, we adopt pseudo surrounding features by simply fixing the context words as the single most likely ones. Given a word candidate w i,j from c i to c j , its previous word s \u2032 is the rightmost one in the best word sequence of c 1 , . . . , c i\u22121 , which can be obtained by dynamic programming. Recursively, the second word left to w i,j is the previous word of s \u2032 . The next word of w i,j is defined similarly. In Line 1, we use bidirectional Viterbi decoding to obtain all the surrounding words. In the forward direction, the algorithm starts from the first character boundary to the last, and finds the best previous word for the i th character boundary b i . In the backward direction, the algorithm starts from right to left, and finds the best next word of each b i . In Line 2, for each word candidate, we can calculate the score of each POS tag using state features in the POS tagging model, since the context words are available now. The score function of word w i,j with POS tag t is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.4.1"
},
{
"text": "score seg\u2295pos (x, i, j, t) = score seg (x, i, j) + \u03b8 pos \u2022 f pos (x, w i,j , t) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.4.1"
},
{
"text": "In Line 3, POS tags of surrounding words can be obtained similarly using bidirectional decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.4.1"
},
{
"text": "Algorithm 1 Joint Word Segmentation, POS Tagging, and Parsing Algorithm. Input: sentence x = c_1, . . . , c_l, beam size B, scaled word segmentation model, POS tagging model and parsing model. Output: word sequence, POS tag sequence, and parse tree. 0: \u2200 0 \u2264 i \u2264 j \u2264 l \u2212 1, calculate score_seg(x, i, j) using Equation 1. 1: For each character boundary b_i, 0 \u2264 i \u2264 l, get the best previous and next words of b_i using bidirectional Viterbi decoding. 2: \u2200 0 \u2264 i \u2264 j \u2264 l \u2212 1, t \u2208 T, calculate score_{seg\u2295pos}(x, i, j, t) using Equation 2. 3: \u2200 b_i, 0 \u2264 i \u2264 l, t \u2208 T, get the best POS tags of the words left/right of b_i using bidirectional Viterbi decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.4.1"
},
{
"text": "4: For each word candidate w_{i,j}, 0 \u2264 i \u2264 j \u2264 l \u2212 1 5: For each bottom state N\u0304, POS tag t \u2208 T \u25b7 step 1 (Lines 5-7): get bottom states 6:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.4.1"
},
{
"text": "score_bottom(x, i, j, w_{i,j}, t, N\u0304) = score_{seg\u2295pos}(x, i, j, t) + \u03b8_bottom \u00b7 f_bottom(x, i, j, w_{i,j}, t, N\u0304)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.4.1"
},
{
"text": "score_top(x, i, j, w_{i,j}, t, N) = max_{N\u0304} {score_bottom(x, i, j, w_{i,j}, t, N\u0304) + \u03b8_top \u00b7 f_top(x, i, j, w_{i,j}, t, N) + \u03b8_unary \u00b7 f_unary(x, i, j, w_{i,j}, t, N \u2192 N\u0304)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.4.1"
},
{
"text": "10: for i = 0, . . . , l \u2212 1 do 11: for width = 1, . . . , l \u2212 1 do 12: j = i + width 13: for k = i + 1, . . . , j do 14:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.4.1"
},
{
"text": "score_bottom(x, i, j, w, t, N\u0304) = max_{l,r} {score_top(x, i, k \u2212 1, w_l, t_l, N_l) + score_top(x, k, j, w_r, t_r, N_r) + \u03b8_binary \u00b7 f_binary(x, i, j, k, w, t, N\u0304 \u2192 N_l + N_r) + \u03b8_pos \u00b7 f_pos(t_l^last \u2192 t_r^first) + \u03b8_bottom \u00b7 f_bottom(x, i, j, w, t, N\u0304)} 15:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.4.1"
},
{
"text": "Keep the B best score_bottom \u25b7 step 1 (Lines 14-15): get bottom states 16:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.4.1"
},
{
"text": "For each top state N \u25b7 step 2 (Lines 16-17): get top states 17:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.4.1"
},
{
"text": "score_top(x, i, j, w, t, N) = max_{N\u0304} {score_bottom(x, i, j, w, t, N\u0304) + \u03b8_unary \u00b7 f_unary(x, i, j, w, t, N \u2192 N\u0304)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.4.1"
},
{
"text": "18: end for 19: end for 20: end for That is, for w_{i,j} with POS tag t, we use the Viterbi algorithm to search for the optimal POS tags of its left and right words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.4.1"
},
{
"text": "Table 4: Complexity of each line of Algorithm 1, bounded w.r.t. l. Line 0: l^2; Line 1: l^2; Line 2: |T|l^2; Line 3: |T|^2 l^2; Line 6: |T|M l^2; Line 9: BM l^2; Line 14: l^3 M B^2; Line 15: BM l^2; Total: l^3 M B^2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.4.1"
},
{
"text": "In Lines 4-9, each word is initialized as a basic span. A span structure in the joint model is a 6-tuple S(i, j, w, t, N\u0304, N), where i, j are the boundary indices, w, t are the word sequence and POS tag sequence within the span respectively, and N\u0304, N are the bottom and top states. There are two types of surrounding n-grams: one is inside the span, for example, the first word of a span, which can be obtained from w; the other is outside the span, for example, the previous word of a span, which is obtained from the pseudo context information. The score of a basic span depends on its corresponding word and POS pair score, and the weights of the active state and unary rule features.",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 129,
"text": "(i, j, w, t, N , N )",
"ref_id": null
}
],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.4.1"
},
{
"text": "To avoid enumerating combinations of the bottom and top states, the initialization of each span is divided into two steps. In the first step, the score of every bottom state is calculated using bottom state features, and only the B best states are maintained (see Lines 6-7). In the second step, top state features and unary rule features are used to get the score of each top state (Line 9), and only the top B states are preserved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.4.1"
},
{
"text": "Similarly, there are two steps in the merge operation: S(i, j, w, t, N\u0304, N)",
"cite_spans": [
{
"start": 57,
"end": 77,
"text": "(i, j, w, t, N , N )",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.4.1"
},
{
"text": "= S_l(i, k, w_l, t_l, N\u0304_l, N_l) + S_r(k + 1, j, w_r, t_r, N\u0304_r, N_r).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.4.1"
},
{
"text": "The score of the bottom state N\u0304 is calculated using binary rule features f_binary(x, i, j, k, w, t, N\u0304 \u2192 N_l + N_r), bottom state features f_bottom(x, i, j, w, t, N\u0304), and POS tag transition features that depend on the boundary POS tags of S_l and S_r. See Line 14 of Algorithm 1, where t_l^last and t_r^first are the POS tags of the last word in the left child span and the first word in the right child span respectively.",
"cite_spans": [
{
"start": 77,
"end": 103,
"text": "(x, i, j, k, w, t, N \u2192 N r",
"ref_id": null
}
],
"ref_spans": [
{
"start": 104,
"end": 118,
"text": ", j, w, t, N )",
"ref_id": null
}
],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.4.1"
},
{
"text": "Given a sentence of length l, the complexity for each line of Algorithm 1 is listed in Table 4 , where |T | is the size of POS tag set, M is the number of states, and B is the beam size.",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 94,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Complexity analysis",
"sec_num": "3.4.2"
},
{
"text": "For comparison with other systems, we use the CT-B5 corpus, which has been studied for Chinese word segmentation, POS tagging and parsing. We use the standard train/develop/test split of the data. Details are shown in Table 5 ",
"cite_spans": [],
"ref_spans": [
{
"start": 218,
"end": 225,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "We evaluate system performance on the individual tasks, as well as the joint tasks. 1 For word segmentation, three metrics are used for evaluation: precision (P), recall (R), and F-score (F) defined by 2PR/(P+R). Precision is the percentage of correct words in the system output. Recall is the percentage of words in gold standard annotations that are correctly predicted. For parsing, we use the standard parseval evaluation metrics: bracketing precision, recall and F-score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "4.2"
},
{
"text": "For joint word segmentation and POS tagging, a word is correctly predicted if both the boundaries and the POS tag are correctly identified. For joint segmentation, POS tagging, and parsing task, when calculating the bracket scores using existing parseval tools, we need to consider possible word segmentation errors. To do this, we add the word boundary information in states -a bracket is correct only if its boundaries, label and word segmentation are all correct. One example is shown in Figure 3 . Notice that identity unary rules are removed during evaluation. The basic spans are characters, not words, because the number of words in reference and prediction may be different. POS tags are removed since they do not affect the bracket scores. If the segmentation is perfect, then the bracket scores of the modified tree are exactly the same as the original tree. This is similar to evaluating parsing performance on speech transcripts with automatic sentence segmentation (Roark et al., 2006) .",
"cite_spans": [
{
"start": 978,
"end": 998,
"text": "(Roark et al., 2006)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 491,
"end": 499,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "4.2"
},
{
"text": "Figure 3: Boundary information is added to states to calculate the bracket scores in the face of word segmentation errors. Left: the original parse tree, Right: the converted parse tree. The numbers in the brackets are the indices of the character boundaries based on word segmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shanghai office",
"sec_num": null
},
{
"text": "We train three submodels using the gold features, that is, POS tagger is trained using the perfect segmentation, and parser is trained using perfect segmentation and POS tags. Some studies reported that better performance may be achieved by training subsequent models using representative output of the preceding models (Che et al., 2009) . Hence for comparison we trained another parser using automatically generated POS tags obtained from 10-fold cross validation, but did not find significant difference between these two parsers when testing on the perfectly segmented development dataset. Therefore we use the parser trained with perfect POS tags for the joint task. Three hyper-parameters, \u03b1, \u03b2, and \u03b3, are tuned on development data using a heuristic search. Parameters that achieved the best joint parsing result are selected. In the search, we fixed \u03b3 = 1 and varied \u03b1, \u03b2. First, we set \u03b2 = 1, and enumerate \u03b1 = 1 4 , 1 2 , 1, 2, . . . , and choose the best \u03b1 * . Then, we set \u03b1 = \u03b1 * and vary \u03b2 = 1 4 , 1 2 , 1, 2, . . . , and select the best \u03b2 * . Table 6 lists the parameters we used for training the submodels, as well as the hyper-parameters for joint decoding. ",
"cite_spans": [
{
"start": 320,
"end": 338,
"text": "(Che et al., 2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 1058,
"end": 1065,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "4.3"
},
{
"text": "In this section we first show that our sub-models are better than or comparable to state-of-the-art systems, and then the joint model is superior to the pipeline approach. Table 7 shows word segmentation results using our word segmentation submodel, in comparison to a few state-of-the-art systems. For our segmentor, we show results for two variants: one removes transition features as described in Section 3.1, the other uses CRFs to learn the weights of transition features. We can see that our system is competitive with all the others except Sun's that used additional idiom resources. Our two word segmentors have similar performance. Since the one without transition features can be naturally integrated into the joint system, we use it in the following joint tasks. System P R F (Jiang et al., 2008b) --97.74 (Jiang et al., 2008a) --97.85 (Kruengkrai et al., 2009) 97 For the POS tagging only task that takes gold standard word segmentation as input, we have two systems. One uses the linear chain CRFs as described in Section 3.2, the other is obtained using the parser described in Section 3.3 -the parser generates POS tag hypotheses when POS tag features are not used. The POS tagging accuracy is 95.53% and 95.10% using these two methods respectively. The better performance from the former system may be because the local label dependency is more helpful for POS tagging than the long distance dependencies that might be noisy. This result also confirms our choice of using an independent POS tagger for the sub-model, rather than relying on a parser for POS tagging. However, since there are no reported results for this setup, we demonstrate the competence of our POS tagger using the joint word segmentation and POS tagging task. Table 8 shows the performance of a few systems along with ours, all using the pipeline approach where automatic segmentation is followed by POS tagging. We can see that our POS tagger is comparable to the others. 
System P R F (Jiang et al., 2008b) --93.37 (Jiang et al., 2008a) --93.41 (Kruengkrai et al., 2009) Table 8 : Results for the joint word segmentation and POS tagging task.",
"cite_spans": [
{
"start": 787,
"end": 808,
"text": "(Jiang et al., 2008b)",
"ref_id": "BIBREF8"
},
{
"start": 817,
"end": 838,
"text": "(Jiang et al., 2008a)",
"ref_id": "BIBREF7"
},
{
"start": 847,
"end": 872,
"text": "(Kruengkrai et al., 2009)",
"ref_id": "BIBREF11"
},
{
"start": 1973,
"end": 1994,
"text": "(Jiang et al., 2008b)",
"ref_id": "BIBREF8"
},
{
"start": 2003,
"end": 2024,
"text": "(Jiang et al., 2008a)",
"ref_id": "BIBREF7"
},
{
"start": 2033,
"end": 2058,
"text": "(Kruengkrai et al., 2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 172,
"end": 179,
"text": "Table 7",
"ref_id": null
},
{
"start": 1747,
"end": 1754,
"text": "Table 8",
"ref_id": null
},
{
"start": 2059,
"end": 2066,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.4"
},
{
"text": "For parsing, Table 9 presents the parsing result on gold standard segmented sentence. Notice that the result of (Harper and Huang, 2009; Zhang and Clark, 2011) are not directly comparable to ours, as they used a different data split. The best published system result on CTB5 is Petrov and Klein's, which used PCFG with latent Variables. Our system performs better mainly because it benefits from a large amount of features.",
"cite_spans": [
{
"start": 112,
"end": 136,
"text": "(Harper and Huang, 2009;",
"ref_id": "BIBREF5"
},
{
"start": 137,
"end": 159,
"text": "Zhang and Clark, 2011)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 13,
"end": 20,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluating Sub-models",
"sec_num": "4.4.1"
},
{
"text": "LP LR F (Petrov and Klein, 2007) 84.8 81.9 83.3 (Jiang et al., 2009) --82.35 (Harper and Huang, 2009) Table 9 : Parsing results using gold standard word segmentation.",
"cite_spans": [
{
"start": 8,
"end": 32,
"text": "(Petrov and Klein, 2007)",
"ref_id": "BIBREF15"
},
{
"start": 48,
"end": 68,
"text": "(Jiang et al., 2009)",
"ref_id": "BIBREF9"
},
{
"start": 77,
"end": 101,
"text": "(Harper and Huang, 2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 102,
"end": 109,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "For our parser, besides the model described in Section 3.3, we tried two variations: one does not use the automatic POS tag features, the other one is learned on the parent annotated training data. The results in Table 9 show that there is a performance degradation when using parent annotation. This may be due to the introduction of a large number of states, resulting in sparse features. We also notice that with the help of the POS tag information, even automatically generated, the parser gained 0.9% improvement in F-score. This demonstrates the advantage of using a better independent POS tagger and incorporating it in parsing.",
"cite_spans": [],
"ref_spans": [
{
"start": 213,
"end": 220,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "Finally Table 10 shows the results for the three tasks using our joint decoding method in comparison to the pipeline method. We can see that the joint model outperforms the pipeline one. This is mainly because of a better parsing module as well as joint decoding. In the table we also include results of (Jiang et al., 2009) , which is the only reported joint parsing result we found using the same data split on CTB5. They achieved 80.28% parsing F-score using automatic word segmentation. Their adapted system Jiang09 + leveraged additional corpus to improve Chinese word segmentation, resulting in an Fscore of 81.07%. Our system has better performance than these. Table 10 : Results for the joint segmentation, tagging, and parsing task using pipeline and joint models.",
"cite_spans": [
{
"start": 304,
"end": 324,
"text": "(Jiang et al., 2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 8,
"end": 16,
"text": "Table 10",
"ref_id": "TABREF0"
},
{
"start": 668,
"end": 676,
"text": "Table 10",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "We compared the results from the pipeline and our joint decoding systems in order to understand the impact of the joint model on word segmentation and POS tagging. We notice that the joint model tend to generate more words than the pipeline model. For example, \"\u5df4\u5c14\u4e00\u884c\" is one word in the pipeline model, but correctly segmented as two words \"\u5df4 \u5c14/\u4e00\u884c\" in the joint model. This tendency of segmentation also makes it fail to recognize some long words, especially OOV words. For example, \"\u4e8b \u5b9e\u4e0a\" is segmented as \"\u4e8b\u5b9e/\u4e0a\". In the data set, we find that, the joint model corrected 10 missing boundaries over the pipeline method, and introduced 3 false positive segmentation errors. For the analysis of POS tags, we only examined the words that are correctly segmented by both the pipeline and the joint models. Table 11 shows the increase and decrease of error patterns of the joint model over the pipeline POS tagger. An error pattern \"X \u2192 Y\" means that the word whose true tag is 'X' is assigned a tag 'Y'. All the patterns are ranked in descending order of the reduction/increase of the error number. We can see that the joint model has a clear advantage in the disambiguation of {VV, NN} and {DEG, DEC}, which results in the overall improved performance. In contrast, the joint method performs worse on ambiguous POS pairs such as {N N, N R}. This observation is similar to those reported by (Li et al., 2011; Hatori et al., 2011) .",
"cite_spans": [
{
"start": 1386,
"end": 1403,
"text": "(Li et al., 2011;",
"ref_id": "BIBREF14"
},
{
"start": 1404,
"end": 1424,
"text": "Hatori et al., 2011)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 801,
"end": 809,
"text": "Table 11",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.5"
},
{
"text": "In this paper, we proposed a new algorithm for joint Chinese word segmentation, POS tagging, and parsing. Our algorithm is an extension of the CYK Table 11 : POS tagging error patterns. # means the error number of the corresponding pattern made by the pipeline tagging model. \u2193 and \u2191 mean the error number reduced or increased by the joint model. parsing method. The sub-models are independently trained for the three tasks to reduce model complexity and optimize individual sub-models. Our experiments demonstrate the advantage of the joint models. In the future work, we will compare this joint model to the pipeline approach that uses multiple candidates or soft decisions in the early modules. We will also investigate methods for joint learning as well as ways to speed up the joint decoding algorithm.",
"cite_spans": [],
"ref_spans": [
{
"start": 147,
"end": 155,
"text": "Table 11",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Note that the joint task refers to automatic segmentation and tagging/parsing. It can be achieved using a pipeline system or our joint decoding method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgments The authors thank Zhongqiang Huang for his help with experiments. This work is partly supported by DARPA under Contract No. HR0011-12-C-0016. Any opinions expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Multilingual dependency-based syntactic and semantic parsing",
"authors": [
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Zhenghua",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yongqiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yuhang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of CoNLL 09",
"volume": "",
"issue": "",
"pages": "49--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wanxiang Che, Zhenghua Li, Yongqiang Li, Yuhang Guo, Bing Qin, and Ting Liu. 2009. Multilingual dependency-based syntactic and semantic parsing. In Proceedings of CoNLL 09, pages 49-54.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of EMNLP 2002",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden markov models: Theory and experi- ments with perceptron algorithms. In Proceedings of EMNLP 2002, pages 1-8.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Efficient, feature-based, conditional random field parsing",
"authors": [
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Kleeman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "959--967",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Rose Finkel, Alex Kleeman, and Christopher D. Manning. 2008. Efficient, feature-based, condition- al random field parsing. In Proceedings of ACL-08: HLT, pages 959-967.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A single generative model for joint morphological segmentation and syntactic parsing",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Reut",
"middle": [],
"last": "Tsarfaty",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL 2008: HLT",
"volume": "",
"issue": "",
"pages": "371--379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg and Reut Tsarfaty. 2008. A single gener- ative model for joint morphological segmentation and syntactic parsing. In Proceedings of ACL 2008: HLT, pages 371-379.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Better arbic parsing: Baselines, evaluations, and analysis",
"authors": [
{
"first": "Spence",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of Coling 2010",
"volume": "",
"issue": "",
"pages": "394--402",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Spence Green and Christopher D. Manning. 2010. Better arbic parsing: Baselines, evaluations, and analysis. In Proceedings of Coling 2010, pages 394-402.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Chinese statistical parsing",
"authors": [
{
"first": "Mary",
"middle": [],
"last": "Harper",
"suffix": ""
},
{
"first": "Zhongqiang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mary Harper and Zhongqiang Huang. 2009. Chinese statistical parsing. In Gale Book.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Incremental joint pos tagging and dependency parsing in chinese",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Hatori",
"suffix": ""
},
{
"first": "Takuya",
"middle": [],
"last": "Matsuzaki",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of IJCNLP 2011",
"volume": "",
"issue": "",
"pages": "1216--1224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Hatori, Takuya Matsuzaki, Yusuke Miyao, and Jun'ichi Tsujii. 2011. Incremental joint pos tagging and dependency parsing in chinese. In Proceedings of IJCNLP 2011, pages 1216-1224.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A cascaded linear model for joint chinese word segmentation and part-of-speech tagging",
"authors": [
{
"first": "Wenbin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yajuan",
"middle": [],
"last": "L\u00fc",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL 2008: HLT",
"volume": "",
"issue": "",
"pages": "897--904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenbin Jiang, Liang Huang, Qun Liu, and Yajuan L\u00fc. 2008a. A cascaded linear model for joint chinese word segmentation and part-of-speech tagging. In Proceed- ings of ACL 2008: HLT, pages 897-904.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Word lattice reranking for chinese word segmentation and partof-speech tagging",
"authors": [
{
"first": "Wenbin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of Coling",
"volume": "",
"issue": "",
"pages": "385--392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenbin Jiang, Haitao Mi, and Qun Liu. 2008b. Word lat- tice reranking for chinese word segmentation and part- of-speech tagging. In Proceedings of Coling 2008, pages 385-392.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatic adaptation of annotation standards: Chinese word segmentation and pos tagging -a case study",
"authors": [
{
"first": "Wenbin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL-IJCNLP 2009",
"volume": "",
"issue": "",
"pages": "522--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenbin Jiang, Liang Huang, and Qun Liu. 2009. Au- tomatic adaptation of annotation standards: Chinese word segmentation and pos tagging -a case study. In Proceedings of ACL-IJCNLP 2009, pages 522-530.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The fourth international chinese language processing bakeoff: Chinese word segmentation, named entity recognition and chinese pos tagging",
"authors": [
{
"first": "Guangjin",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of Sixth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guangjin Jin and Xiao Chen. 2008. The fourth interna- tional chinese language processing bakeoff: Chinese word segmentation, named entity recognition and chi- nese pos tagging. In Proceedings of Sixth SIGHAN Workshop on Chinese Language Processing.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An error-driven word-character hybrid model for joint chinese word segmentation and pos tagging",
"authors": [
{
"first": "Canasai",
"middle": [],
"last": "Kruengkrai",
"suffix": ""
},
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "Yiou",
"middle": [],
"last": "Jun'ichi Kazama",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Kazama",
"suffix": ""
},
{
"first": "Yiou",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Torisawa",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
}
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Canasai Kruengkrai, Kiyotaka Uchimoto, Jun'ichi Kaza- ma, Yiou Wang, Kentaro Torisawa, and Hitoshi Isa- hara. 2009. An error-driven word-character hybrid model for joint chinese word segmentation and pos tagging. In Proceedings of ACL 2009, pages 513-521.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic mod- els for segmenting and labeling sequence data. In Pro- ceedings of ICML 2001, pages 282-289.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A discriminative model for joint morphological disambiguation and dependency parsing",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings A-CL 2011: HLT",
"volume": "",
"issue": "",
"pages": "885--894",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lee, Jason Naradowsky, and David A. Smith. 2011. A discriminative model for joint morphological disam- biguation and dependency parsing. In Proceedings A- CL 2011: HLT, pages 885-894.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Joint models for chinese pos tagging and dependency parsing",
"authors": [
{
"first": "Zhenghua",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wenliang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP 2011",
"volume": "",
"issue": "",
"pages": "1180--1191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenghua Li, Min Zhang, Wanxiang Che, Ting Liu, Wen- liang Chen, and Haizhou Li. 2011. Joint models for chinese pos tagging and dependency parsing. In Pro- ceedings of EMNLP 2011, pages 1180-1191.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Improved inference for unlexicalized parsing",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of NAACL 2007",
"volume": "",
"issue": "",
"pages": "404--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Proceedings of NAACL 2007, pages 404-411.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Joint training and decoding using virtual nodes for cascaded segmentation and tagging tasks",
"authors": [
{
"first": "Xian",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yaqian",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Lide",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP 2010",
"volume": "",
"issue": "",
"pages": "187--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xian Qian, Qi Zhang, Yaqian Zhou, Xuanjing Huang, and Lide Wu. 2010. Joint training and decoding using virtual nodes for cascaded segmentation and tagging tasks. In Proceedings of EMNLP 2010, pages 187- 195.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Sparseval: Evaluation metrics for parsing speech",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Harper",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [
"G"
],
"last": "Kahn",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Hale",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Krasnyanskaya",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Roark, Mary Harper, Eugene Charniak, Bonnie Dorr, Mark Johnson, Jeremy G. Kahn, Yang Liu, Mari Ostendorf, John Hale, Anna Krasnyanskaya, Matthew Lease, Izhak Shafran, Matthew Snover, Robin Stew- art, Lisa Yung, and Lisa Yung. 2006. Sparseval: E- valuation metrics for parsing speech. In Proceedings Language Resources and Evaluation (LREC).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semimarkov conditional random fields for information extraction",
"authors": [
{
"first": "Sunita",
"middle": [],
"last": "Sarawagi",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunita Sarawagi and William W. Cohen. 2004. Semi- markov conditional random fields for information ex- traction. In Proceedings of NIPS 2004.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A stacked sub-word model for joint chinese word segmentation and part-of-speech tagging",
"authors": [
{
"first": "Weiwei",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL 2011",
"volume": "",
"issue": "",
"pages": "1385--1394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weiwei Sun. 2011. A stacked sub-word model for join- t chinese word segmentation and part-of-speech tag- ging. In Proceedings of ACL 2011, pages 1385-1394.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Joint word segmentation and POS tagging using a single perceptron",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL 2008: HLT",
"volume": "",
"issue": "",
"pages": "888--896",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Stephen Clark. 2008. Joint word seg- mentation and POS tagging using a single perceptron. In Proceedings of ACL 2008: HLT, pages 888-896.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A fast decoder for joint word segmentation and POS-tagging using a single discriminative model",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP 2010",
"volume": "",
"issue": "",
"pages": "843--852",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Stephen Clark. 2010. A fast decoder for joint word segmentation and POS-tagging using a sin- gle discriminative model. In Proceedings of EMNLP 2010, pages 843-852.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Syntactic processing using the generalized perceptron and beam search",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2011,
"venue": "Comput. Linguist",
"volume": "37",
"issue": "1",
"pages": "105--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Stephen Clark. 2011. Syntactic process- ing using the generalized perceptron and beam search. Comput. Linguist., 37(1):105-151.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Subword-based tagging for confidencedependent chinese word segmentation",
"authors": [
{
"first": "Ruiqiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Genichiro",
"middle": [],
"last": "Kikui",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL 2006",
"volume": "",
"issue": "",
"pages": "961--968",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruiqiang Zhang, Genichiro Kikui, and Eiichiro Sumi- ta. 2006. Subword-based tagging for confidence- dependent chinese word segmentation. In Proceedings of the COLING/ACL 2006, pages 961-968.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Unsupervised segmentation helps supervised learning of character tagging forword segmentation and named entity recognition",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of Sixth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "106--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao and Chunyu Kit. 2008. Unsupervised segmen- tation helps supervised learning of character tagging forword segmentation and named entity recognition. In Proceedings of Sixth SIGHAN Workshop on Chinese Language Processing, pages 106-111.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Parse tree binarization",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "r are the top states of the left and right children.",
"type_str": "figure"
},
"TABREF0": {
"num": null,
"text": "Feature templates for word segmentation. c i is the i th character in the sentence, y i is its label, w is a predicted word.",
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF1": {
"num": null,
"text": "lists the feature templates we use for parsing. There are 4 feature sets: (i) bottom state features fbottom (i, j, x, N i,j ), which depend on the bot-",
"type_str": "table",
"html": null,
"content": "<table><tr><td>CP</td><td/><td>top state</td><td>CP</td></tr><tr><td>IP</td><td/><td/><td/></tr><tr><td>VP</td><td colspan=\"2\">bottom state</td><td>VP</td></tr><tr><td>NP</td><td>VV</td><td>NP</td><td>VV</td></tr><tr><td>NT</td><td/><td>NT</td><td>VV</td></tr><tr><td>Last year</td><td>realized</td><td>Last year</td><td>realized</td></tr><tr><td colspan=\"4\">Figure 2: Unary rule normalization. Nonterminal-yield</td></tr><tr><td colspan=\"4\">unary chains are collapsed to single unary rules. Identity</td></tr><tr><td colspan=\"4\">unary rules are added to spans that have no unary rule.</td></tr><tr><td colspan=\"4\">tom states; (ii) top state features f top (i, j, x, N i,j ); (iii) unary rule features f unary (i, j, x, r unary i,j</td></tr></table>"
},
"TABREF3": {
"num": null,
"text": "",
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF5": {
"num": null,
"text": "Training, development, and test data of CTB 5.",
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF7": {
"num": null,
"text": "Parameters used in our system.",
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF11": {
"num": null,
"text": "Ours Seg. 97.45 98.24 97.85 Pipeline POS 93.10 93.96 93.53 Parse 81.87 81.65 81.76 Ours Seg. 97.56 98.36 97.96 Joint POS 93.43 94.20 93.81 Parse 83.03 82.66 82.85",
"type_str": "table",
"html": null,
"content": "<table><tr><td>System</td><td>Task</td><td>P</td><td>R</td><td>F</td></tr><tr><td>Jiang09</td><td>Parse</td><td>-</td><td>-</td><td>80.28</td></tr><tr><td colspan=\"2\">Jiang09 + Parse</td><td>-</td><td>-</td><td>81.07</td></tr></table>"
}
}
}
}