| { |
| "paper_id": "D14-1010", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:54:33.493577Z" |
| }, |
| "title": "Semi-Supervised Chinese Word Segmentation Using Partial-Label Learning With Conditional Random Fields", |
| "authors": [ |
| { |
| "first": "Fan", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Nuance Communications Inc", |
| "location": {} |
| }, |
| "email": "fan.yang@nuance.com" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Vozila", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Nuance Communications Inc", |
| "location": {} |
| }, |
| "email": "paul.vozila@nuance.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "There is rich knowledge encoded in online web data. For example, punctuation and entity tags in Wikipedia data define some word boundaries in a sentence. In this paper we adopt partial-label learning with conditional random fields to make use of this valuable knowledge for semi-supervised Chinese word segmentation. The basic idea of partial-label learning is to optimize a cost function that marginalizes the probability mass in the constrained space that encodes this knowledge. By integrating some domain adaptation techniques, such as EasyAdapt, our result reaches an F-measure of 95.98% on the CTB-6 corpus, a significant improvement from both the supervised baseline and a previous proposed approach, namely constrained decode.", |
| "pdf_parse": { |
| "paper_id": "D14-1010", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "There is rich knowledge encoded in online web data. For example, punctuation and entity tags in Wikipedia data define some word boundaries in a sentence. In this paper we adopt partial-label learning with conditional random fields to make use of this valuable knowledge for semi-supervised Chinese word segmentation. The basic idea of partial-label learning is to optimize a cost function that marginalizes the probability mass in the constrained space that encodes this knowledge. By integrating some domain adaptation techniques, such as EasyAdapt, our result reaches an F-measure of 95.98% on the CTB-6 corpus, a significant improvement from both the supervised baseline and a previous proposed approach, namely constrained decode.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "A general approach for supervised Chinese word segmentation is to formulate it as a character sequence labeling problem, to label each character with its location in a word. For example, Xue (2003) proposes a four-label scheme based on some linguistic intuitions: 'B' for the beginning character of a word, 'I' for the internal characters, 'E' for the ending character, and 'S' for singlecharacter word. Thus the word sequence \"\u6d3d\u8c08\u4f1a \u5f88 \u6210\u529f\" can be turned into a character sequence with labels as \u6d3d\\B \u8c08\\I \u4f1a\\E \u5f88\\S \u6210\\B \u529f\\E. A machine learning algorithm for sequence labeling, such as conditional random fields (CRF) (Lafferty et al., 2001) , can be applied to the labelled training data to learn a model.", |
| "cite_spans": [ |
| { |
| "start": 187, |
| "end": 197, |
| "text": "Xue (2003)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 610, |
| "end": 633, |
| "text": "(Lafferty et al., 2001)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
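The four-label scheme described above is easy to sketch in code. The following is an illustrative snippet by the editor, not the authors' implementation; the function name `bies_labels` is an assumption:

```python
# Minimal sketch of Xue's (2003) four-label scheme: B = word-initial,
# I = internal, E = word-final, S = single-character word.
def bies_labels(words):
    labels = []
    for w in words:
        if len(w) == 1:
            labels.append("S")
        else:
            labels.extend(["B"] + ["I"] * (len(w) - 2) + ["E"])
    return labels

# The example from the text: 洽谈会 很 成功 -> B I E S B E
print(bies_labels(["洽谈会", "很", "成功"]))  # ['B', 'I', 'E', 'S', 'B', 'E']
```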
| { |
| "text": "Labelled data for supervised learning of Chinese word segmentation, however, is usually expensive and tends to be of a limited amount. Researchers are thus interested in semi-supervised learning, which is to make use of unlabelled data to further improve the performance of supervised learning. There is a large amount of unlabelled data available, for example, the Gigaword corpus in the LDC catalog or the Chinese Wikipedia on the web.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Faced with the large amount of unlabelled data, an intuitive idea is to use self-training or EM, by first training a baseline model (from the supervised data) and then iteratively decoding the unlabelled data and updating the baseline model. Jiao et al. (2006) and Mann and McCallum (2007) further propose to minimize the entropy of the predicted label distribution on unlabeled data and use it as a regularization term in CRF (i.e. entropy regularization). Beyond these ideas, Liang (2005) and Sun and Xu (2011) experiment with deriving a large set of statistical features such as mutual information and accessor variety from unlabelled data, and add them to supervised discriminative training. Zeng et al. (2013b) experiment with graph propagation to extract information from unlabelled data to regularize the CRF training. Yang and Vozila (2013) , Zhang et al. (2013) , and Zeng et al. (2013a) experiment with co-training for semi-supervised Chinese word segmentation. All these approaches only leverage the distribution of the unlabelled data, yet do not make use of the knowledge that the unlabelled data might have integrated in.", |
| "cite_spans": [ |
| { |
| "start": 242, |
| "end": 260, |
| "text": "Jiao et al. (2006)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 265, |
| "end": 289, |
| "text": "Mann and McCallum (2007)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 478, |
| "end": 490, |
| "text": "Liang (2005)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 495, |
| "end": 512, |
| "text": "Sun and Xu (2011)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 696, |
| "end": 715, |
| "text": "Zeng et al. (2013b)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 826, |
| "end": 848, |
| "text": "Yang and Vozila (2013)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 851, |
| "end": 870, |
| "text": "Zhang et al. (2013)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 877, |
| "end": 896, |
| "text": "Zeng et al. (2013a)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "There could be valuable information encoded within the unlabelled data that researchers can take advantage of. For example, punctuation creates natural word boundaries (Li and Sun, 2009) : the character before a comma can only be labelled as either 'S' or 'E', while the character after a comma can only be labelled as 'S' or 'B'. Furthermore, entity tags (HTML tags or Wikipedia tags) on the web, such as emphasis and cross reference, also provide rich information for word segmentation: they might define a word or at least give word boundary information similar to punctuation. Jiang et al. (2013) refer to such structural information on the web as natural annotations, and propose that they encode knowledge for NLP. For Chinese word segmentation, natural annotations and punctuation create a sausage 1 constraint for the possible labels, as illustrated in Figure 1 . In the sentence \"\u8fd1\u5e74\u6765\uff0c\u4eba\u5de5\u667a\u80fd\u548c\u673a\u5668\u5b66\u4e60\u8fc5 \u731b\u53d1\u5c55\u3002\", the first character \u8fd1 can only be labelled with 'S' or 'B'; and the characters \u6765 before the comma and \u5c55 before the Chinese period can only be labelled as 'S' or 'E'. \"\u4eba\u5de5\u667a\u80fd\" and \"\u673a \u5668\u5b66\u4e60\" are two Wikipedia entities, and so they define the word boundaries before the first character and after the last character of the entities as well. The single character \u548c between these two entities has only one label 'S'. This sausage constraint thus encodes rich information for word segmentation.", |
| "cite_spans": [ |
| { |
| "start": 168, |
| "end": 186, |
| "text": "(Li and Sun, 2009)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 581, |
| "end": 600, |
| "text": "Jiang et al. (2013)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 861, |
| "end": 869, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To make use of the knowledge encoded in the sausage constraint, Jiang et al. (2013) adopt a constrained decode approach. They first train a baseline model with labelled data, and then run constrained decode on the unlabelled data by binding the search space with the sausage; and so the decoded labels are consistent with the sausage constraint. The unlabelled data, together with the labels from constrained decode, are then selectively added to the labelled data for training the final model. This approach, using constrained decode as a middle step, provides an indirect way of leaning the knowledge. However, the middle step, constrained decode, has the risk of reinforcing the errors in the baseline model: the decoded labels added to the training data for building the final model might contain errors introduced from the baseline model. The knowledge encoded in the data carrying the information from punctuation and natural annotations is thus polluted by the errorful re-decoded labels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A sentence where each character has exactly one label is fully-labelled; and a sentence where each character receives all possible labels is zerolabelled. A sentence with sausage-constrained labels can be viewed as partially-labelled. These partial labels carry valuable information that researchers would like to learn in a model, yet the normal CRF training typically uses fully-labelled sentences. Recently, T\u00e4ckstr\u00f6m et al. (2013) propose an approach to train a CRF model directly from partial labels. The basic idea is to marginalize the probability mass of the constrained sausage in the cost function. The normal CRF training using fully-labelled sentences is a special case where the sausage constraint is a linear line; while on the other hand a zero-labelled sentence, where the sausage constraint is the full lattice, makes no contribution in the learning since the sum of probabilities is deemed to be one. This new approach, without the need of using constrained re-decoding as a middle step, provides a direct means to learn the knowledge in the partial labels.", |
| "cite_spans": [ |
| { |
| "start": 411, |
| "end": 434, |
| "text": "T\u00e4ckstr\u00f6m et al. (2013)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this research we explore using the partiallabel learning for semi-supervised Chinese word segmentation. We use the CTB-6 corpus as the labelled training, development and test data, and use the Chinese Wikipedia as the unlabelled data. We first train a baseline model with labelled data only, and then selectively add Wikipedia data with partial labels to build a second model. Because the Wikipedia data is out of domain and has distribution bias, we also experiment with two domain adaptation techniques: model interpolation and EasyAdapt (Daum\u00e9 III, 2007) . Our result reaches an F-measure of 95.98%, an absolute improvement of 0.72% over the very strong base-line (corresponding to 15.19% relative error reduction), and 0.33% over the constrained decode approach (corresponding to 7.59% relative error reduction). We conduct a detailed error analysis, illustrating how partial-label learning excels constrained decode in learning the knowledge encoded in the Wikipedia data. As a note, our result also out-performs (Wang et al., 2011) and (Sun and Xu, 2011) .", |
| "cite_spans": [ |
| { |
| "start": 543, |
| "end": 560, |
| "text": "(Daum\u00e9 III, 2007)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 1021, |
| "end": 1040, |
| "text": "(Wang et al., 2011)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 1045, |
| "end": 1063, |
| "text": "(Sun and Xu, 2011)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this section, we review in more detail the partial-label learning algorithm with CRF proposed by (T\u00e4ckstr\u00f6m et al., 2013) . CRF is an exponential model that expresses the conditional probability of the labels given a sequence, as Equation 1, where y denotes the labels, x denotes the sequence, \u03a6(x, y) denotes the feature functions, and \u03b8 is the parameter vector.", |
| "cite_spans": [ |
| { |
| "start": 100, |
| "end": 124, |
| "text": "(T\u00e4ckstr\u00f6m et al., 2013)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial-Label Learning with CRF", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "Z(x) = \u2211 y exp(\u03b8 T \u03a6(x, y)) is the normalization term. p \u03b8 (y|x) = exp(\u03b8 T \u03a6(x, y)) Z(x)", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Partial-Label Learning with CRF", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In full-label training, where each item in the sequence is labelled with exactly one tag, maximum likelihood is typically used as the optimization target. We simply sum up the log-likelihood of the n labelled sequences in the training set, as shown in Equation 2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial-Label Learning with CRF", |
| "sec_num": "2" |
| }, |
| { |
| "text": "L(\u03b8) = n \u2211 i=1 log p \u03b8 (y|x) = n \u2211 i=1 (\u03b8 T \u03a6(x i , y i ) \u2212 log Z(x i )) (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial-Label Learning with CRF", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The gradient is calculated as Equation 3, in which the first term 1 n \u2211 n i=1 \u03a6 j is the empirical expectation of feature function \u03a6 j , and the second term E[\u03a6 j ] is the model expectation. Typically a forward-backward process is adopted for calculating the latter.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial-Label Learning with CRF", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u2202 \u2202\u03b8 j L(\u03b8) = 1 n n \u2211 i=1 \u03a6 j \u2212 E[\u03a6 j ]", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Partial-Label Learning with CRF", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In partial-label training, each item in the sequence receives multiple labels, and so for each sequence we have a sausage constraint, denoted a\u015d Y (x,\u1ef9). The marginal probability of the sausage is defined as Equation 4.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial-Label Learning with CRF", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "p \u03b8 (\u0176 (x,\u1ef9)|x) = \u2211 y\u2208\u0176 (x,\u1ef9) p \u03b8 (y|x)", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Partial-Label Learning with CRF", |
| "sec_num": "2" |
| }, |
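Equation 4 can be checked by brute force on a toy sequence. The sketch below is the editor's, with an assumed trivial scoring function rather than wapiti's forward-backward implementation; it shows that the sausage marginal is the summed probability of all label paths inside the constraint:

```python
import itertools
import math

LABELS = "BIES"

def score(x, y, theta):
    # Toy linear score theta^T Phi(x, y) using only unigram (char, label) features.
    return sum(theta.get((c, t), 0.0) for c, t in zip(x, y))

def sausage_marginal(x, sausage, theta):
    """p(Y_hat | x): probability mass of constrained paths over all paths (Eq. 4)."""
    Z = sum(math.exp(score(x, y, theta))
            for y in itertools.product(LABELS, repeat=len(x)))
    mass = sum(math.exp(score(x, y, theta))
               for y in itertools.product(*sausage))
    return mass / Z

x = "机器"
# A zero-labelled sentence (full lattice) has marginal exactly 1,
# so it contributes nothing to the partial-label objective.
print(sausage_marginal(x, [LABELS, LABELS], {}))  # 1.0
# A fully-labelled sentence reduces to the ordinary likelihood;
# with all-zero weights every path is uniform, so this is 1/16.
print(sausage_marginal(x, [["B"], ["E"]], {}))
```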
| { |
| "text": "The optimization target thus is to maximize the probability mass of the sausage, as shown in Equation 5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial-Label Learning with CRF", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "L(\u03b8) = n \u2211 i=1 logp \u03b8 (\u0176 (x i ,\u1ef9 i )|x i )", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Partial-Label Learning with CRF", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A gradient-based approach such as L-BFGS (Liu and Nocedal, 1989) can be employed to optimize Equation 5. The gradient is calculated as Equation 6, where", |
| "cite_spans": [ |
| { |
| "start": 41, |
| "end": 64, |
| "text": "(Liu and Nocedal, 1989)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial-Label Learning with CRF", |
| "sec_num": "2" |
| }, |
| { |
| "text": "E\u0176 (x,\u1ef9) [\u03a6 j ]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial-Label Learning with CRF", |
| "sec_num": "2" |
| }, |
| { |
| "text": "is the empirical expectation of feature function \u03a6 j constrained by the sausage, and E[\u03a6 j ] is the same model expectation as in standard CRF. E\u0176 (x,\u1ef9) [\u03a6 j ] can be calculated via a forward-backward process in the constrained sausage.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial-Label Learning with CRF", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u2202 \u2202\u03b8 j L(\u03b8) = E\u0176 (x,\u1ef9) [\u03a6 j ] \u2212 E[\u03a6 j ]", |
| "eq_num": "(6)" |
| } |
| ], |
| "section": "Partial-Label Learning with CRF", |
| "sec_num": "2" |
| }, |
| { |
| "text": "For fully-labelled sentences,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial-Label Learning with CRF", |
| "sec_num": "2" |
| }, |
| { |
| "text": "E\u0176 (x,\u1ef9) [\u03a6 j ] = 1 n \u2211 n i=1 \u03a6 j ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial-Label Learning with CRF", |
| "sec_num": "2" |
| }, |
| { |
| "text": "and so the standard CRF is actually a special case of the partial-label learning.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial-Label Learning with CRF", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In this section we describe the basic setup for our experiments of semi-supervised Chinese word segmentation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment setup", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We use the CTB-6 corpus as the labelled data. We follow the official CTB-6 guideline in splitting the corpus into a training set, a development set, and a test set. The training set has 23420 sentences; the development set has 2079 sentences; and the test set has 2796 sentences. These are fully-labelled data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "For unlabelled data we use the Chinese Wikipedia. The Wikipedia data is quite noisy and asks for a lot of cleaning. We first filter out references and lists etc., and sentences with obviously bad segmentations, for example, where every character is separated by a space. We also remove sentences that contain mostly English words. We then convert all characters into full-width. We also convert traditional Chinese characters into simplified characters using the tool mediawiki-zhconverter 2 . We then randomly select 7737 sentences and reserve them as the test set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "To create the partial labels in the Wikipedia data, we use the information from cross-reference, emphasis, and punctuation. In our pilot study we found that it's beneficial to force a cross-reference or emphasis entity as a word when the item has 2 or 3 characters. That is, if an entity in the Wikipedia has three characters it receives the labels of \"BIE\"; and if it has two characters it is labelled as \"BE\". 3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "3.1" |
| }, |
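The labeling rules in this section might be sketched as follows. This is the editor's illustrative reconstruction, not the authors' code; the helper name `partial_labels` and the handling of entities longer than three characters (boundary constraints only) are assumptions. Punctuation constrains adjacent characters, and a 2- or 3-character entity is forced to be a word ("BE" / "BIE"):

```python
PUNCT = set("，。、！？；：")

def partial_labels(chars, entity_spans):
    """Per-character sets of admissible BIES labels (a sausage constraint)."""
    labs = [set("BIES") for _ in chars]
    for i, c in enumerate(chars):
        if c in PUNCT:
            labs[i] = {"S"}
            if i > 0:
                labs[i - 1] &= set("ES")   # a word must end before punctuation
            if i + 1 < len(chars):
                labs[i + 1] &= set("BS")   # a word must begin after punctuation
    for start, end in entity_spans:        # end is exclusive
        n = end - start
        if n == 2:                         # force the entity to be a word: "BE"
            labs[start], labs[end - 1] = {"B"}, {"E"}
        elif n == 3:                       # force "BIE"
            labs[start], labs[start + 1], labs[end - 1] = {"B"}, {"I"}, {"E"}
        else:                              # longer entities: boundaries only
            labs[start] &= set("BS")
            labs[end - 1] &= set("ES")
            if start > 0:
                labs[start - 1] &= set("ES")
            if end < len(chars):
                labs[end] &= set("BS")
    return labs

# 人工智能 and 机器学习 as entity spans: the 和 between them can only be 'S',
# reproducing the situation described for Figure 1.
labs = partial_labels(list("人工智能和机器学习"), [(0, 4), (5, 9)])
print(labs[4])  # {'S'}
```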
| { |
| "text": "We create the baseline supervised model by using an order-1 linear CRF with L2 regularization, to label a character sequence with the four candidate labels \"BIES\". We use the tool wapiti (Lavergne et al., 2010) .", |
| "cite_spans": [ |
| { |
| "start": 187, |
| "end": 210, |
| "text": "(Lavergne et al., 2010)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supervised baseline model", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Following , Sun (2010) , and Low et al. 2005, we extract two types of features: character-level features and word-level features. Given a character c 0 in the character sequence", |
| "cite_spans": [ |
| { |
| "start": 12, |
| "end": 22, |
| "text": "Sun (2010)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supervised baseline model", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "...c \u22122 c \u22121 c 0 c 1 c 2 ...:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supervised baseline model", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Character-level features :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supervised baseline model", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\u2022 Character unigrams: c \u22122 , c \u22121 , c 0 , c 1 , c 2 \u2022 Character bigrams: c \u22122 c \u22121 , c \u22121 c \u22120 , c 0 c 1 , c 1 c 2 \u2022 Consecutive character equivalence: ?c \u22122 = c \u22121 , ?c \u22121 = c \u22120 , ?c 0 = c 1 , ?c 1 = c 2 \u2022 Separated character equivalence: ?c \u22123 = c \u22121 , ?c \u22122 = c 0 , ?c \u22121 = c 1 , ?c 0 = c 2 , ?c 1 = c 3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supervised baseline model", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\u2022 Whether the current character is a punctuation:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supervised baseline model", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "?Punct(c 0 ) \u2022 Character sequence pattern: T (C \u22122 )T (C \u22121 )T (C 0 )T (C 1 )T (C 2 ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supervised baseline model", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We classify all characters into four types. Type one has three characters '\u5e74' (year) '\u6708' (month) '\u65e5' (date). Type two includes number characters. Type three includes English characters. All others are Type four characters. Thus \"\u53bb\u5e74\u4e09\u6708S\" would generate the character sequence pattern \"41213\". (Sun and Xu, 2011) . If the current character c 0 and its surrounding context compose an idiom, we generate a feature for c 0 of its position in the idiom. For example, if c \u22121 c 0 c 1 c 2 is an idiom, we generate feature \"Idiom-2\" for c 0 .", |
| "cite_spans": [ |
| { |
| "start": 291, |
| "end": 309, |
| "text": "(Sun and Xu, 2011)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supervised baseline model", |
| "sec_num": "3.2" |
| }, |
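The character-type classification and the sequence-pattern feature can be illustrated directly. This sketch is the editor's; the exact number-character inventory and the padding symbol for out-of-range positions are assumptions:

```python
def char_type(c):
    """Four-way character classification from the text."""
    if c in "年月日":
        return "1"                         # type 1: date characters
    if c.isdigit() or c in "零一二三四五六七八九十百千万":
        return "2"                         # type 2: number characters
    if c.isascii() and c.isalpha():
        return "3"                         # type 3: English characters
    return "4"                             # type 4: everything else

def seq_pattern(chars, i):
    """T(c-2)T(c-1)T(c0)T(c1)T(c2); '#' pads out-of-range positions."""
    return "".join(
        char_type(chars[j]) if 0 <= j < len(chars) else "#"
        for j in range(i - 2, i + 3)
    )

print(seq_pattern(list("去年三月S"), 2))  # '41213', as in the example above
```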
| { |
| "text": "The above features together with label bigrams are fed to wapiti for training. The supervised baseline model is created with the CTB-6 corpus without the use of Wikipedia data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supervised baseline model", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The overall process of applying partial-label learning to Wikipedia data is shown in Algorithm 1. Following (Jiang et al., 2013) , we first train the supervised baseline model, and use it to estimate the potential contribution for each sentence in the Wikipedia training data. We label the sentence with the baseline model, and then compare the labels with the constrained sausage. For each character, a consistent label is defined as an element in the constrained labels. For example, if the constrained labels for a character are \"SB\", the label 'S' or 'B' is consistent but 'I' or 'E' is not. The number of inconsistent labels for each sentence is then used as its potential contribution to the partial-label learning: higher number indicates that the partial-labels for the sentence contain more knowledge that the baseline system does not integrate, and so have higher potential contribution. The Wikipedia training sentences are then ranked by their potential contribution, and the top Figure 2 : Encoded knowledge: inconsistency ratio and label reduction K sentences together with their partial labels are then added to the CTB-6 training data to build a new model, using partial-label learning. 4 In our experiments, we try six data points with K = 100k, 200k, 300k, 400k, 500k, 600k. Figure 2 gives a rough idea of the knowledge encoded in Wikipedia for these data points with inconsistency ratio and label reduction. Inconsistency ratio is the percentage of characters that have inconsistent labels; and label reduction is the percentage of the labels reduced in the full lattice.", |
| "cite_spans": [ |
| { |
| "start": 108, |
| "end": 128, |
| "text": "(Jiang et al., 2013)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 1203, |
| "end": 1204, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 992, |
| "end": 1000, |
| "text": "Figure 2", |
| "ref_id": null |
| }, |
| { |
| "start": 1293, |
| "end": 1301, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Partial-label learning", |
| "sec_num": "3.3" |
| }, |
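The selection step of Algorithm 1 can be sketched as follows. This is the editor's illustration: the decoder is stubbed out, whereas in the paper it is the supervised baseline model M0:

```python
def inconsistent(pred, sausage):
    """Number of predicted labels falling outside the constrained label sets."""
    return sum(1 for y, allowed in zip(pred, sausage) if y not in allowed)

def select_top_k(sentences, decode, k):
    """Rank sentences by inconsistency with the baseline decode; keep the top K."""
    scored = []
    for chars, sausage in sentences:
        diff = inconsistent(decode(chars), sausage)
        if diff > 0:                       # drop sentences the baseline handles
            scored.append((diff, chars, sausage))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [(chars, sausage) for _, chars, sausage in scored[:k]]

# Toy stand-in decoder that labels every character 'S'.
decode = lambda chars: ["S"] * len(chars)
sents = [
    (list("ab"), [{"B"}, {"E"}]),          # 2 inconsistent labels -> kept
    (list("cd"), [{"S"}, {"B", "S"}]),     # fully consistent -> filtered out
]
print(select_top_k(sents, decode, 5))
```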
| { |
| "text": "We modify wapiti to implement the partial-label learning as described in Section 2. Same as baseline, L2 regularization is adopted.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial-label learning", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Algorithm 1 Partial-label learning 1. Train supervised baseline model M 0 2. For each sentence x in Wiki-Train: 3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial-label learning", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "y \u2190 Decode(x, M 0 ) 4. diff \u2190 Inconsistent(y,\u0176 (x,\u1ef9)) 5. if diff > 0: 6. C \u2190 C \u222a (\u0176 (x,\u1ef9), diff) 7. Sort(C, diff, reverse) 8.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial-label learning", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Train model M pl with CTB-6 and top K sentences in C using partial-label learning", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial-label learning", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Jiang et al. (2013) implement the constrained decode algorithm with perceptron. However, CRF is generally believed to out-perform perceptron, yet the comparison of CRF vs perceptron is out 4 Knowledge is sparsely distributed in the Wikipedia data. Using the Wikipedia data without the CTB-6 data for partiallabel learning does not necessarily guarantee convergence. Also the CTB-6 training data helps to learn that certain label transitions, such as \"B B\" or \"E E\", are not legal. of the scope of this paper. Thus for fair comparison, we re-implement the constrained decode algorithm with CRF.", |
| "cite_spans": [ |
| { |
| "start": 189, |
| "end": 190, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constrained decode", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Algorithm 2 shows the constrained decode implementation. We first train the baseline model with the CTB-6 data. We then use this baseline model to run normal decode and constrained decode for each sentence in the Wikipedia training set. If the normal decode and constrained decode have different labels, we add the constrained decode together with the number of different labels to the filtered Wikipedia training corpus. The filtered Wikipedia training corpus is then sorted using the number of different labels, and the top K sentences with constrained decoded labels are then added to the CTB-6 training data for building a new model using normal CRF.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constrained decode", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Algorithm 2 Constrained decode 1. Train supervised baseline model M 0 2. For each sentence x in Wiki-Train: 3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constrained decode", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "y \u2190 Decode(x, M 0 ) 4.\u0233 \u2190 ConstrainedDecode(x, M 0 ) 5. diff \u2190 Difference(y,\u0233) 6. if diff > 0: 7. C \u2190 C \u222a (\u0233, diff) 8. Sort(C, diff, reverse) 9.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constrained decode", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Train model M cd with CTB-6 and top K sentences in C using normal CRF", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constrained decode", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "In order to determine how well the models learn the encoded knowledge (i.e. partial labels) from the Wikipedia data, we first evaluate the models against the Wikipedia test set. The Wikipedia test set, however, is only partially-labelled. Thus the metric we use here is consistent label accuracy, similar to how we rank the sentences in Section 3.3, defined as whether a predicted label for a character is an element in the constrained labels. Because partial labels are only sparsely distributed in the test data, a lot of characters receive all four labels in the constrained sausage. Evaluating against characters with all four labels do not really represent the models' difference as it is deemed to be consistent. Thus beyond evaluating against all characters in the Wikipedia test set (referred to as Full measurement), we also evaluate against characters that are only constrained with less than four labels (referred to as Label measurement). The Label measurement focuses on en-coded knowledge in the test set and so can better represent the model's capability of learning from the partial labels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation on Wikipedia test set", |
| "sec_num": "4" |
| }, |
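The two measurements can be stated precisely in a few lines. This sketch is the editor's; the flag `label_only=True` corresponds to the Label measurement:

```python
def consistent_accuracy(preds, sausages, label_only=False):
    """Fraction of predicted labels contained in the constrained label sets.

    Full measurement scores every character; the Label measurement
    (label_only=True) scores only characters with fewer than four labels.
    """
    correct = total = 0
    for pred, sausage in zip(preds, sausages):
        for y, allowed in zip(pred, sausage):
            if label_only and len(allowed) >= 4:
                continue                   # unconstrained: trivially consistent
            total += 1
            correct += y in allowed
    return correct / total

preds = [["B", "E", "S"]]
sausages = [[{"B", "S"}, {"B", "I", "E", "S"}, {"E"}]]
print(consistent_accuracy(preds, sausages))                   # 2/3
print(consistent_accuracy(preds, sausages, label_only=True))  # 1/2
```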
| { |
| "text": "Results are shown in Figure 3 with the Full measurement and in Figure 4 with the Label measurement. The x axes are the size of Wikipedia training data, as explained in Section 3.3. As can be seen, both constrained decode and partiallabel learning perform much better than the baseline supervised model that is trained from CTB-6 data only, indicating that both of them are learning the encoded knowledge from the Wikipedia training data. Also we see the trend that the performance improves with more data in training, also suggesting the learning of encoded knowledge. Most importantly, we see that partial-label learning consistently out-performs constrained decode in all data points. With the Label measurement, partial-label learning gives 1.7% or higher absolute improvement over constrained decode across all data points. At the data point of 600k, constrained decode gives an accuracy of 97.14%, while partial-label learning gives 98.93% (baseline model gives 87.08%). The relative gain (from learning the knowledge) of partial-label learning over constrained decode is thus 18% ((98.93 \u2212 97.14)/(97.14 \u2212 87.08)). These results suggest that partial-label learning is more effective in learning the encoded knowledge in the Wikipedia data than constrained decode.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 21, |
| "end": 29, |
| "text": "Figure 3", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 63, |
| "end": 71, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation on Wikipedia test set", |
| "sec_num": "4" |
| }, |
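The 18% relative-gain figure quoted above follows directly from the three accuracies:

```python
# Relative gain of partial-label learning over constrained decode at the
# 600k data point (Label measurement), using the accuracies quoted above:
# the gain over the baseline achieved by each method is compared.
baseline, cd, pl = 87.08, 97.14, 98.93
relative_gain = (pl - cd) / (cd - baseline)
print(round(relative_gain, 2))  # 0.18, i.e. an 18% relative gain
```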
| { |
| "text": "Our ultimate goal, however, is to determine whether we can leverage the encoded knowledge in the Wikipedia data to improve the word segmentation in CTB-6. We run our models against the CTB-6 test set, with results shown in Figure 5. Because we have fully-labelled sentences in the CTB-6 data, we adopt the F-measure as our evaluation metric here. The baseline model achieves 95.26% in F-measure, providing a stateof-the-art supervised performance. Constrained decode is able to improve on this already very strong baseline performance, and we see the nice trend of higher performance with more unlabeled data for training, indicating that constrained decode is making use of the encoded knowledge in the Wikipedia data to help CTB-6 segmentation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 223, |
| "end": 229, |
| "text": "Figure", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model adaptation", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "When we look at the partial-label model, however, the results tell a totally different story. First, it actually performs worse than the baseline model, and the more data added to training, the worse the performance is. In the previous section we show that partial-label learning is more effective in learning the encoded knowledge in Wikipedia data than constrained decode. So, what goes wrong? We hypothesize that there is an out-of-domain distribution bias in the partial labels, and so the more data we add, the worse the in-domain performance is. Constrained decode actually helps to smooth out the out-of-domain distribution bias by using the re-decoded labels with the in-domain supervised baseline model. For example, both the baseline model and constrained decode correctly give the segmentation \"\u63d0\u4f9b/\u4e86/\u8fd0\u8f93/\u548c/\u7ed9 \u7ed9 \u7ed9\u6392 \u6392 \u6392\u6c34 \u6c34 \u6c34/\u4e4b/\u4fbf\", while partiallabel learning gives incorrect segmentation \"\u63d0 \u4f9b/\u4e86/\u8fd0 \u8f93/\u548c/\u7ed9 \u7ed9 \u7ed9/\u6392 \u6392 \u6392 \u6c34 \u6c34 \u6c34/\u4e4b/\u4fbf\". Looking at the Wikipedia training data, \u6392\u6c34 is tagged as an entity 13 times; and \u7ed9 \u6392 \u6c34, although occurs 13 times in the data, is never tagged as an entity. Partial-label learning, which focuses on the tagged entities, thus overrules the segmentation of \u7ed9\u6392 \u6c34. Constrained decode, on the other hand, by using the correctly re-decoded labels from the baseline model, observes enough evidence to correctly segment \u7ed9\u6392\u6c34 as a word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model adaptation", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "To smooth out the out-of-domain distribution bias, we experiment with two approaches: model interpolation and EasyAdapt (Daum\u00e9 III, 2007) .", |
| "cite_spans": [ |
| { |
| "start": 120, |
| "end": 137, |
| "text": "(Daum\u00e9 III, 2007)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model adaptation", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We linearly interpolate the model of partial-label learning M pl with the baseline model M 0 to create the final model M pl + , as shown in Equation 7. The interpolation weight is optimized via a grid search between 0.0 and 1.0 with a step of 0.1, tuned on the CTB-6 development set. Again we modify wapiti so that it takes two models and an interpolation weight as input. For each model it creates a search lattice with posteriors, and then linearly combines the two lattices using the interpolation weight to create the final search space for decoding. As shown in Figure 5 , model M pl + consistently out-performs constrained decode in all data points. We also see the trend of better performance with more training data.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 567, |
| "end": 575, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model interpolation", |
| "sec_num": "5.1.1" |
| }, |
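The lattice combination performed inside the modified wapiti decoder can be illustrated with a toy version that operates directly on per-character posterior matrices. The function names, the grid-search helper, and the 4-column label layout (B, I, E, S) are assumptions for illustration, not the actual wapiti code.

```python
import numpy as np

# Minimal sketch of the posterior interpolation in Equation 7. The real
# implementation combines wapiti search lattices; here each "lattice" is
# just a (num_chars x 4) matrix of per-character label posteriors.

def interpolate_posteriors(post_m0, post_mpl, lam):
    """M_pl+ posteriors: lam * M_0 + (1 - lam) * M_pl, per Equation 7."""
    return lam * post_m0 + (1.0 - lam) * post_mpl

def grid_search_lambda(dev_score, step=0.1):
    """Pick the interpolation weight maximizing a dev-set score function."""
    weights = np.round(np.arange(0.0, 1.0 + step, step), 1)
    return max(weights, key=dev_score)

# Toy posteriors for a 2-character input (columns: B, I, E, S).
m0 = np.array([[0.7, 0.1, 0.1, 0.1],
               [0.1, 0.1, 0.7, 0.1]])
mpl = np.array([[0.4, 0.2, 0.2, 0.2],
                [0.2, 0.2, 0.4, 0.2]])
combined = interpolate_posteriors(m0, mpl, lam=0.5)
print(combined[0])  # ≈ [0.55 0.15 0.15 0.15]
```

Because the combination is a convex mixture, each row of the combined lattice still sums to one, so it remains a valid posterior distribution for decoding.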
| { |
| "text": "M pl + = \u03bb * M 0 + (1 \u2212 \u03bb) * M pl (7) 5.1.2 EasyAdapt", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model interpolation", |
| "sec_num": "5.1.1" |
| }, |
| { |
| "text": "EasyAdapt is a straightforward technique but has been shown effective in many domain adaptation tasks (Daum\u00e9 III, 2007) . We train the model M pl ea with feature augmentation. For each out-ofdomain training instance < x o , y o >, where x o is the input features and y o is the (partial) labels, we copy the features and file them as an additional feature set, and so the training instance becomes < x o , x o , y o >. The in-domain training data remains the same. Consistent with (Daum\u00e9 III, 2007) , EasyAdapt gives us the best performance, as show in Figure 5 . Furthermore, unlike in (Jiang et al., 2013) where they find a plateau, our results show no harm adding more training data for partial-label learning when integrated with domain adaptation, although the performance seems to saturate after 400k sentences.", |
| "cite_spans": [ |
| { |
| "start": 102, |
| "end": 119, |
| "text": "(Daum\u00e9 III, 2007)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 481, |
| "end": 498, |
| "text": "(Daum\u00e9 III, 2007)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 587, |
| "end": 607, |
| "text": "(Jiang et al., 2013)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 553, |
| "end": 561, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model interpolation", |
| "sec_num": "5.1.1" |
| }, |
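The feature augmentation described above can be sketched as below, assuming string-named sparse CRF features and a single out-of-domain source ("wiki"); the function and feature names are hypothetical.

```python
# Minimal sketch of EasyAdapt feature augmentation (Daumé III, 2007) as
# used here: each out-of-domain instance <x_o, y_o> becomes
# <x_o, x_o, y_o>, i.e. its features are copied and filed as an
# additional, domain-prefixed set. In-domain instances are unchanged.

def easyadapt_features(features, domain):
    """Augment out-of-domain feature sets with a domain-specific copy."""
    if domain == "in":
        return list(features)  # in-domain training data remains the same
    return list(features) + [f"{domain}:{f}" for f in features]

print(easyadapt_features(["c0=B", "c0c1=BE"], domain="wiki"))
# ['c0=B', 'c0c1=BE', 'wiki:c0=B', 'wiki:c0c1=BE']
print(easyadapt_features(["c0=B"], domain="in"))
# ['c0=B']
```

The shared (unprefixed) copy lets the CRF pool evidence across domains, while the prefixed copy absorbs Wikipedia-specific distribution bias so it does not leak into CTB-6 decoding.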
| { |
| "text": "Finally, we search for the parameter setting of best performance on the CTB-6 development set, which is to use EasyAdapt with K = 600k sentences of Wikipedia data. With this setting, the performance on the CTB-6 test set is 95.98% in F-measure. This is 0.72% absolute improvement over supervised baseline (corresponding to 15.19% relative error reduction), and 0.33% absolute improvement over constrained decode (corresponding to 7.59% relative error reduction); the differences are both statistically significant (p < 0.001). 5 As a note, this result out-performs (Sun and Xu, 2011) (95.44%) and (Wang et al., 2011) (95.79%), and the differences are also statistically significant (p < 0.001).", |
| "cite_spans": [ |
| { |
| "start": 597, |
| "end": 616, |
| "text": "(Wang et al., 2011)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model interpolation", |
| "sec_num": "5.1.1" |
| }, |
| { |
| "text": "To better understand why partial-label learning is more effective in learning the encoded knowledge, we look at cases where M 0 and M cd have the incorrect segmentation while M pl (and its domain adaptation variance M pl + and M pl ea ) have the correct segmentation. We find that the majority is due to the error in re-decoded labels outside of encoded knowledge. For example, M 0 and M cd give the segmentation \"\u5730\u9707/\u4e3a/\u91cc \u91cc \u91cc/\u6c0f \u6c0f \u6c0f/6.9/ \u7ea7\", yet the correct segmentation given by partial-label learning is \"\u5730 \u9707/\u4e3a/\u91cc \u91cc \u91cc \u6c0f \u6c0f \u6c0f/6.9/ \u7ea7\". Looking at the Wikipedia training data, there are 38 tagged entities of \u91cc\u6c0f, but there are another 190 mentions of \u91cc\u6c0f that are not tagged as an entity. Thus for constrained decode it sees 38 cases of \"\u91cc\\B \u6c0f\\E\" and 190 cases of \"\u91cc\\S \u6c0f\\S\" in the Wikipedia training data. The former comes from the encoded knowledge while the latter comes from re-decoded labels by the baseline model. The much bigger number of incorrect labels from the baseline redecoding badly pollute the encoded knowledge. This example illustrates that constrained decode reinforces the errors from the baseline. On the other hand, the training materials for partial-label learning are purely the encoded knowledge, which is not impacted by the baseline model error. In this example, partial-label learning focuses only on the 38 cases of \"\u91cc\\B \u6c0f\\E\" and so is able to learn that \u91cc\u6c0f is a word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis with examples", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "As a final remark, we want to make a point that, although the re-decoded labels serve to smooth out the distribution bias, the Wikipedia data is indeed not the ideal data set for such a purpose, because it itself is out of domain. The performance tends to degrade when we apply the baseline model to re-decode the out-of-domain Wikipedia data. The errorful re-decoded labels, when being used to train the model M cd , could lead to further errors. For example, the baseline model M 0 is able to give the correct segmentation \"\u7535\u8111/\u5143 \u5143 \u5143\u5668 \u5668 \u5668\u4ef6 \u4ef6 \u4ef6\" in the CTB-6 test set. However, when it is applied to the Wikipedia data for constrained decode, for the seven occurrences of \u5143\u5668\u4ef6, three of which are correctly labelled as \"\u5143\\B \u5668\\I \u4ef6\\E\", but the other four have incorrect labels. The final model M cd trained from these labels then gives incorrect segmentation \"\u4e24/\u5e02/\u751f \u4ea7/\u7684/\u7535 \u8111/\u5143 \u5143 \u5143/\u5668 \u5668 \u5668\u4ef6 \u4ef6 \u4ef6/\u5927\u91cf/\u9500\u5f80/\u4e16\u754c/\u5404\u5730\" in the CTB-6 test set. On the other hand, model interpolation or EasyAdapt with partial-label learning, focusing only on the encoded knowledge and not being impacted by the errorful re-decoded labels, performs correctly in this case. For a more fair comparison between partial-label learning and constrained decode, we have also plotted the results of model interpolation and EasyAdapt for constrained decode in Figure 5 . As can be seen, they improve on constrained decode a bit but still fall behind the correspondent domain adaptation approach of partiallabel learning.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 1313, |
| "end": 1321, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Analysis with examples", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "There is rich information encoded in online web data. For example, punctuation and entity tags de-fine some word boundaries. In this paper we show the effectiveness of partial-label learning in digesting the encoded knowledge from Wikipedia data for the task of Chinese word segmentation. Unlike approaches such as constrained decode that use the errorful re-decoded labels, partial-label learning provides a direct means to learn the encoded knowledge. By integrating some domain adaptation techniques such as EasyAdapt, we achieve an F-measure of 95.98% in the CTB-6 corpus, a significant improvement from both the supervised baseline and constrained decode. Our results also beat (Wang et al., 2011) and (Sun and Xu, 2011) .", |
| "cite_spans": [ |
| { |
| "start": 683, |
| "end": 702, |
| "text": "(Wang et al., 2011)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 707, |
| "end": 725, |
| "text": "(Sun and Xu, 2011)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and future work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In this research we employ a sausage constraint to encode the knowledge for Chinese word segmentation. However, a sausage constraint does not reflect the legal label sequence. For example, in Figure 1 the links between label 'B ' and label 'S', between 'S' and 'E', and between 'E' and 'I' are illegal, and can confuse the machine learning. In our current work we solve this issue by adding some fully-labelled data into training. Instead we can easily extend our work to use a lattice constraint by removing the illegal transitions from the sausage. The partial-label learning stands the same, by executing the forward-backward process in the constrained lattice. In future work we will examine partial-label learning with this more enforced lattice constraint in depth.", |
| "cite_spans": [ |
| { |
| "start": 228, |
| "end": 289, |
| "text": "' and label 'S', between 'S' and 'E', and between 'E' and 'I'", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 192, |
| "end": 200, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Conclusion and future work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Also referred to as confusion network.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/tszming/mediawiki-zhconverter 3 Another possibility is to label it as \"SS\" but we find that it's very rare the case.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Statistical significance is evaluated with z-test using the standard deviation of \u221a F * (1 \u2212 F )/N , where F is the Fmeasure and N is the number of words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
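The footnote's significance test can be sketched as a two-sample z-test on F-measures. The word count N below is an illustrative assumption, not the actual CTB-6 test-set size.

```python
import math

# Sketch of the significance test described in the footnote: compare two
# F-measures with a z-test, taking sqrt(F * (1 - F) / N) as each score's
# standard deviation, where N is the number of words.

def z_statistic(f1, n1, f2, n2):
    se = math.sqrt(f1 * (1 - f1) / n1 + f2 * (1 - f2) / n2)
    return (f1 - f2) / se

# EasyAdapt (95.98%) vs. the supervised baseline (95.26%), with an
# assumed N of 80,000 words for both systems.
z = z_statistic(0.9598, 80000, 0.9526, 80000)
print(z > 3.29)  # a |z| beyond 3.29 corresponds to p < 0.001 (two-sided)
```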
| ], |
| "back_matter": [ |
| { |
| "text": "The authors would like to thank Wenbin Jiang, Xiaodong Zeng, and Weiwei Sun for helpful discussions, and the anonymous reviewers for insightful comments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Frustratingly easy domain adaptation", |
| "authors": [ |
| { |
| "first": "Hal", |
| "middle": [], |
| "last": "Daum\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "Iii", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Annual meetingassociation for computational linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "256--263", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hal Daum\u00e9 III. 2007. Frustratingly easy domain adap- tation. In Annual meetingassociation for computa- tional linguistics, pages 256-263. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Discriminative learning with natural annotations: Word segmentation as a case study", |
| "authors": [ |
| { |
| "first": "Wenbin", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Meng", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Yajuan", |
| "middle": [], |
| "last": "Lv", |
| "suffix": "" |
| }, |
| { |
| "first": "Yating", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Qun", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of The 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "761--769", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wenbin Jiang, Meng Sun, Yajuan Lv, Yating Yang, and Qun Liu. 2013. Discriminative learning with natural annotations: Word segmentation as a case study. In Proceedings of The 51st Annual Meet- ing of the Association for Computational Linguis- tics, pages 761-769.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Semisupervised conditional random fields for improved sequence segmentation and labeling", |
| "authors": [ |
| { |
| "first": "Feng", |
| "middle": [], |
| "last": "Jiao", |
| "suffix": "" |
| }, |
| { |
| "first": "Shaojun", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Chi-Hoon", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Russell", |
| "middle": [], |
| "last": "Greiner", |
| "suffix": "" |
| }, |
| { |
| "first": "Dale", |
| "middle": [], |
| "last": "Schuurmans", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, ACL-44", |
| "volume": "", |
| "issue": "", |
| "pages": "209--216", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Feng Jiao, Shaojun Wang, Chi-Hoon Lee, Russell Greiner, and Dale Schuurmans. 2006. Semi- supervised conditional random fields for improved sequence segmentation and labeling. In Proceed- ings of the 21st International Conference on Com- putational Linguistics and the 44th Annual Meet- ing of the Association for Computational Linguis- tics, ACL-44, pages 209-216.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [ |
| "D" |
| ], |
| "last": "Lafferty", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [ |
| "C N" |
| ], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the Eighteenth International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "282--289", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling se- quence data. In Proceedings of the Eighteenth In- ternational Conference on Machine Learning, pages 282-289, San Francisco, CA, USA.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Practical very large scale CRFs", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Lavergne", |
| "suffix": "" |
| }, |
| { |
| "first": "Olivier", |
| "middle": [], |
| "last": "Capp\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "Fran\u00e7ois", |
| "middle": [], |
| "last": "Yvon", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings the 48th Annual Meeting of the Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "504--513", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Lavergne, Olivier Capp\u00e9, and Fran\u00e7ois Yvon. 2010. Practical very large scale CRFs. In Proceed- ings the 48th Annual Meeting of the Association for Computational Linguistics (ACL), pages 504-513. Association for Computational Linguistics, July.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Punctuation as implicit annotations for Chinese word segmentation", |
| "authors": [ |
| { |
| "first": "Zhongguo", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Maosong", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Computational Linguistics", |
| "volume": "35", |
| "issue": "", |
| "pages": "505--512", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhongguo Li and Maosong Sun. 2009. Punctuation as implicit annotations for Chinese word segmentation. Computational Linguistics, 35:505-512.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Semi-supervised learning for natural language", |
| "authors": [ |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Percy Liang. 2005. Semi-supervised learning for natu- ral language. Master's thesis, MASSACHUSETTS INSTITUTE OF TECHNOLOGY, May.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "On the limited memory bfgs method for large scale optimization", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "C" |
| ], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "Mathematical Programming", |
| "volume": "45", |
| "issue": "3", |
| "pages": "503--528", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. C. Liu and J. Nocedal. 1989. On the limited mem- ory bfgs method for large scale optimization. Math- ematical Programming, 45(3):503-528, December.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A maximum entropy approach to chinese word segmentation", |
| "authors": [ |
| { |
| "first": "Jin", |
| "middle": [ |
| "Kiat" |
| ], |
| "last": "Low", |
| "suffix": "" |
| }, |
| { |
| "first": "Hwee Tou", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Wenyuan", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "161--164", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jin Kiat Low, Hwee Tou Ng, and Wenyuan Guo. 2005. A maximum entropy approach to chinese word seg- mentation. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, pages 161-164, San Francisco, CA, USA.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Efficient computation of entropy gradient for semisupervised conditional random fields", |
| "authors": [ |
| { |
| "first": "Gideon", |
| "middle": [ |
| "S" |
| ], |
| "last": "Mann", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers, NAACL-Short '07", |
| "volume": "", |
| "issue": "", |
| "pages": "109--112", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gideon S. Mann and Andrew McCallum. 2007. Ef- ficient computation of entropy gradient for semi- supervised conditional random fields. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers, NAACL-Short '07, pages 109-112.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Enhancing chinese word segmentation using unlabeled data", |
| "authors": [ |
| { |
| "first": "Weiwei", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Jia", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "970--979", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Weiwei Sun and Jia Xu. 2011. Enhancing chinese word segmentation using unlabeled data. In Pro- ceedings of the 2011 Conference on Empirical Meth- ods in Natural Language Processing, pages 970- 979, Edinburgh, Scotland, UK., July.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "A discriminative latent variable Chinese segmenter with hybrid word/character information", |
| "authors": [ |
| { |
| "first": "Xu", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Yaozhong", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Takuya", |
| "middle": [], |
| "last": "Matsuzaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshimasa", |
| "middle": [], |
| "last": "Tsuruoka", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun'ichi", |
| "middle": [], |
| "last": "Tsujii", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "56--64", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xu Sun, Yaozhong Zhang, Takuya Matsuzaki, Yoshi- masa Tsuruoka, and Jun'ichi Tsujii. 2009. A dis- criminative latent variable Chinese segmenter with hybrid word/character information. In Proceedings of Human Language Technologies: The 2009 An- nual Conference of the North American Chapter of the Association for Computational Linguistics, pages 56-64.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Word-based and character-based word segmentation models: comparison and combination", |
| "authors": [ |
| { |
| "first": "Weiwei", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters", |
| "volume": "", |
| "issue": "", |
| "pages": "1211--1219", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Weiwei Sun. 2010. Word-based and character-based word segmentation models: comparison and com- bination. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 1211-1219.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Token and type constraints for cross-lingual part-of-speech tagging", |
| "authors": [ |
| { |
| "first": "Oscar", |
| "middle": [], |
| "last": "T\u00e4ckstr\u00f6m", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipanjan", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1--12", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, Slav Petrov, Ryan McDonald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tag- ging. Transactions of the Association for Computa- tional Linguistics, 1:1-12.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Improving chinese word segmentation and POS tagging with semi-supervised methods using large auto-analyzed data", |
| "authors": [ |
| { |
| "first": "Yiou", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshimasa", |
| "middle": [], |
| "last": "Kazama", |
| "suffix": "" |
| }, |
| { |
| "first": "Wenliang", |
| "middle": [], |
| "last": "Tsuruoka", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 5th International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "309--317", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yiou Wang, Jun'ichi Kazama, Yoshimasa Tsuruoka, Wenliang Chen, Yujie Zhang, and Kentaro Tori- sawa. 2011. Improving chinese word segmentation and POS tagging with semi-supervised methods us- ing large auto-analyzed data. In Proceedings of the 5th International Joint Conference on Natural Lan- guage Processing, pages 309-317.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Chinese word segmentation as character tagging", |
| "authors": [ |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Computational Linguistics and Chinese Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "29--48", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nianwen Xue. 2003. Chinese word segmentation as character tagging. Computational Linguistics and Chinese Language Processing, pages 29-48.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "An empirical study of semi-supervised Chinese word segmentation using co-training", |
| "authors": [ |
| { |
| "first": "Fan", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Vozila", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1191--1200", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fan Yang and Paul Vozila. 2013. An empirical study of semi-supervised Chinese word segmentation us- ing co-training. In Proceedings of the 2013 Con- ference on Empirical Methods in Natural Language Processing, pages 1191-1200, Seattle, Washington, USA, October. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Co-regularizing characterbased and word-based models for semi-supervised chinese word segmentation", |
| "authors": [ |
| { |
| "first": "Xiaodong", |
| "middle": [], |
| "last": "Zeng", |
| "suffix": "" |
| }, |
| { |
| "first": "Derek", |
| "middle": [ |
| "F" |
| ], |
| "last": "Wong", |
| "suffix": "" |
| }, |
| { |
| "first": "Lidia", |
| "middle": [ |
| "S" |
| ], |
| "last": "Chao", |
| "suffix": "" |
| }, |
| { |
| "first": "Isabel", |
| "middle": [], |
| "last": "Trancoso", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of The 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "171--176", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiaodong Zeng, Derek F. Wong, Lidia S. Chao, and Isabel Trancoso. 2013a. Co-regularizing character- based and word-based models for semi-supervised chinese word segmentation. In Proceedings of The 51st Annual Meeting of the Association for Compu- tational Linguistics, pages 171-176.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Graph-based semisupervised model for joint chinese word segmentation and part-of-speech tagging", |
| "authors": [ |
| { |
| "first": "Xiaodong", |
| "middle": [], |
| "last": "Zeng", |
| "suffix": "" |
| }, |
| { |
| "first": "Derek", |
| "middle": [ |
| "F" |
| ], |
| "last": "Wong", |
| "suffix": "" |
| }, |
| { |
| "first": "Lidia", |
| "middle": [ |
| "S" |
| ], |
| "last": "Chao", |
| "suffix": "" |
| }, |
| { |
| "first": "Isabel", |
| "middle": [], |
| "last": "Trancoso", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of The 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "770--779", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiaodong Zeng, Derek F. Wong, Lidia S. Chao, and Isabel Trancoso. 2013b. Graph-based semi- supervised model for joint chinese word segmen- tation and part-of-speech tagging. In Proceedings of The 51st Annual Meeting of the Association for Computational Linguistics, pages 770-779.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Exploring representations from unlabeled data with co-training for Chinese word segmentation", |
| "authors": [ |
| { |
| "first": "Longkai", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Houfeng", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xu", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Mairgup", |
| "middle": [], |
| "last": "Mansur", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "311--321", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Longkai Zhang, Houfeng Wang, Xu Sun, and Mairgup Mansur. 2013. Exploring representations from un- labeled data with co-training for Chinese word seg- mentation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Pro- cessing, pages 311-321, Seattle, Washington, USA, October. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "num": null, |
| "type_str": "figure", |
| "text": "Sausage constraint (partial labels) from natural annotations and punctuation" |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "num": null, |
| "type_str": "figure", |
| "text": "Wiki label evaluation results: Full Figure 4: Wiki label evaluation results: Label Figure 5: CTB evaluation results" |
| }, |
| "TABREF0": { |
| "content": "<table><tr><td>\u2022 The identity of the bi-gram c[s : i \u2212</td></tr><tr><td>1]c[i : e] (i \u2212 6 < s, e < i + 6), if</td></tr><tr><td>it matches a word bigram; multiple fea-</td></tr><tr><td>tures could be generated.</td></tr><tr><td>\u2022 The identity of the bi-gram c[s : i]c[i +</td></tr><tr><td>1 : e] (i \u2212 6 < s, e < i +6), if it matches</td></tr><tr><td>a word bigram; multiple features could</td></tr><tr><td>be generated.</td></tr></table>", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "text": "Word-level features :\u2022 The identity of the string c[s : i] (i\u22126 < s < i), if it matches a word from the list of word unigrams; multiple features could be generated. \u2022 The identity of the string c[i : e] (i < e < i+6), if it matches a word; multiple features could be generated." |
| } |
| } |
| } |
| } |