{
"paper_id": "Q17-1013",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:12:08.061445Z"
},
"title": "Nonparametric Bayesian Semi-supervised Word Segmentation",
"authors": [
{
"first": "Ryo",
"middle": [],
"last": "Fujii",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Hakuhodo Inc. R&D Division",
"location": {
"addrLine": "5-3-1 Akasaka, Minato-ku",
"settlement": "Tokyo"
}
},
"email": "ryo.b.fujii@hakuhodo.co.jp"
},
{
"first": "Ryo",
"middle": [],
"last": "Domoto",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Hakuhodo Inc. R&D Division",
"location": {
"addrLine": "5-3-1 Akasaka, Minato-ku",
"settlement": "Tokyo"
}
},
"email": "ryo.domoto@hakuhodo.co.jp"
},
{
"first": "Daichi",
"middle": [],
"last": "Mochihashi",
"suffix": "",
"affiliation": {},
"email": "daichi@ism.ac.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a novel hybrid generative/discriminative model of word segmentation based on nonparametric Bayesian methods. Unlike ordinary discriminative word segmentation, which relies only on labeled data, our semi-supervised model also leverages huge amounts of unlabeled text to automatically learn new \"words\", and further constrains them by using labeled data to segment non-standard texts such as those found in social networking services. Specifically, our hybrid model combines a discriminative classifier (CRF; Lafferty et al. (2001)) and unsupervised word segmentation (NPYLM; Mochihashi et al. (2009)), with a transparent exchange of information between these two model structures within the semi-supervised framework (JESS-CM; Suzuki and Isozaki (2008)). We confirmed that it can appropriately segment non-standard texts like those in Twitter and Weibo and has nearly state-of-the-art accuracy on standard datasets in Japanese, Chinese, and Thai.",
"pdf_parse": {
"paper_id": "Q17-1013",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a novel hybrid generative/discriminative model of word segmentation based on nonparametric Bayesian methods. Unlike ordinary discriminative word segmentation, which relies only on labeled data, our semi-supervised model also leverages huge amounts of unlabeled text to automatically learn new \"words\", and further constrains them by using labeled data to segment non-standard texts such as those found in social networking services. Specifically, our hybrid model combines a discriminative classifier (CRF; Lafferty et al. (2001)) and unsupervised word segmentation (NPYLM; Mochihashi et al. (2009)), with a transparent exchange of information between these two model structures within the semi-supervised framework (JESS-CM; Suzuki and Isozaki (2008)). We confirmed that it can appropriately segment non-standard texts like those in Twitter and Weibo and has nearly state-of-the-art accuracy on standard datasets in Japanese, Chinese, and Thai.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "For any unsegmented language, especially East Asian languages such as Chinese, Japanese and Thai, word segmentation is almost an inevitable first step in natural language processing. In fact, it is becoming increasingly important lately because of the growing interest in processing user-generated media, such as Twitter and blogs. Texts in such media are often written in a colloquial style that contains many new words and expressions that are not present in any existing dictionaries. Since such words are theoretically infinite in number, we need to leverage unsupervised learning to automatically identify them in corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For this purpose, ordinary supervised learning is clearly unsatisfactory; even hand-crafted dictionaries will not suffice because functional expressions more complex than simple nouns need to be recognized through their relationship with other words in text, which also might be unknown in advance. Previous studies of this issue used character and word information in the framework of supervised learning (Kruengkrai et al., 2009; Sun et al., 2009; Sun and Xu, 2011) . However, they",
"cite_spans": [
{
"start": 407,
"end": 432,
"text": "(Kruengkrai et al., 2009;",
"ref_id": "BIBREF8"
},
{
"start": 433,
"end": 450,
"text": "Sun et al., 2009;",
"ref_id": "BIBREF22"
},
{
"start": 451,
"end": 468,
"text": "Sun and Xu, 2011)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) did not explicitly model new words, or (2) did not give a seamless combination with discriminative classifiers (e.g., they just used a threshold to discriminate between known and unknown words).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In contrast, unsupervised word segmentation methods (Goldwater et al., 2006; Mochihashi et al., 2009) use nonparametric Bayesian generative models for word generation to infer the \"words\" only from observations of raw input strings. These methods work quite well and have been used not only for tokenization but also for machine translation (Nguyen et al., 2010) , speech recognition (Lee and Glass, 2012; Heymann et al., 2014) , and even robotics (Nakamura et al., 2014) .",
"cite_spans": [
{
"start": 52,
"end": 76,
"text": "(Goldwater et al., 2006;",
"ref_id": "BIBREF4"
},
{
"start": 77,
"end": 101,
"text": "Mochihashi et al., 2009)",
"ref_id": "BIBREF15"
},
{
"start": 341,
"end": 362,
"text": "(Nguyen et al., 2010)",
"ref_id": "BIBREF17"
},
{
"start": 384,
"end": 405,
"text": "(Lee and Glass, 2012;",
"ref_id": "BIBREF11"
},
{
"start": 406,
"end": 427,
"text": "Heymann et al., 2014)",
"ref_id": "BIBREF6"
},
{
"start": 448,
"end": 471,
"text": "(Nakamura et al., 2014)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, from a practical point of view, such purely unsupervised approaches do not suffice. Since they only aim to maximize the probability of the language model on the observed set of strings, they sometimes yield word segmentations that are different from human standards on low-frequency words. (Figure 1: Excerpt of Weibo tweets. It contains many \"unknown\" words such as novel proper nouns, terms from local dialects, etc., that cannot be covered by ordinary labeled data or dictionaries.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Transactions of the Association for Computational Linguistics, vol. 5, pp. 179-189, 2017. Action Editor: Masaaki Nagata.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "179",
"sec_num": null
},
{
"text": "Submission batch: 10/2016; Revision batch: 12/2016; Published 6/2017. \u00a9 2017 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "179",
"sec_num": null
},
{
"text": "To solve this problem, this paper describes a novel combination of a nonparametric Bayesian generative model (NPYLM; Mochihashi et al. (2009) ) and a discriminative classifier (CRF; Lafferty et al. (2001) ). This combination is based on a semisupervised framework called JESS-CM (Suzuki and Isozaki, 2008) , and it requires a nontrivial exchange of information between these two models. In this approach, the generative and discriminative models will \"teach each other\" and yield a novel log-linear model for word segmentation.",
"cite_spans": [
{
"start": 117,
"end": 141,
"text": "Mochihashi et al. (2009)",
"ref_id": "BIBREF15"
},
{
"start": 182,
"end": 204,
"text": "Lafferty et al. (2001)",
"ref_id": "BIBREF10"
},
{
"start": 279,
"end": 305,
"text": "(Suzuki and Isozaki, 2008)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "179",
"sec_num": null
},
{
"text": "Experiments on standard datasets of Chinese, Japanese, and Thai indicate that this hybrid model achieves nearly state-of-the-art accuracy on standard corpora, and, thanks to our nonparametric Bayesian model of infinite vocabulary, it can accurately segment non-standard texts like those in Twitter and Weibo (the Chinese equivalent of Twitter) without any human intervention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "179",
"sec_num": null
},
{
"text": "This paper is organized as follows. Section 2 introduces NPYLM, which will be leveraged in the framework of JESS-CM, described in Section 3. Section 4 introduces our model, NPYCRF, and the necessary exchange of information, while Section 5 is devoted to experiments on datasets in Chinese, Japanese, and Thai. We analyze the results and discuss future directions of research on semi-supervised learning in Section 6 and conclude in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "179",
"sec_num": null
},
{
"text": "To acquire new words from an observation consisting of raw strings, a generative model of words can be extremely useful for word segmentation. Goldwater et al. (2006) showed that a bigram hierarchical Dirichlet process (HDP) model based on Gibbs sampling can effectively find \"words\" in small corpora. In extending this work, Mochihashi et al. (2009) proposed a nested Pitman-Yor language model (NPYLM), a hierarchical Bayesian language model, where character n-grams (actually, \u221e-grams (Mochihashi and Sumita, 2008) ) are embedded in word n-grams, and an efficient dynamic programming algorithm for inference exists. Conceptually, NPYLM posits that an infinite number of spellings, (Figure 2: The structure of NPYLM by a Chinese Restaurant Process representation (replicated from Mochihashi et al. (2009)). The word and character HPYLM are drawn as suffix trees; the character HPYLM is a base measure for the word HPYLM, and the two are learned as a single model. Each black customer is a count in HPYLM, and a white customer is a latent proxy customer initiated from each black customer; see Teh (2006) for details.)",
"cite_spans": [
{
"start": 143,
"end": 166,
"text": "Goldwater et al. (2006)",
"ref_id": "BIBREF4"
},
{
"start": 326,
"end": 350,
"text": "Mochihashi et al. (2009)",
"ref_id": "BIBREF15"
},
{
"start": 487,
"end": 516,
"text": "(Mochihashi and Sumita, 2008)",
"ref_id": "BIBREF14"
},
{
"start": 808,
"end": 832,
"text": "Mochihashi et al. (2009)",
"ref_id": "BIBREF15"
},
{
"start": 1122,
"end": 1132,
"text": "Teh (2006)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 710,
"end": 718,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Unsupervised Word Segmentation",
"sec_num": "2"
},
{
"text": "i.e., \"words\", are probabilistically generated from character n-grams, and a word unigram is drawn using the character n-grams as the base measure. Then bigram and trigram distributions are hierarchically generated and the final string is yielded from the \"word\" n-grams, as shown in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 284,
"end": 292,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Unsupervised Word Segmentation",
"sec_num": "2"
},
{
"text": "Practically, NPYLM can be considered as a hierarchical smoothing of the Bayesian n-gram language model, HPYLM (Teh, 2006) . In HPYLM, the predictive distribution of a word w = w t given a history",
"cite_spans": [
{
"start": 110,
"end": 121,
"text": "(Teh, 2006)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Word Segmentation",
"sec_num": "2"
},
{
"text": "h = w_{t-(n-1)} \\cdots w_{t-1} is expressed as p(w|h) = \\frac{c(w|h) - d \\cdot t_{hw}}{\\theta + c(h)} + \\frac{\\theta + d \\cdot t_{h\\cdot}}{\\theta + c(h)} \\cdot p(w|h') (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Word Segmentation",
"sec_num": "2"
},
{
"text": "where c(w|h) denotes the observed counts, \\theta and d are model parameters, and t_{hw} and t_{h\\cdot} = \\sum_w t_{hw} are latent variables estimated in the model. The probability of w given h is recursively interpolated using a shorter history",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Word Segmentation",
"sec_num": "2"
},
{
"text": "h' = w_{t-(n-2)} \\cdots w_{t-1}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Word Segmentation",
"sec_num": "2"
},
{
"text": "If h is already empty at the unigram level, NPYLM employs a back-off distribution using character n-grams for p(w|h'):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Word Segmentation",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p_0(w) = p(c_1 \\cdots c_k) \\quad (2) \\quad = \\prod_{i=1}^{k} p(c_i | c_1 \\cdots c_{i-1}).",
"eq_num": "(3)"
}
],
"section": "Unsupervised Word Segmentation",
"sec_num": "2"
},
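Equations (1)-(3) say that a word's probability interpolates recursively toward shorter histories, bottoming out in a character-level base measure p_0(w). Below is a minimal sketch of this back-off, under assumptions the paper does not make: the toy counts and table counts are fixed by hand rather than Gibbs-sampled, and a uniform character model stands in for the character infinity-gram.

```python
def char_base(w, p_char=1.0 / 26):
    """Uniform character model standing in for NPYLM's character n-gram
    base measure p_0(w) of eqs. (2)-(3)."""
    p = 1.0
    for _ in w:
        p *= p_char
    return p

def hpylm_prob(w, h, counts, tables, base, d=0.5, theta=1.0):
    """Predictive probability of eq. (1), recursing over shorter histories.

    h is a tuple of context words; h = None means we have backed off below
    the unigram level, where the character base measure takes over.
    counts[h][w] plays c(w|h); tables[h][w] plays t_hw (toy fixed values)."""
    if h is None:
        return base(w)
    parent = h[1:] if h else None       # shorter history h'
    c_hw = counts.get(h, {}).get(w, 0)
    c_h = sum(counts.get(h, {}).values())
    t_hw = tables.get(h, {}).get(w, 0)
    t_h = sum(tables.get(h, {}).values())
    p_parent = hpylm_prob(w, parent, counts, tables, base, d, theta)
    return (max(c_hw - d * t_hw, 0.0) + (theta + d * t_h) * p_parent) / (theta + c_h)
```

Under these toy counts, a word seen in context gets most of its mass from the count term, while an unseen word falls through to the character base measure, which still assigns it small but non-zero probability; this is exactly what lets NPYLM propose new words.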
{
"text": "In this way, NPYLM can assign appropriate probabilities to every possible sequence of segmentation and learn the word and character n-grams at the same time by using a single generative model (Mochihashi et al., 2009) . Semi-Markov view of NPYLM NPYLM formulates unsupervised word segmentation as learning with a semi-Markov model (Figure 3) . Here, each node corresponds to an inside probability \\alpha[t][k] that equals the probability of a substring c_1^t = c_1 \\cdots c_t with the last k characters c_{t-k+1}^t being a word. This inside probability can be computed recursively as follows:",
"cite_spans": [
{
"start": 192,
"end": 217,
"text": "(Mochihashi et al., 2009)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 331,
"end": 340,
"text": "(Figure 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Unsupervised Word Segmentation",
"sec_num": "2"
},
{
"text": "\\alpha[t][k] = \\sum_{j=1}^{L} p(c_{t-k+1}^{t} | c_{t-k-j+1}^{t-k}) \\cdot \\alpha[t-k][j] \\quad (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Word Segmentation",
"sec_num": "2"
},
{
"text": "Here, L (1 \\le L \\le t-k) is the maximum allowed length of a word. With these inside probabilities, we can make use of a Markov chain Monte Carlo (MCMC) method with an efficient forward filtering-backward sampling algorithm (Scott, 2002) , namely a \"stochastic Viterbi\" algorithm, to iteratively sample \"words\" from raw strings in a completely unsupervised fashion, while avoiding local minima.",
"cite_spans": [
{
"start": 214,
"end": 227,
"text": "(Scott, 2002)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Word Segmentation",
"sec_num": "2"
},
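The forward recursion (4) and the backward sampling pass can be sketched as follows. This is a deliberate simplification: a hypothetical context-free p_word replaces NPYLM's bigram probability p(c_{t-k+1}^t | c_{t-k-j+1}^{t-k}), which also lets the backward pass sample each word length k in proportion to alpha[t][k] alone.

```python
import random

def forward_filtering(s, p_word, L=4):
    """Inside probabilities of eq. (4): alpha[t][k] is the probability of
    s[:t] with its last k characters forming a word. A context-free p_word
    stands in for NPYLM's word bigram probability."""
    n = len(s)
    alpha = [[0.0] * (L + 1) for _ in range(n + 1)]
    for t in range(1, n + 1):
        for k in range(1, min(L, t) + 1):
            w = s[t - k:t]
            if t == k:                       # the word starts the string
                alpha[t][k] = p_word(w)
            else:
                alpha[t][k] = p_word(w) * sum(
                    alpha[t - k][j] for j in range(1, min(L, t - k) + 1))
    return alpha

def backward_sampling(s, alpha, L=4, rng=random):
    """Draw one segmentation right-to-left, choosing each word length k in
    proportion to alpha[t][k] (the "stochastic Viterbi" of Scott (2002);
    exact under the context-free simplification above)."""
    t, words = len(s), []
    while t > 0:
        ks = list(range(1, min(L, t) + 1))
        k = rng.choices(ks, weights=[alpha[t][kk] for kk in ks])[0]
        words.append(s[t - k:t])
        t -= k
    return words[::-1]
```

With a toy p_word that favors "ab" and "cd", segmentations of "abcd" into those two words dominate the sampled output.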
{
"text": "Problems and Beyond Unsupervised word segmentation with NPYLM works surprisingly well for many languages (Mochihashi et al., 2009) ; however, it has certain issues. First, since it optimizes the performance of the language model, its segmentation does not always conform to human standards and depends on subtle modeling decisions. For example, NPYLM often separates inflectional suffixes in Japanese like \" \" in \" -\" from the rest of the verb, when it is actually a part of the verb itself. Second, it can produce deficient segmentations for low-frequency words and the beginning or ending of a string, where the available information comes from only one direction. These issues can be alleviated by using a na\u00efve semi-supervised learning method (Mochihashi et al., 2009 ) that simply adds n-gram counts from supervised segmentations in advance. However, this solution is not perfect because these supervised counts will eventually be overwhelmed by the unsupervised counts, since the overall objective function remains unsupervised.",
"cite_spans": [
{
"start": 105,
"end": 130,
"text": "(Mochihashi et al., 2009)",
"ref_id": "BIBREF15"
},
{
"start": 744,
"end": 768,
"text": "(Mochihashi et al., 2009",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Word Segmentation",
"sec_num": "2"
},
{
"text": "To resolve this issue, we must resort to an explicit semi-supervised learning framework that combines both discriminative and generative models. We used JESS-CM (Suzuki and Isozaki, 2008) , currently the best such framework for this purpose, which we will briefly introduce below.",
"cite_spans": [
{
"start": 161,
"end": 187,
"text": "(Suzuki and Isozaki, 2008)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Word Segmentation",
"sec_num": "2"
},
{
"text": "JESS-CM (Joint probability model Embedding style Semi-Supervised Conditional Model) is a semisupervised learning framework that outperforms other generative and log-linear models (Druck and McCallum, 2010) . In JESS-CM, the probability of a label sequence y given an input sequence x is written as follows:",
"cite_spans": [
{
"start": 179,
"end": 205,
"text": "(Druck and McCallum, 2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Integration with a Discriminative Model",
"sec_num": "3"
},
{
"text": "p(y|x) \\propto p_{DISC}(y|x; \\Lambda) \\, p_{GEN}(y, x; \\Theta)^{\\lambda_0} (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integration with a Discriminative Model",
"sec_num": "3"
},
{
"text": "where p_{DISC} and p_{GEN} are respectively the discriminative and generative models, and \\Lambda and \\Theta are their corresponding parameters. Equation (5) is a product of experts, where each expert works as a \"constraint\" on the other with a relative geometric interpolation weight of 1 : \\lambda_0. If we take p_{DISC} to be a log-linear model like CRF (Lafferty et al., 2001) :",
"cite_spans": [
{
"start": 337,
"end": 360,
"text": "(Lafferty et al., 2001)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Integration with a Discriminative Model",
"sec_num": "3"
},
{
"text": "p_{DISC}(y|x) \\propto \\exp \\left( \\sum_{k=1}^{K} \\lambda_k f_k(y, x) \\right), (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integration with a Discriminative Model",
"sec_num": "3"
},
{
"text": "Equation (5) can also be expressed as a log-linear model with a new \"feature function\" \\log p_{GEN}(y, x):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integration with a Discriminative Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(y|x) \\propto \\exp \\left( \\lambda_0 \\log p_{GEN}(y, x) + \\sum_{k=1}^{K} \\lambda_k f_k(y, x) \\right) = \\exp(\\Lambda \\cdot F(y, x)).",
"eq_num": "(7)"
}
],
"section": "Integration with a Discriminative Model",
"sec_num": "3"
},
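In code, eq. (7) simply treats log p_GEN as one extra feature with weight lambda_0. A sketch over an explicit candidate list (all names hypothetical; a real CRF normalizes over all label sequences with dynamic programming rather than by enumeration):

```python
import math

def jesscm_score(y, x, features, weights, lam0, log_p_gen):
    """Unnormalized score of eq. (7): the generative log-probability joins
    the CRF features as one extra feature with weight lambda_0."""
    s = lam0 * log_p_gen(y, x)
    for f, lam in zip(features, weights):
        s += lam * f(y, x)
    return math.exp(s)

def posterior(candidates, x, features, weights, lam0, log_p_gen):
    """p(y|x) over an explicit candidate list (a stand-in for the CRF's
    dynamic-programming normalization)."""
    scores = {y: jesscm_score(y, x, features, weights, lam0, log_p_gen)
              for y in candidates}
    z = sum(scores.values())
    return {y: v / z for y, v in scores.items()}
```

A labeling favored by both the generative term and a discriminative feature ends up with the larger posterior mass.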
{
"text": "Here, the parameter",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integration with a Discriminative Model",
"sec_num": "3"
},
{
"text": "\\Lambda = (\\lambda_0, \\lambda_1, \\cdots, \\lambda_K) includes the interpolation weight \\lambda_0, and F(y, x) = (\\log p_{GEN}(y, x), f_1(y, x), \\cdots, f_K(y, x)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integration with a Discriminative Model",
"sec_num": "3"
},
{
"text": "JESS-CM interleaves the optimization of \u039b and \u0398 to maximize the objective function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integration with a Discriminative Model",
"sec_num": "3"
},
{
"text": "p(Y_l, X_u | X_l; \\Lambda, \\Theta) = p(Y_l | X_l; \\Lambda) \\cdot p(X_u; \\Theta) (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integration with a Discriminative Model",
"sec_num": "3"
},
{
"text": "where (X_l, Y_l) is the labeled dataset and X_u is the unlabeled dataset. Suzuki and Isozaki (2008) conducted semi-supervised learning on a combination of a CRF and an HMM, as shown in Figure 4 . Since the CRF and HMM have the same Markov model structure, they interpolate the two weights",
"cite_spans": [
{
"start": 73,
"end": 98,
"text": "Suzuki and Isozaki (2008)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 183,
"end": 191,
"text": "Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Integration with a Discriminative Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\sum_{k=1}^{K} \\lambda_k f_k(y_t, y_{t-1}, x) \\quad (9) \\quad and \\quad \\lambda_0 \\log p_{GEN}(y_t | y_{t-1}, x)",
"eq_num": "(10)"
}
],
"section": "Integration with a Discriminative Model",
"sec_num": "3"
},
{
"text": "on the corresponding path, alternately",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integration with a Discriminative Model",
"sec_num": "3"
},
{
"text": "\u2022 fixing \\Theta and optimizing \\Lambda of the CRF on (X_l, Y_l), and \u2022 fixing \\Lambda and optimizing \\Theta of the HMM on X_u, until convergence, thereby iteratively maximizing the two terms in (8).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integration with a Discriminative Model",
"sec_num": "3"
},
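The alternating scheme reduces to a simple control loop. The two step functions below are hypothetical stand-ins for the real updates (CRF gradient ascent on the labeled data, HMM/NPYLM re-estimation on the unlabeled data); the sketch only shows the fix-one-optimize-the-other structure.

```python
def jesscm_train(opt_lambda, opt_theta, lam, theta, n_iter=20):
    """JESS-CM style alternating optimization: fix Theta and fit Lambda on
    the labeled data, then fix Lambda and fit Theta on the unlabeled data."""
    for _ in range(n_iter):
        lam = opt_lambda(lam, theta)     # discriminative step on (X_l, Y_l)
        theta = opt_theta(lam, theta)    # generative step on X_u
    return lam, theta
```

With toy coordinate-descent steps, the loop converges to the joint fixed point, mirroring how the two terms of (8) are maximized in turn.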
{
"text": "Through this optimization, p_{DISC} and p_{GEN} will \"teach each other\": the feature \\log p_{GEN} becomes more accurate, and it is further rectified by p_{DISC} with respect to the labeled data. Note that the interpolation weight \\lambda_0 is automatically computed through this process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integration with a Discriminative Model",
"sec_num": "3"
},
{
"text": "We wish to integrate NPYLM and CRF, applying semi-supervised learning via JESS-CM. Note that Suzuki and Isozaki (2008) implicitly assumed that the discriminative and generative models have the same structure as shown in Figure 4 . Since NPYLM is a semi-Markov model as described in Section 2, a na\u00efve approach would be to combine it with a semi-Markov CRF (Sarawagi and Cohen, 2005) as the discriminative model. However, this strategy does not work well for two reasons: First, since a semi-Markov CRF is a model for transitions between segments, it cannot deal with character-level transitions and thus performs suboptimally on its own. In fact, our preliminary supervised word segmentation experiments showed an F_1 measure of around 95%, whereas a character-wise Markov CRF achieves >99%. Second, the semi-Markov CRF was originally designed to chunk at most a few words (Sarawagi and Cohen, 2005) . However, in word segmentation of Japanese, for example, we often encounter long proper nouns or Katakana sequences that are more than ten characters long, requiring a huge amount of memory even for a small dataset.",
"cite_spans": [
{
"start": 93,
"end": 118,
"text": "Suzuki and Isozaki (2008)",
"ref_id": "BIBREF25"
},
{
"start": 356,
"end": 382,
"text": "(Sarawagi and Cohen, 2005)",
"ref_id": "BIBREF18"
},
{
"start": 872,
"end": 898,
"text": "(Sarawagi and Cohen, 2005)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 220,
"end": 228,
"text": "Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Connecting Two Worlds: NPYCRF",
"sec_num": "4"
},
{
"text": "In this paper we instead transparently exchange information between the Markov model (CRF) on characters and the semi-Markov model (NPYLM) on words to perform a semi-supervised learning on different model structures. Called NPYCRF, this unified statistical model makes good use of the discriminative model (CRF) from the labeled data and the generative model (NPYLM) from the unlabeled data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Connecting Two Worlds: NPYCRF",
"sec_num": "4"
},
{
"text": "To convert from a CRF to NPYLM, we can easily translate Markov potentials into semi-Markov potentials as shown in Andrew (2006) for the supervised learning case.",
"cite_spans": [
{
"start": 114,
"end": 127,
"text": "Andrew (2006)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CRF\u2192NPYLM",
"sec_num": "4.1"
},
{
"text": "Consider the situation depicted in Figure 5 . Here we can see that the potential of the substring \" \" (Tokyo prefecture) in the semi-Markov model (left) corresponds to the sum of the potentials in the Markov model (right) along the path shown in bold. Here, we introduce binary hidden states in the Markov model for each character, similarly to the BI tags used in supervised learning, where state 1 represents the beginning of a word and state 0 represents a continuation of the word.",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 43,
"text": "Figure 5",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "CRF\u2192NPYLM",
"sec_num": "4.1"
},
{
"text": "Mathematically, we define \u03b3[a, b) as the sum of the potentials along a U-shaped path over an interval [a, b) (a < b) as shown in Figure 5 , which begins with state 1 and ends with (but does not include) 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 137,
"text": "Figure 5",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "CRF\u2192NPYLM",
"sec_num": "4.1"
},
{
"text": "Figure 6: Substring transitions for marginalization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRF\u2192NPYLM",
"sec_num": "4.1"
},
{
"text": "Using this notation, the potential that corresponds to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRF\u2192NPYLM",
"sec_num": "4.1"
},
{
"text": "\\alpha[t][k] is \\gamma[t-k+1, t+1), covering c_{t-k+1} \\cdots c_t,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRF\u2192NPYLM",
"sec_num": "4.1"
},
{
"text": "and thus the forward recursion of the inside probability \u03b1[t][k] that incorporates the information from the CRF can be written as follows, instead of (4):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRF\u2192NPYLM",
"sec_num": "4.1"
},
{
"text": "\\alpha[t][k] = \\sum_{j=1}^{L} \\exp \\left( \\lambda_0 \\log p(c_{t-k+1}^{t} | c_{t-k-j+1}^{t-k}) + \\gamma[t-k+1, t+1) \\right) \\cdot \\alpha[t-k][j]. (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRF\u2192NPYLM",
"sec_num": "4.1"
},
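Equation (11) changes only the word weight in the forward recursion: exp(lambda_0 * log p(word) + gamma over the word's span) replaces the bare language-model probability. A sketch with a context-free log_p_word and a caller-supplied gamma, both hypothetical stand-ins (gamma would really come from the CRF's summed path potentials):

```python
import math

def forward_npycrf(s, log_p_word, gamma, lam0=1.0, L=4):
    """Forward recursion of eq. (11): each candidate word contributes
    exp(lambda_0 * log p(word) + gamma(start, end)) instead of the bare
    language-model probability. log_p_word is a context-free stand-in for
    NPYLM's bigram term; gamma(a, b) is the summed CRF potential over the
    0-indexed span [a, b)."""
    n = len(s)
    alpha = [[0.0] * (L + 1) for _ in range(n + 1)]
    for t in range(1, n + 1):
        for k in range(1, min(L, t) + 1):
            w = s[t - k:t]
            weight = math.exp(lam0 * log_p_word(w) + gamma(t - k, t))
            prev = 1.0 if t == k else sum(
                alpha[t - k][j] for j in range(1, min(L, t - k) + 1))
            alpha[t][k] = weight * prev
    return alpha
```

Setting gamma to zero and lam0 to one recovers the plain recursion (4), which gives an easy sanity check: for a length-3 string with per-character score exp(-2), all four segmentations weigh exp(-6), so the totals sum to 4*exp(-6).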
{
"text": "Backward sampling can be performed in a similar fashion. In this way, we can incorporate information from the character-wise discriminative model (CRF) into the language model segmentation of NPYLM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRF\u2192NPYLM",
"sec_num": "4.1"
},
{
"text": "On the other hand, translating the information from the semi-Markov to Markov model, i.e., translating a potential from the word-based language model into the character-wise discriminative classifier, is not trivial. However, as we describe below, it is actually possible to do so by extending the technique proposed in Andrew (2006) .",
"cite_spans": [
{
"start": 320,
"end": 333,
"text": "Andrew (2006)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NPYLM\u2192CRF",
"sec_num": "4.2"
},
{
"text": "Note that for the inference of CRF, from the standard theory of log-linear models we only have to compute its gradient with respect to the expectation of each feature in the current model. This reduces the problem to a computation of the marginal probability of each path, which can be derived within the framework of semi-Markov models as follows: Semi-Markov feature \u03bb 0 . Following the line of argument presented in the Section 4.1, the potential with respect to the semi-Markov feature weight \u03bb 0 that is associated with the word transition c t\u2212k t\u2212k\u2212j+1 \u2192 c t t\u2212k+1 , shown in Figure 6 , can be expressed as an expectation using the standard forward-backward formula:",
"cite_spans": [],
"ref_spans": [
{
"start": 582,
"end": 590,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "NPYLM\u2192CRF",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(c_{t-k+1}^{t}, c_{t-k-j+1}^{t-k} | s) = \\alpha[t-k][j] \\, \\beta[t][k] \\cdot \\exp \\left( \\lambda_0 \\log p(c_{t-k+1}^{t} | c_{t-k-j+1}^{t-k}) + \\gamma[t-k+1, t+1) \\right) / Z(s)",
"eq_num": "(12)"
}
],
"section": "NPYLM\u2192CRF",
"sec_num": "4.2"
},
{
"text": "Here, Z(s) is a normalizing constant associated with each input string s, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NPYLM\u2192CRF",
"sec_num": "4.2"
},
{
"text": "\u03b2[t][k]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NPYLM\u2192CRF",
"sec_num": "4.2"
},
{
"text": "is a backward proba-bility similar to (11) computed by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NPYLM\u2192CRF",
"sec_num": "4.2"
},
{
"text": "\u03b2[t][k] = L j=1 exp \u03bb 0 log p(c t+j t+1 |c t t\u2212k+1 ) \u03b3[t+1, t+j +2) \u2022 \u03b2[t+j][j] . (13) Markov features \u03bb 1 , \u2022 \u2022 \u2022 , \u03bb K .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NPYLM\u2192CRF",
"sec_num": "4.2"
},
{
"text": "Note that the features associated with label bigrams in our binary CRF can be divided into four types: 1-1,1-0,0-1, and 0-0, as shown in Figure 7 .",
"cite_spans": [],
"ref_spans": [
{
"start": 137,
"end": 145,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "NPYLM\u2192CRF",
"sec_num": "4.2"
},
{
"text": "Case 1-1: As shown in Figure 8 (a), this case means that a word of length 1 begins at time t, which is equivalent to the probability of the substring c_t^t being a word:",
"cite_spans": [],
"ref_spans": [
{
"start": 22,
"end": 30,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "NPYLM\u2192CRF",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(z_t = 1, z_{t+1} = 1 | s) = p(c_t^t | s).",
"eq_num": "(14)"
}
],
"section": "NPYLM\u2192CRF",
"sec_num": "4.2"
},
{
"text": "Here, p(c_\\ell^k | s) is the marginal probability of the substring c_\\ell \\cdots c_k being a word, which can be derived from Equation (12):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NPYLM\u2192CRF",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(c_\\ell^k | s) = \\sum_j p(c_\\ell^k, c_{\\ell-j}^{\\ell-1} | s) = \\sum_j \\alpha[\\ell-1][j] \\cdot \\beta[k][k-\\ell+1] \\cdot \\exp \\left( \\lambda_0 \\log p(c_\\ell^k | c_{\\ell-j}^{\\ell-1}) + \\gamma[\\ell, k+1) \\right) / Z(s) = \\frac{\\beta[k][k-\\ell+1]}{Z(s)} \\cdot \\sum_j \\exp \\left( \\lambda_0 \\log p(c_\\ell^k | c_{\\ell-j}^{\\ell-1}) + \\gamma[\\ell, k+1) \\right) \\alpha[\\ell-1][j] = \\frac{\\alpha[k][k-\\ell+1] \\cdot \\beta[k][k-\\ell+1]}{Z(s)}",
"eq_num": "(15)"
}
],
"section": "NPYLM\u2192CRF",
"sec_num": "4.2"
},
{
"text": "Case 1-0: As shown in Figure 8 (b), this case means that a word begins at time t and has length at least 2. Since we do not know the endpoint of this word, we can obtain the probability p(z_t = 1, z_{t+1} = 0) by marginalizing over the endpoint j (\\cdots means that the intermediate labels are all 0): Case 0-1: Similarly, as shown in Figure 8 (c), this case means that a word of length at least 2 begins before time t and ends at time t. Therefore, we can marginalize over the start point of a possible word to obtain the marginal probability:",
"cite_spans": [],
"ref_spans": [
{
"start": 22,
"end": 30,
"text": "Figure 8",
"ref_id": null
},
{
"start": 306,
"end": 314,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "NPYLM\u2192CRF",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(z t = 1, z t+1 = 0|s) = j=2 p(z t = 1, z t+1 = 0, \u2022 \u2022 \u2022 , z t+j = 1|s) = j=2 p(c t+j\u22121 t |s)",
"eq_num": "(16)"
}
],
"section": "NPYLM\u2192CRF",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(z t = 0, z t+1 = 1|s) = j=1 p(z t\u2212j = 1, \u2022 \u2022 \u2022 , z t = 0, z t+1 = 1|s) (17) = j=1 p(c t t\u2212j |s) .",
"eq_num": "(18)"
}
],
"section": "NPYLM\u2192CRF",
"sec_num": "4.2"
},
{
"text": "Case 0-0: In principle, this means that a word begins before time t and ends later than (and including) time t + 1. Therefore, we can marginalize over both the start and end time of a possible word spanning [t, t+1] to obtain:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NPYLM\u2192CRF",
"sec_num": "4.2"
},
{
"text": "p(z t = 0, z t+1 = 0|s) = j=1 k=1 p(c t+k t\u2212j |s) . (19)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NPYLM\u2192CRF",
"sec_num": "4.2"
},
{
"text": "However, in fact we can avoid this nested computation because the probability of p(z t , z t+1 ) over the possible values of z t and z t+1 must sum to 1. We can therefore simply calculate it as follows (Andrew, 2006) :",
"cite_spans": [
{
"start": 202,
"end": 216,
"text": "(Andrew, 2006)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NPYLM\u2192CRF",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(z t = 0, z t+1 = 0|s) = 1\u2212p(1, 1)\u2212p(1, 0)\u2212p(0, 1)",
"eq_num": "(20)"
}
],
"section": "NPYLM\u2192CRF",
"sec_num": "4.2"
},
{
"text": "where p(x, y) means p(z t = x, z t+1 = y|s).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NPYLM\u2192CRF",
"sec_num": "4.2"
},
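{
"text": "The four boundary cases above can be assembled directly from the word marginals. The following is a minimal Python sketch, not the paper's implementation: it assumes a dictionary word_marginal mapping 0-indexed inclusive spans (s, e) to the probability that characters s..e form one word (as computed by equation (15)), and returns p(z_t, z_{t+1} | s) for every position via equations (14) and (16)-(18), using the complement trick of equation (20) for Case 0-0.

```python
def tag_pair_marginals(word_marginal, T, max_len):
    # Boundary-tag pair marginals p(z_t, z_{t+1} | s) for t = 0..T-2,
    # where z_t = 1 iff a word begins at character position t.
    pairs = []
    for t in range(T - 1):
        # Case 1-1: character t alone is a word (eq. 14).
        p11 = word_marginal.get((t, t), 0.0)
        # Case 1-0: a word of length >= 2 begins at t (eq. 16).
        p10 = sum(word_marginal.get((t, t + j - 1), 0.0)
                  for j in range(2, max_len + 1))
        # Case 0-1: a word of length >= 2 ends at t (eqs. 17-18).
        p01 = sum(word_marginal.get((t - j, t), 0.0)
                  for j in range(1, max_len))
        # Case 0-0: complement trick avoids the nested sum of eq. 19 (eq. 20).
        p00 = max(0.0, 1.0 - p11 - p10 - p01)
        pairs.append({(1, 1): p11, (1, 0): p10, (0, 1): p01, (0, 0): p00})
    return pairs
```

For a three-character string whose segmentations c1|c2|c3, c1c2|c3, and c1c2c3 have probabilities 0.5, 0.3, and 0.2, the word marginals give p(z_1 = 0, z_2 = 1 | s) = 0.3, matching direct enumeration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NPYLM\u2192CRF",
"sec_num": "4.2"
},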
{
"text": "Finally, we obtain the inference algorithm for NPY-CRF as a variant of the MCMC-EM algorithm (Wei and Tanner, 1990) shown in Figure 9. 2 In learning of a NPYLM, we add the CRF potentials as described in Section 4.1, and sample a possible segmentation from the posterior through Forward filtering-Backward sampling to update the model parameters. On the basis of this improved language model, the CRF weights are then optimized by incorporating language model features as explained in Section 4.2. We iterate this process until convergence.",
"cite_spans": [
{
"start": 93,
"end": 115,
"text": "(Wei and Tanner, 1990)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 125,
"end": 134,
"text": "Figure 9.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.3"
},
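{
"text": "The forward filtering-backward sampling step in Figure 9 can be sketched as follows. This is a deliberately simplified unigram version for illustration, and the names are hypothetical: the actual model is a bigram NPYLM with CRF potentials mixed in as in Section 4.1, and word_score stands in for the combined word potential (assumed strictly positive).

```python
import random

def ffbs_segment(chars, word_score, max_len, rng=random):
    # Forward filtering: alpha[t] sums the scores of all segmentations
    # of the first t characters.
    T = len(chars)
    alpha = [0.0] * (T + 1)
    alpha[0] = 1.0
    for t in range(1, T + 1):
        alpha[t] = sum(alpha[t - k] * word_score(chars[t - k:t])
                       for k in range(1, min(max_len, t) + 1))
    # Backward sampling: draw the last word length in proportion to its
    # posterior, then recurse toward the beginning of the string.
    words, t = [], T
    while t > 0:
        ks = list(range(1, min(max_len, t) + 1))
        weights = [alpha[t - k] * word_score(chars[t - k:t]) for k in ks]
        k = rng.choices(ks, weights=weights)[0]
        words.append(chars[t - k:t])
        t -= k
    return list(reversed(words))
```

Each call draws one segmentation from the exact posterior of this simplified model, which is what the sampling step of Figure 9 requires before updating the language model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.3"
},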
{
"text": "Note that we first have to learn an unsupervised segmentation in Step 2 before training the CRF. Since our inference algorithm includes an optimization of CRF and thus is not a true MCMC, the learning of word segmentation after the supervised information will be severely constrained and likely to get stuck in local minima.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.3"
},
{
"text": "In practice, we found that the EM-style batch learning of CRF described above often fails because our objective function is non-convex. Therefore, we switched to ADF below (Sun et al., 2014) , an adaptive stochastic gradient descent that yields state-ofthe-art accuracies for natural language processing problems including word segmentation. In this case, \u039b in Figure 9 was optimized with each minibatch through the labeled data X l , Y l , while incorporating information from the unlabeled data X u by the language model.",
"cite_spans": [
{
"start": 172,
"end": 190,
"text": "(Sun et al., 2014)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 361,
"end": 369,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.3"
},
{
"text": "Because of its heavy computational demands,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.3"
},
{
"text": "1: Add Y l , X l to NPYLM. 2: Optimize \u039b on Y l , X l . (pure CRF) 3: for j = 1 \u2022 \u2022 \u2022 M do 4: for i = randperm(1 \u2022 \u2022 \u2022 N ) do 5: if j > 1 then 6:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.3"
},
{
"text": "Remove customers of X Optimize \u039b of NPYCRF on Y l , X l . 12: end for Figure 9 : Basic learning algorithm for NPYCRF. X (i) u denotes the i-th sentence in the unlabeled data X u . We can also iterate steps 4 to 10 several times until \u0398 approximately converges, before updating \u039b. Test Chinese MSR 86,924 865,679 3,985 Weibo 10K-40K 880,920 3 30,000 Japanese Twitter 59,931 600,000 444 Thai InterBEST 10,000 30,133 10,000 we parallelized the NPYLM sampling over several processors and because of the possible correlation of segmentations within the samples, used the Metropolis-Hastings algorithm to correct them. The acceptance rate in our experiments was over 99%. For decoding, we can simply find a Viterbi path in the integrated semi-Markov model while fixing all the sampled segmentations on the unlabeled data.",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 78,
"text": "Figure 9",
"ref_id": null
},
{
"start": 280,
"end": 401,
"text": "Test Chinese MSR 86,924 865,679 3,985 Weibo 10K-40K 880,920 3 30,000 Japanese Twitter 59,931 600,000 444 Thai",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Inference",
"sec_num": "4.3"
},
{
"text": "We conducted experiments on several corpora of unsegmented languages: Japanese, Chinese, and Thai. The corpora included standard corpora as well as text from Twitter and its equivalent, Weibo, in Chinese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Chinese For Chinese, we first used a standard dataset from the SIGHAN Bakeoff 2005 (Emerson, 2005 for the labeled and test data, and Chinese gigaword version 2 (LDC2009T14) for the unlabeled data. We chose the MSR subset of SIGHAN Bakeoff written in simplified Chinese together with the provided training and test splits, which contain about 87K/40K sentences, respectively. For the unlabeled data, i.e., a collection of raw strings, we used a random subset of 880K sentences from Chinese gigaword with all spaces removed. We chose this size to be about 10 times larger than the labeled data, considering current computational requirements. We used the part from the Xinhua news agency 2004 and split the data into sentences at the end-of-sentence character \" \".",
"cite_spans": [
{
"start": 63,
"end": 82,
"text": "SIGHAN Bakeoff 2005",
"ref_id": null
},
{
"start": 83,
"end": 97,
"text": "(Emerson, 2005",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "Because the MSR and Xinhua datasets were compiled from newspapers, to meet our objective on informal text we conducted further experiments using Table 3 : Accuracies on Leiden Weibo corpus in Chinese. 'Label' and 'Unlabel' are the amounts of labeled and unlabeled data, respectively. \"Topline\" is an ideal situation of complete supervision, and K= 10 3 sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 145,
"end": 152,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "the Leiden Weibo corpus 4 from Weibo, a Twitter equivalent in China. From this dataset, we used the sentences that have exact correspondence between the provided segmented-unsegmented pair, yielding about 880K sentences. Since we did not know how much supervision would be necessary for a decent performance, we conducted experiments with different amounts of labeled data: 10K, 20K, 40K and 880K(all). Note that the final case amounts to complete supervision, an ideal situation that is not likely in practice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "Japanese Word segmentation accuracies around 99% have already been reported for newspaper domains in Japanese (Kudo et al., 2004) . Therefore, we only conducted experiments on segmenting Twitter text. In addition to our random Twitter crawl in April 2014, we used a corpus of Japanese Twitter text compiled by the Tokyo Metropolitan University 5 . This corpus is actually very small, 944 sentences. It mainly targets transfer learning and is segmented according to BCCWJ (Basic Corpus of Contemporary Written Japanese) standards from the National Institute of Japanese Language (Maekawa, 2007) . Therefore, for the labeled data we used the \"core\" subset of BCCWJ consisting of about 59K sentences plus 500 random sentences from the Twitter dataset. We used the remaining 444 sentences for testing. For the unlabeled data, we used a random crawl of 600K Japanese sentences collected from Twitter in March-April, 2014.",
"cite_spans": [
{
"start": 110,
"end": 129,
"text": "(Kudo et al., 2004)",
"ref_id": "BIBREF9"
},
{
"start": 578,
"end": 593,
"text": "(Maekawa, 2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "Thai Unsegmented languages, such as Thai, Lao, Myanmar, and Kumer, are also prevalent in South East Asia and are becoming increasingly important targets of natural language processing. Thus we also conducted an experiment on Thai, using the standard InterBEST 2009 dataset (Kosawat, 2009) . Since it is reported that the \"novel\" subset of InterBEST has relatively low precision, we used this part with a random split of 10K sentences for supervised learning, 30K sentences for unsupervised learning, and a further 10K sentences for testing.",
"cite_spans": [
{
"start": 273,
"end": 288,
"text": "(Kosawat, 2009)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "Because Sun et al. (2012) report increased accuracy with three tags, {B,I,E} 6 , we also tried these tags in place of the binary tags described in Section 4.2. This modification resulted in 6 possible transitions out of 3 2 = 9 transitions, whose computation follows from the binary case in Section 4.2. We used normal priors of truncated N (1, \u03c3 2 ) and N (0, \u03c3 2 ) for \u03bb 0 and \u03bb 1 \u2022 \u2022 \u2022 \u03bb K , respectively, and fixed the CRF regularization parameter C to 1.0, and \u03c3 to 1.0 by preliminary experiments on the same data.",
"cite_spans": [
{
"start": 8,
"end": 25,
"text": "Sun et al. (2012)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Settings",
"sec_num": "5.2"
},
{
"text": "For the feature templates, we followed Sun et al. (2012) . In addition to those templates, we used character type bigrams, where the 'character type' was defined by Unicode blocks (like Hiragana or CJK Unified Ideographs for Chinese and Japanese) or Unicode character categories (Thai).",
"cite_spans": [
{
"start": 39,
"end": 56,
"text": "Sun et al. (2012)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Settings",
"sec_num": "5.2"
},
{
"text": "To reduce computations by restricting the search space appropriately, we employed a Negative Binomial generalized linear model on string features (Uchiumi et al., 2015) to predict the maximum length of a possible word for each character position in the training data. Therefore, the upper limit of L in (11) and (13) was L t for each position t, obtained 6 6 5 https 4 December 4 3 3 3 3 3 3 2 2 2 2 2 2 2 2 2 (a) MSR (Simplified Chinese) (b) Twitter (Japanese) Figure 10 : New words acquired by NPYCRF. For each figure, the left column is the words that did not appear in the provided labeled data, and the right column is the frequencies NPYCRF recognized in the test data. In Chinese, we found many proper names including company and person name, and in Japanese, we found many novel slang words and proper names. from this statistical model trained on labeled segmentations. We observed that this prediction made the computation several times faster than, for example, using a fixed threshold in Japanese where quite long words are occasionally encountered.",
"cite_spans": [
{
"start": 146,
"end": 168,
"text": "(Uchiumi et al., 2015)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 355,
"end": 430,
"text": "6 6 5 https 4 December 4 3 3 3 3 3 3 2 2 2 2 2 2 2 2 2",
"ref_id": "TABREF3"
},
{
"start": 483,
"end": 492,
"text": "Figure 10",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training Settings",
"sec_num": "5.2"
},
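{
"text": "As a rough illustration of how such a per-position cap can be used, the sketch below replaces the paper's Negative Binomial GLM with a much cruder, hypothetical rule: bound the candidate word length at a position by the longest labeled word containing a character of that position's character type. The actual model instead regresses the maximum length on string features.

```python
def length_caps(labeled_sentences, char_type):
    # labeled_sentences: list of word lists; char_type maps a character
    # to its type (e.g., a Unicode block name).
    caps = {}
    for words in labeled_sentences:
        for w in words:
            for ch in w:
                t = char_type(ch)
                caps[t] = max(caps.get(t, 1), len(w))
    return caps
```

At decoding time, the inner loop over candidate word lengths at position t then runs only up to the cap for the type of character t, instead of a global maximum L.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Settings",
"sec_num": "5.2"
},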
{
"text": "Chinese Tables 2 and 3 show IV (in-vocabulary) and OOV (out-of-vocabulary) precision and Fmeasure, computed against segmented tokens. The results for standard newspaper text indicate that NPYCRF is basically comparable in performance to state-of-the-art supervised neural networks (Chen et al., 2015; Zhang et al., 2016 ) that require hand tuning of hyperparameters or model architectures. Figure 10 shows some of the learned words in the testset of the Bakeoff MSR corpus. As shown in Table 3, NPYCRF also yields higher precision than supervised learning on non-standard text like Weibo, which is the main objective for this study. Contrary to ordinary supervised learning, we can see that NPYCRF effectively learns many \"new words\" from the large amount of unlabeled data thanks to the generative model, while observing human standards of segmentation by the discriminative model. Note that in Weibo segmentation, complete supervision is not . \" \" is a proverb and \" \" is a full name of a person. available in practice. In fact, we realized that the Weibo segmentations were given automatically by an existing classifier, and contain many inappropriate segmentations, while NPYCRF finds much \"better\" segmentations. Figure 11 compares the results of CRF, NPYLM, and NPYCRF with the gold segmentation. While proverbs like \" \" (wide vision without action) are correctly captured from the unlabeled data by NPYLM, it is sometimes broken by CRF through integration. In another case, the name of a person is properly connected because of the information provided by the CRF. This comparison shows that there is still room for improvement in NPYCRF. Section 6 discusses future research directions for improvements. Japanese and Thai Figure 12 shows an example of the analysis of Japanese Twitter text. Shaded words are those that are not contained in labeled data (BCCWJ core) but were found by NPYCRF. Many segmentations, including new words, are correct. 
We expect NPYCRF would perform better with more unlabeled data that are easily obtained. Tables 4 and 5 show the segmentation accuracies of the Twitter data in Japanese and novel data in Thai. While there are no publicly available results for these data (the InterBEST testset is closed during competition), NPYCRF achieved better accuracies than vanilla supervised segmentation based on CRF. Considering that many new words were found in Figure 12 , for example, we believe NPYCRF is quite competitive thanks to its ability to learn the infinite vocabulary, which it inherits from NPYLM.",
"cite_spans": [
{
"start": 281,
"end": 300,
"text": "(Chen et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 301,
"end": 319,
"text": "Zhang et al., 2016",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 8,
"end": 22,
"text": "Tables 2 and 3",
"ref_id": "TABREF3"
},
{
"start": 390,
"end": 399,
"text": "Figure 10",
"ref_id": null
},
{
"start": 1218,
"end": 1227,
"text": "Figure 11",
"ref_id": "FIGREF5"
},
{
"start": 1729,
"end": 1738,
"text": "Figure 12",
"ref_id": "FIGREF6"
},
{
"start": 2042,
"end": 2056,
"text": "Tables 4 and 5",
"ref_id": "TABREF5"
},
{
"start": 2392,
"end": 2401,
"text": "Figure 12",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5.3"
},
{
"text": "As shown in Figure 11 , NPYCRF makes good use of NPYLM but sometimes ignores its prediction by falling back to CRF, yielding suboptimal performance. This is mainly because the geometric interpolation weight \u03bb 0 is always constant and does not vary according to the input. For example, even if the substring to segment is very rare in the labeled data, NPYCRF trusts the supervised classifier (CRF) with a constant rate of 1/(1+ \u03bb 0 ) in the log probability domain. To alleviate this problem, Model IV OOV F CRF 0.939 0.706 0.916 NPYCRF 0.947 0.708 0.921 it is necessary to change \u03bb 0 depending on the input string in a log-linear framework. 7 While this might be achieved through Density Ratio estimation framework (Sugiyama et al., 2012; Tsuboi et al., 2009) , we believe it is a general problem of semisupervised learning and is beyond the scope of this paper.",
"cite_spans": [
{
"start": 715,
"end": 738,
"text": "(Sugiyama et al., 2012;",
"ref_id": "BIBREF20"
},
{
"start": 739,
"end": 759,
"text": "Tsuboi et al., 2009)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 12,
"end": 21,
"text": "Figure 11",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
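{
"text": "The constant trust rate can be seen directly from the form of the combined potential: in log-linear (geometric) interpolation the CRF potential and the \u03bb_0-weighted language-model log probability are simply added, so their relative weight never depends on the input. The sketch below, with hypothetical helper names, contrasts this with an input-dependent \u03bb_0 of the kind suggested above.

```python
def combined_logpotential(crf_logpot, lm_logprob, lam0):
    # Constant geometric interpolation: the relative trust in the CRF,
    # 1 / (1 + lam0), is the same for every input.
    return crf_logpot + lam0 * lm_logprob

def adaptive_logpotential(crf_logpot, lm_logprob, lam0_fn, features):
    # A possible remedy: lam0 becomes a (log-linear) function of input
    # features, so rare substrings can shift trust toward the NPYLM.
    return crf_logpot + lam0_fn(features) * lm_logprob
```

",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},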
{
"text": "This issue also affects the estimation of \u03bb 0 as a scalar: that is, we found that \u03bb 0 often fluctuates during training because \u039b (which includes \u03bb 0 ) is estimated using only limited X l , Y l . In practice, we terminated the EM algorithm in Figure 9 early after a few iterations. Therefore, with a more adaptive semi-supervised learning framework, we expect that NPYCRF will achieve higher accuracy than the current performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 242,
"end": 250,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
{
"text": "In this paper, we presented a hybrid generative/discriminative model of word segmentation, leveraging a nonparametric Bayesian model for unsupervised segmentation. By combining CRF and NPYLM within the semi-supervised framework of JESS-CM, our NPYCRF not only works as well as the state-of-the-art neural segmentation without hand tuning of hyperparameters on standard corpora, but also appropriately segments non-standard texts found in Twitter and Weibo, for example, by automatically finding \"new words\" thanks to a nonparametric model of infinite vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We believe that our model lays the foundation for developing a methodology of combining nonparametric Bayesian models and discriminative classifiers, as well as providing an example of semisupervised learning on different model structures, i.e. Markov and semi-Markov models for word segmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "While we consider only bigrams in this paper for simplicity, the theory can be naturally extended to higher-order ngrams. However, it requires quite a complicated implementation, and the expected gain in performance will not be large, even if we use trigrams(Mochihashi et al., 2009).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "It is possible to fix NPYLM and just use this as a feature to CRF: this amounts to running only the first iteration (j = 1) of the EM algorithm. However, it still requires NPYLM\u2192CRF conversion in Section 4.2, and we found that the performance is not optimal while slightly better than plain CRF.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is the total number of sentences in the experiment: the actual number of unsupervised sentences is this set minus the different number of supervised sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.leidenweibocorpus.nl/openaccess.php 5 https://github.com/tmu-nlp/TwitterCorpus",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The B, I, and E tags mean the beginning, internal part, and end of a word, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is reminiscent of context-dependent Bayesian smoothing of MacKay (1994) in the probability domain, as opposed to the fixed Jelinek-Mercer smoothing(Goodman, 2001).especially the editors-in-chief for the thorough comments for the final manuscript.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are deeply grateful to Jun Suzuki (NTT CS Labs) for important discussions leading to this research, Xu Sun (Peking University) for details of his experiments in Chinese. We would also like to thank anonymous reviewers and the action editor,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Hybrid Markov/Semi-Markov Conditional Random Field for Sequence Segmentation",
"authors": [
{
"first": "Galen",
"middle": [],
"last": "Andrew",
"suffix": ""
}
],
"year": 2006,
"venue": "EMNLP 2006",
"volume": "",
"issue": "",
"pages": "465--472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Galen Andrew. 2006. A Hybrid Markov/Semi-Markov Conditional Random Field for Sequence Segmenta- tion. In EMNLP 2006, pages 465-472.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Gated Recursive Neural Network for Chinese Word Segmentation",
"authors": [
{
"first": "Xinchi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Chenxi",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL 2015",
"volume": "",
"issue": "",
"pages": "1744--1753",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinchi Chen, Xipeng Qiu, Chenxi Zhu, and Xuanjing Huang. 2015. Gated Recursive Neural Network for Chinese Word Segmentation. In ACL 2015, pages 1744-1753.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "High-Performance Semi-Supervised Learning using Discriminatively Constrained Generative Models",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Druck",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2010,
"venue": "ICML 2010",
"volume": "",
"issue": "",
"pages": "319--326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory Druck and Andrew McCallum. 2010. High- Performance Semi-Supervised Learning using Dis- criminatively Constrained Generative Models. In ICML 2010, pages 319-326.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The Second International Chinese Word Segmentation Bakeoff",
"authors": [
{
"first": "Tom",
"middle": [
"Emerson"
],
"last": "",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Emerson. 2005. The Second International Chinese Word Segmentation Bakeoff. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Pro- cessing.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Contextual Dependencies in Unsupervised Word Segmentation",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of ACL/COLING 2006",
"volume": "",
"issue": "",
"pages": "673--680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Goldwater, Thomas L. Griffiths, and Mark John- son. 2006. Contextual Dependencies in Unsupervised Word Segmentation. In Proceedings of ACL/COLING 2006, pages 673-680.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Bit of Progress in Language Modeling, Extended Version",
"authors": [
{
"first": "Joshua",
"middle": [
"T"
],
"last": "Goodman",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshua T. Goodman. 2001. A Bit of Progress in Lan- guage Modeling, Extended Version. Technical Report MSR-TR-2001-72, Microsoft Research.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Iterative Bayesian Word Segmentation for Unsupervised Vocabulary Discovery from Phoneme Lattices",
"authors": [
{
"first": "Jahn",
"middle": [],
"last": "Heymann",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Walter",
"suffix": ""
},
{
"first": "Reinhold",
"middle": [],
"last": "H\u00e4b-Umbach",
"suffix": ""
},
{
"first": "Bhiksha",
"middle": [],
"last": "Raj",
"suffix": ""
}
],
"year": 2014,
"venue": "ICASSP 2014",
"volume": "",
"issue": "",
"pages": "4057--4061",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jahn Heymann, Oliver Walter, Reinhold H\u00e4b-Umbach, and Bhiksha Raj. 2014. Iterative Bayesian Word Segmentation for Unsupervised Vocabulary Discov- ery from Phoneme Lattices. In ICASSP 2014, pages 4057-4061.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "InterBEST 2009: Thai Word Segmentation Workshop",
"authors": [
{
"first": "Krit",
"middle": [],
"last": "Kosawat",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of 2009 Eighth International Symposium on Natural Language Processing (SNLP2009)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Krit Kosawat. 2009. InterBEST 2009: Thai Word Seg- mentation Workshop. In Proceedings of 2009 Eighth International Symposium on Natural Language Pro- cessing (SNLP2009), Thailand.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A word and charactercluster hybrid model for Thai word segmentation",
"authors": [
{
"first": "Canasai",
"middle": [],
"last": "Kruengkrai",
"suffix": ""
},
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "Junichi",
"middle": [],
"last": "Kazama",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Torisawa",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Isahara",
"suffix": ""
},
{
"first": "Chuleerat",
"middle": [],
"last": "Jaruskulchai",
"suffix": ""
}
],
"year": 2009,
"venue": "Eighth International Symposium on Natural Lanugage Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Canasai Kruengkrai, Kiyotaka Uchimoto, Junichi Kazama, Kentaro Torisawa, Hiroshi Isahara, and Chuleerat Jaruskulchai. 2009. A word and character- cluster hybrid model for Thai word segmentation. In Eighth International Symposium on Natural Lanugage Processing.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Applying Conditional Random Fields to Japanese Morphological Analysis",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Kaoru",
"middle": [],
"last": "Yamamoto",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "230--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo, Kaoru Yamamoto, and Yuji Matsumoto. 2004. Applying Conditional Random Fields to Japanese Morphological Analysis. In EMNLP 2004, pages 230-237.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional Random Fields: Probabilistic Mod- els for Segmenting and Labeling Sequence Data. In Proc. of ICML 2001, pages 282-289.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A Nonparametric Bayesian Approach to Acoustic Model Discovery",
"authors": [
{
"first": "Chia-Ying",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2012,
"venue": "ACL 2012",
"volume": "",
"issue": "",
"pages": "40--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chia-ying Lee and James Glass. 2012. A Nonparametric Bayesian Approach to Acoustic Model Discovery. In ACL 2012, pages 40-49.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A Hierarchical Dirichlet Language Model",
"authors": [
{
"first": "J",
"middle": [
"C"
],
"last": "David",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Mackay",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Peto",
"suffix": ""
}
],
"year": 1994,
"venue": "Natural Language Engineering",
"volume": "1",
"issue": "3",
"pages": "1--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David J. C. MacKay and L. Peto. 1994. A Hierarchical Dirichlet Language Model. Natural Language Engi- neering, 1(3):1-19.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Kotonoha and BCCWJ: Development of a Balanced Corpus of Contemporary Written Japanese",
"authors": [
{
"first": "Kikuo",
"middle": [],
"last": "Maekawa",
"suffix": ""
}
],
"year": 2007,
"venue": "Corpora and Language Research: Proceedings of the First International Conference on Korean Language, Literature, and Culture",
"volume": "",
"issue": "",
"pages": "158--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kikuo Maekawa. 2007. Kotonoha and BCCWJ: Devel- opment of a Balanced Corpus of Contemporary Writ- ten Japanese. In Corpora and Language Research: Proceedings of the First International Conference on Korean Language, Literature, and Culture, pages 158- 177.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The Infinite Markov Model",
"authors": [
{
"first": "Daichi",
"middle": [],
"last": "Mochihashi",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2007,
"venue": "Advances in Neural Information Processing Systems",
"volume": "20",
"issue": "",
"pages": "1017--1024",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daichi Mochihashi and Eiichiro Sumita. 2008. The Infi- nite Markov Model. In Advances in Neural Informa- tion Processing Systems 20 (NIPS 2007), pages 1017- 1024.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Bayesian Unsupervised Word Segmentation with Nested Pitman-Yor Language Modeling",
"authors": [
{
"first": "Daichi",
"middle": [],
"last": "Mochihashi",
"suffix": ""
},
{
"first": "Takeshi",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Naonori",
"middle": [],
"last": "Ueda",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL-IJCNLP 2009",
"volume": "",
"issue": "",
"pages": "100--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daichi Mochihashi, Takeshi Yamada, and Naonori Ueda. 2009. Bayesian Unsupervised Word Segmentation with Nested Pitman-Yor Language Modeling. In Pro- ceedings of ACL-IJCNLP 2009, pages 100-108.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Mutual Learning of an Object Concept and Language Model Based on MLDA and NPYLM",
"authors": [
{
"first": "Tomoaki",
"middle": [],
"last": "Nakamura",
"suffix": ""
},
{
"first": "Takayuki",
"middle": [],
"last": "Nagai",
"suffix": ""
},
{
"first": "Kotaro",
"middle": [],
"last": "Funakoshi",
"suffix": ""
},
{
"first": "Shogo",
"middle": [],
"last": "Nagasaka",
"suffix": ""
},
{
"first": "Tadahiro",
"middle": [],
"last": "Taniguchi",
"suffix": ""
},
{
"first": "Naoto",
"middle": [],
"last": "Iwahashi",
"suffix": ""
}
],
"year": 2014,
"venue": "2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'14)",
"volume": "",
"issue": "",
"pages": "600--607",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomoaki Nakamura, Takayuki Nagai, Kotaro Funakoshi, Shogo Nagasaka, Tadahiro Taniguchi, and Naoto Iwa- hashi. 2014. Mutual Learning of an Object Concept and Language Model Based on MLDA and NPYLM. In 2014 IEEE/RSJ International Conference on Intel- ligent Robots and Systems (IROS'14), pages 600-607.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Nonparametric Word Segmentation for Machine Translation",
"authors": [
{
"first": "Thuylinh",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "COLING 2010",
"volume": "",
"issue": "",
"pages": "815--823",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ThuyLinh Nguyen, Stephan Vogel, and Noah A. Smith. 2010. Nonparametric Word Segmentation for Ma- chine Translation. In COLING 2010, pages 815-823.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semi-Markov Conditional Random Fields for Information Extraction",
"authors": [
{
"first": "Sunita",
"middle": [],
"last": "Sarawagi",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2005,
"venue": "Advances in Neural Information Processing Systems 17 (NIPS 2004)",
"volume": "",
"issue": "",
"pages": "1185--1192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunita Sarawagi and William W. Cohen. 2005. Semi- Markov Conditional Random Fields for Information Extraction. In Advances in Neural Information Pro- cessing Systems 17 (NIPS 2004), pages 1185-1192.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Bayesian Methods for Hidden Markov Models",
"authors": [
{
"first": "Steven",
"middle": [
"L"
],
"last": "Scott",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of the American Statistical Association",
"volume": "97",
"issue": "",
"pages": "337--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven L. Scott. 2002. Bayesian Methods for Hidden Markov Models. Journal of the American Statistical Association, 97:337-351.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Density Ratio Estimation in Machine Learning",
"authors": [
{
"first": "Masashi",
"middle": [],
"last": "Sugiyama",
"suffix": ""
},
{
"first": "Taiji",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Takafumi",
"middle": [],
"last": "Kanamori",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masashi Sugiyama, Taiji Suzuki, and Takafumi Kanamori. 2012. Density Ratio Estimation in Machine Learning. Cambridge University Press.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Enhancing Chinese Word Segmentation using Unlabeled Data",
"authors": [
{
"first": "Weiwei",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jia",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2011,
"venue": "EMNLP 2011",
"volume": "",
"issue": "",
"pages": "970--979",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weiwei Sun and Jia Xu. 2011. Enhancing Chinese Word Segmentation using Unlabeled Data. In EMNLP 2011, pages 970-979.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A Discriminative Latent Variable Chinese Segmenter with Hybrid Word/Character Information",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yaozhong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Takuya",
"middle": [],
"last": "Matsuzaki",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2009,
"venue": "NAACL 2009",
"volume": "",
"issue": "",
"pages": "56--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu Sun, Yaozhong Zhang, Takuya Matsuzaki, Yoshimasa Tsuruoka, and Jun'ichi Tsujii. 2009. A Discrimina- tive Latent Variable Chinese Segmenter with Hybrid Word/Character Information. In NAACL 2009, pages 56-64.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Fast Online Training with Frequency-Adaptive Learning Rates for Chinese Word Segmentation and New Word Detection",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2012,
"venue": "ACL 2012",
"volume": "",
"issue": "",
"pages": "253--262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu Sun, Houfeng Wang, and Wenjie Li. 2012. Fast On- line Training with Frequency-Adaptive Learning Rates for Chinese Word Segmentation and New Word Detec- tion. In ACL 2012, pages 253-262.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Feature-Frequency-Adaptive Online Training for Fast and Accurate Natural Language Processing",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2014,
"venue": "Computational Linguistics",
"volume": "40",
"issue": "3",
"pages": "563--586",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu Sun, Wenjie Li, Houfeng Wang, and Qin Lu. 2014. Feature-Frequency-Adaptive Online Training for Fast and Accurate Natural Language Processing. Compu- tational Linguistics, 40(3):563-586.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Semi-Supervised Sequential Labeling and Segmentation Using Giga-Word Scale Unlabeled Data",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
}
],
"year": 2008,
"venue": "ACL:HLT 2008",
"volume": "",
"issue": "",
"pages": "665--673",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Suzuki and Hideki Isozaki. 2008. Semi-Supervised Sequential Labeling and Segmentation Using Giga- Word Scale Unlabeled Data. In ACL:HLT 2008, pages 665-673.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A Hierarchical Bayesian Language Model based on Pitman-Yor Processes",
"authors": [
{
"first": "Yee Whye",
"middle": [],
"last": "Teh",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of ACL/COLING 2006",
"volume": "",
"issue": "",
"pages": "985--992",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yee Whye Teh. 2006. A Hierarchical Bayesian Lan- guage Model based on Pitman-Yor Processes. In Pro- ceedings of ACL/COLING 2006, pages 985-992.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Direct Density Ratio Estimation for Large-scale Covariate Shift Adaptation",
"authors": [
{
"first": "Yuta",
"middle": [],
"last": "Tsuboi",
"suffix": ""
},
{
"first": "Hisashi",
"middle": [],
"last": "Kashima",
"suffix": ""
},
{
"first": "Shohei",
"middle": [],
"last": "Hido",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Bickel",
"suffix": ""
},
{
"first": "Masashi",
"middle": [],
"last": "Sugiyama",
"suffix": ""
}
],
"year": 2009,
"venue": "Information and Media Technologies",
"volume": "4",
"issue": "2",
"pages": "529--546",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuta Tsuboi, Hisashi Kashima, Shohei Hido, Steffen Bickel, and Masashi Sugiyama. 2009. Direct Den- sity Ratio Estimation for Large-scale Covariate Shift Adaptation. Information and Media Technologies, 4(2):529-546.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Inducing Word and Part-of-speech with Pitman-Yor Hidden Semi-Markov Models",
"authors": [
{
"first": "Kei",
"middle": [],
"last": "Uchiumi",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Tsukahara",
"suffix": ""
},
{
"first": "Daichi",
"middle": [],
"last": "Mochihashi",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL-IJCNLP 2015",
"volume": "",
"issue": "",
"pages": "1774--1782",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kei Uchiumi, Hiroshi Tsukahara, and Daichi Mochi- hashi. 2015. Inducing Word and Part-of-speech with Pitman-Yor Hidden Semi-Markov Models. In ACL- IJCNLP 2015, pages 1774-1782.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A Monte Carlo Implementation of the EM Algorithm and the Poor Man's Data Augmentation Algorithms",
"authors": [
{
"first": "Greg",
"middle": [
"C",
"G"
],
"last": "Wei",
"suffix": ""
},
{
"first": "Martin",
"middle": [
"A"
],
"last": "Tanner",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of the American Statistical Association",
"volume": "85",
"issue": "411",
"pages": "699--704",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg C.G. Wei and Martin A. Tanner. 1990. A Monte Carlo Implementation of the EM Algorithm and the Poor Man's Data Augmentation Algorithms. Journal of the American Statistical Association, 85(411):699- 704.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Transition-Based Neural Word Segmentation",
"authors": [
{
"first": "Meishan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Guohong",
"middle": [],
"last": "Fu",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meishan Zhang, Yue Zhang, and Guohong Fu. 2016. Transition-Based Neural Word Segmentation. In ACL 2016.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Semi-Markov model representation of NPYLM (simplest case of segment length \u2264 3). Each node corresponds to a substring ending at time t, and its length k is indexed by each row.",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "Semi-supervised learning of the same model structure (HMM and CRF) with JESS-CM. Discriminative and generative potentials are given relative weights 1 : \u03bb 0 , and added together in the log probability domain.",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "Equivalence of semi-Markov (left) and Markov (right) potentials. The potential of substring \" \" (Tokyo prefecture) being a word on the left is equivalent to the sum of potentials along the U-shaped path on the right.",
"uris": null
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"text": "Four types of label transitions in Markov CRF. Label bigram potentials for marginalization. The probability of each label bigram (bold) of the Markov model can be obtained by marginalizing the probability of the U-shaped path including it, which is computed in the semi-Markov model.where p(c t+j\u22121 t |s) is obtained from (15).",
"uris": null
},
"FIGREF5": {
"num": null,
"type_str": "figure",
"text": "Example of segmentation of the SIGHAN Bakeoff MSR dataset made with supervised (CRF), unsupervised (NPYLM), and semi-supervised (NPYCRF) models in comparison with gold segmentations (Gold)",
"uris": null
},
"FIGREF6": {
"num": null,
"type_str": "figure",
"text": "Samples of NPYCRF segmentation of Twitter text in Japanese that are difficult to analyze by ordinary supervised segmentation. It contains a lot of novel words, emoticons, and colloquial expressions that are not contained in the BCCWJ core text (shaded).",
"uris": null
},
"TABREF1": {
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "Statistics of the datasets for the experiments."
},
"TABREF3": {
"num": null,
"content": "<table><tr><td colspan=\"5\">\"Filtered\" are the results with a simple post-hoc filter de-scribed in Sun et al. (2009).</td></tr><tr><td>Data</td><td colspan=\"3\">Label Unlabel IV OOV</td><td>F</td></tr><tr><td>Topline</td><td>880K</td><td>-</td><td colspan=\"2\">0.981 0.699 0.977</td></tr><tr><td>Sup 10K</td><td>10K</td><td>-</td><td colspan=\"2\">0.949 0.690 0.928</td></tr><tr><td>Sup 20K</td><td>20K</td><td>-</td><td colspan=\"2\">0.957 0.683 0.941</td></tr><tr><td>Sup 40K</td><td>40K</td><td>-</td><td colspan=\"2\">0.963 0.682 0.951</td></tr><tr><td colspan=\"5\">Semi 10K 10K 870K 0.954 0.698 0.933</td></tr><tr><td colspan=\"5\">Semi 20K 20K 860K 0.961 0.690 0.945 Semi 40K 40K 840K 0.970 0.648 0.955</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Accuracies of Bakeoff MSR dataset in Chinese."
},
"TABREF5": {
"num": null,
"content": "<table><tr><td>Model</td><td>IV</td><td>OOV</td><td>F</td></tr><tr><td colspan=\"4\">CRF NPYCRF 0.959 0.362 0.954 0.961 0.409 0.948</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Accuracies for Twitter text in Japanese."
},
"TABREF6": {
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "Accuracies for InterBEST novel dataset in Thai."
}
}
}
}