{
"paper_id": "P16-1039",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:57:12.130043Z"
},
"title": "Neural Word Segmentation Learning for Chinese",
"authors": [
{
"first": "Deng",
"middle": [],
"last": "Cai",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {},
"email": "zhaohai@cs.sjtu.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Most previous approaches to Chinese word segmentation formalize this problem as a character-based sequence labeling task so that only contextual information within fixed sized local windows and simple interactions between adjacent tags can be captured. In this paper, we propose a novel neural framework which thoroughly eliminates context windows and can utilize complete segmentation history. Our model employs a gated combination neural network over characters to produce distributed representations of word candidates, which are then given to a long shortterm memory (LSTM) language scoring model. Experiments on the benchmark datasets show that without the help of feature engineering as most existing approaches, our models achieve competitive or better performances with previous stateof-the-art methods.",
"pdf_parse": {
"paper_id": "P16-1039",
"_pdf_hash": "",
"abstract": [
{
"text": "Most previous approaches to Chinese word segmentation formalize this problem as a character-based sequence labeling task so that only contextual information within fixed sized local windows and simple interactions between adjacent tags can be captured. In this paper, we propose a novel neural framework which thoroughly eliminates context windows and can utilize complete segmentation history. Our model employs a gated combination neural network over characters to produce distributed representations of word candidates, which are then given to a long shortterm memory (LSTM) language scoring model. Experiments on the benchmark datasets show that without the help of feature engineering as most existing approaches, our models achieve competitive or better performances with previous stateof-the-art methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Most east Asian languages including Chinese are written without explicit word delimiters, therefore, word segmentation is a preliminary step for processing those languages. Since Xue (2003) , most methods formalize the Chinese word segmentation (CWS) as a sequence labeling problem with character position tags, which can be handled with su-pervised learning methods such as Maximum Entropy (Berger et al., 1996; Low et al., 2005) and Conditional Random Fields (Lafferty et al., 2001; Peng et al., 2004; Zhao et al., 2006a) . However, those methods heavily depend on the choice of handcrafted features.",
"cite_spans": [
{
"start": 179,
"end": 189,
"text": "Xue (2003)",
"ref_id": "BIBREF34"
},
{
"start": 383,
"end": 412,
"text": "Entropy (Berger et al., 1996;",
"ref_id": null
},
{
"start": 413,
"end": 430,
"text": "Low et al., 2005)",
"ref_id": "BIBREF16"
},
{
"start": 461,
"end": 484,
"text": "(Lafferty et al., 2001;",
"ref_id": "BIBREF13"
},
{
"start": 485,
"end": 503,
"text": "Peng et al., 2004;",
"ref_id": "BIBREF21"
},
{
"start": 504,
"end": 523,
"text": "Zhao et al., 2006a)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, neural models have been widely used for NLP tasks for their ability to minimize the effort in feature engineering. For the task of CWS, Zheng et al. (2013) adapted the general neural network architecture for sequence labeling proposed in (Collobert et al., 2011) , and used character embeddings as input to a two-layer network. Pei et al. (2014) improved upon (Zheng et al., 2013) by explicitly modeling the interactions between local context and previous tag. Chen et al. (2015a) proposed a gated recursive neural network to model the feature combinations of context characters. Chen et al. (2015b) used an LSTM architecture to capture potential long-distance dependencies, which alleviates the limitation of the size of context window but introduced another window for hidden states.",
"cite_spans": [
{
"start": 146,
"end": 165,
"text": "Zheng et al. (2013)",
"ref_id": "BIBREF48"
},
{
"start": 248,
"end": 272,
"text": "(Collobert et al., 2011)",
"ref_id": "BIBREF6"
},
{
"start": 338,
"end": 355,
"text": "Pei et al. (2014)",
"ref_id": "BIBREF20"
},
{
"start": 370,
"end": 390,
"text": "(Zheng et al., 2013)",
"ref_id": "BIBREF48"
},
{
"start": 471,
"end": 490,
"text": "Chen et al. (2015a)",
"ref_id": "BIBREF2"
},
{
"start": 590,
"end": 609,
"text": "Chen et al. (2015b)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite the differences, all these models are designed to solve CWS by assigning labels to the characters in the sequence one by one. At each time step of inference, these models compute the tag scores of character based on (i) context features within a fixed sized local window and (ii) tagging history of previous one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Nevertheless, the tag-tag transition is insufficient to model the complicated influence from previous segmentation decisions, though it could sometimes be a crucial clue to later segmentation decisions. The fixed context window size, which is broadly adopted by these methods for feature engineering, also restricts the flexibility of modeling diverse distances. Moreover, word-level information, which is being the greater granularity unit as suggested in (Huang and Zhao, 2006) , remains",
"cite_spans": [
{
"start": 457,
"end": 479,
"text": "(Huang and Zhao, 2006)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Table 1: Feature windows used in different models. Character-based models, from (Zheng et al., 2013) to (Chen et al., 2015b), use characters c_{i-2}, c_{i-1}, c_i, c_{i+1}, c_{i+2} and tags t_{i-1}, t_i; word-based models such as (Zhang and Clark, 2007) use the characters in w_{j-1}, w_j, w_{j+1}, the words w_{j-1}, w_j, w_{j+1}, and no tags; our model uses the complete histories c_0, c_1, ..., c_i and w_0, w_1, ..., w_j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characters",
"sec_num": null
},
{
"text": "To alleviate the drawbacks inside previous methods and release those inconvenient constrains such as the fixed sized context window, this paper makes a latest attempt to re-formalize CWS as a direct segmentation learning task. Our method does not make tagging decisions on individual characters, but directly evaluates the relative likelihood of different segmented sentences and then search for a segmentation with the highest score. To feature a segmented sentence, a series of distributed vector representations (Bengio et al., 2003) are generated to characterize the corresponding word candidates. Such a representation setting makes the decoding quite different from previous methods and indeed much more challenging, however, more discriminative features can be captured.",
"cite_spans": [
{
"start": 515,
"end": 536,
"text": "(Bengio et al., 2003)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Characters",
"sec_num": null
},
{
"text": "Though the vector building is word centered, our proposed scoring model covers all three processing levels from character, word until sentence. First, the distributed representation starts from character embedding, as in the context of word segmentation, the n-gram data sparsity issue makes it impractical to use word vectors immediately. Second, as the word candidate representation is derived from its characters, the inside character structure will also be encoded, thus it can be used to determine the word likelihood of its own. Third, to evaluate how a segmented sentence makes sense through word interacting, an LSTM (Hochreiter and Schmidhuber, 1997) is used to chain together word candidates incrementally and construct the representation of partially segmented sentence at each decoding step, so that the coherence between next word candidate and previous segmentation history can be depicted.",
"cite_spans": [
{
"start": 625,
"end": 659,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Characters",
"sec_num": null
},
{
"text": "To our best knowledge, our proposed approach to CWS is the first attempt which explicitly models the entire contents of the segmenter's state, including the complete history of both segmentation decisions and input characters. The compar- isons of feature windows used in different models are shown in Table 1 . Compared to both sequence labeling schemes and word-based models in the past, our model thoroughly eliminates context windows and can capture the complete history of segmentation decisions, which offers more possibilities to effectively and accurately model segmentation context.",
"cite_spans": [],
"ref_spans": [
{
"start": 302,
"end": 309,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Characters",
"sec_num": null
},
{
"text": "Neural Network Scoring Model Decoder \u2022\u2022\u2022 \u2022\u2022\u2022 \u2022\u2022\u2022 \u2022\u2022\u2022 \u2022\u2022\u2022 Max-Margin",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characters",
"sec_num": null
},
{
"text": "We formulate the CWS problem as finding a mapping from an input character sequence x to a word sequence y, and the output sentence y * satisfies:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "2"
},
{
"text": "y * = arg max y\u2208GEN(x) ( n i=1 score(y i |y 1 , \u2022 \u2022 \u2022 , y i\u22121 ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "2"
},
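The decoding objective above can be sketched in a few lines. The `score` argument below is a stand-in for the neural scoring model introduced later, and the exhaustive enumeration is for illustration only; the actual decoder uses beam search.

```python
# Sketch of the decoding objective: pick the segmentation whose summed
# history-conditioned word scores are highest. `score(word, history)` is
# a placeholder for the neural model; here it can be any callable.
from itertools import combinations

def gen_segmentations(chars):
    """Enumerate every segmentation of a character sequence (GEN(x))."""
    n = len(chars)
    for k in range(n):
        for cuts in combinations(range(1, n), k):
            bounds = [0, *cuts, n]
            yield ["".join(chars[a:b]) for a, b in zip(bounds, bounds[1:])]

def sentence_score(words, score):
    # Each word candidate is scored given the complete segmentation history.
    return sum(score(w, words[:i]) for i, w in enumerate(words))

def decode(chars, score):
    return max(gen_segmentations(chars), key=lambda y: sentence_score(y, score))
```

For example, a toy `score` that rewards two-character words makes `decode(list("abcd"), ...)` return the unique segmentation with two such words.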
{
"text": "where n is the number of word candidates in y, and GEN(x) denotes the set of possible segmentations for an input sequence x. Unlike all previous works, our scoring function is sensitive to the complete contents of partially segmented sentence. As shown in Figure 1 , to solve CWS in this way, a neural network scoring model is designed to evaluate the likelihood of a segmented sentence. Based on the proposed model, a decoder is developed to find the segmented sentence with the highest score. Meanwhile, a max-margin method is utilized to perform the training by comparing Figure 2 : Architecture of our proposed neural network scoring model, where c i denotes the i-th input character, y j denotes the learned representation of the j-th word candidate, p k denotes the prediction for the (k + 1)-th word candidate and u is the trainable parameter vector for scoring the likelihood of individual word candidates.",
"cite_spans": [],
"ref_spans": [
{
"start": 256,
"end": 264,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 575,
"end": 583,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Overview",
"sec_num": "2"
},
{
"text": "segmented sentence Lookup Table GCNN Unit LSTM Unit Predicting Scoring c 1 c 2 c 3 c 4 c 5 c 6 c 7 c 8 y 1 y 2 y 3 y 4 p 1 p 2 p 3 p 4 u",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "2"
},
{
"text": "the structured difference of decoder output and the golden segmentation. The following sections will introduce each of these components in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "2"
},
{
"text": "The score for a segmented sentence is computed by first mapping it into a sequence of word candidate vectors, then the scoring model takes the vector sequence as input, scoring on each word candidate from two perspectives: (1) how likely the word candidate itself can be recognized as a legal word;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Scoring Model",
"sec_num": "3"
},
{
"text": "(2) how reasonable the link is for the word candidate to follow previous segmentation history immediately. After that, the word candidate is appended to the segmentation history, updating the state of the scoring system for subsequent judgements. Figure 2 illustrates the entire scoring neural network.",
"cite_spans": [],
"ref_spans": [
{
"start": 247,
"end": 255,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Neural Network Scoring Model",
"sec_num": "3"
},
{
"text": "Character Embedding. While the scores are decided at the word-level, using word embedding (Bengio et al., 2003; Wang et al., 2016) immediately will lead to a remarkable issue that rare words and out-of-vocabulary words will be poorly estimated (Kim et al., 2015) . In addition, the character-level information inside an n-gram can be helpful to judge whether it is a true word. Therefore, a lookup table of character embeddings is used as the bottom layer. Formally, we have a character dictionary D of size |D|. Then each character c \u2208 D is represented as a real-valued vector (character embedding) c \u2208 R d , where d is the dimensionality of the vector space. The character embeddings are then stacked into an embedding matrix M \u2208 R d\u00d7|D| . For a character c \u2208 D, its character embedding c \u2208 R d is retrieved by the embedding layer according to its index.",
"cite_spans": [
{
"start": 90,
"end": 111,
"text": "(Bengio et al., 2003;",
"ref_id": "BIBREF0"
},
{
"start": 112,
"end": 130,
"text": "Wang et al., 2016)",
"ref_id": "BIBREF33"
},
{
"start": 244,
"end": 262,
"text": "(Kim et al., 2015)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "Gated Combination Neural Network. In order to obtain word representation through its characters, in the simplest strategy, character vectors are integrated into their word representation using a weight matrix W (L) that is shared across all words with the same length L, followed by a non-linear function g(\u2022). Specifically, c i (1 \u2264 i \u2264 L) are d-dimensional character vector representations respectively, the corresponding word vector w will be d-dimensional as well:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w = g(W (L) \uf8ee \uf8ef \uf8f0 c 1 . . . c L \uf8f9 \uf8fa \uf8fb)",
"eq_num": "(1)"
}
],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "where W (L) \u2208 R d\u00d7Ld and g is a non-linear function as mentioned above. Although the mechanism above seems to work well, it can not sufficiently model the complicated combination features in practice, yet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "Gated structure in neural network can be useful for hybrid feature extraction according to (Chen et al., 2015a; Chung et al., 2014; ,",
"cite_spans": [
{
"start": 91,
"end": 111,
"text": "(Chen et al., 2015a;",
"ref_id": "BIBREF2"
},
{
"start": 112,
"end": 131,
"text": "Chung et al., 2014;",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "c 1 c L\u0175 w r 1 r L z N z 1 z L Figure 3: Gated combination neural network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "we therefore propose a gated combination neural network (GCNN) especially for character compositionality which contains two types of gates, namely reset gate and update gate. Intuitively, the reset gates decide which part of the character vectors should be mixed while the update gates decide what to preserve when combining the characters information. Concretely, for words with length L, the word vector w \u2208 R d is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "w = z N \u0175 + L i=1 z i c i where z N , z i (1 \u2264 i \u2264 L)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "are update gates for new activation\u0175 and governed characters respectively, and indicates element-wise multiplication.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "The new activation\u0175 is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "w = tanh(W (L) \uf8ee \uf8ef \uf8f0 r 1 c 1 . . . r L c L \uf8f9 \uf8fa \uf8fb)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "W (L) \u2208 R d\u00d7Ld and r i \u2208 R d (1 \u2264 i \u2264 L)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "are the reset gates for governed characters respectively, which can be formalized as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "\uf8ee \uf8ef \uf8f0 r 1 . . . r L \uf8f9 \uf8fa \uf8fb = \u03c3(R (L) \uf8ee \uf8ef \uf8f0 c 1 . . . c L \uf8f9 \uf8fa \uf8fb)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "where R (L) \u2208 R Ld\u00d7Ld is the coefficient matrix of reset gates and \u03c3 denotes the sigmoid function. The update gates can be formalized as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "\uf8ee \uf8ef \uf8ef \uf8ef \uf8f0 z N z 1 . . . z L \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fb = exp(U (L) \uf8ee \uf8ef \uf8ef \uf8ef \uf8f0\u0175 c 1 . . . c L \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fb ) \uf8ee \uf8ef \uf8ef \uf8ef \uf8f0 1/Z 1/Z . . . 1/Z \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "where U (L) \u2208 R (L+1)d\u00d7(L+1)d is the coefficient matrix of update gates, and Z \u2208 R d is the normal-ization vector,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "Z k = L i=1 [exp(U (L) \uf8ee \uf8ef \uf8ef \uf8ef \uf8f0\u0175 c 1 . . . c L \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fb )] d\u00d7i+k where 0 \u2264 k < d.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "According to the normalization condition, the update gates are constrained by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "z N + L i=1 z i = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
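Under the shapes stated above, the GCNN composition can be sketched with NumPy. The parameter matrices here are arbitrary placeholders (untrained), and the function name is ours:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gcnn_compose(C, W, R, U):
    """Compose an L x d matrix of character vectors C into one word vector.

    W: (d, L*d) combination matrix, R: (L*d, L*d) reset-gate matrix,
    U: ((L+1)*d, (L+1)*d) update-gate matrix -- shapes follow the text above.
    """
    L, d = C.shape
    flat = C.reshape(-1)                       # [c_1; ...; c_L]
    r = sigmoid(R @ flat).reshape(L, d)        # reset gates r_i
    w_hat = np.tanh(W @ (r * C).reshape(-1))   # new activation
    stacked = np.concatenate([w_hat, flat])    # [w_hat; c_1; ...; c_L]
    scores = np.exp(U @ stacked).reshape(L + 1, d)
    z = scores / scores.sum(axis=0)            # update gates sum to 1 per dim
    return z[0] * w_hat + (z[1:] * C).sum(axis=0)
```

The exp-then-normalize step realizes the constraint z_N + Σ z_i = 1 dimension-wise, so the output is a convex combination of the new activation and the character vectors.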
{
"text": "The gated mechanism is capable of capturing both character and character interaction characteristics to give an efficient word representation (See Section 6.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "Word Score. Denote the learned vector representations for a segmented sentence y with",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "[y 1 , y 2 , \u2022 \u2022 \u2022 , y n ],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "where n is the number of word candidates in the sentence. word score will be computed by the dot products of vector y i (1 \u2264 i \u2264 n) and a trainable parameter vector u \u2208",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R d . Word Score(y i ) = u \u2022 y i",
"eq_num": "(2)"
}
],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "It indicates how likely a word candidate by itself is to be a true word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Score",
"sec_num": "3.1"
},
{
"text": "Inspired by the recurrent neural network language model (RNN-LM) (Mikolov et al., 2010; Sundermeyer et al., 2012) , we utilize an LSTM system to capture the coherence in a segmented sentence.",
"cite_spans": [
{
"start": 65,
"end": 87,
"text": "(Mikolov et al., 2010;",
"ref_id": "BIBREF18"
},
{
"start": 88,
"end": 113,
"text": "Sundermeyer et al., 2012)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Link Score",
"sec_num": "3.2"
},
{
"text": "Long Short-Term Memory Networks. The LSTM neural network (Hochreiter and Schmidhuber, 1997) is an extension of the recurrent neural network (RNN), which is an effective tool for sequence modeling tasks using its hidden states for history information preservation. At each time step t, an RNN takes the input x t and updates its recurrent hidden state h t by",
"cite_spans": [
{
"start": 57,
"end": 91,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Link Score",
"sec_num": "3.2"
},
{
"text": "h t = g(Uh t\u22121 + Wx t + b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Link Score",
"sec_num": "3.2"
},
{
"text": "where g is a non-linear function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Link Score",
"sec_num": "3.2"
},
{
"text": "Although RNN is capable, in principle, to process arbitrary-length sequences, it can be difficult to train an RNN to learn long-range dependencies due to the vanishing gradients. LSTM addresses this problem by introducing a memory cell to preserve states over long periods of time, and controls the update of hidden state and memory cell by three types of gates, namely input gate, forget gate and output gate. Concretely, each step of LSTM takes input x t , h t\u22121 , c t\u22121 and produces h t , c t via the following calculations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Link Score",
"sec_num": "3.2"
},
{
"text": "y t\u22121 p t y t p t+1 y t+1 p t+2 h t\u22121 h t h t+1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Link Score",
"sec_num": "3.2"
},
{
"text": "i t = \u03c3(W i x t + U i h t\u22121 + b i ) f t = \u03c3(W f x t + U f h t\u22121 + b f ) o t = \u03c3(W o x t + U o h t\u22121 + b o ) c t = tanh(W c x t + U c h t\u22121 + b c ) c t = f t c t\u22121 + i t \u0109 t h t = o t tanh(c t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Link Score",
"sec_num": "3.2"
},
{
"text": "where \u03c3, are respectively the element-wise sigmoid function and multiplication, i t , f t , o t , c t are respectively the input gate, forget gate, output gate and memory cell activation vector at time t, all of which have the same size as hidden state vector h t \u2208 R H .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Link Score",
"sec_num": "3.2"
},
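The LSTM update equations can be transcribed directly. The parameter-dictionary keys (`Wi`, `Ui`, `bi`, ...) are our naming convention, not from the paper:

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, P):
    """One LSTM step following the update equations above.

    P maps names like 'Wi', 'Ui', 'bi' to weight matrices/bias vectors.
    """
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    i = sigmoid(P["Wi"] @ x + P["Ui"] @ h_prev + P["bi"])   # input gate
    f = sigmoid(P["Wf"] @ x + P["Uf"] @ h_prev + P["bf"])   # forget gate
    o = sigmoid(P["Wo"] @ x + P["Uo"] @ h_prev + P["bo"])   # output gate
    c_hat = np.tanh(P["Wc"] @ x + P["Uc"] @ h_prev + P["bc"])  # candidate cell
    c = f * c_prev + i * c_hat                              # memory cell
    h = o * np.tanh(c)                                      # hidden state
    return h, c
```

Since h_t = o_t ⊙ tanh(c_t) with o_t in (0, 1), every hidden-state component stays strictly inside (-1, 1).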
{
"text": "Link Score. LSTMs have been shown to outperform RNNs on many NLP tasks, notably language modeling (Sundermeyer et al., 2012 ). In our model, LSTM is utilized to chain together word candidates in a left-to-right, incremental manner. At time step t, a prediction p t+1 \u2208 R d about next word y t+1 is made based on the hidden state h t :",
"cite_spans": [
{
"start": 98,
"end": 123,
"text": "(Sundermeyer et al., 2012",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Link Score",
"sec_num": "3.2"
},
{
"text": "p t+1 = tanh(W p h t + b p )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Link Score",
"sec_num": "3.2"
},
{
"text": "link score for next word y t+1 is then computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Link Score",
"sec_num": "3.2"
},
{
"text": "Link Score(y t+1 ) = p t+1 \u2022 y t+1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Link Score",
"sec_num": "3.2"
},
{
"text": "(3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Link Score",
"sec_num": "3.2"
},
{
"text": "Due to the structure of LSTM, the prediction vector p t+1 carries useful information detected from the entire segmentation history, including previous segmentation decisions. In this way, our model gains the ability of sequence-level discrimination rather than local optimization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Link Score",
"sec_num": "3.2"
},
{
"text": "Sentence score for a segmented sentence y with n word candidates is computed by summing up word scores (2) and link scores (3) as follow:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence score",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s(y [1:n] , \u03b8) = n t=1 (u \u2022 y t + p t \u2022 y t )",
"eq_num": "(4)"
}
],
"section": "Sentence score",
"sec_num": "3.3"
},
{
"text": "where \u03b8 is the parameter set used in our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence score",
"sec_num": "3.3"
},
{
"text": "The total number of possible segmented sentences grows exponentially with the length of character sequence, which makes it impractical to compute the scores of every possible segmentation. In order to get exact inference, most sequence-labeling systems address this problem with a Viterbi search, which takes the advantage of their hypothesis that the tag interactions only exist within adjacent characters (Markov assumption). However, since our model is intended to capture complete history of segmentation decisions, such dynamic programming algorithms can not be adopted in this situation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "4"
},
{
"text": "Algorithm 1 Beam Search. To make our model efficient in practical use, we propose a beam-search algorithm with dynamic programming motivations as shown in Algorithm 1. The main idea is that any segmentation of the first i characters can be separated as two parts, the first part consists of characters with indexes from 0 to j that is denoted as y, the rest part is the word composed by c[j+1 : i]. The influence from previous segmentation y can be represented as a triple (y.score, y.h, y.c), where y.score, y.h, y.c indicate the current score, current hidden state vector and current memory cell vector respectively. Beam search ensures that the total time for segmenting a sentence of n characters is w \u00d7 k \u00d7 n, where w, k are maximum word length and beam size respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "4"
},
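A minimal sketch of this beam search, where the hypothetical `step(state, word)` callback stands in for the combined GCNN/LSTM scoring and the (h, c) state update carried in the triple:

```python
def beam_search(chars, step, init_state, max_word_len=4, beam_size=4):
    """Beam-search decoding sketch. Any segmentation of c[0:i] splits into a
    kept hypothesis over c[0:j] plus the word c[j:i]; `step(state, word)`
    returns (score_delta, new_state)."""
    n = len(chars)
    # beams[i]: best hypotheses covering the first i characters,
    # each a tuple (score, words, state)
    beams = [[] for _ in range(n + 1)]
    beams[0] = [(0.0, [], init_state)]
    for i in range(1, n + 1):
        cands = []
        for j in range(max(0, i - max_word_len), i):
            word = "".join(chars[j:i])
            for score, words, state in beams[j]:
                delta, new_state = step(state, word)
                cands.append((score + delta, words + [word], new_state))
        cands.sort(key=lambda c: c[0], reverse=True)   # keep top-k only
        beams[i] = cands[:beam_size]
    return beams[n][0][1]
```

Each position keeps at most `beam_size` hypotheses and considers at most `max_word_len` predecessors, matching the w × k × n bound stated above.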
{
"text": "We use the max-margin criterion (Taskar et al., 2005) to train our model. As reported in (Kummerfeld et al., 2015) , the margin methods generally outperform both likelihood and perception methods. For a given character sequence x (i) , denote the correct segmented sentence for x (i) as y (i) . We define a structured margin loss \u2206(y (i) ,\u0177) for predicting a segmented sentence\u0177:",
"cite_spans": [
{
"start": 32,
"end": 53,
"text": "(Taskar et al., 2005)",
"ref_id": "BIBREF31"
},
{
"start": 89,
"end": 114,
"text": "(Kummerfeld et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 289,
"end": 292,
"text": "(i)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "5"
},
{
"text": "\u2206(y (i) ,\u0177) = m t=1 \u00b51{y (i),t =\u0177 t }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "5"
},
{
"text": "where m is the length of sequence x (i) and \u00b5 is the discount parameter. The calculation of margin loss could be regarded as to count the number of incorrectly segmented characters and then multiple it with a fixed discount parameter for smoothing. Therefore, the loss is proportional to the number of incorrectly segmented characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "5"
},
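The character-count interpretation of the margin loss can be sketched as follows; representing each character by the span of its containing word, and the default `mu` value, are illustrative choices of ours:

```python
def char_spans(words):
    """Map each character position to the (start, end) span of its word."""
    spans, pos = [], 0
    for w in words:
        spans.extend([(pos, pos + len(w))] * len(w))
        pos += len(w)
    return spans

def margin_loss(gold, pred, mu=0.2):
    """Structured margin: mu times the number of characters whose
    containing word differs between the gold and predicted segmentations."""
    g, p = char_spans(gold), char_spans(pred)
    assert len(g) == len(p), "segmentations must cover the same characters"
    return mu * sum(a != b for a, b in zip(g, p))
```

A character counts as incorrectly segmented whenever its word boundaries differ, so the loss grows linearly with the number of such characters, as stated above.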
{
"text": "Given a set of training set \u2126, the regularized objective function is the loss function J(\u03b8) including an 2 norm term:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "5"
},
{
"text": "J(\u03b8) = 1 |\u2126| (x (i) ,y (i) )\u2208\u2126 l i (\u03b8) + \u03bb 2 ||\u03b8|| 2 2 l i (\u03b8) = max y\u2208GEN(x (i) ) (s(\u0177, \u03b8) + \u2206(y (i) ,\u0177) \u2212 s(y (i) , \u03b8))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "5"
},
{
"text": "where the function s(\u2022) is the sentence score defined in equation (4). Due to the hinge loss, the objective function is not differentiable, we use a subgradient method (Ratliff et al., 2007) which computes a gradientlike direction. Following (Socher et al., 2013) , we use the diagonal variant of AdaGrad (Duchi et al., 2011) with minibatchs to minimize the objective. The update for the i-th parameter at time step t is as follows:",
"cite_spans": [
{
"start": 168,
"end": 190,
"text": "(Ratliff et al., 2007)",
"ref_id": "BIBREF24"
},
{
"start": 242,
"end": 263,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF25"
},
{
"start": 305,
"end": 325,
"text": "(Duchi et al., 2011)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "5"
},
{
"text": "\u03b8 t,i = \u03b8 t\u22121,i \u2212 \u03b1 t \u03c4 =1 g 2 \u03c4,i g t,i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "5"
},
{
"text": "where \u03b1 is the initial learning rate and g \u03c4,i \u2208 R |\u03b8 i | is the subgradient at time step \u03c4 for parameter \u03b8 i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "5"
},
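The diagonal AdaGrad update can be sketched per parameter vector; the small `eps` guard against division by zero is our numerical-stability addition, not part of the formula above:

```python
import numpy as np

def adagrad_update(theta, grad, accum, alpha=0.2, eps=1e-8):
    """Diagonal AdaGrad step: scale each parameter's learning rate by the
    root of its accumulated squared subgradients."""
    accum += grad ** 2                               # running sum of g^2
    theta -= alpha * grad / (np.sqrt(accum) + eps)   # per-parameter step
    return theta, accum
```

On the first step the update reduces to roughly -alpha times the sign of each gradient component, after which frequently updated parameters receive progressively smaller steps.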
{
"text": "To evaluate the proposed segmenter, we use two popular datasets, PKU and MSR, from the second International Chinese Word Segmentation Bakeoff (Emerson, 2005) . These datasets are commonly used by previous state-of-the-art models and neural network models. Both datasets are preprocessed by replacing the continuous English characters and digits with a unique token. All experiments are conducted with standard Bakeoff scoring program 1 calculating precision, recall, and F 1 -score.",
"cite_spans": [
{
"start": 142,
"end": 157,
"text": "(Emerson, 2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments 6.1 Datasets",
"sec_num": "6"
},
{
"text": "Hyper-parameters of neural network model significantly impact on its performance. To determine a set of suitable hyper-parameters, we divide the training data into two sets, the first 90% sentences as training set and the rest 10% sentences as development set. We choose the hyper-parameters as shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 304,
"end": 311,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Hyper-parameters",
"sec_num": "6.2"
},
{
"text": "We found that the character embedding size has a limited impact on the performance as long as it is large enough. The size 50 is chosen as a good trade-off between speed and performance. The number of hidden units is set to be the same as the character embedding. Maximum word length determines the number of parameters in GCNN part and the time consuming of beam search, since the words with a length l > 4 are relatively rare, 0.29% in PKU training data and 1.25% in MSR training data, we set the maximum word length to 4 in our experiments. 2 Dropout is a popular technique for improving the performance of neural networks by reducing overfitting (Srivastava et al., 2014) . We also drop the input layer of our model with dropout rate 20% to avoid overfitting.",
"cite_spans": [
{
"start": 650,
"end": 675,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hyper-parameters",
"sec_num": "6.2"
},
{
"text": "Beam Size. We first investigated the impact of beam size over segmentation performance. Figure 5 shows that a segmenter with beam size 4 is enough to get the best performance, which makes our model find a good balance between accuracy and efficiency.",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 96,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Analysis",
"sec_num": "6.3"
},
{
"text": "We then studied the role of GCNN in our model. To reveal the impact of GCNN, we re-implemented a simplified version of our model, models P R F Single layer (d = 50) 94.3 93.7 94.0 GCNN (d = 50) 95.8 95.2 95.5 Single layer (d = 100) 94.9 94.4 94.7 (Chen et al., 2015a) 94.9 95.9 95.8 96.2 (Chen et al., 2015b) 94.6 95.7 95.7 96.4 This work 95.7 -96.4 - which replaces the GCNN part with a single nonlinear layer as in equation 1. The results are listed in Table 3 , which demonstrate that the performance is significantly boosted by exploiting the GCNN architecture (94.0% to 95.5% on F 1 -score), while the best performance that the simplified version can achieve is 94.7%, but using a much larger character embedding size.",
"cite_spans": [
{
"start": 156,
"end": 164,
"text": "(d = 50)",
"ref_id": null
},
{
"start": 180,
"end": 193,
"text": "GCNN (d = 50)",
"ref_id": null
},
{
"start": 247,
"end": 267,
"text": "(Chen et al., 2015a)",
"ref_id": "BIBREF2"
},
{
"start": 288,
"end": 328,
"text": "(Chen et al., 2015b) 94.6 95.7 95.7 96.4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 455,
"end": 462,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "GCNN.",
"sec_num": null
},
{
"text": "Link Score & Word Score. We conducted several experiments to investigate the individual effect of link score and word score, since these two types of scores are intended to estimate the sentence likelihood from two different perspectives: the semantic coherence between words and the existence of individual words. The learning curves of models with different scoring strategies are shown in Figure 6 . The model with only word score can be regarded as the situation that the segmentation decisions are made only based on local window information. The comparisons show that such a model gives moderate performance. By contrast, the model with only link score gives a much better performance close to the joint model, which demonstrates that the complete segmentation history, which can not be effectively modeled in previous schemes, possesses huge appliance value for word segmentation.",
"cite_spans": [],
"ref_spans": [
{
"start": 392,
"end": 400,
"text": "Figure 6",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "GCNN.",
"sec_num": null
},
{
"text": "MSR P R F P R F (Zheng et al., 2013) 92.8 92.0 92.4 92.9 93.6 93.3 (Pei et al., 2014) 93.7 93.4 93.5 94.6 94.2 94.4 (Chen et al., 2015a) (Tseng et al., 2005) 95.0 96.4 -- (Zhang and Clark, 2007) 94.5 97.2 -- (Zhao and Kit, 2008b) 95.4 97.6 -- (Sun et al., 2009) 95.2 97.3 -- 95.4 97.4 -- (Zhang et al., 2013) --96.1* 97.4* (Chen et al., 2015a) 94.5 95.4 96.4* 97.6* (Chen et al., 2015b) 94.8 95.6 96.5* 97.4* This work 95.5 96.5 -- Table 6 : Comparison with previous state-of-the-art models. Results with * used external dictionary or corpus.",
"cite_spans": [
{
"start": 16,
"end": 36,
"text": "(Zheng et al., 2013)",
"ref_id": "BIBREF48"
},
{
"start": 67,
"end": 85,
"text": "(Pei et al., 2014)",
"ref_id": "BIBREF20"
},
{
"start": 116,
"end": 136,
"text": "(Chen et al., 2015a)",
"ref_id": "BIBREF2"
},
{
"start": 137,
"end": 157,
"text": "(Tseng et al., 2005)",
"ref_id": "BIBREF32"
},
{
"start": 171,
"end": 194,
"text": "(Zhang and Clark, 2007)",
"ref_id": "BIBREF36"
},
{
"start": 208,
"end": 229,
"text": "(Zhao and Kit, 2008b)",
"ref_id": "BIBREF43"
},
{
"start": 243,
"end": 261,
"text": "(Sun et al., 2009)",
"ref_id": "BIBREF28"
},
{
"start": 288,
"end": 308,
"text": "(Zhang et al., 2013)",
"ref_id": "BIBREF38"
},
{
"start": 323,
"end": 343,
"text": "(Chen et al., 2015a)",
"ref_id": "BIBREF2"
},
{
"start": 366,
"end": 386,
"text": "(Chen et al., 2015b)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 432,
"end": 439,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "PKU",
"sec_num": null
},
{
"text": "We first compare our model with the latest neural network methods as shown in Table 4 . However, (Chen et al., 2015a; Chen et al., 2015b) used an extra preprocess to filter out Chinese idioms according to an external dictionary. 4 Table 4 lists the results (F 1 -scores) with different dictionaries, which show that our models perform better when under the same settings. Table 5 gives comparisons among previous neural network models. In the first block of Table 5 , the character embedding matrix M is randomly initialized. The results show that our proposed novel model outperforms previous neural network 4 In detail, when a dictionary is used, a preprocess is performed before training and test, which scans original text to find out Chinese idioms included in the dictionary and replace them with a unique token. This treatment does not strictly follow the convention of closed-set setting defined by SIGHAN Bakeoff, as no linguistic resources, either dictionary or corpus, other than the training corpus, should be adopted. 5 To make comparisons fair, we re-run their code (https://github.com/dalstonChen) without their unspecified Chinese idiom dictionary. methods.",
"cite_spans": [
{
"start": 97,
"end": 117,
"text": "(Chen et al., 2015a;",
"ref_id": "BIBREF2"
},
{
"start": 118,
"end": 137,
"text": "Chen et al., 2015b)",
"ref_id": "BIBREF3"
},
{
"start": 609,
"end": 610,
"text": "4",
"ref_id": null
},
{
"start": 1031,
"end": 1032,
"text": "5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 78,
"end": 85,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 231,
"end": 238,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 372,
"end": 379,
"text": "Table 5",
"ref_id": "TABREF6"
},
{
"start": 458,
"end": 465,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "PKU",
"sec_num": null
},
{
"text": "Previous works have found that the performance can be improved by pre-training the character embeddings on large unlabeled data. Therefore, we use word2vec (Mikolov et al., 2013 ) toolkit 6 to pre-train the character embeddings on the Chinese Wikipedia corpus and use them for initialization. Table 5 also shows the results with additional pre-trained character embeddings. Again, our model achieves better performance than previous neural network models do. Table 6 compares our models with previous state-of-the-art systems. Recent systems such as (Zhang et al., 2013) , (Chen et al., 2015b) and (Chen et al., 2015a ) rely on both extensive feature engineering and external corpora to boost performance. Such systems are not directly comparable with our models. In the closed-set setting, our models can achieve state-of-the-art performance Max. word length F 1 score Time (Days) 4 96.5 4 5 96.7 5 6 96.8 6 Table 7 : Results on MSR dataset with different maximum decoding word length settings.",
"cite_spans": [
{
"start": 156,
"end": 177,
"text": "(Mikolov et al., 2013",
"ref_id": "BIBREF19"
},
{
"start": 550,
"end": 570,
"text": "(Zhang et al., 2013)",
"ref_id": "BIBREF38"
},
{
"start": 573,
"end": 593,
"text": "(Chen et al., 2015b)",
"ref_id": "BIBREF3"
},
{
"start": 598,
"end": 617,
"text": "(Chen et al., 2015a",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 293,
"end": 300,
"text": "Table 5",
"ref_id": "TABREF6"
},
{
"start": 459,
"end": 466,
"text": "Table 6",
"ref_id": null
},
{
"start": 909,
"end": 916,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "PKU",
"sec_num": null
},
{
"text": "on PKU dataset but a competitive result on MSR dataset, which can attribute to too strict maximum word length setting for consistence as it is well known that MSR corpus has a much longer average word length (Zhao et al., 2010) . Table 7 demonstrates the results on MSR corpus with different maximum decoding word lengths, in which both F 1 scores and training time are given. The results show that the segmentation performance can indeed further be improved by allowing longer words during decoding, though longer training time are also needed. As 6character words are allowed, F 1 score on MSR can be furthermore improved 0.3%.",
"cite_spans": [
{
"start": 208,
"end": 227,
"text": "(Zhao et al., 2010)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [
{
"start": 230,
"end": 237,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "PKU",
"sec_num": null
},
{
"text": "For the running cost, we roughly report the current computation consuming on PKU dataset. 7 It takes about two days to finish 50 training epochs (for results in Figure 6 and the last row of Table 6) only with two cores of an Intel i7-5960X CPU. The requirement for RAM during training is less than 800MB. The trained model can be saved within 4MB on the hard disk.",
"cite_spans": [],
"ref_spans": [
{
"start": 161,
"end": 169,
"text": "Figure 6",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "PKU",
"sec_num": null
},
{
"text": "Neural Network Models. Most modern CWS methods followed (Xue, 2003) treated CWS as a sequence labeling problems (Zhao et al., 2006b) . Recently, researchers have tended to explore neural network based approaches (Collobert et al., 2011) to reduce efforts of feature engineering (Zheng et al., 2013; Qi et al., 2014; Chen et al., 2015a; Chen et al., 2015b) . They modeled CWS as tagging problem as well, scoring tags on individual characters. In those models, tag scores are decided by context information within local windows and the sentence-level score is obtained via context-independently tag transitions. Pei et al. (2014) introduced the tag embedding as input to capture the combinations of context and tag history. However, in previous works, only the tag of previous one character was taken into consideration though theoretically the complete history of 7 Our code is released at https://github.com/jcyk/CWS. actions taken by the segmenter should be considered.",
"cite_spans": [
{
"start": 56,
"end": 67,
"text": "(Xue, 2003)",
"ref_id": "BIBREF34"
},
{
"start": 112,
"end": 132,
"text": "(Zhao et al., 2006b)",
"ref_id": "BIBREF46"
},
{
"start": 212,
"end": 236,
"text": "(Collobert et al., 2011)",
"ref_id": "BIBREF6"
},
{
"start": 278,
"end": 298,
"text": "(Zheng et al., 2013;",
"ref_id": "BIBREF48"
},
{
"start": 299,
"end": 315,
"text": "Qi et al., 2014;",
"ref_id": "BIBREF22"
},
{
"start": 316,
"end": 335,
"text": "Chen et al., 2015a;",
"ref_id": "BIBREF2"
},
{
"start": 336,
"end": 355,
"text": "Chen et al., 2015b)",
"ref_id": "BIBREF3"
},
{
"start": 610,
"end": 627,
"text": "Pei et al. (2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "Alternatives to Sequence Labeling. Besides sequence labeling schemes, Zhang and Clark (2007) proposed a word-based perceptron method. Zhang et al. (2012) used a linear-time incremental model which can also benefits from various kinds of features including word-based features. But both of them rely heavily on massive handcrafted features. Contemporary to this work, some neural models (Zhang et al., 2016a; Liu et al., 2016) also leverage word-level information. Specifically, Liu et al. (2016) use a semi-CRF taking segment-level embeddings as input and Zhang et al. (2016a) use a transition-based framework.",
"cite_spans": [
{
"start": 70,
"end": 92,
"text": "Zhang and Clark (2007)",
"ref_id": "BIBREF36"
},
{
"start": 134,
"end": 153,
"text": "Zhang et al. (2012)",
"ref_id": "BIBREF37"
},
{
"start": 386,
"end": 407,
"text": "(Zhang et al., 2016a;",
"ref_id": "BIBREF39"
},
{
"start": 408,
"end": 425,
"text": "Liu et al., 2016)",
"ref_id": "BIBREF15"
},
{
"start": 478,
"end": 495,
"text": "Liu et al. (2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "Another notable exception is (Ma and Hinrichs, 2015) , which is also an embedding-based model, but models CWS as configuration-action matching. However, again, this method only uses the context information within limited sized windows.",
"cite_spans": [
{
"start": 29,
"end": 52,
"text": "(Ma and Hinrichs, 2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "Other Techniques. The proposed model might furthermore benefit from some techniques in recent state-of-the-art systems, such as semisupervised learning (Zhao and Kit, 2008b; Zhao and Kit, 2008a; Sun and Xu, 2011; Zhao and Kit, 2011; Zeng et al., 2013; Zhang et al., 2013) , incorporating global information (Zhao and Kit, 2007; Zhang et al., 2016b) , and joint models (Qian and Liu, 2012; Li and Zhou, 2012) .",
"cite_spans": [
{
"start": 152,
"end": 173,
"text": "(Zhao and Kit, 2008b;",
"ref_id": "BIBREF43"
},
{
"start": 174,
"end": 194,
"text": "Zhao and Kit, 2008a;",
"ref_id": "BIBREF42"
},
{
"start": 195,
"end": 212,
"text": "Sun and Xu, 2011;",
"ref_id": "BIBREF27"
},
{
"start": 213,
"end": 232,
"text": "Zhao and Kit, 2011;",
"ref_id": "BIBREF44"
},
{
"start": 233,
"end": 251,
"text": "Zeng et al., 2013;",
"ref_id": "BIBREF35"
},
{
"start": 252,
"end": 271,
"text": "Zhang et al., 2013)",
"ref_id": "BIBREF38"
},
{
"start": 307,
"end": 327,
"text": "(Zhao and Kit, 2007;",
"ref_id": "BIBREF41"
},
{
"start": 328,
"end": 348,
"text": "Zhang et al., 2016b)",
"ref_id": "BIBREF40"
},
{
"start": 368,
"end": 388,
"text": "(Qian and Liu, 2012;",
"ref_id": "BIBREF23"
},
{
"start": 389,
"end": 407,
"text": "Li and Zhou, 2012)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "This paper presents a novel neural framework for the task of Chinese word segmentation, which contains three main components: (1) a factory to produce word representation when given its governed characters; (2) a sentence-level likelihood evaluation system for segmented sentence; (3) an efficient and effective algorithm to find the best segmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "The proposed framework makes a latest attempt to formalize word segmentation as a direct structured learning procedure in terms of the recent distributed representation framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Though our system outputs results that are better than the latest neural network segmenters but comparable to all previous state-of-the-art systems, the framework remains a great of potential that can be further investigated and improved in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "http://www.sighan.org/bakeoff2003/score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This 4-character limitation is just for consistence for both datasets. We are aware that it is a too strict setting, especially which makes additional performance loss in a dataset with larger average word length, i.e., MSR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The dictionary used in(Chen et al., 2015a;Chen et al., 2015b) is neither publicly released nor specified the exact source until now. We have to re-run their code using our selected dictionary to make a fair comparison. Our dictionary has been submitted along with this submission.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://code.google.com/p/word2vec/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R\u00e9jean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Janvin",
"suffix": ""
}
],
"year": 2003,
"venue": "The Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic lan- guage model. The Journal of Machine Learning Re- search, 3:1137-1155.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A maximum entropy approach to natural language processing",
"authors": [
{
"first": "L",
"middle": [],
"last": "Adam",
"suffix": ""
},
{
"first": "Vincent J Della",
"middle": [],
"last": "Berger",
"suffix": ""
},
{
"first": "Stephen A Della",
"middle": [],
"last": "Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational linguistics",
"volume": "22",
"issue": "1",
"pages": "39--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam L Berger, Vincent J Della Pietra, and Stephen A Della Pietra. 1996. A maximum entropy ap- proach to natural language processing. Computa- tional linguistics, 22(1):39-71.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Gated recursive neural network for chinese word segmentation",
"authors": [
{
"first": "Xinchi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Chenxi",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1744--1753",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinchi Chen, Xipeng Qiu, Chenxi Zhu, and Xuanjing Huang. 2015a. Gated recursive neural network for chinese word segmentation. In Proceedings of the 53rd Annual Meeting of the Association for Compu- tational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1744-1753.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Long short-term memory neural networks for chinese word segmentation",
"authors": [
{
"first": "Xinchi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Chenxi",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Pengfei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1197--1206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, and Xuanjing Huang. 2015b. Long short-term memory neural networks for chinese word segmen- tation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Process- ing, pages 1197-1206.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing, pages 1724-1734.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Empirical evaluation of gated recurrent neural networks on sequence modeling",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.3555"
]
},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence model- ing. arXiv preprint arXiv:1412.3555.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "The Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Re- search, 12:2493-2537.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Adaptive subgradient methods for online learning and stochastic optimization",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Hazan",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2011,
"venue": "The Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2121--2159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Ma- chine Learning Research, 12:2121-2159.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The second international chinese word segmentation bakeoff",
"authors": [
{
"first": "Thomas",
"middle": [
"Emerson"
],
"last": "",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the fourth SIGHAN workshop on Chinese language Processing",
"volume": "133",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Emerson. 2005. The second international chi- nese word segmentation bakeoff. In Proceedings of the fourth SIGHAN workshop on Chinese language Processing, volume 133.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Which is essential for chinese word segmentation: Character versus word",
"authors": [
{
"first": "Chang-Ning",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2006,
"venue": "The 20th Pacific Asia Conference on Language, Information and Computation",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chang-Ning Huang and Hai Zhao. 2006. Which is essential for chinese word segmentation: Character versus word. In The 20th Pacific Asia Conference on Language, Information and Computation, pages 1-12.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Character-aware neural language models",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "Alexander M",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.06615"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim, Yacine Jernite, David Sontag, and Alexan- der M Rush. 2015. Character-aware neural lan- guage models. arXiv preprint arXiv:1508.06615.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "An empirical analysis of optimization for max-margin nlp",
"authors": [
{
"first": "Jonathan",
"middle": [
"K"
],
"last": "Kummerfeld",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "273--279",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan K. Kummerfeld, Taylor Berg-Kirkpatrick, and Dan Klein. 2015. An empirical analysis of opti- mization for max-margin nlp. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 273-279.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando Cn",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Eighteenth Interntional Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data. In Proceedings of the Eighteenth In- terntional Conference on Machine Learning.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Unified dependency parsing of chinese morphological and syntactic structures",
"authors": [
{
"first": "Zhongguo",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "1445--1454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongguo Li and Guodong Zhou. 2012. Unified de- pendency parsing of chinese morphological and syn- tactic structures. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Lan- guage Processing and Computational Natural Lan- guage Learning, pages 1445-1454.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Exploring segment representations for neural segmentation models",
"authors": [
{
"first": "Yijia",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Jiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1604.05499"
]
},
"num": null,
"urls": [],
"raw_text": "Yijia Liu, Wanxiang Che, Jiang Guo, Bing Qin, and Ting Liu. 2016. Exploring segment representations for neural segmentation models. arXiv preprint arXiv:1604.05499.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A maximum entropy approach to chinese word segmentation",
"authors": [
{
"first": "Jin",
"middle": [
"Kiat"
],
"last": "Low",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Wenyuan",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing",
"volume": "1612164",
"issue": "",
"pages": "448--455",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin Kiat Low, Hwee Tou Ng, and Wenyuan Guo. 2005. A maximum entropy approach to chinese word seg- mentation. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, volume 1612164, pages 448-455.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Accurate linear-time chinese word segmentation via embedding matching",
"authors": [
{
"first": "Jianqiang",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Erhard",
"middle": [],
"last": "Hinrichs",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1733--1743",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianqiang Ma and Erhard Hinrichs. 2015. Accurate linear-time chinese word segmentation via embed- ding matching. In Proceedings of the 53rd Annual Meeting of the Association for Computational Lin- guistics and the 7th International Joint Conference on Natural Language Processing, pages 1733-1743.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafi\u00e1t",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Burget",
"suffix": ""
}
],
"year": 2010,
"venue": "11th Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "1045--1048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Martin Karafi\u00e1t, Lukas Burget, Jan Cernock\u1ef3, and Sanjeev Khudanpur. 2010. Re- current neural network based language model. In 11th Annual Conference of the International Speech Communication Association, pages 1045-1048.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Maxmargin tensor neural network for chinese word segmentation",
"authors": [
{
"first": "Wenzhe",
"middle": [],
"last": "Pei",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "293--303",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenzhe Pei, Tao Ge, and Baobao Chang. 2014. Max- margin tensor neural network for chinese word seg- mentation. In Proceedings of the 52nd Annual Meet- ing of the Association for Computational Linguis- tics, pages 293-303.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Chinese segmentation and new word detection using conditional random fields",
"authors": [
{
"first": "Fuchun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Fangfang",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th international conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese segmentation and new word detec- tion using conditional random fields. In Proceed- ings of the 20th international conference on Compu- tational Linguistics, page 562.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Deep learning for character-based information extraction",
"authors": [
{
"first": "Yanjun",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Sujatha",
"middle": [
"G"
],
"last": "Das",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Information Retrieval",
"volume": "",
"issue": "",
"pages": "668--674",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanjun Qi, Sujatha G Das, Ronan Collobert, and Jason Weston. 2014. Deep learning for character-based information extraction. In Advances in Information Retrieval, pages 668-674.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Joint chinese word segmentation, pos tagging and parsing",
"authors": [
{
"first": "Xian",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "501--511",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xian Qian and Yang Liu. 2012. Joint chinese word segmentation, pos tagging and parsing. In Pro- ceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning, pages 501- 511.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "(approximate) subgradient methods for structured prediction",
"authors": [
{
"first": "Nathan",
"middle": [
"D"
],
"last": "Ratliff",
"suffix": ""
},
{
"first": "J",
"middle": [
"Andrew"
],
"last": "Bagnell",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Zinkevich",
"suffix": ""
}
],
"year": 2007,
"venue": "International Conference on Artificial Intelligence and Statistics",
"volume": "",
"issue": "",
"pages": "380--387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathan D Ratliff, J Andrew Bagnell, and Martin Zinke- vich. 2007. (approximate) subgradient methods for structured prediction. In International Conference on Artificial Intelligence and Statistics, pages 380- 387.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Parsing with compositional vector grammars",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y."
],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "455--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, John Bauer, Christopher D. Manning, and Ng Andrew Y. 2013. Parsing with composi- tional vector grammars. In Proceedings of the 51st Annual Meeting of the Association for Computa- tional Linguistics, pages 455-465.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Dropout: A simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "The Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Enhancing chinese word segmentation using unlabeled data",
"authors": [
{
"first": "Weiwei",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jia",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "970--979",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weiwei Sun and Jia Xu. 2011. Enhancing chinese word segmentation using unlabeled data. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing, pages 970-979.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A discriminative latent variable chinese segmenter with hybrid word/character information",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yaozhong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Takuya",
"middle": [],
"last": "Matsuzaki",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "56--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu Sun, Yaozhong Zhang, Takuya Matsuzaki, Yoshi- masa Tsuruoka, and Jun'ichi Tsujii. 2009. A dis- criminative latent variable chinese segmenter with hybrid word/character information. In Proceedings of Human Language Technologies: The 2009 An- nual Conference of the North American Chapter of the Association for Computational Linguistics, pages 56-64.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Fast online training with frequency-adaptive learning rates for chinese word segmentation and new word detection",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "253--262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu Sun, Houfeng Wang, and Wenjie Li. 2012. Fast on- line training with frequency-adaptive learning rates for chinese word segmentation and new word de- tection. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguis- tics, pages 253-262.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Lstm neural networks for language modeling",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Sundermeyer",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Schl\u00fcter",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2012,
"venue": "13th Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Sundermeyer, Ralf Schl\u00fcter, and Hermann Ney. 2012. Lstm neural networks for language model- ing. In 13th Annual Conference of the International Speech Communication Association.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Learning structured prediction models: A large margin approach",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "Vassil",
"middle": [],
"last": "Chatalbashev",
"suffix": ""
},
{
"first": "Daphne",
"middle": [],
"last": "Koller",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 22nd international conference on Machine learning",
"volume": "",
"issue": "",
"pages": "896--903",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ben Taskar, Vassil Chatalbashev, Daphne Koller, and Carlos Guestrin. 2005. Learning structured predic- tion models: A large margin approach. In Proceed- ings of the 22nd international conference on Ma- chine learning, pages 896-903.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A conditional random field word segmenter for sighan bakeoff",
"authors": [
{
"first": "Huihsin",
"middle": [],
"last": "Tseng",
"suffix": ""
},
{
"first": "Pichuan",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Galen",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the fourth SIGHAN workshop on Chinese language Processing",
"volume": "171",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A condi- tional random field word segmenter for sighan bake- off 2005. In Proceedings of the fourth SIGHAN workshop on Chinese language Processing, volume 171.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Learning distributed word representations for bidirectional lstm recurrent neural network",
"authors": [
{
"first": "Peilu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Frank",
"middle": [
"K"
],
"last": "Soong",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peilu Wang, Yao Qian, Hai Zhao, Frank K. Soong, Lei He, and Ke Wu. 2016. Learning distributed word representations for bidirectional lstm recurrent neu- ral network. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Chinese word segmentation as character tagging",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics and Chinese Language Processing",
"volume": "8",
"issue": "",
"pages": "29--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue. 2003. Chinese word segmentation as character tagging. Computational Linguistics and Chinese Language Processing, 8(1):29-48.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Graph-based semi-supervised model for joint chinese word segmentation and partof-speech tagging",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Derek",
"middle": [
"F"
],
"last": "Wong",
"suffix": ""
},
{
"first": "Lidia",
"middle": [
"S"
],
"last": "Chao",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trancoso",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "770--779",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodong Zeng, Derek F. Wong, Lidia S. Chao, and Is- abel Trancoso. 2013. Graph-based semi-supervised model for joint chinese word segmentation and part- of-speech tagging. In Proceedings of the 51st An- nual Meeting of the Association for Computational Linguistics, pages 770-779.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Chinese segmentation with a word-based perceptron algorithm",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "840--847",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Stephen Clark. 2007. Chinese segmen- tation with a word-based perceptron algorithm. In Proceedings of the 45th Annual Meeting of the As- sociation of Computational Linguistics, pages 840- 847.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Word segmentation on chinese mirco-blog data with a linear-time incremental model",
"authors": [
{
"first": "Kaixu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Changle",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2012,
"venue": "Second CIPS-SIGHAN Joint Conference on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "41--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaixu Zhang, Maosong Sun, and Changle Zhou. 2012. Word segmentation on chinese mirco-blog data with a linear-time incremental model. In Second CIPS- SIGHAN Joint Conference on Chinese Language Processing, pages 41-46.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Exploring representations from unlabeled data with co-training for Chinese word segmentation",
"authors": [
{
"first": "Longkai",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Mairgup",
"middle": [],
"last": "Mansur",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "311--321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Longkai Zhang, Houfeng Wang, Xu Sun, and Mairgup Mansur. 2013. Exploring representations from un- labeled data with co-training for Chinese word seg- mentation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Pro- cessing, pages 311-321.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Transition-based neural word segmentation",
"authors": [
{
"first": "Meishan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Guohong",
"middle": [],
"last": "Fu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meishan Zhang, Yue Zhang, and Guohong Fu. 2016a. Transition-based neural word segmentation. In Pro- ceedings of the 54nd Annual Meeting of the Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Probabilistic graph-based dependency parsing with convolutional neural network",
"authors": [
{
"first": "Zhiong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Lianhui",
"middle": [],
"last": "Qin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiong Zhang, Hai Zhao, and Lianhui Qin. 2016b. Probabilistic graph-based dependency parsing with convolutional neural network. In Proceedings of the 54nd Annual Meeting of the Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Incorporating global information into supervised learning for chinese word segmentation",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 10th Conference of the Pacific Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "66--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao and Chunyu Kit. 2007. Incorporating global information into supervised learning for chi- nese word segmentation. In Proceedings of the 10th Conference of the Pacific Association for Computa- tional Linguistics, pages 66-74.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Exploiting unlabeled text with different unsupervised segmentation criteria for chinese word segmentation",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
}
],
"year": 2008,
"venue": "Research in Computing Science",
"volume": "33",
"issue": "",
"pages": "93--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao and Chunyu Kit. 2008a. Exploiting unla- beled text with different unsupervised segmentation criteria for chinese word segmentation. Research in Computing Science, 33:93-104.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Unsupervised segmentation helps supervised learning of character tagging for word segmentation and named entity recognition",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Third International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "106--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao and Chunyu Kit. 2008b. Unsupervised segmentation helps supervised learning of charac- ter tagging for word segmentation and named entity recognition. In Proceedings of the Third Interna- tional Joint Conference on Natural Language Pro- cessing, pages 106-111.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Integrating unsupervised and supervised word segmentation: The role of goodness measures",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
}
],
"year": 2011,
"venue": "Information Sciences",
"volume": "181",
"issue": "1",
"pages": "163--183",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao and Chunyu Kit. 2011. Integrating unsu- pervised and supervised word segmentation: The role of goodness measures. Information Sciences, 181(1):163-183.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "An improved chinese word segmentation system with conditional random field",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chang-Ning",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing",
"volume": "1082117",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao, Chang-Ning Huang, and Mu Li. 2006a. An improved chinese word segmentation system with conditional random field. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Process- ing, volume 1082117.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Effective tag set selection in chinese word segmentation via conditional random field modeling",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chang-Ning",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Bao-Liang",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 9th Pacific Association for Computational Linguistics",
"volume": "20",
"issue": "",
"pages": "87--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao, Chang-Ning Huang, Mu Li, and Bao-Liang Lu. 2006b. Effective tag set selection in chinese word segmentation via conditional random field modeling. In Proceedings of the 9th Pacific Asso- ciation for Computational Linguistics, volume 20, pages 87-94.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "A unified character-based tagging framework for chinese word segmentation",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chang-Ning",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Bao-Liang",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2010,
"venue": "ACM Transactions on Asian Language Information Processing",
"volume": "9",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao, Chang-Ning Huang, Mu Li, and Bao-Liang Lu. 2010. A unified character-based tagging frame- work for chinese word segmentation. ACM Trans- actions on Asian Language Information Processing, 9(2):5.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Deep learning for Chinese word segmentation and POS tagging",
"authors": [
{
"first": "Xiaoqing",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Hanyang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Tianyu",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "647--657",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for Chinese word segmentation and POS tagging. In Proceedings of the 2013 Con- ference on Empirical Methods in Natural Language Processing, pages 647-657.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "Our framework.",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "Link scores (dashed lines).",
"num": null
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"text": "Input: model parameters \u03b8 beam size k maximum word length w input character sequence c[1 : n] Output: Approx. k best segmentations 1:\u03c0[0] \u2190 {(score = 0, h = h 0 , c = c 0 )} 2: for i = 1 to n do j = max(1, i \u2212 w) to i do 6: w = GCNN-Procedure(c[j : i]) 7:X.add((index = j \u2212 1, word = w)) y.append(x) | y \u2208 \u03c0[x.index] and x",
"num": null
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"text": "Dropout rate on input layer p = 0.2 Maximum word length w = 4",
"num": null
},
"FIGREF4": {
"uris": null,
"type_str": "figure",
"text": "Performances of different score strategies on PKU dataset.",
"num": null
},
"TABREF0": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Feature windows of different models. i(j) indexes the current character(word) that is under scoring.",
"html": null
},
"TABREF2": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Hyper-parameter settings.",
"html": null
},
"TABREF3": {
"content": "<table><tr><td/><td>PKU</td><td>MSR</td></tr><tr><td>+Dictionary</td><td colspan=\"2\">ours theirs ours theirs</td></tr></table>",
"num": null,
"type_str": "table",
"text": "Performances of different models on PKU dataset.",
"html": null
},
"TABREF4": {
"content": "<table><tr><td>: Comparison of using different Chinese</td></tr><tr><td>idiom dictionaries. 3</td></tr></table>",
"num": null,
"type_str": "table",
"text": "",
"html": null
},
"TABREF6": {
"content": "<table><tr><td colspan=\"2\">: Comparison with previous neural network models. Results with * are from our runs on their</td></tr><tr><td>released implementations. 5</td><td/></tr><tr><td>Models</td><td>PKU MSR PKU MSR</td></tr></table>",
"num": null,
"type_str": "table",
"text": "",
"html": null
}
}
}
}