| { |
| "paper_id": "L16-1034", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T12:08:34.071022Z" |
| }, |
| "title": "LibN3L: A Lightweight Package for Neural NLP", |
| "authors": [ |
| { |
| "first": "Meishan", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Heilongjiang University", |
| "location": { |
| "settlement": "Harbin", |
| "country": "China" |
| } |
| }, |
| "email": "meishanzhang@sutd.edu.sg" |
| }, |
| { |
| "first": "Jie", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Singapore University of Technology and Design", |
| "location": {} |
| }, |
| "email": "jieyang@mymail.sutd.edu.sg" |
| }, |
| { |
| "first": "Zhiyang", |
| "middle": [], |
| "last": "Teng", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Singapore University of Technology and Design", |
| "location": {} |
| }, |
| "email": "zhiyangteng@mymail.sutd.edu.sg" |
| }, |
| { |
| "first": "Yue", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Singapore University of Technology and Design", |
| "location": {} |
| }, |
| "email": "yuezhang@sutd.edu.sg" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We present a lightweight machine learning tool for NLP research. The package supports operations on both discrete and dense vectors, facilitating the implementation of linear models as well as neural models. It provides several basic layers, which mainly aim at single-layer linear and non-linear transformations. Using these layers, one can conveniently implement linear models and simple neural models. In addition, the package integrates several complex layers composed from the basic layers, such as RNN, attention pooling, LSTM and gated RNN. These complex layers can be used to implement deep neural models directly.", |
| "pdf_parse": { |
| "paper_id": "L16-1034", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We present a lightweight machine learning tool for NLP research. The package supports operations on both discrete and dense vectors, facilitating the implementation of linear models as well as neural models. It provides several basic layers, which mainly aim at single-layer linear and non-linear transformations. Using these layers, one can conveniently implement linear models and simple neural models. In addition, the package integrates several complex layers composed from the basic layers, such as RNN, attention pooling, LSTM and gated RNN. These complex layers can be used to implement deep neural models directly.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Deep learning methods have received increasing research attention in natural language processing (NLP), with neural models being built for classification (Kalchbrenner et al., 2014), sequence labeling (Collobert et al., 2011), parsing (Socher et al., 2013; Dyer et al., 2015; Zhou et al., 2015; Weiss et al., 2015), machine translation (Cho et al., 2014), fine-grained sentiment analysis and other tasks. This surge of interest gives rise to a demand for software libraries that facilitate research by allowing fast prototyping and experimentation. For traditional methods such as conditional random fields (CRF) (Lafferty et al., 2001) and SVM (Vapnik, 1995), there have been various software toolkits, implemented in different programming languages, including Java, Python and C++. These toolkits offer a large degree of variety for building NLP models by using or adapting the machine learning algorithms. For deep learning, a number of software tools have been developed, including Theano (Bergstra et al., 2010), Caffe (Jia et al., 2014), CNN and Torch. These tools are based on different programming languages and design concepts. On the other hand, most of these libraries are not designed specifically for NLP tasks. In addition, many existing libraries define a complex class hierarchy, making it difficult for some users to use or adapt the modules. We present another deep learning toolkit in C++, designed specifically for NLP applications. The main objective is to make it extremely lightweight, so as to minimize the effort in building a neural model. We take a layered approach, offering high-level models for classification and sequence labeling, such as the neural CRF (Do et al., 2010), recurrent neural networks (RNN) (Graves, 2012) and long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997), which are frequently used in NLP. On the other hand, we minimize encapsulation, implementing neural structures strictly according to their formal definitions, so as to make it easy to work directly with neural layers and to facilitate extensions to existing network structures. Our design is centered on the structure of a neural layer, which performs the standard feed-forward computation and back-propagation. We provide a wide range of built-in neural activation functions, and common operations such as concatenation, pooling, window functions and embedding lookup, which are needed by most NLP tasks. We support flexible objective functions and optimization methods, such as max-margin and maximum-likelihood criteria and AdaGrad (Duchi et al., 2011), as well as verification functions such as gradient checking. One unique feature of our toolkit is the support of both dense continuous features and sparse indicator features in neural layers, making it convenient also to build traditional discrete models such as the perceptron, logistic regression and the CRF, and to combine discrete and continuous features (Ma et al., 2014; Durrett and Klein, 2015). Taking word segmentation, POS tagging and named entity recognition (NER) as typical examples, we show how state-of-the-art discrete, neural and hybrid models can be built using our toolkit. For example, we show how a bidirectional LSTM model can be built for POS tagging in only 23 lines of code (12 for inference and 11 for back-propagation), which gives highly competitive accuracies on standard benchmarks.", |
| "cite_spans": [ |
| { |
| "start": 154, |
| "end": 181, |
| "text": "(Kalchbrenner et al., 2014)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 202, |
| "end": 226, |
| "text": "(Collobert et al., 2011)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 237, |
| "end": 258, |
| "text": "(Socher et al., 2013;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 259, |
| "end": 277, |
| "text": "Dyer et al., 2015;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 278, |
| "end": 296, |
| "text": "Zhou et al., 2015;", |
| "ref_id": null |
| }, |
| { |
| "start": 297, |
| "end": 316, |
| "text": "Weiss et al., 2015)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 339, |
| "end": 357, |
| "text": "(Cho et al., 2014)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 639, |
| "end": 661, |
| "text": "(Lafferty et al., 2001", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 672, |
| "end": 686, |
| "text": "(Vapnik, 1995)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 1022, |
| "end": 1045, |
| "text": "(Bergstra et al., 2010)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 1056, |
| "end": 1074, |
| "text": "(Jia et al., 2014)", |
| "ref_id": null |
| }, |
| { |
| "start": 1722, |
| "end": 1739, |
| "text": "(Do et al., 2010)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 1774, |
| "end": 1788, |
| "text": "(Graves, 2012)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 1824, |
| "end": 1858, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 2587, |
| "end": 2607, |
| "text": "(Duchi et al., 2011)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 2957, |
| "end": 2974, |
| "text": "(Ma et al., 2014;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 2975, |
| "end": 2999, |
| "text": "Durrett and Klein, 2015;", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "As shown in Table 1, we provide several basic classes that are widely used in neural networks and discrete machine learning algorithms, including atomic layers, pooling functions, loss functions and others. All classes have three interfaces: one for computing the forward outputs, one for computing the backward losses, and one for updating the parameters. Neural layers: the neural layers are single atomic layers used in neural networks, which support one, two or three input vectors. In Table 1, f can be any activation function.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 9, |
| "end": 16, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 480, |
| "end": 487, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Base Layers", |
| "sec_num": "2.1." |
| }, |
| { |
| "text": "uni-layer: y = f(Wx + b); bi-layer: y = f(W_1x_1 + W_2x_2 + b); tri-layer: y = f(W_1x_1 + W_2x_2 + W_3x_3 + b); tensor-layer: y = f(x_1^T T x_2 + b). Discrete uni-layer: y = f(Wx). Pooling: y = \u2211_{i=1}^{n} \u03b1_i \u2299 x_i, where:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Base Layers", |
| "sec_num": "2.1." |
| }, |
| { |
| "text": "max pooling: \u03b1_{i,j} = 1 when i = arg max_s x_{s,j}, otherwise 0; min pooling: \u03b1_{i,j} = 1 when i = arg min_s x_{s,j}, otherwise 0; average pooling: \u03b1_{i,j} = 1/n; sum pooling: \u03b1_{i,j} = 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Base Layers", |
| "sec_num": "2.1." |
| }, |
| { |
| "text": "Classifier: max entropy (MAXENT): o, y \u2192 \u2202o:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Loss Function", |
| "sec_num": null |
| }, |
| { |
| "text": "loss(o) = \u2212y \u00b7 log softmax(o); \u2202o = d loss(o)/do. Structural learning: CRF, max likelihood (CRFML): o_1^n, y_1^n \u2192 \u2202o_1^n: loss(o_1^n) = \u2212log p(y_1^n | o_1^n)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Loss Function", |
| "sec_num": null |
| }, |
| { |
| "text": ", where p(\u00b7) can be computed via the forward-backward algorithm (Sutton and Mccallum, 2007);", |
| "cite_spans": [ |
| { |
| "start": 64, |
| "end": 91, |
| "text": "(Sutton and Mccallum, 2007)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Loss Function", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2202o_1^n = d loss(o_1^n)/do_1^n. CRF, max margin (CRFMM): o_1^n, y_1^n \u2192 \u2202o_1^n: loss(o_1^n) = max_{\u0177_1^n}(s(\u0177_1^n) + \u03b4(\u0177_1^n, y_1^n)) \u2212 s(y_1^n), where \u0177_1^n", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Loss Function", |
| "sec_num": null |
| }, |
| { |
| "text": "is an answer sequence with one label for each position;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Loss Function", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2202o_1^n = d loss(o_1^n)/do_1^n", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Loss Function", |
| "sec_num": null |
| }, |
| { |
| "text": "Others. LookupTable: E, specifying vector representations for a vocabulary. Concatenation:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Loss Function", |
| "sec_num": null |
| }, |
| { |
| "text": "y = x_1 \u2295 x_2 \u2295 \u22ef \u2295 x_M. Dropout: y = m \u2299 x,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Loss Function", |
| "sec_num": null |
| }, |
| { |
| "text": "where m is a mask vector. Window function:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Loss Function", |
| "sec_num": null |
| }, |
| { |
| "text": "x_1^n \u2192 y_1^n, where y_i = x_{i\u2212c} \u2295 \u22ef \u2295 x_i \u2295 \u22ef \u2295 x_{i+c}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Loss Function", |
| "sec_num": null |
| }, |
| { |
| "text": "RNN: x_1^n \u2192 y_1^n: y_j = f(Wx_j + Uy_{j\u00b11} + b). GRNN: x_1^n \u2192 y_1^n, where y_1^n is computed by: r_j = \u03c3(W_1x_j + U_1y_{j\u00b11} + b_1); \u1ef9_j = f(W_2x_j + U_2(r_j \u2299 y_{j\u00b11}) + b_2); z_j = \u03c3(W_3x_j + U_3y_{j\u00b11} + b_3); y_j = (1 \u2212 z_j) \u2299 y_{j\u00b11} + z_j \u2299 \u1ef9_j. LSTM: x_1^n \u2192 y_1^n, where y_1^n is computed by: i_j = \u03c3(W_1x_j + U_1y_{j\u00b11} + V_1c_{j\u00b11} + b_1); f_j = \u03c3(W_2x_j + U_2y_{j\u00b11} + V_2c_{j\u00b11} + b_2); c\u0303_j = f(W_3x_j + U_3y_{j\u00b11} + b_3); c_j = i_j \u2299 c\u0303_j + f_j \u2299 c_{j\u00b11}; o_j = \u03c3(W_4x_j + U_4y_{j\u00b11} + V_4c_j + b_4); y_j = o_j \u2299 f(c_j).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Loss Function", |
| "sec_num": null |
| }, |
| { |
| "text": "Attention model: x_1^n, a_1^n \u2192 y, where y is computed by:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Loss Function", |
| "sec_num": null |
| }, |
| { |
| "text": "h_j = f(Wx_j + Ua_j + b); \u03b1_j = exp(h_j); z = \u2211_{j=1}^{n} \u03b1_j; y = \u2211_{j=1}^{n} \u03b1_j \u2299 x_j / z.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Loss Function", |
| "sec_num": null |
| }, |
| { |
| "text": "Using the basic classes, one can build advanced neural network structures from the literature. In this package, we implement four such structures: a simple recurrent neural network (RNN), a gated recurrent neural network (GRNN), a long short-term memory network (LSTM) and an attention model. Their definitions are given in Table 2.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 346, |
| "end": 353, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Network structures", |
| "sec_num": "2.2." |
| }, |
| { |
| "text": "We show how to apply the package to building neural network models for Chinese word segmentation, POS tagging and NER. All three tasks are formalized as sequence labeling problems. The general framework is shown in Figure 1: we collect input vectors (t_1^n) at the bottom for each word, and then add a windowlized layer to exploit surrounding information, obtaining x_1^n. Then, we apply two LSTM neural networks, one being computed from left", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 215, |
| "end": 223, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3." |
| }, |
| { |
| "text": "x_1^n = windowlized(t_1^n) [line 7]; \u2202t_1^n = windowlized_backward(\u2202x_1^n) [17]; lh_1^n = llstm.forward(x_1^n) [8]; rh_1^n = rlstm.forward(x_1^n) [9]; \u2202x_1^n += llstm.backward(\u2202lh_1^n) [16]; \u2202x_1^n += rlstm.backward(\u2202rh_1^n) [15]; non-linear combination: h_1^n = nlcomb.forward(lh_1^n, rh_1^n) [10]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3." |
| }, |
| { |
| "text": "non-linear combination:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3." |
| }, |
| { |
| "text": "(\u2202lh_1^n, \u2202rh_1^n) = nlcomb.backward(\u2202h_1^n) [line 14]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3." |
| }, |
| { |
| "text": "output layer (linear unigram):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3." |
| }, |
| { |
| "text": "o_1^n = olayer.forward(h_1^n) [line 11]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3." |
| }, |
| { |
| "text": "output layer (linear unigram):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3." |
| }, |
| { |
| "text": "\u2202h_1^n = olayer.backward(\u2202o_1^n) [line 13]. Input vectors: t_1^n", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Losses of input vectors: \u2202t_1^n. Loss layer: \u2202o_1^n = crflayer.backward(o_1^n, y_1^n), where y_1^n denotes the gold answers [line 12]. Figure 1: Neural framework for word segmentation, POS tagging and named entity recognition. to right (lh_1^n) and the other being computed from right to left (rh_1^n). These two kinds of features are combined using a non-linear combination layer, giving h_1^n. Finally, we compute the output vectors o_1^n, which score the candidate labels at each position. During training, we run standard back-propagation. We choose the CRF max-margin loss to compute the output losses \u2202o_1^n. Then, step by step, we compute the losses of h_1^n, lh_1^n, rh_1^n, x_1^n and t_1^n, aggregating the losses for each parameter at each layer. Finally, we use AdaGrad to update the parameters of all layers.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 123, |
| "end": 131, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Attention model: vc_i = attention.forward(h_1^{m_i}, Ew_i) [line 5]; (\u2202h_1^{m_i}, \u2202Ew_i) += attention.backward(\u2202vc_i) [19]; non-linear combination: h_1^{m_i} = nlcomb.forward(x_1^{m_i}) [4]; \u2202x_1^{m_i} = nlcomb.backward(\u2202h_1^{m_i}) [20]; x_1^{m_i} = windowlized(Ec_1^{m_i}) [3]; \u2202Ec_1^{m_i} = windowlized_backward(\u2202x_1^{m_i}) [21]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Between segmentation, POS tagging and NER, the differences lie mainly in the input vectors t_1^n. For Chinese word segmentation, we use the concatenation of the character unigram embedding Ec_i and the bigram embedding Ec_{i\u22121}c_i at each position as the input vector t_i. The character unigram and bigram embeddings are pretrained separately. For POS tagging, t_i consists of the embedding Ew_i of the word w_i and its vector representation vc_i derived from its character sequence c_1^{m_i} (m_i is the length of word w_i). vc_i is constructed according to the neural network structure shown in Figure 2. For NER, t_i consists of three parts: Ew_i, vc_i and the word's POS tag embedding Ep_i. The deep neural POS tagging model consists of only 23 lines of code, as marked by the red superscripts in Table 3, Figure 2 and Figure 1.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 583, |
| "end": 591, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 796, |
| "end": 803, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 806, |
| "end": 814, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 819, |
| "end": 827, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Besides the neural models above, we also implement discrete models for the three tasks. The discrete features are extracted according to Liu et al. (2014), Toutanova et al. (2003) and Che et al. (2013) for word segmentation, POS tagging and NER, respectively. We simply apply the sparse atomic layer and use the same CRF max-margin loss for training the model parameters. Finally, we combine the discrete and neural models by aggregating their output vectors.", |
| "cite_spans": [ |
| { |
| "start": 137, |
| "end": 154, |
| "text": "Liu et al. (2014)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 157, |
| "end": 180, |
| "text": "Toutanova et al. (2003)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 185, |
| "end": 202, |
| "text": "Che et al. (2013)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3." |
| }, |
| { |
| "text": "We conduct experiments on several datasets. For Chinese word segmentation, we use the PKU, MSR and CTB60 datasets, where the training and test corpora of PKU and MSR can be downloaded from the Bakeoff 2005 website 5 . For POS tagging, we perform experiments on both English and Chinese datasets. For English, we follow Toutanova et al. (2003), using WSJ sections 0-18 as the training data, sections 19-21 as the development data and sections 22-24 as the test data. For Chinese, we use the same dataset as Li et al. (2015). For NER, we follow Che et al. (2013)", |
| "cite_spans": [ |
| { |
| "start": 316, |
| "end": 339, |
| "text": "Toutanova et al. (2003)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 517, |
| "end": 533, |
| "text": "Li et al. (2015)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 555, |
| "end": 572, |
| "text": "Che et al. (2013)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results.", |
| "sec_num": null |
| }, |
| { |
| "text": "to construct the datasets. Table 3 lists the operations for each task. Forward, word segmentation: Ec_i = uniCharE.lookup(c_i); Ec_ic_{i\u22121} = biCharE.lookup(c_ic_{i\u22121}); t_i = concat(Ec_i, Ec_ic_{i\u22121}). Forward, POS tagging: Ew_i = wordE.lookup(w_i) [line 1]; vc_i = vector(c_1^{m_i}); t_i = concat(Ew_i, vc_i) [6]. Forward, NER: Ew_i = wordE.lookup(w_i); Ep_i = posE.lookup(p_i); vc_i = vector(c_1^{m_i}); t_i = concat(Ew_i, Ep_i, vc_i). Backward, word segmentation: (\u2202Ec_i, \u2202Ec_ic_{i\u22121}) = unconcat(\u2202t_i); uniCharE.backloss(c_i, \u2202Ec_i); biCharE.backloss(c_ic_{i\u22121}, \u2202Ec_ic_{i\u22121}). Backward, POS tagging: (\u2202Ew_i, \u2202vc_i) = unconcat(\u2202t_i) [18]; \u2202c_1^{m_i} = vector_backward(\u2202vc_i). Backward, NER: (\u2202Ew_i, \u2202Ep_i, \u2202vc_i) = unconcat(\u2202t_i); \u2202c_1^{m_i} = vector_backward(\u2202vc_i).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results.", |
| "sec_num": null |
| }, |
| { |
| "text": "wordE.backloss(w_i, \u2202Ew_i) [line 23]; posE.backloss(p_i, \u2202Ep_i); wordE.backloss(w_i, \u2202Ew_i). We split OntoNotes 4.0 to get the English and Chinese datasets. Our experimental results are shown in Table 4. As can be seen from the table, our neural models give competitive results compared with the state-of-the-art results on each task, which are Zhang and Clark (2007) for Chinese word segmentation, Toutanova et al. (2003) for English POS tagging, Li et al. (2015) for Chinese POS tagging and Che et al. (2013) for English and Chinese NER.", |
| "cite_spans": [ |
| { |
| "start": 321, |
| "end": 343, |
| "text": "Zhang and Clark (2007)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 375, |
| "end": 398, |
| "text": "Toutanova et al. (2003)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 424, |
| "end": 440, |
| "text": "Li et al. (2015)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 469, |
| "end": 486, |
| "text": "Che et al. (2013)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 176, |
| "end": 183, |
| "text": "Table 4", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results.", |
| "sec_num": null |
| }, |
| { |
| "text": "Our code and the examples in this paper are available under GPL at https://github.com/SUTDNLP/, including the repositories LibN3L, NNSegmentation, NNPOSTagging and NNNamedEntity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Code", |
| "sec_num": "4." |
| }, |
| { |
| "text": "http://www.sighan.org/bakeoff2005/. We use 10% of the training corpus as the development corpus. The training, development and test sections of CTB60 are the same as in (Zhang et al., 2014).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank the anonymous reviewers for their constructive comments, which helped to improve the paper. This work is supported by the Singapore Ministry of Education (MOE) AcRF Tier 2 grant T2MOE201301, SRG ISTD 2012 038 from Singapore University of Technology and Design, and National Natural Science Foundation of China (NSFC) under grant 61170148.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": "5." |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Theano: a cpu and gpu math expression compiler", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Bergstra", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Breuleux", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Bastien", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Lamblin", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Pascanu", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Desjardins", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Turian", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Warde-Farley", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the Python for scientific computing conference (SciPy)", |
| "volume": "4", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pas- canu, R., Desjardins, G., Turian, J., Warde-Farley, D., and Bengio, Y. (2010). Theano: a cpu and gpu math ex- pression compiler. In Proceedings of the Python for sci- entific computing conference (SciPy), volume 4, page 3. Austin, TX.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Named entity recognition with bilingual constraints", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Che", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "52--62", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Che, W., Wang, M., Manning, C. D., and Liu, T. (2013). Named entity recognition with bilingual constraints. In Proceedings of the 2013 Conference of the North Amer- ican Chapter of the Association for Computational Lin- guistics: Human Language Technologies, pages 52-62.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "On the properties of neural machine translation: Encoder-decoder approaches. Syntax, Semantics and Structure in Statistical Translation", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Van Merri\u00ebnboer", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cho, K., van Merri\u00ebnboer, B., Bahdanau, D., and Bengio, Y. (2014). On the properties of neural machine trans- lation: Encoder-decoder approaches. Syntax, Semantics and Structure in Statistical Translation, page 103.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Natural language processing (almost) from scratch", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Collobert", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Bottou", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Karlen", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Kavukcuoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Kuksa", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "12", |
| "issue": "", |
| "pages": "2493--2537", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., and Kuksa, P. (2011). Natural lan- guage processing (almost) from scratch. Journal of Ma- chine Learning Research, 12:2493-2537.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Neural conditional random fields", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Do", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Arti", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "International Conference on Artificial Intelligence and Statistics", |
| "volume": "", |
| "issue": "", |
| "pages": "177--184", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Do, T., Arti, T., et al. (2010). Neural conditional random fields. In International Conference on Artificial Intelli- gence and Statistics, pages 177-184.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Adaptive subgradient methods for online learning and stochastic optimization", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Duchi", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Hazan", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "The Journal of Machine Learning Research", |
| "volume": "12", |
| "issue": "", |
| "pages": "2121--2159", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Duchi, J., Hazan, E., and Singer, Y. (2011). Adaptive sub- gradient methods for online learning and stochastic op- timization. The Journal of Machine Learning Research, 12:2121-2159.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Neural crf parsing", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Durrett", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "302--312", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Durrett, G. and Klein, D. (2015). Neural CRF parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 302-312, Beijing, China, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Transition-based dependency parsing with stack long short-term memory", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Ling", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Matthews", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "334--343", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dyer, C., Ballesteros, M., Ling, W., Matthews, A., and Smith, N. A. (2015). Transition-based dependency parsing with stack long short-term memory. In ACL, pages 334-343.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Supervised Sequence Labelling with Recurrent Neural Networks", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Graves", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "385", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Graves, A. (2012). Supervised Sequence Labelling with Recurrent Neural Networks, volume 385. Springer.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8):1735-1780.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Caffe: Convolutional architecture for fast feature embedding", |
| "authors": [], |
| "year": null, |
| "venue": "Proceedings of the ACM International Conference on Multimedia", |
| "volume": "", |
| "issue": "", |
| "pages": "675--678", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the ACM International Conference on Multimedia, pages 675-678. ACM.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "A convolutional neural network for modelling sentences", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Kalchbrenner", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Grefenstette", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Blunsom", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "655--665", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kalchbrenner, N., Grefenstette, E., and Blunsom, P. (2014). A convolutional neural network for modelling sentences. In Proceedings of the 52nd ACL, pages 655-665.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "D" |
| ], |
| "last": "Lafferty", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "McCallum", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [ |
| "C N" |
| ], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "282--289", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lafferty, J. D., McCallum, A., and Pereira, F. C. N. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, pages 282-289.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Coupled sequence labeling on heterogeneous annotations: POS tagging as a case study", |
| "authors": [ |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Chao", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "1783--1792", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Li, Z., Chao, J., Zhang, M., and Chen, W. (2015). Coupled sequence labeling on heterogeneous annotations: POS tagging as a case study. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1783-1792, Beijing, China, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Domain adaptation for CRF-based Chinese word segmentation using free annotations", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Che", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "864--874", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liu, Y., Zhang, Y., Che, W., Liu, T., and Wu, F. (2014). Domain adaptation for CRF-based Chinese word segmentation using free annotations. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 864-874, Doha, Qatar, October. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Tagging the web: Building a robust web tagger with neural network", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "144--154", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ma, J., Zhang, Y., and Zhu, J. (2014). Tagging the web: Building a robust web tagger with neural network. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 144-154.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Parsing with compositional vector grammars", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Bauer", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "455--465", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Socher, R., Bauer, J., Manning, C. D., and Ng, A. Y. (2013). Parsing with compositional vector grammars. In Proceedings of the 51st ACL, pages 455-465.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "An Introduction to Conditional Random Fields for Relational Learning", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Sutton", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "McCallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Introduction to statistical relational learning", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sutton, C. and McCallum, A. (2007). An Introduction to Conditional Random Fields for Relational Learning. In Lise Getoor et al., editors, Introduction to statistical relational learning, chapter 4, page 93. MIT Press.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Feature-rich part-of-speech tagging with a cyclic dependency network", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Toutanova, K., Klein, D., Manning, C., and Singer, Y. (2003). Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of HLT-NAACL 2003.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "The Nature of Statistical Learning Theory", |
| "authors": [ |
| { |
| "first": "V", |
| "middle": [ |
| "N" |
| ], |
| "last": "Vapnik", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vapnik, V. N. (1995). The Nature of Statistical Learning Theory. Springer, New York.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Structured training for neural network transition-based parsing", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Weiss", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Alberti", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "323--333", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Weiss, D., Alberti, C., Collins, M., and Petrov, S. (2015). Structured training for neural network transition-based parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 323-333, Beijing, China, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Chinese segmentation with a word-based perceptron algorithm", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 45th ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "840--847", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhang, Y. and Clark, S. (2007). Chinese segmentation with a word-based perceptron algorithm. In Proceedings of the 45th ACL, pages 840-847.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Combining discrete and continuous features for deterministic transition-based dependency parsing", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1316--1321", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhang, M. and Zhang, Y. (2015). Combining discrete and continuous features for deterministic transition-based dependency parsing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1316-1321, Lisbon, Portugal, September. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Character-level Chinese dependency parsing", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Che", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1326--1336", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhang, M., Zhang, Y., Che, W., and Liu, T. (2014). Character-level Chinese dependency parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1326-1336, Baltimore, Maryland, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Neural networks for open domain targeted sentiment", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "D.-T", |
| "middle": [], |
| "last": "Vo", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhang, M., Zhang, Y., and Vo, D.-T. (2015). Neural networks for open domain targeted sentiment. In Proceedings of the 2015 Conference on EMNLP.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "A neural probabilistic structured-prediction model for transition-based dependency parsing", |
| "authors": [], |
| "year": null, |
| "venue": "Proceedings of the 53rd ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "1213--1222", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A neural probabilistic structured-prediction model for transition-based dependency parsing. In Proceedings of the 53rd ACL, pages 1213-1222.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF1": { |
| "num": null, |
| "type_str": "figure", |
| "text": "vector representation derived from character sequences.", |
| "uris": null |
| }, |
| "TABREF0": { |
| "type_str": "table", |
| "text": "Base classes.", |
| "html": null, |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF1": { |
| "type_str": "table", |
| "text": "Classes of neural network structures. such as the simple id operation or non-linear functions including tanh, sigmoid and exp. For discrete features, we support only one vector input. A logistic regression classifier can be built using one single discrete layer. Pooling: Pooling functions are widely used to obtain fixed-dimensional output from sequential vectors of variable lengths. Commonly-used pooling techniques include the max, min and average functions. We also implement sum pooling.", |
| "html": null, |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "text": "The obtaining of word representation.", |
| "html": null, |
| "num": null, |
| "content": "<table><tr><td/><td colspan=\"9\">Chinese Word Segmentation</td><td colspan=\"2\">POS Tagging</td><td colspan=\"6\">NER</td></tr><tr><td>Model</td><td colspan=\"3\">PKU</td><td colspan=\"3\">MSR</td><td colspan=\"3\">CTB60</td><td>English</td><td>Chinese</td><td colspan=\"3\">English</td><td colspan=\"3\">Chinese</td></tr><tr><td/><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td><td>Acc</td><td>Acc</td><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td></tr><tr><td>Discrete</td><td>95.42</td><td>94.56</td><td>94.99</td><td>96.94</td><td>96.61</td><td>96.78</td><td>95.43</td><td>95.16</td><td>95.29</td><td>97.23</td><td>93.97</td><td>80.14</td><td>79.29</td><td>79.71</td><td>72.67</td><td>73.92</td><td>73.29</td></tr><tr><td>Neural</td><td>94.29</td><td>94.56</td><td>94.42</td><td>96.79</td><td>97.54</td><td>97.17</td><td>94.48</td><td>95.01</td><td>94.75</td><td>97.28</td><td>94.02</td><td>77.25</td><td>80.19</td><td>78.69</td><td>65.59</td><td>71.84</td><td>68.57</td></tr><tr><td>Hybrid</td><td>95.74</td><td>95.12</td><td>95.42</td><td>97.01</td><td>97.39</td><td>97.20</td><td>95.68</td><td>95.64</td><td>95.66</td><td>97.47</td><td>95.07</td><td>81.90</td><td>83.26</td><td>82.57</td><td>72.98</td><td>80.15</td><td>76.40</td></tr><tr><td>State-of-the-art</td><td>N/A</td><td>N/A</td><td>94.50</td><td>N/A</td><td>N/A</td><td>97.20</td><td>N/A</td><td>N/A</td><td>95.05</td><td>97.24</td><td>94.10</td><td>82.95</td><td>76.67</td><td>79.68</td><td>76.90</td><td>63.32</td><td>69.45</td></tr></table>" |
| }, |
| "TABREF3": { |
| "type_str": "table", |
| "text": "Main results.", |
| "html": null, |
| "num": null, |
| "content": "<table/>" |
| } |
| } |
| } |
| } |