{ "paper_id": "D14-1012", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:56:30.613024Z" }, "title": "Revisiting Embedding Features for Simple Semi-supervised Learning", "authors": [ { "first": "Jiang", "middle": [], "last": "Guo", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "country": "China" } }, "email": "jguo@ir.hit.edu.cn" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "country": "China" } }, "email": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Baidu Inc", "location": { "settlement": "Beijing", "country": "China" } }, "email": "wanghaifeng@baidu.com" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "country": "China" } }, "email": "tliu@ir.hit.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recent work has shown success in using continuous word embeddings learned from unlabeled data as features to improve supervised NLP systems, which is regarded as a simple semi-supervised learning mechanism. However, fundamental problems remain concerning how to effectively incorporate the word embedding features within the framework of linear models. In this study, we investigate and analyze three different approaches, including a newly proposed distributional prototype approach, for utilizing the embedding features. The presented approaches can be integrated into most of the classical linear models in NLP. Experiments on the task of named entity recognition show that each of the proposed approaches can better utilize the word embedding features, among which the distributional prototype approach performs the best. 
Moreover, the combination of the approaches provides additive improvements, outperforming the dense and continuous embedding features by nearly 2 points of F1 score.", "pdf_parse": { "paper_id": "D14-1012", "_pdf_hash": "", "abstract": [ { "text": "Recent work has shown success in using continuous word embeddings learned from unlabeled data as features to improve supervised NLP systems, which is regarded as a simple semi-supervised learning mechanism. However, fundamental problems remain concerning how to effectively incorporate the word embedding features within the framework of linear models. In this study, we investigate and analyze three different approaches, including a newly proposed distributional prototype approach, for utilizing the embedding features. The presented approaches can be integrated into most of the classical linear models in NLP. Experiments on the task of named entity recognition show that each of the proposed approaches can better utilize the word embedding features, among which the distributional prototype approach performs the best. Moreover, the combination of the approaches provides additive improvements, outperforming the dense and continuous embedding features by nearly 2 points of F1 score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Learning generalized representations of words is an effective way of handling the data sparsity caused by high-dimensional lexical features in NLP systems, such as named entity recognition (NER) and dependency parsing. As a typical low-dimensional and generalized word representation, Brown clustering of words has been studied for a long time. For example, Liang (2005) and Koo et al. (2008) used Brown cluster features for semi-supervised learning of various NLP tasks and achieved significant improvements. 
", "cite_spans": [ { "start": 352, "end": 364, "text": "Liang (2005)", "ref_id": "BIBREF16" }, { "start": 369, "end": 386, "text": "Koo et al. (2008)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent research has focused on a special family of word representations, named \"word embeddings\". Word embeddings are conventionally defined as dense, continuous, and low-dimensional vector representations of words. Word embeddings can be learned from large-scale unlabeled texts through context-predicting models (e.g., neural network language models) or spectral methods (e.g., canonical correlation analysis) in an unsupervised setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Compared with the so-called one-hot representation, where each word is represented as a sparse vector of the same size as the vocabulary with only one dimension on, word embeddings preserve rich linguistic regularities of words, with each dimension hopefully representing a latent feature. Similar words are expected to be distributed close to one another in the embedding space. Consequently, word embeddings can be beneficial for a variety of NLP applications in different ways, the simplest and most general of which is to feed them as features to enhance existing supervised NLP systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous work has demonstrated the effectiveness of continuous word embedding features in several tasks such as chunking and NER using generalized linear models (Turian et al., 2010) . 
However, there still remain two fundamental problems that should be addressed:", "cite_spans": [ { "start": 161, "end": 182, "text": "(Turian et al., 2010)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Are the continuous embedding features fit for the generalized linear models that are most widely adopted in NLP?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 How can the generalized linear models better utilize the embedding features?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "According to the results provided by Turian et al. (2010) , the embedding features brought significantly less improvement than the Brown clustering features. This result is counterintuitive, because the expressive power of word embeddings is theoretically stronger than that of clustering-based representations, which can be regarded as a kind of one-hot representation, but over a low-dimensional vocabulary (Bengio et al., 2013) . Wang and Manning (2013) showed that linear architectures perform better in high-dimensional discrete feature space than non-linear ones, whereas non-linear architectures are more effective in low-dimensional and continuous feature space. Hence, the previous method that directly uses the continuous word embeddings as features in linear models (CRF) is inappropriate. Word embeddings may be better utilized in the linear modeling framework by smartly transforming the embeddings to some relatively higher dimensional and discrete representations.", "cite_spans": [ { "start": 37, "end": 57, "text": "Turian et al. 
(2010)", "ref_id": "BIBREF26" }, { "start": 401, "end": 422, "text": "(Bengio et al., 2013)", "ref_id": "BIBREF2" }, { "start": 425, "end": 448, "text": "Wang and Manning (2013)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Driven by this motivation, we present three different approaches: binarization (Section 3.2), clustering (Section 3.3), and a newly proposed distributional prototype method (Section 3.4) for better incorporating the embedding features. In the binarization approach, we directly binarize the continuous word embeddings by dimension. In the clustering approach, we cluster words based on their embeddings and use the resulting word cluster features instead. In the distributional prototype approach, we derive task-specific features from word embeddings by utilizing a set of automatically extracted prototypes for each target label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We carefully compare and analyze these approaches in the task of NER. Experimental results are promising. With each of the three approaches, we achieve higher performance than directly using the continuous embedding features, among which the distributional prototype approach performs the best. Furthermore, by combining the two most effective of these features, we finally outperform the continuous embedding features by nearly 2 points of F1 score (86.21% vs. 88.11%).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The major contribution of this paper is twofold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) We investigate various approaches that can better utilize word embeddings for semi-supervised learning. 
(2) We propose a novel distributional prototype approach that shows the great potential of word embedding features. All the presented approaches can be easily integrated into most of the classical linear NLP models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Statistical modeling has achieved great success in most NLP tasks. However, there still remain some major unsolved problems and challenges, among which the most widely studied is the data sparsity problem. Data sparsity in NLP is mainly caused by two factors, namely, the lack of labeled training data and the Zipf distribution of words. On the one hand, large-scale labeled training data are typically difficult to obtain, especially for structured prediction tasks, such as syntactic parsing. Therefore, the supervised models can only see limited examples and thus make biased estimates. On the other hand, natural language words are Zipf distributed, which means that most of the words appear only a few times or are completely absent in our texts. For these low-frequency words, the corresponding parameters usually cannot be fully trained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semi-supervised Learning with Word Embeddings", "sec_num": "2" }, { "text": "More fundamentally, the reason behind both factors lies in the high-dimensional and sparse lexical feature representation, which completely ignores the similarity between features, especially word features. To overcome this weakness, an effective way is to learn more generalized representations of words by exploiting the abundant unlabeled data, in a semi-supervised manner. The generalized word representations can then be used as extra features to facilitate the supervised systems. Liang (2005) learned Brown clusters of words (Brown et al., 1992) from unlabeled data and used them as features to improve supervised NER and Chinese word segmentation. 
Brown clusters of words can be seen as a generalized word representation distributed in a discrete and low-dimensional vocabulary space. Contextually similar words are grouped in the same cluster. The Brown clustering of words was also adopted in dependency parsing (Koo et al., 2008) and POS tagging for online conversational text (Owoputi et al., 2013) , demonstrating significant improvements.", "cite_spans": [ { "start": 498, "end": 510, "text": "Liang (2005)", "ref_id": "BIBREF16" }, { "start": 543, "end": 563, "text": "(Brown et al., 1992)", "ref_id": "BIBREF4" }, { "start": 936, "end": 954, "text": "(Koo et al., 2008)", "ref_id": "BIBREF14" }, { "start": 1002, "end": 1024, "text": "(Owoputi et al., 2013)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Semi-supervised Learning with Word Embeddings", "sec_num": "2" }, { "text": "Recently, another kind of word representation named \"word embeddings\" has been widely studied (Bengio et al., 2003; Mnih and Hinton, 2008) . Using word embeddings, we can evaluate the similarity of two words straightforwardly by computing the dot product of their numerical vectors in the embedding space. Two similar words are expected to be distributed close to each other. 2 Word embeddings can be useful as input to an NLP model (mostly non-linear) or as additional features to enhance existing systems. Collobert et al. (2011) used word embeddings as input to a deep neural network for multi-task learning. Despite their effectiveness, such non-linear models are hard to build and optimize. In addition, these architectures are often specialized for a certain task and not scalable to general tasks. 
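As a toy illustration of the similarity computation described above (dot products and cosine similarity between embedding vectors), the following sketch uses made-up vectors rather than trained embeddings:

```python
# Toy illustration: similarity between word embeddings via dot product
# and cosine. The vectors below are invented for demonstration only.
import math

emb = {
    'king':  [0.8, 0.1, 0.3],
    'queen': [0.7, 0.2, 0.35],
    'apple': [0.1, 0.9, 0.05],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Similar words are expected to lie close together in the embedding space.
assert cosine(emb['king'], emb['queen']) > cosine(emb['king'], emb['apple'])
```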
A simple and more general way is to feed word embeddings as augmented features to an existing supervised system, which is similar to the semi-supervised learning with Brown clusters.", "cite_spans": [ { "start": 94, "end": 115, "text": "(Bengio et al., 2003;", "ref_id": "BIBREF1" }, { "start": 116, "end": 138, "text": "Mnih and Hinton, 2008)", "ref_id": "BIBREF21" }, { "start": 370, "end": 371, "text": "2", "ref_id": null }, { "start": 502, "end": 525, "text": "Collobert et al. (2011)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Semi-supervised Learning with Word Embeddings", "sec_num": "2" }, { "text": "As discussed in Section 1, Turian et al. (2010) is the pioneering work on using word embedding features for semi-supervised learning. However, their approach cannot fully exploit the potential of word embeddings. We revisit this problem in this study and investigate three different approaches for better utilizing word embeddings in semi-supervised learning.", "cite_spans": [ { "start": 27, "end": 47, "text": "Turian et al. (2010)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Semi-supervised Learning with Word Embeddings", "sec_num": "2" }, { "text": "3 Approaches for Utilizing Embedding Features", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semi-supervised Learning with Word Embeddings", "sec_num": "2" }, { "text": "In this paper, we will consider a context-predicting model, more specifically, the Skip-gram model (Mikolov et al., 2013a; Mikolov et al., 2013b) for learning word embeddings, since it is much more efficient and memory-saving than other approaches. Let us denote the embedding matrix to be learned by C_{d\u00d7N}, where N is the vocabulary size and d is the dimension of the word embeddings. Each column of C represents the embedding of a word. 
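The column lookup just described, together with the context-word softmax that the Skip-gram model computes on top of it (Eq. (1) below), can be sketched as follows; the vocabulary and parameter values here are hypothetical toy data, not a trained model:

```python
# Sketch of the Skip-gram context probability: a word w is mapped to its
# embedding (a column of C), and P(c|w) is a softmax over dot products
# with context vectors. Toy random parameters, not trained values.
import math
import random

random.seed(0)
d, vocab = 3, ['the', 'cat', 'sat', 'mat']
# Separate input (word) and output (context) embeddings, as in word2vec.
C = {w: [random.uniform(-0.5, 0.5) for _ in range(d)] for w in vocab}
Cout = {w: [random.uniform(-0.5, 0.5) for _ in range(d)] for w in vocab}

def p_context_given_word(c, w):
    def score(ctx):
        return math.exp(sum(a * b for a, b in zip(Cout[ctx], C[w])))
    return score(c) / sum(score(x) for x in vocab)

# The probabilities over the vocabulary sum to one.
total = sum(p_context_given_word(c, 'cat') for c in vocab)
assert abs(total - 1.0) < 1e-9
```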
The Skip-gram model takes the current word w as input, and predicts the probability distribution of its context words within a fixed window size. Concretely, w is first mapped to its embedding v_w by selecting the corresponding column vector of C (or multiplying C with the one-hot vector of w). The probability of its context word c is then computed using a log-linear function:", "cite_spans": [ { "start": 98, "end": 121, "text": "(Mikolov et al., 2013a;", "ref_id": "BIBREF17" }, { "start": 122, "end": 144, "text": "Mikolov et al., 2013b)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Word Embedding Training", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(c|w; \u03b8) = exp(v_c^T v_w) / \u2211_{c' \u2208 V} exp(v_{c'}^T v_w)", "eq_num": "(1)" } ], "section": "Word Embedding Training", "sec_num": "3.1" }, { "text": "where V is the vocabulary. The parameters \u03b8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Embedding Training", "sec_num": "3.1" }, { "text": "are v_{w,i} , v_{c,i} for w, c \u2208 V and i = 1, ..., d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Embedding Training", "sec_num": "3.1" }, { "text": "Then, the log-likelihood over the entire training dataset D can be computed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Embedding Training", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "J(\u03b8) = \u2211_{(w,c) \u2208 D} log p(c|w; \u03b8)", "eq_num": "(2)" } ], "section": "Word Embedding Training", "sec_num": "3.1" }, { "text": "The model can be trained by maximizing J(\u03b8).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Embedding Training", "sec_num": "3.1" }, { "text": "Here, we suppose that the word embeddings have 
already been trained from large-scale unlabeled texts. We will introduce various approaches for utilizing the word embeddings as features for semi-supervised learning. The main idea, as introduced in Section 1, is to transform the continuous word embeddings to some relatively higher dimensional and discrete representations. The direct use of continuous embeddings as features (Turian et al., 2010) will serve as our baseline setting.", "cite_spans": [ { "start": 425, "end": 446, "text": "(Turian et al., 2010)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Word Embedding Training", "sec_num": "3.1" }, { "text": "One fairly natural approach for converting the continuous-valued word embeddings to discrete values is binarization by dimension.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization of Embeddings", "sec_num": "3.2" }, { "text": "Formally, we aim to convert the continuous-valued embedding matrix C_{d\u00d7N} into another matrix M_{d\u00d7N}, which is discrete-valued. There are various conversion functions. Here, we consider a simple one. For the i-th dimension of the word embeddings, we divide the corresponding row vector C_i into two halves for positive (C_{i+}) and negative (C_{i\u2212}) values, respectively. The conversion function is then defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization of Embeddings", "sec_num": "3.2" }, { "text": "M_{ij} = \u03c6(C_{ij}) = { U+, if C_{ij} \u2265 mean(C_{i+}); B\u2212, if C_{ij} \u2264 mean(C_{i\u2212}); 0, otherwise }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization of Embeddings", "sec_num": "3.2" }, { "text": "where mean(v) is the mean value of vector v, and U+ is a string feature that turns on when the value (C_{ij}) falls into the upper part of the positive list. Similarly, B\u2212 refers to the bottom part of the negative list. 
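A minimal sketch of this conversion function; the input row below is a made-up embedding dimension, not trained values:

```python
# Sketch of the binarization function phi: per dimension, values above the
# mean of the positive entries become the string feature U+, values below
# the mean of the negative entries become B-, and the rest are dropped (0).
def binarize_rows(C):
    # C is a list of rows: one row per embedding dimension, columns = words.
    M = []
    for row in C:
        pos = [v for v in row if v > 0]
        neg = [v for v in row if v < 0]
        upper = sum(pos) / len(pos) if pos else float('inf')
        lower = sum(neg) / len(neg) if neg else float('-inf')
        M.append(['U+' if v >= upper else 'B-' if v <= lower else 0
                  for v in row])
    return M

# One toy dimension over four words: mean of positives is 0.55,
# mean of negatives is -0.45.
M = binarize_rows([[0.9, 0.2, -0.1, -0.8]])
assert M == [['U+', 0, 0, 'B-']]
```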
The insight behind \u03c6 is that we only consider the features with strong opinions (i.e., strongly positive or negative) on each dimension and omit the values close to zero. In this study, we again investigate this approach. Concretely, each word is treated as a single sample. The batch k-means clustering algorithm (Sculley, 2010) is used, and each cluster is represented as the mean of the embeddings of the words assigned to it. Similarities between words and clusters are measured by Euclidean distance.", "cite_spans": [ { "start": 523, "end": 538, "text": "(Sculley, 2010)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Binarization of Embeddings", "sec_num": "3.2" }, { "text": "Moreover, different numbers of clusters n contain information of different granularities. Therefore, we combine the cluster features with different values of n to better utilize the embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering of Embeddings", "sec_num": "3.3" }, { "text": "We propose a novel kind of embedding features, named distributional prototype features, for supervised models. This is mainly inspired by prototype-driven learning (Haghighi and Klein, 2006) , which was originally introduced as a primarily unsupervised approach for sequence modeling. In prototype-driven learning, a few prototypical examples are specified for each target label, which can be treated as an injection of prior knowledge. This sparse prototype information is then propagated across an unlabeled corpus through distributional similarities.", "cite_spans": [ { "start": 163, "end": 189, "text": "(Haghighi and Klein, 2006)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Distributional Prototype Features", "sec_num": "3.4" }, { "text": "The basic motivation of the distributional prototype features is that similar words are supposed to be tagged with the same label. This hypothesis makes great sense in tasks such as NER and POS tagging. 
For example, suppose Michael is a prototype of the named entity (NE) type PER. Using the distributional similarity, we could link similar words to the same prototypes, so the word David can be linked to Michael because the two words have high similarity (exceeding a threshold). Using this link feature, the model will push David closer to PER.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional Prototype Features", "sec_num": "3.4" }, { "text": "To derive the distributional prototype features, first, we need to construct a few canonical examples (prototypes) for each target annotation label. We use the normalized pointwise mutual information (NPMI) (Bouma, 2009) between the label and word, which is a smoothed version of the standard PMI, to decide the prototypes of each label. Given the annotated training corpus, the NPMI between a label and a word is computed as follows:", "cite_spans": [ { "start": 207, "end": 220, "text": "(Bouma, 2009)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Distributional Prototype Features", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03bb_n(label, word) = \u03bb(label, word) / (\u2212ln p(label, word))", "eq_num": "(3)" } ], "section": "Distributional Prototype Features", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03bb(label, word) = ln [p(label, word) / (p(label) p(word))]", "eq_num": "(4)" } ], "section": "Distributional Prototype Features", "sec_num": "3.4" }, { "text": "where \u03bb(label, word) is the standard PMI. For each target label l (e.g., PER, ORG, LOC), we compute the NPMI of l and all words in the vocabulary, and the top m words are chosen as the prototypes of l. 
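The NPMI-based prototype extraction can be sketched as follows; the toy label-word counts are invented for illustration:

```python
# Sketch of prototype extraction with NPMI over a toy annotated corpus.
# NPMI(label, word) = PMI(label, word) / (-ln p(label, word)); the top-m
# words per label are kept as prototypes. The counts below are made up.
import math
from collections import Counter

pairs = [('PER', 'michael'), ('PER', 'david'), ('PER', 'michael'),
         ('LOC', 'hague'), ('LOC', 'hague'), ('O', 'the'), ('O', 'the')]
n = len(pairs)
joint = Counter(pairs)
labels = Counter(l for l, _ in pairs)
words = Counter(w for _, w in pairs)

def npmi(label, word):
    p_lw = joint[(label, word)] / n
    pmi = math.log(p_lw / ((labels[label] / n) * (words[word] / n)))
    return pmi / -math.log(p_lw)

def prototypes(label, m=2):
    cand = {w for l, w in joint if l == label}
    return sorted(cand, key=lambda w: npmi(label, w), reverse=True)[:m]

# 'hague' co-occurs only with LOC and vice versa, so its NPMI is exactly 1.
assert abs(npmi('LOC', 'hague') - 1.0) < 1e-9
assert prototypes('PER')[0] == 'michael'
```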
We should note that the prototypes are extracted fully automatically, without introducing additional human prior knowledge. Table 1 shows the top four prototypes extracted from the NER training corpus of the CoNLL-2003 shared task (Tjong Kim Sang and De Meulder, 2003) , which contains four NE types, namely, PER, ORG, LOC, and MISC. Non-NEs are denoted by O. We convert the original annotation to the standard BIO style. Thus, the final corpus contains nine labels in total.", "cite_spans": [ { "start": 419, "end": 445, "text": "Sang and De Meulder, 2003)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 305, "end": 312, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Distributional Prototype Features", "sec_num": "3.4" }, { "text": "Next, we introduce the prototypes as features to our supervised model. We denote the set of prototypes for all target labels by S_p. For each prototype z \u2208 S_p, we add a predicate proto = z, which becomes active at each w if the distributional similarity between z and w (DistSim(z, w)) is above some threshold. DistSim(z, w) can be efficiently calculated through the cosine similarity of the embeddings of z and w. Figure 1 gives an illustration of the distributional prototype features. Unlike previous embedding features or Brown clusters, the distributional prototype features are task-specific because the prototypes of each label are extracted from the training data.", "cite_spans": [], "ref_spans": [ { "start": 418, "end": 426, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Distributional Prototype Features", "sec_num": "3.4" }, { "text": "Moreover, each prototype word is also its own prototype (since a word has maximum similarity to itself). 
Thus, if the prototype is closely related to a label, all the words that are distributionally similar to this prototype will be pushed toward the corresponding label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional Prototype Features", "sec_num": "3.4" }, { "text": "[Figure 1: An illustration of the distributional prototype features in a linear-chain CRF. For the word Hague (pos = NNP, label B-LOC), the active prototype features include proto = Britain and proto = England.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional Prototype Features", "sec_num": "3.4" }, { "text": "Various tasks can be considered to compare and analyze the effectiveness of the above three approaches. In this study, we partly follow Turian et al. (2010) , and take NER as the supervised evaluation task. NER identifies and classifies the named entities such as the names of persons, locations, and organizations in text. The state-of-the-art systems typically treat NER as a sequence labeling problem, where each word is tagged either as a BIO-style NE or a non-NE category.", "cite_spans": [ { "start": 136, "end": 156, "text": "Turian et al. (2010)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Supervised Evaluation Task", "sec_num": "4" }, { "text": "Here, we use the linear chain CRF model, which is most widely used for sequence modeling in the field of NLP. The CoNLL-2003 shared task dataset from Reuters, which was used by Turian et al. (2010) , was chosen as our evaluation dataset. The training set contains 14,987 sentences, the development set contains 3,466 sentences and is used for parameter tuning, and the test set contains 3,684 sentences.", "cite_spans": [ { "start": 181, "end": 201, "text": "Turian et al. 
(2010)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Supervised Evaluation Task", "sec_num": "4" }, { "text": "The baseline features are shown in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 35, "end": 42, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Supervised Evaluation Task", "sec_num": "4" }, { "text": "In this section, we introduce the embedding features to the baseline NER system, turning the supervised approach into a semi-supervised one. Dense embedding features. The dense continuous embedding features can be fed directly to the CRF model. These embedding features can be seen as heterogeneous compared with the existing baseline features, which are discrete. There is no effective way for dense embedding features to be combined internally or with other discrete features. So we only use the unigram embedding features following Turian et al. (2010) . Concretely, the embedding feature template is:", "cite_spans": [ { "start": 535, "end": 555, "text": "Turian et al. (2010)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Embedding Feature Templates", "sec_num": "4.1" }, { "text": "Binarized embedding features. The binarized embedding feature template is similar to the dense one. The only difference is that the feature values are discrete and we omit dimensions with zero value. 
Therefore, the feature template becomes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding Feature Templates", "sec_num": "4.1" }, { "text": "[Table 2: Baseline NER feature templates] 00: w_{i+k}, \u22122 \u2264 k \u2264 2; 01: w_{i+k} \u2022 w_{i+k+1}, \u22122 \u2264 k \u2264 1; 02: t_{i+k}, \u22122 \u2264 k \u2264 2; 03: t_{i+k} \u2022 t_{i+k+1}, \u22122 \u2264 k \u2264 1; 04: chk_{i+k}, \u22122 \u2264 k \u2264 2; 05: chk_{i+k} \u2022 chk_{i+k+1}, \u22122 \u2264 k \u2264 1; 06: Prefix(w_{i+k}, l), \u22122 \u2264 k \u2264 2, 1 \u2264 l \u2264 4; 07: Suffix(w_{i+k}, l), \u22122 \u2264 k \u2264 2, 1 \u2264 l \u2264 4; 08: Type(w_{i+k}), \u22122 \u2264 k \u2264 2. Unigram features: y_i \u2022 (00\u221208); bigram features: y_{i\u22121} \u2022 y_i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding Feature Templates", "sec_num": "4.1" }, { "text": "\u2022 bi_{i+k}[d], \u22122 \u2264 k \u2264 2, where bi_{i+k}[d] \u2260 0,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding Feature Templates", "sec_num": "4.1" }, { "text": "and d ranges over the dimensions of the binarized vector bi of the word embedding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding Feature Templates", "sec_num": "4.1" }, { "text": "In this way, the dimension of the binarized embedding feature space becomes 2 \u00d7 d, compared with the original d of the dense embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding Feature Templates", "sec_num": "4.1" }, { "text": "Compound cluster features. The advantage of the cluster features is that they can be combined internally or with other features to form compound features, which can be more discriminative. Furthermore, the number of resulting clusters n can be tuned, and different values of n indicate different granularities. 
Concretely, the compound cluster feature template for each specific n is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding Feature Templates", "sec_num": "4.1" }, { "text": "\u2022 c_{i+k}, \u22122 \u2264 k \u2264 2. \u2022 c_{i+k} \u2022 c_{i+k+1}, \u22122 \u2264 k \u2264 1. \u2022 c_{i\u22121} \u2022 c_{i+1}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding Feature Templates", "sec_num": "4.1" }, { "text": "Distributional prototype features. The set of prototypes is again denoted by S_p, which is decided by selecting the top m (NPMI) words as prototypes of each label, where m is tuned on the development set. For each word w_i in a sequence, we compute the distributional similarity between w_i and each prototype in S_p and select the prototypes z such that DistSim(z, w_i) \u2265 \u03b4. We set \u03b4 = 0.5 without manual tuning. The distributional prototype feature template is then:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding Feature Templates", "sec_num": "4.1" }, { "text": "\u2022 {proto_{i+k} = z | DistSim(w_{i+k}, z) \u2265 \u03b4 & z \u2208 S_p}, \u22122 \u2264 k \u2264 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding Feature Templates", "sec_num": "4.1" }, { "text": "We only use the unigram features, since the number of active distributional prototype features varies for different words (positions). Hence, these features cannot be combined effectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding Feature Templates", "sec_num": "4.1" }, { "text": "Brown clustering has achieved great success in various NLP applications. In most cases, it provides a strong baseline that is difficult to beat (Turian et al., 2010) . 
Consequently, in our study, we conduct comparisons among the embedding features and the Brown clustering features, along with further investigations of their combination.", "cite_spans": [ { "start": 143, "end": 164, "text": "(Turian et al., 2010)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Brown Clustering", "sec_num": "4.2" }, { "text": "The Brown algorithm is a hierarchical clustering algorithm that optimizes a class-based bigram language model defined on the word clusters (Brown et al., 1992) . The output of the Brown algorithm is a binary tree, where each word is uniquely identified by its path from the root. Thus, each word can be represented as a bit-string with a specific length.", "cite_spans": [ { "start": 140, "end": 160, "text": "(Brown et al., 1992)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Brown Clustering", "sec_num": "4.2" }, { "text": "Following the setting of Owoputi et al. (2013), we will use the prefix features of hierarchical clusters to take advantage of the word similarity at different granularities. Concretely, the Brown cluster feature template is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Brown Clustering", "sec_num": "4.2" }, { "text": "\u2022 bc_{i+k}, \u22122 \u2264 k \u2264 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Brown Clustering", "sec_num": "4.2" }, { "text": "\u2022 prefix(bc_{i+k}, p), p \u2208 {2,4,6,...,16}, \u22122 \u2264 k \u2264 2. prefix takes the p-length prefix of the Brown cluster coding bc_{i+k}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Brown Clustering", "sec_num": "4.2" }, { "text": "We take English Wikipedia text up to August 2012 as our unlabeled data to train the word embeddings. 
4 download.wikimedia.org. Little pre-processing is conducted for the", "cite_spans": [ { "start": 100, "end": 101, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "5.1" }, { "text": "training of word embeddings. We remove paragraphs that contain non-roman characters and all MediaWiki markups. The resulting text is tokenized using the Stanford tokenizer, 5 and every word is converted to lowercase. The final dataset contains about 30 million sentences and 1.52 billion words. We use a dictionary that contains the 212,779 most common words (frequency \u2265 80) in the dataset. An efficient open-source implementation of the Skip-gram model is adopted. 6 We apply the negative sampling 7 method for optimization, and the asynchronous stochastic gradient descent algorithm (Asynchronous SGD) for parallel weight updating. In this study, we set the dimension of the word embeddings to 50. Higher dimensions are expected to bring further improvements in semi-supervised learning, but that comparison is beyond the scope of this paper. For the cluster features, we tune the number of clusters n from 500 to 3000 on the development set, and finally use the combination of n = 500, 1000, 1500, 2000, 3000, which achieves the best results. For the distributional prototype features, we use a fixed number of prototype words (m) for each target label. m is tuned on the development set and is finally set to 40.", "cite_spans": [ { "start": 463, "end": 464, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "5.1" }, { "text": "We induce 1,000 Brown clusters of words, following the setting in prior work (Koo et al., 2008; Turian et al., 2010) . The training data for Brown clustering is the same as that used for training the word embeddings. Table 3 shows the performance of NER on the test dataset. Our baseline is slightly lower than that of Turian et al. 
(2010) , because they use the BILOU encoding of NE types, which outperforms BIO encoding (Ratinov and Roth, 2009) . 8 Nonetheless, our conclusions hold. As we can see, all three approaches we investigate in this study achieve better performance than the direct use of the dense continuous embedding features.", "cite_spans": [ { "start": 67, "end": 85, "text": "(Koo et al., 2008;", "ref_id": "BIBREF14" }, { "start": 86, "end": 106, "text": "Turian et al., 2010)", "ref_id": "BIBREF26" }, { "start": 301, "end": 321, "text": "Turian et al. (2010)", "ref_id": "BIBREF26" }, { "start": 403, "end": 427, "text": "(Ratinov and Roth, 2009)", "ref_id": "BIBREF23" }, { "start": 430, "end": 431, "text": "8", "ref_id": null } ], "ref_spans": [ { "start": 198, "end": 205, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experimental Setting", "sec_num": "5.1" }, { "text": "To our surprise, even the binarized embedding features (BinarizedEmb) outperform the continuous version (DenseEmb). [Table residue: reported benchmark F1 scores: Finkel et al. (2005) 86.86; Krishnan and Manning (2006) 87.24; Ando and Zhang (2005) 89.31; Collobert et al. (2011) 88.67; Turian et al. (2010), i.e., the direct use of the dense and continuous embeddings.] This provides clear evidence that directly using the dense continuous embeddings as features in CRF indeed cannot fully", "cite_spans": [ { "start": 236, "end": 256, "text": "Finkel et al. (2005)", "ref_id": "BIBREF8" }, { "start": 263, "end": 290, "text": "Krishnan and Manning (2006)", "ref_id": "BIBREF15" }, { "start": 297, "end": 318, "text": "Ando and Zhang (2005)", "ref_id": "BIBREF0" }, { "start": 325, "end": 348, "text": "Collobert et al. (2011)", "ref_id": "BIBREF5" }, { "start": 355, "end": 375, "text": "Turian et al. (2010)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "exploit the potential of word embeddings. The compound cluster features (ClusterEmb) also outperform the DenseEmb. 
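As a concrete illustration of binarization, the sketch below thresholds each embedding dimension against the mean of its positive and negative values, mapping every entry to +1, -1, or 0 (inactive). This is our reading of the scheme in Section 3.2, shown on made-up numbers, not the paper's exact implementation.

```python
def binarize(vectors):
    """Binarize embeddings per dimension: +1 above the mean of that
    dimension's positive values, -1 below the mean of its negative
    values, and 0 (feature inactive) otherwise."""
    dims = len(vectors[0])
    out = [[0] * dims for _ in vectors]
    for d in range(dims):
        col = [v[d] for v in vectors]
        pos = [x for x in col if x > 0]
        neg = [x for x in col if x < 0]
        hi = sum(pos) / len(pos) if pos else float("inf")
        lo = sum(neg) / len(neg) if neg else float("-inf")
        for i, x in enumerate(col):
            if x >= hi:
                out[i][d] = 1
            elif x <= lo:
                out[i][d] = -1
    return out

E = [[0.9, -0.8], [0.1, -0.1], [-0.5, 0.4]]
print(binarize(E))  # -> [[1, -1], [0, 0], [-1, 1]]
```

Only the strongly positive or strongly negative dimensions survive as discrete features; near-zero dimensions are dropped, which matters for rare words (Section 5.3.1).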
The same result is also shown in . Further, the distributional prototype features (DistPrototype) achieve the best performance among the three approaches (1.23% higher than DenseEmb).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "We should note that the feature templates used for BinarizedEmb and DistPrototype are merely unigram features. However, for ClusterEmb, we form more complex features by combining the clusters of the context words. We also consider different number of clusters n, to take advantage of the different granularities. Consequently, the dimension of the cluster features is much higher than that of BinarizedEmb and DistPrototype.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "We further combine the proposed features to see if they are complementary to each other. As shown in Table 3 , the cluster and distributional prototype features are the most complementary, whereas the binarized embedding features seem to have large overlap with the distributional prototype features. By combining the cluster and distributional prototype features, we further push the performance to 88.11%, which is nearly two points higher than the performance of the dense embedding features (86.21%). 9 We also compare the proposed features with the Brown cluster features. As shown in Table 3 , the distributional prototype features alone achieve comparable performance with the Brown clusters. When the cluster and distributional prototype features are used together, we outperform the Brown clusters. This result is inspiring because we show that the embedding features indeed have stronger expressing power than the Brown clusters, as desired. Finally, by combining the Brown cluster features and the proposed embedding features, the performance can be improved further (88.58%). 
The binarized embedding features are not included in the final compound features because they are almost overlapped with the distributional prototype features in performance.", "cite_spans": [ { "start": 505, "end": 506, "text": "9", "ref_id": null } ], "ref_spans": [ { "start": 101, "end": 108, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 590, "end": 597, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "We also summarize some of the reported benchmarks that utilize unlabeled data (with no gazetteers used), including the Stanford NER tagger (Finkel et al. (2005) and Krishnan and Manning (2006) ) with distributional similarity features. Ando and Zhang (2005) use unlabeled data for constructing auxiliary problems that are expected to capture a good feature representation of the target problem. Collobert et al. (2011) adjust the feature embeddings according to the specific task in a deep neural network architecture. We can see that both Ando and Zhang (2005) and Collobert et al. (2011) learn task-specific lexical features, which is similar to the proposed distributional prototype method in our study. We suggest this to be the main reason for the superiority of these methods.", "cite_spans": [ { "start": 139, "end": 160, "text": "(Finkel et al. (2005)", "ref_id": "BIBREF8" }, { "start": 165, "end": 192, "text": "Krishnan and Manning (2006)", "ref_id": "BIBREF15" }, { "start": 236, "end": 257, "text": "Ando and Zhang (2005)", "ref_id": "BIBREF0" }, { "start": 395, "end": 418, "text": "Collobert et al. (2011)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "Another advantage of the proposed discrete features over the dense continuous features is tagging efficiency. Table 4 shows the running time using different kinds of embedding features. 
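The efficiency gap can be illustrated with a toy scoring comparison: with sparse indicator features, a linear model only sums the weights of the few active features, whereas dense embedding features require a full dot product at every position. The feature names and weights below are made up for illustration.

```python
def score_sparse(active, weights):
    """Linear score with indicator features: touch only the active features."""
    return sum(weights.get(f, 0.0) for f in active)

def score_dense(x, w):
    """Linear score with a dense embedding: a full dot product every time."""
    return sum(xi * wi for xi, wi in zip(x, w))

w_sparse = {"bc[0]=0110": 0.5, "proto[0]=paris": 1.0}
print(score_sparse(["bc[0]=0110", "proto[0]=paris", "w=london"], w_sparse))  # 1.5
print(score_dense([0.2, -0.4, 0.1], [1.0, 0.5, -1.0]))
```

Even though the discrete feature space is far larger, the number of non-zero features per word is small, so decoding touches far fewer weights.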
We achieve a significant reduction of the tagging time per sentence when using the discrete features. This is mainly due to the contrast between dense and sparse feature representations. Although the dense embedding features are low-dimensional, the feature vector for each word is much denser than in the sparse and discrete feature space. Therefore, decoding actually requires much more computation. Similar results can be observed in the comparison of the DistPrototype and ClusterEmb features, since the density of the DistPrototype features is higher. Table 4 : Running time of different features on an Intel(R) Xeon(R) E5620 2.40GHz machine.", "cite_spans": [], "ref_spans": [ { "start": 110, "end": 117, "text": "Table 4", "ref_id": null }, { "start": 710, "end": 717, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "to accelerate the DistPrototype, by increasing the threshold of DistSim(z, w). However, this is indeed an issue of trade-off between efficiency and accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "In this section, we conduct analyses to show the reasons for the improvements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "5.3" }, { "text": "As discussed by Turian et al. (2010) , much of the NER F1 is derived from decisions regarding rare words. Therefore, in order to show that the three proposed embedding features are better at handling rare words, we first analyze the tagging errors of words with different frequencies in the unlabeled data. We assign the word frequencies to several buckets, and evaluate the per-token errors that occurred in each bucket. Results are shown in Figure 2 . In most cases, all three embedding features result in fewer errors on rare words than the direct use of dense continuous embedding features. 
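The bucketed error analysis above amounts to grouping tokens by the corpus frequency of their word and averaging errors per group. A minimal sketch, with power-of-two bucket edges chosen to match the ranges quoted in the text (e.g., 0-256, 32k-64k); the frequencies and error flags are synthetic:

```python
def freq_bucket(freq, edges=(256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536)):
    """Assign a word frequency to a power-of-two bucket, e.g. 300 -> '256-512'."""
    lo = 0
    for hi in edges:
        if freq < hi:
            return "%d-%d" % (lo, hi)
        lo = hi
    return ">=%d" % lo

def per_bucket_error(freqs, errors):
    """Per-token error rate in each frequency bucket."""
    stats = {}
    for f, e in zip(freqs, errors):
        b = freq_bucket(f)
        n, k = stats.get(b, (0, 0))
        stats[b] = (n + 1, k + int(e))
    return {b: k / n for b, (n, k) in stats.items()}

rates = per_bucket_error([10, 100, 40000, 40001], [True, False, False, True])
print(rates)  # -> {'0-256': 0.5, '32768-65536': 0.5}
```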
Interestingly, we find that for words that are extremely rare (0-256), the binarized embedding features incur significantly fewer errors than the other approaches. As we know, the embeddings of rare words are close to their initial values, because they received few updates during training. Hence, these words are not fully trained. In this case, we would like to omit these features because their embeddings are not even trustworthy. However, all embedding features that we proposed except BinarizedEmb are unable to handle this.", "cite_spans": [ { "start": 16, "end": 36, "text": "Turian et al. (2010)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 466, "end": 474, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Rare words", "sec_num": "5.3.1" }, { "text": "In order to see how much we have utilized the embedding features in BinarizedEmb, we calculate the sparsity of the binarized embedding vectors, i.e., the ratio of zero values in each vector (Section 3.2). As demonstrated in Figure 3, the sparsity-frequency curve has good properties: higher sparsity for very rare words and very frequent words, and lower sparsity for mid-frequent words. It indicates that for words that are very rare or very frequent, BinarizedEmb simply omits most of the features. This is reasonable also for the very frequent words, since they usually have rich and diverse context distributions and their embeddings cannot be well learned by our models (Huang et al., 2012) . BinarizedEmb also reduces many of the errors for the highly frequent words (32k-64k).", "cite_spans": [ { "start": 673, "end": 693, "text": "(Huang et al., 2012)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 224, "end": 230, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Rare words", "sec_num": "5.3.1" }, { "text": "As expected, the distributional prototype features produce the fewest errors in most cases. The main reason is that the prototype features are task-specific. 
The prototypes are extracted from the training data and contain indicative information about the target labels. By contrast, the other embedding features are simply derived from general word representations and are not specialized for certain tasks, such as NER.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rare words", "sec_num": "5.3.1" }, { "text": "Another reason for the superiority of the proposed embedding features is that the high-dimensional discrete features are more linearly separable than the low-dimensional continuous embeddings. To verify this hypothesis, we further carry out experiments to analyze the linear separability of the proposed discrete embedding features against dense continuous embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Separability", "sec_num": "5.3.2" }, { "text": "We formalize this problem as a binary classification task, to determine whether a word is an NE or not (NE identification). The linear support vector machine (SVM) is used to build the classifiers, using the different embedding features respectively. We use the LIBLINEAR tool (Fan et al., 2008) as our SVM implementation. The penalty parameter C is tuned from 0.1 to 1.0 on the development dataset. The results are shown in Table 5 . As we can see, NEs and non-NEs can be better separated using ClusterEmb or DistPrototype features. However, the BinarizedEmb features perform worse than the direct use of word embedding features. The reason might be inferred from the third column of Table 5 . As demonstrated in Wang and Manning (2013) , linear models are more effective in high-dimensional and discrete feature space. The dimension of the BinarizedEmb features remains small (500), merely twice that of DenseEmb. By contrast, the feature dimensions are much higher for ClusterEmb and DistPrototype, leading to better linear separability; thus they can be better utilized by linear models. 
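The intuition that a higher-dimensional discrete recoding is easier for a linear model can be shown with a toy perceptron probe. This is not the paper's SVM experiment: the data are four synthetic XOR-labeled points, which no linear model separates in the original 2-d "dense" space but which become trivially separable after a one-hot (indicator) recoding.

```python
def perceptron_accuracy(X, y, epochs=20):
    """Train a perceptron; training accuracy is a rough proxy for
    the linear separability of a feature representation."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            act = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if (1 if act > 0 else -1) != yi:
                w = [wj + yi * xj for wj, xj in zip(w, xi)]
                b += yi
    hits = 0
    for xi, yi in zip(X, y):
        act = sum(wj * xj for wj, xj in zip(w, xi)) + b
        hits += (1 if act > 0 else -1) == yi
    return hits / len(y)

# XOR-labeled points: not linearly separable as 2-d "dense" features...
X_dense = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]]
y = [-1, -1, 1, 1]
# ...but trivially separable after a high-dimensional one-hot recoding.
X_onehot = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
print(perceptron_accuracy(X_dense, y), perceptron_accuracy(X_onehot, y))
```

The dense representation caps below perfect training accuracy, while the one-hot recoding reaches 1.0, mirroring the separability gap reported in Table 5.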
We notice that the DistPrototype features perform significantly worse than ClusterEmb in NE identification. As described in Section 3.4, in previous experiments, we automatically extracted prototypes for each label, and propagated the information via distributional similarities. Intuitively, the prototypes we used should be more effective in determining fine-grained NE types than in identifying whether a word is an NE. To verify this, we extract new prototypes considering only two labels, namely, NE and non-NE, using the same metric as in Section 3.4. As shown in the last row of Table 5 , higher performance is achieved.", "cite_spans": [ { "start": 275, "end": 293, "text": "(Fan et al., 2008)", "ref_id": "BIBREF7" }, { "start": 712, "end": 735, "text": "Wang and Manning (2013)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 423, "end": 430, "text": "Table 5", "ref_id": "TABREF8" }, { "start": 683, "end": 690, "text": "Table 5", "ref_id": "TABREF8" }, { "start": 1670, "end": 1677, "text": "Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Linear Separability", "sec_num": "5.3.2" }, { "text": "Semi-supervised learning with generalized word representations is a simple and general way of improving supervised NLP systems. One common approach for inducing generalized word representations is to use clustering (e.g., Brown clustering) (Miller et al., 2004; Liang, 2005; Koo et al., 2008; Huang and Yates, 2009) .", "cite_spans": [ { "start": 240, "end": 261, "text": "(Miller et al., 2004;", "ref_id": "BIBREF20" }, { "start": 262, "end": 274, "text": "Liang, 2005;", "ref_id": "BIBREF16" }, { "start": 275, "end": 292, "text": "Koo et al., 2008;", "ref_id": "BIBREF14" }, { "start": 293, "end": 315, "text": "Huang and Yates, 2009)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Studies", "sec_num": "6" }, { "text": "Aside from word clustering, word embeddings have been widely studied. Bengio et al. 
(2003) propose a feed-forward neural network based language model (NNLM), which uses an embedding layer to map each word to a dense, continuous-valued, and low-dimensional vector (parameters), and then uses these vectors as the input to predict the probability distribution of the next word. The NNLM can be seen as a joint learning framework for language modeling and word representations.", "cite_spans": [ { "start": 70, "end": 90, "text": "Bengio et al. (2003)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Studies", "sec_num": "6" }, { "text": "Alternative models for learning word embeddings are mostly inspired by the feed-forward NNLM, including the Hierarchical Log-Bilinear Model (Mnih and Hinton, 2008) , the recurrent neural network language model (Mikolov, 2012) , the C&W model (Collobert et al., 2011) , and the log-linear models such as CBOW and Skip-gram (Mikolov et al., 2013a; Mikolov et al., 2013b) .", "cite_spans": [ { "start": 140, "end": 163, "text": "(Mnih and Hinton, 2008)", "ref_id": "BIBREF21" }, { "start": 210, "end": 225, "text": "(Mikolov, 2012)", "ref_id": "BIBREF19" }, { "start": 242, "end": 266, "text": "(Collobert et al., 2011)", "ref_id": "BIBREF5" }, { "start": 331, "end": 354, "text": "(Mikolov et al., 2013a;", "ref_id": "BIBREF17" }, { "start": 355, "end": 377, "text": "Mikolov et al., 2013b)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Studies", "sec_num": "6" }, { "text": "Aside from the NNLMs, word embeddings can also be induced using spectral methods, such as latent semantic analysis and canonical correlation analysis (Dhillon et al., 2011) . 
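Returning to the NNLM described above, its forward pass can be sketched in a few lines: embedding lookup, concatenation of the context vectors, a tanh hidden layer, and a softmax over the vocabulary. All sizes and weights here are toy values for illustration, not the configuration of Bengio et al. (2003).

```python
import math
import random

random.seed(0)
V, d, ctx, h = 10, 4, 2, 8  # vocab size, embedding dim, context length, hidden size

def mat(r, c):
    return [[random.gauss(0.0, 0.1) for _ in range(c)] for _ in range(r)]

C = mat(V, d)        # the embedding (lookup) table, learned jointly
H = mat(h, ctx * d)  # input-to-hidden weights
U = mat(V, h)        # hidden-to-output weights

def nnlm_forward(context_ids):
    """P(next word | context): lookup -> concatenate -> tanh -> softmax."""
    x = [v for i in context_ids for v in C[i]]  # embedding layer
    a = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in H]
    logits = [sum(w * ai for w, ai in zip(row, a)) for row in U]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

p = nnlm_forward([3, 7])
print(len(p), round(sum(p), 6))  # 10 1.0
```

Training such a model by maximizing the likelihood of the next word updates both the prediction weights and the embedding table C, which is why the NNLM jointly learns a language model and word representations.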
The spectral methods are generally faster but much more memory-consuming than NNLMs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Studies", "sec_num": "6" }, { "text": "There has been plenty of work that exploits word embeddings as features for semi-supervised learning, most of which takes the continuous features directly into linear models (Turian et al., 2010; Guo et al., 2014) . propose compound k-means cluster features based on word embeddings. They show that the high-dimensional discrete cluster features can be better utilized by linear models such as CRF. Wu et al. (2013) further apply the cluster features to transition-based dependency parsing.", "cite_spans": [ { "start": 173, "end": 194, "text": "(Turian et al., 2010;", "ref_id": "BIBREF26" }, { "start": 195, "end": 212, "text": "Guo et al., 2014)", "ref_id": "BIBREF10" }, { "start": 398, "end": 414, "text": "Wu et al. (2013)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Related Studies", "sec_num": "6" }, { "text": "This paper revisits the problem of semi-supervised learning with word embeddings. We present three different approaches for a careful comparison and analysis. Using any of the three embedding features, we obtain higher performance than the direct use of continuous embeddings, among which the distributional prototype features perform the best, showing the great potential of word embeddings. Moreover, the combination of the proposed embedding features provides significant additive improvements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "We give a detailed analysis of the experimental results. 
Analysis of rare words and linear separability provides convincing explanations for the performance of the embedding features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "For future work, we are exploring a novel and theoretically sounder approach that introduces an embedding kernel into linear models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "Generalized linear models refer to models that describe the data as a combination of linear basis functions, either directly in the input variable space or through some transformation of the probability distributions (e.g., log-linear models).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The term similar should be interpreted depending on the specific task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "code.google.com/p/sofia-ml", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "nlp.stanford.edu/software/tokenizer.shtml. 6 code.google.com/p/word2vec/. 7 More details are analyzed in (Goldberg and Levy, 2014). 8 We use BIO encoding here in order to compare with most of the reported benchmarks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Statistically significant with p-value < 0.001 by two-tailed t-test.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We are grateful to Mo Yu for the fruitful discussion on the implementation of the cluster-based embedding features. We also thank Ruiji Fu, Meishan Zhang, Sendong Zhao and the anonymous reviewers for their insightful comments and suggestions. 
This work was supported by the National Key Basic Research Program of China via grant 2014CB340503 and the National Natural Science Foundation of China (NSFC) via grant 61133012 and 61370164.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A highperformance semi-supervised learning method for text chunking", "authors": [ { "first": "Rie", "middle": [], "last": "Kubota", "suffix": "" }, { "first": "Ando", "middle": [], "last": "", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd annual meeting on association for computational linguistics", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rie Kubota Ando and Tong Zhang. 2005. A high- performance semi-supervised learning method for text chunking. In Proceedings of the 43rd annual meeting on association for computational linguis- tics, pages 1-9. Association for Computational Lin- guistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A neural probabilistic language model", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "R", "middle": [ "E" ], "last": "Jean Ducharme", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Janvin", "suffix": "" } ], "year": 2003, "venue": "The Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "1137--1155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, R. E. Jean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. The Journal of Machine Learning Research, 3(Feb):1137-1155.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Representation learning: A review and new perspectives. 
Pattern Analysis and Machine Intelligence", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" } ], "year": 2013, "venue": "IEEE Transactions on", "volume": "35", "issue": "8", "pages": "1798--1828", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2013. Representation learning: A review and new perspectives. Pattern Analysis and Machine Intelli- gence, IEEE Transactions on, 35(8):1798-1828.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Normalized (pointwise) mutual information in collocation extraction", "authors": [ { "first": "Gerlof", "middle": [], "last": "Bouma", "suffix": "" } ], "year": 2009, "venue": "Proceedings of GSCL", "volume": "", "issue": "", "pages": "31--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gerlof Bouma. 2009. Normalized (pointwise) mutual information in collocation extraction. Proceedings of GSCL, pages 31-40.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Class-based n-gram models of natural language", "authors": [ { "first": "", "middle": [], "last": "Peter F Brown", "suffix": "" }, { "first": "V", "middle": [], "last": "Peter", "suffix": "" }, { "first": "", "middle": [], "last": "Desouza", "suffix": "" }, { "first": "L", "middle": [], "last": "Robert", "suffix": "" }, { "first": "Vincent J Della", "middle": [], "last": "Mercer", "suffix": "" }, { "first": "Jenifer C", "middle": [], "last": "Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Lai", "suffix": "" } ], "year": 1992, "venue": "Computational linguistics", "volume": "18", "issue": "4", "pages": "467--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai. 1992. Class-based n-gram models of natural language. 
Computational linguistics, 18(4):467-479.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L", "middle": [ "E" ], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "The Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L. E. On Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Re- search, 12:2493-2537.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Multi-view learning of word embeddings via cca", "authors": [ { "first": "S", "middle": [], "last": "Paramveer", "suffix": "" }, { "first": "Dean", "middle": [ "P" ], "last": "Dhillon", "suffix": "" }, { "first": "Lyle", "middle": [ "H" ], "last": "Foster", "suffix": "" }, { "first": "", "middle": [], "last": "Ungar", "suffix": "" } ], "year": 2011, "venue": "NIPS", "volume": "24", "issue": "", "pages": "199--207", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paramveer S. Dhillon, Dean P. Foster, and Lyle H. Un- gar. 2011. Multi-view learning of word embeddings via cca. 
In NIPS, volume 24 of NIPS, pages 199- 207.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Liblinear: A library for large linear classification", "authors": [ { "first": "Kai-Wei", "middle": [], "last": "Rong-En Fan", "suffix": "" }, { "first": "Cho-Jui", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Xiang-Rui", "middle": [], "last": "Hsieh", "suffix": "" }, { "first": "Chih-Jen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2008, "venue": "The Journal of Machine Learning Research", "volume": "9", "issue": "", "pages": "1871--1874", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang- Rui Wang, and Chih-Jen Lin. 2008. Liblinear: A library for large linear classification. The Journal of Machine Learning Research, 9:1871-1874.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Incorporating non-local information into information extraction systems by gibbs sampling", "authors": [ { "first": "Jenny", "middle": [ "Rose" ], "last": "Finkel", "suffix": "" }, { "first": "Trond", "middle": [], "last": "Grenager", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "363--370", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local informa- tion into information extraction systems by gibbs sampling. In Proceedings of the 43rd Annual Meet- ing on Association for Computational Linguistics, pages 363-370. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "word2vec explained: deriving mikolov et al.'s negative-sampling word-embedding method", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Goldberg and Omer Levy. 2014. word2vec ex- plained: deriving mikolov et al.'s negative-sampling word-embedding method. CoRR, abs/1402.3722.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning sense-specific word embeddings by exploiting bilingual resources", "authors": [ { "first": "Jiang", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "497--507", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiang Guo, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Learning sense-specific word embed- dings by exploiting bilingual resources. In Pro- ceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Techni- cal Papers, pages 497-507, Dublin, Ireland, August. 
Dublin City University and Association for Compu- tational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Prototype-driven learning for sequence models", "authors": [ { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "320--327", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aria Haghighi and Dan Klein. 2006. Prototype-driven learning for sequence models. In Proceedings of the main conference on Human Language Technol- ogy Conference of the North American Chapter of the Association of Computational Linguistics, pages 320-327. Association for Computational Linguis- tics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Distributional representations for handling sparsity in supervised sequence-labeling", "authors": [ { "first": "Fei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Yates", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", "volume": "1", "issue": "", "pages": "495--503", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Huang and Alexander Yates. 2009. Distribu- tional representations for handling sparsity in super- vised sequence-labeling. 
In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 495-503.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Improving word representations via global context and multiple word prototypes", "authors": [ { "first": "Eric", "middle": [ "H" ], "last": "Huang", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2012, "venue": "Proc. of the Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "873--882", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proc. of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 873-882, Jeju Island, Korea. ACL.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Simple semi-supervised dependency parsing", "authors": [ { "first": "Terry", "middle": [], "last": "Koo", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2008, "venue": "Proc. of ACL-08: HLT", "volume": "", "issue": "", "pages": "595--603", "other_ids": {}, "num": null, "urls": [], "raw_text": "Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple semi-supervised dependency parsing. In Kathleen McKeown, Johanna D. Moore, Simone Teufel, James Allan, and Sadaoki Furui, editors, Proc. of ACL-08: HLT, pages 595-603, Columbus, Ohio. ACL.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "An effective two-stage model for exploiting nonlocal dependencies in named entity recognition", "authors": [ { "first": "Vijay", "middle": [], "last": "Krishnan", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1121--1128", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vijay Krishnan and Christopher D. Manning. 2006. An effective two-stage model for exploiting nonlocal dependencies in named entity recognition. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 1121-1128. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Semi-supervised learning for natural language", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Percy Liang. 2005. Semi-supervised learning for natural language.
Master's thesis, Massachusetts Institute of Technology.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proc. of Workshop at ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In Proc. of Workshop at ICLR, Arizona.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proc. of the NIPS", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proc. of the NIPS, pages 3111-3119, Nevada.
MIT Press.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Statistical Language Models Based on Neural Networks", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov. 2012. Statistical Language Models Based on Neural Networks. Ph.D. thesis, Brno University of Technology.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Name tagging with word clusters and discriminative training", "authors": [ { "first": "Scott", "middle": [], "last": "Miller", "suffix": "" }, { "first": "Jethran", "middle": [], "last": "Guinness", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Zamanian", "suffix": "" } ], "year": 2004, "venue": "HLT-NAACL", "volume": "4", "issue": "", "pages": "337--342", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scott Miller, Jethran Guinness, and Alex Zamanian. 2004. Name tagging with word clusters and discriminative training. In HLT-NAACL, volume 4, pages 337-342.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A scalable hierarchical distributed language model", "authors": [ { "first": "Andriy", "middle": [], "last": "Mnih", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2008, "venue": "Proc. of the NIPS", "volume": "", "issue": "", "pages": "1081--1088", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andriy Mnih and Geoffrey E. Hinton. 2008. A scalable hierarchical distributed language model. In Proc. of the NIPS, pages 1081-1088, Vancouver.
MIT Press.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Improved part-of-speech tagging for online conversational text with word clusters", "authors": [ { "first": "Olutobi", "middle": [], "last": "Owoputi", "suffix": "" }, { "first": "Brendan", "middle": [], "last": "O'Connor", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2013, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "380--390", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. In Proceedings of NAACL-HLT, pages 380-390.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Design challenges and misconceptions in named entity recognition", "authors": [ { "first": "Lev", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning, CoNLL '09", "volume": "", "issue": "", "pages": "147--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning, CoNLL '09, pages 147-155, Stroudsburg, PA, USA.
Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Combined regression and ranking", "authors": [ { "first": "D", "middle": [], "last": "Sculley", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "979--988", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Sculley. 2010. Combined regression and ranking. In Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 979-988. ACM.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition", "authors": [ { "first": "Erik F Tjong Kim", "middle": [], "last": "Sang", "suffix": "" }, { "first": "Fien", "middle": [], "last": "De Meulder", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003", "volume": "4", "issue": "", "pages": "142--147", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4, pages 142-147. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Word representations: a simple and general method for semi-supervised learning", "authors": [ { "first": "Joseph", "middle": [], "last": "Turian", "suffix": "" }, { "first": "Lev", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2010, "venue": "Proc. of the Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "384--394", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Jan Hajic, Sandra Carberry, and Stephen Clark, editors, Proc. of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 384-394, Uppsala, Sweden. ACL.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Effect of non-linear deep architecture in sequence labeling", "authors": [ { "first": "Mengqiu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2013, "venue": "Proc. of the Sixth International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "1285--1291", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mengqiu Wang and Christopher D. Manning. 2013. Effect of non-linear deep architecture in sequence labeling. In Proc. of the Sixth International Joint Conference on Natural Language Processing, pages 1285-1291, Nagoya, Japan.
Asian Federation of Natural Language Processing.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Generalization of words for Chinese dependency parsing", "authors": [ { "first": "Xianchao", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Zhanyi", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Dianhai", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xianchao Wu, Jie Zhou, Yu Sun, Zhanyi Liu, Dianhai Yu, Hua Wu, and Haifeng Wang. 2013. Generalization of words for Chinese dependency parsing. IWPT-2013, page 73.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Compound embedding features for semi-supervised learning", "authors": [ { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Tiejun", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Daxiang", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Dianhai", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2013, "venue": "Proc. of the NAACL-HLT", "volume": "", "issue": "", "pages": "563--568", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mo Yu, Tiejun Zhao, Daxiang Dong, Hao Tian, and Dianhai Yu. 2013. Compound embedding features for semi-supervised learning. In Proc. of the NAACL-HLT, pages 563-568, Atlanta. NAACL.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "An example of distributional prototype features for NER.
similar to that prototype are pushed towards that label.", "uris": null, "type_str": "figure" }, "FIGREF2": { "num": null, "text": "Sparsity (with confidence interval) of the binarized embedding vector w.r.t. word frequency in the unlabeled data.", "uris": null, "type_str": "figure" }, "FIGREF3": { "num": null, "text": "(b) further supports our analysis.", "uris": null, "type_str": "figure" }, "FIGREF4": { "num": null, "text": "The number of per-token errors w.r.t. word frequency in the unlabeled data. (a) For rare words (frequency \u2264 2k). (b) For frequent words (frequency \u2265 4k).", "uris": null, "type_str": "figure" }, "FIGREF5": { "num": null, "text": "", "uris": null, "type_str": "figure" }, "TABREF2": { "html": null, "num": null, "content": "", "type_str": "table", "text": "Prototypes extracted from the CoNLL-2003 NER training data using NPMI." }, "TABREF3": { "html": null, "num": null, "content": "
", "type_str": "table", "text": "Features used in the NER system. t is the POS tag. chk is the chunking tag. Prefix and Suffix are the first and last l characters of a word. Type indicates if the word is all-capitalized, is-capitalized, all-digits, etc." }, "TABREF5": { "html": null, "num": null, "content": "
The performance of semi-supervised NER on the CoNLL-2003 test data, using various embedding features. \u2020 DenseEmb refers to the method used by
", "type_str": "table", "text": "" }, "TABREF8": { "html": null, "num": null, "content": "", "type_str": "table", "text": "Performance of the NE/non-NE classification on the CoNLL-2003 development dataset using different embedding features." } } } }