{
"paper_id": "L16-1046",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:08:48.340695Z"
},
"title": "Word Embeddings Evaluation and Combination",
"authors": [
{
"first": "Sahar",
"middle": [],
"last": "Ghannay",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "LIUM -University of Le Mans",
"location": {
"postCode": "72000",
"settlement": "Le Mans",
"country": "France"
}
},
"email": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Favre",
"suffix": "",
"affiliation": {
"laboratory": "LIF UMR 7279",
"institution": "CNRS",
"location": {
"postCode": "13000",
"settlement": "Marseille",
"country": "France"
}
},
"email": ""
},
{
"first": "Yannick",
"middle": [],
"last": "Est\u00e8ve",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "LIUM -University of Le Mans",
"location": {
"postCode": "72000",
"settlement": "Le Mans",
"country": "France"
}
},
"email": ""
},
{
"first": "Nathalie",
"middle": [],
"last": "Camelin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "LIUM -University of Le Mans",
"location": {
"postCode": "72000",
"settlement": "Le Mans",
"country": "France"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Word embeddings have been successfully used in several natural language processing tasks (NLP) and speech processing. Different approaches have been introduced to calculate word embeddings through neural networks. In the literature, many studies focused on word embedding evaluation, but for our knowledge, there are still some gaps. This paper presents a study focusing on a rigorous comparison of the performances of different kinds of word embeddings. These performances are evaluated on different NLP and linguistic tasks, while all the word embeddings are estimated on the same training data using the same vocabulary, the same number of dimensions, and other similar characteristics. The evaluation results reported in this paper match those in the literature, since they point out that the improvements achieved by a word embedding in one task are not consistently observed across all tasks. For that reason, this paper investigates and evaluates approaches to combine word embeddings in order to take advantage of their complementarity, and to look for the effective word embeddings that can achieve good performances on all tasks. As a conclusion, this paper provides new perceptions of intrinsic qualities of the famous word embedding families, which can be different from the ones provided by works previously published in the scientific literature.",
"pdf_parse": {
"paper_id": "L16-1046",
"_pdf_hash": "",
"abstract": [
{
"text": "Word embeddings have been successfully used in several natural language processing tasks (NLP) and speech processing. Different approaches have been introduced to calculate word embeddings through neural networks. In the literature, many studies focused on word embedding evaluation, but for our knowledge, there are still some gaps. This paper presents a study focusing on a rigorous comparison of the performances of different kinds of word embeddings. These performances are evaluated on different NLP and linguistic tasks, while all the word embeddings are estimated on the same training data using the same vocabulary, the same number of dimensions, and other similar characteristics. The evaluation results reported in this paper match those in the literature, since they point out that the improvements achieved by a word embedding in one task are not consistently observed across all tasks. For that reason, this paper investigates and evaluates approaches to combine word embeddings in order to take advantage of their complementarity, and to look for the effective word embeddings that can achieve good performances on all tasks. As a conclusion, this paper provides new perceptions of intrinsic qualities of the famous word embedding families, which can be different from the ones provided by works previously published in the scientific literature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word embeddings are projections in a continuous space of words supposed to preserve the semantic and syntactic similarities between them. They have been shown to be a great asset for several Natural Language Processing (NLP) tasks, like part-of-speech tagging, chunking, named entity recognition, semantic role labeling, syntactic parsing (Bansal et al., 2014a; Turian et al., 2010; Collobert et al., 2011) , and also for speech processing: for instance, word embeddings were recently involved in spoken language understanding , in detection of errors in automatic transcriptions, and in calibration of confidence measures provided by an automatic speech recognition system (Ghannay et al., 2015) . These word representations were introduced through the construction of neural language models (Bengio et al., 2003; Schwenk, 2013) . Different approaches have been proposed to compute them from large corpora. They include neural networks (Collobert et al., 2011; Mikolov et al., 2013a; Pennington et al., 2014) , dimensionality reduction on the word co-occurrence matrix (Lebret and Collobert, 2013) , and explicit representation in terms of the context in which words appear (Levy and Goldberg, 2014) . One particular hypothesis behind word embeddings is that they are generic representations that shall suit most applications. Many studies have focused on the evaluation of word embeddings intrinsic quality, as well as their impact when they are used as input of systems. Turian et al. (Turian et al., 2010) evaluate different types of word representations and their concatenation on the chunking and named entity recognition tasks.",
"cite_spans": [
{
"start": 339,
"end": 361,
"text": "(Bansal et al., 2014a;",
"ref_id": null
},
{
"start": 362,
"end": 382,
"text": "Turian et al., 2010;",
"ref_id": "BIBREF22"
},
{
"start": 383,
"end": 406,
"text": "Collobert et al., 2011)",
"ref_id": "BIBREF2"
},
{
"start": 674,
"end": 696,
"text": "(Ghannay et al., 2015)",
"ref_id": "BIBREF5"
},
{
"start": 793,
"end": 814,
"text": "(Bengio et al., 2003;",
"ref_id": "BIBREF1"
},
{
"start": 815,
"end": 829,
"text": "Schwenk, 2013)",
"ref_id": "BIBREF19"
},
{
"start": 937,
"end": 961,
"text": "(Collobert et al., 2011;",
"ref_id": "BIBREF2"
},
{
"start": 962,
"end": 984,
"text": "Mikolov et al., 2013a;",
"ref_id": "BIBREF15"
},
{
"start": 985,
"end": 1009,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF17"
},
{
"start": 1070,
"end": 1098,
"text": "(Lebret and Collobert, 2013)",
"ref_id": "BIBREF8"
},
{
"start": 1175,
"end": 1200,
"text": "(Levy and Goldberg, 2014)",
"ref_id": "BIBREF9"
},
{
"start": 1488,
"end": 1509,
"text": "(Turian et al., 2010)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "This work was partially funded by the European Commission through the EUMSSI project, under the contract number 611057, in the framework of the FP7-ICT-2013-10 call, by the French National Research Agency (ANR) through the VERA project, under the contract number ANR-12-BS02-006-01, and by the R\u00e9gion Pays de la Loire.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The evaluation can be performed as well on the word similarity and analogical reasoning tasks, like in (Levy and Goldberg, 2014; Ji et al., 2015; Gao et al., 2014; Levy et al., 2015) . Recently, the study proposed by (Levy et al., 2015) , focuses on the evaluation of neural-network-inspired word embedding models (Skip-gram and GloVe) and traditional counted-based distributional models -pointwise mutual information (PMI) and Singular Value Decomposition (SVD) models-. This study reveals that the hyperparameter optimizations and certain system design choices have a considerable impact on the performance of word embeddings, rather than the embedding algorithms themselves. Moreover, it shows that, by adapting and transferring the hyperparameters into the traditional distributional models, they achieve similar gains as the neural-network word embeddings. In this paper, we present a rigorous comparison of the performances of different kinds of word embeddings coming from different available implementations: word2vec (Mikolov et al., 2013a) , GloVe (Pennington et al., 2014) , CSLM (Schwenk, 2007; Schwenk, 2013) and word2vecf on dependency trees (Levy and Goldberg, 2014) . Some of them were never compared; for instance, word2vec embeddings (Mikolov et al., 2013a) have been never compared to the CSLM toolkit, which is able to build deep feedforward neural network language models on large datasets because of an efficient code optimized for GPUs. Moreover, dependency-based word embeddings (Levy and Goldberg, 2014) have been never compared to CSLM, GloVe or Skip-gram (Mikolov et al., 2013a) embeddings. In order to measure the supposed semantic and syntactic information captured by word embeddings, we evaluate their performance for different NLP tasks as well as on linguistic tasks. In some state of the art studies (Mikolov et al., 2013a; Mikolov et al., 2013b; Bansal et al., 2014b) , the evaluated word embeddings were estimated on different training data, or with different dimensionality. In this study all the word embeddings are estimated on the same training data, using the same vocabulary, the same dimensionality, and the same window size. In addition to these word embeddings evaluation, we are interested on their combination through concatenation, Principal Component Analysis and ordinary autoencoder in order to look for an effective embedding that can achieve good performance on all tasks. The paper is organized along the following lines: section 2. presents the different types of word embeddings evaluated in this study. Section 3. describes the benchmark tasks. The experimental setup and results are described in section 4., and the conclusion in Section 5..",
"cite_spans": [
{
"start": 103,
"end": 128,
"text": "(Levy and Goldberg, 2014;",
"ref_id": "BIBREF9"
},
{
"start": 129,
"end": 145,
"text": "Ji et al., 2015;",
"ref_id": "BIBREF7"
},
{
"start": 146,
"end": 163,
"text": "Gao et al., 2014;",
"ref_id": "BIBREF4"
},
{
"start": 164,
"end": 182,
"text": "Levy et al., 2015)",
"ref_id": "BIBREF10"
},
{
"start": 217,
"end": 236,
"text": "(Levy et al., 2015)",
"ref_id": "BIBREF10"
},
{
"start": 1026,
"end": 1049,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF15"
},
{
"start": 1058,
"end": 1083,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF17"
},
{
"start": 1091,
"end": 1106,
"text": "(Schwenk, 2007;",
"ref_id": "BIBREF18"
},
{
"start": 1107,
"end": 1121,
"text": "Schwenk, 2013)",
"ref_id": "BIBREF19"
},
{
"start": 1156,
"end": 1181,
"text": "(Levy and Goldberg, 2014)",
"ref_id": "BIBREF9"
},
{
"start": 1252,
"end": 1275,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF15"
},
{
"start": 1503,
"end": 1528,
"text": "(Levy and Goldberg, 2014)",
"ref_id": "BIBREF9"
},
{
"start": 1582,
"end": 1605,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF15"
},
{
"start": 1834,
"end": 1857,
"text": "(Mikolov et al., 2013a;",
"ref_id": "BIBREF15"
},
{
"start": 1858,
"end": 1880,
"text": "Mikolov et al., 2013b;",
"ref_id": "BIBREF16"
},
{
"start": 1881,
"end": 1902,
"text": "Bansal et al., 2014b)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Different approaches have been proposed to create word embeddings through neural networks. These approaches differ in the type of the architecture and the data used to train the model. In this study, we distinguish three categories of word embeddings: the ones estimated on unlabeled data based on simple or deep architectures, and others estimated from labeled data. These representations are detailed respectively in the next subsections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word embeddings",
"sec_num": "2."
},
{
"text": "This section presents three types of word embeddings coming from two available implementations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fast and simple estimation of word embeddings",
"sec_num": "2.1."
},
{
"text": "\u2022 CBOW: This architecture, proposed by (Mikolov et al., 2013a) , is similar to a feedforward Neural Network Language Model (NNLM) where the non-linear hidden layer is removed, and the contextual words are projected on the same position. It consists in predicting a word given its past and future context, by averaging the contextual word vectors and then running a log-linear classifier on the averaged vector to get the resultant word.",
"cite_spans": [
{
"start": 39,
"end": 62,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fast and simple estimation of word embeddings",
"sec_num": "2.1."
},
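As a concrete illustration of the CBOW prediction step described above (an annotation added to this parse, not part of the original paper), here is a minimal NumPy sketch in which the context vectors are averaged and a log-linear (softmax) classifier scores every vocabulary word; all array names and sizes are toy assumptions.

```python
import numpy as np

def cbow_score(context_ids, E, W):
    """Score every vocabulary word as the CBOW prediction for a context.

    E: (V, d) input embedding matrix; W: (V, d) output (softmax) weights.
    The hidden representation is simply the average of the context vectors."""
    h = E[context_ids].mean(axis=0)          # averaged context vector, shape (d,)
    logits = W @ h                           # log-linear scoring of all V words
    probs = np.exp(logits - logits.max())    # numerically stable softmax
    return probs / probs.sum()

# Toy usage: vocabulary of 10 words, 4-dimensional embeddings.
rng = np.random.default_rng(0)
E, W = rng.normal(size=(10, 4)), rng.normal(size=(10, 4))
print(cbow_score([1, 2, 4, 5], E, W).argmax())  # index of the most probable center word
```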
{
"text": "\u2022 Skip-gram: This second architecture from (Mikolov et al., 2013a ) is similar to CBOW, trained using the negative-sampling procedure. It consists in predicting the contextual words given the current word. Also, the context is not limited to the immediate context, and training instances can be created by skipping a constant number of words in its context, for instance,",
"cite_spans": [
{
"start": 43,
"end": 65,
"text": "(Mikolov et al., 2013a",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fast and simple estimation of word embeddings",
"sec_num": "2.1."
},
{
"text": "w i\u22123 ,w i\u22124 ,w i+3 ,w i+4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fast and simple estimation of word embeddings",
"sec_num": "2.1."
},
{
"text": ", hence the name skip-gram.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fast and simple estimation of word embeddings",
"sec_num": "2.1."
},
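Both architectures above are available in the word2vec toolkit; for readers who want to reproduce them, the following sketch uses gensim (a reimplementation, not the toolkit used in the paper) with placeholder sentences; only the sg/negative switches and the 200-dimension setting mirror the paper's setup.

```python
from gensim.models import Word2Vec

# Placeholder corpus; the paper trains on the Gigaword corpus instead.
sentences = [["the", "quick", "brown", "fox"],
             ["jumps", "over", "the", "lazy", "dog"]]

# sg=0 selects CBOW; sg=1 selects Skip-gram with negative sampling (negative=5).
cbow = Word2Vec(sentences, vector_size=200, window=5, sg=0, min_count=1)
sgns = Word2Vec(sentences, vector_size=200, window=5, sg=1, negative=5, min_count=1)

print(sgns.wv["fox"].shape)  # (200,)
```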
{
"text": "\u2022 GloVe: This approach is introduced by (Pennington et al., 2014), and relies on constructing a global co-occurrence matrix of words in the corpus. The embedding vectors are based on the analysis of cooccurrences of words in a window.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fast and simple estimation of word embeddings",
"sec_num": "2.1."
},
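To make the GloVe input concrete, the sketch below (an illustrative addition; the distance weighting and factorization steps of the actual GloVe algorithm are omitted) counts word co-occurrences within a symmetric window, which is the global matrix the method is built on.

```python
from collections import Counter

def cooccurrence_counts(tokens, window=5):
    """Count word co-occurrences within a symmetric window of the given size."""
    counts = Counter()
    for i, w in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if i != j:
                counts[(w, tokens[j])] += 1   # (word, context word) pair
    return counts

print(cooccurrence_counts("the cat sat on the mat".split(), window=2))
```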
{
"text": "CSLM word embeddings are computed from unlabeled data by the CSLM toolkit (Schwenk, 2013) , which estimates a feedforward neural language model. This approach projects the n\u22121 word indexes onto a continuous space and, from these word embeddings representations, computes the n-gram probabilities of each word in a short-list of the most frequent words as outputs of a the neural network. This architecture is more complex and more time-consuming to train than the three approaches presented above, but the computation time is reasonable due to the ability of the GPU implementations.",
"cite_spans": [
{
"start": 74,
"end": 89,
"text": "(Schwenk, 2013)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CSLM word embeddings",
"sec_num": "2.2."
},
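For illustration, a PyTorch sketch of such a feedforward 5-gram NNLM follows; the layer sizes (200-dimensional projections, two 1024-unit hidden layers, a 16,384-word short-list) follow the description given with Table 2, while the tanh activations and all variable names are assumptions, and this is not the CSLM toolkit's actual code.

```python
import torch
import torch.nn as nn

class FeedforwardNNLM(nn.Module):
    """Sketch of a CSLM-style 5-gram NNLM: the 4 history words are projected to
    200-dim embeddings (the word embeddings reused in this paper), followed by
    two 1024-unit hidden layers and a softmax over a short-list of frequent words."""
    def __init__(self, vocab=239000, dim=200, hidden=1024, shortlist=16384, n=5):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)            # projection layer
        self.mlp = nn.Sequential(
            nn.Linear((n - 1) * dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, shortlist),                # scores for short-list words
        )

    def forward(self, history):                          # history: (batch, 4) word indexes
        x = self.embed(history).flatten(start_dim=1)     # concatenated projections (800 units)
        return self.mlp(x).log_softmax(dim=-1)           # n-gram log-probabilities

model = FeedforwardNNLM()
print(model(torch.randint(0, 239000, (2, 4))).shape)     # torch.Size([2, 16384])
```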
{
"text": "2.3. Dependency-based word embeddings (Levy and Goldberg, 2014) proposed an extension of word2vec, called word2vecf and denoted w2vf-deps, which allows to replace linear bag-of-words contexts with arbitrary features. This model is a generalization of the skip-gram model with negative sampling introduced by (Mikolov et al., 2013a) , and it needs labeled data for training. As in (Levy and Goldberg, 2014) , we derive contexts from dependency trees: a word is used to predict its governor and dependents, jointly with their dependency labels. This effectively allows for variable-size.",
"cite_spans": [
{
"start": 38,
"end": 63,
"text": "(Levy and Goldberg, 2014)",
"ref_id": "BIBREF9"
},
{
"start": 308,
"end": 331,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF15"
},
{
"start": 380,
"end": 405,
"text": "(Levy and Goldberg, 2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CSLM word embeddings",
"sec_num": "2.2."
},
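The derivation of dependency contexts can be pictured with the following toy sketch (the parse triples and the word/label output format are illustrative, loosely following the examples in Levy and Goldberg (2014), not word2vecf's exact input format):

```python
# Hand-written (dependent, governor, label) triples standing in for a real parse.
parse = [("scientist", "discovers", "nsubj"), ("star", "discovers", "dobj")]

pairs = []
for dependent, governor, label in parse:
    pairs.append((governor, f"{dependent}/{label}"))      # governor predicts its dependent
    pairs.append((dependent, f"{governor}/{label}-1"))    # inverse relation for the dependent

for word, ctx in pairs:
    print(word, ctx)   # e.g. "discovers scientist/nsubj" and "scientist discovers/nsubj-1"
```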
{
"text": "In this sub-section, we briefly introduce the NLP tasks on which we evaluate the performance of the different word embeddings: part-of-speech tagging (POS), syntactic chunking (CHK), named entity recognition (NER), and mention detection (MENT). For each of these tasks, a label has to be predicted for each word in context. Therefore we model the problem as feeding a neural network with the concatenation of the five word embeddings of a 5-gram as input. This 5-gram is centered on the word for which the prediction has to be made by the neural network. If an embedding does not exist for one of the words, it is replaced with 0. Words outside sentence boundaries are replaced with 0. We test word embeddings in the context of the following tasks:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark tasks 3.1. NLP tasks",
"sec_num": "3."
},
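A minimal NumPy sketch of this input construction follows (an added illustration assuming 200-dimensional vectors stored in a plain dict; the real systems read pretrained embedding tables):

```python
import numpy as np

def five_gram_input(words, i, emb, dim=200):
    """Concatenate the 5 word embeddings of the 5-gram centered on words[i].

    Out-of-vocabulary words and positions beyond the sentence boundaries
    contribute zero vectors, as described above."""
    vecs = []
    for j in range(i - 2, i + 3):
        if 0 <= j < len(words) and words[j] in emb:
            vecs.append(emb[words[j]])
        else:
            vecs.append(np.zeros(dim))
    return np.concatenate(vecs)                 # shape (5 * dim,)

emb = {"cat": np.ones(200), "sat": np.full(200, 2.0)}
print(five_gram_input(["the", "cat", "sat"], 1, emb).shape)  # (1000,)
```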
{
"text": "\u2022 Part-Of-Speech Tagging (POS): categorizing words among 48 morpho-syntactic labels (noun, verb, adjective, etc.). The system is evaluated on the standard Penn Treebank benchmark train/dev/test split (Marcus et al., 1993) .",
"cite_spans": [
{
"start": 200,
"end": 221,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark tasks 3.1. NLP tasks",
"sec_num": "3."
},
{
"text": "\u2022 Chunking (CHK): segmenting sentences in protosyntactic constituents. There are 22 begin-insideoutside encoded word-level labels. The system is evaluated on the CoNLL 2000 benchmark (Tjong Kim Sang and Buchholz, 2000) .",
"cite_spans": [
{
"start": 194,
"end": 218,
"text": "Sang and Buchholz, 2000)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark tasks 3.1. NLP tasks",
"sec_num": "3."
},
{
"text": "\u2022 Named Entity Recognition (NER): recognizing named entities in the text, such as persons, locations and organizations. There are 21 begin-inside-outside encoded word-level labels. The system is evaluated on the CoNLL 2003 benchmark (Tjong Kim Sang and De Meulder, 2003) .",
"cite_spans": [
{
"start": 244,
"end": 270,
"text": "Sang and De Meulder, 2003)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark tasks 3.1. NLP tasks",
"sec_num": "3."
},
{
"text": "\u2022 Mention detection (MENT): recognizing mentions of entities for coreference resolution. There are 3 labels (begin, inside, outside). The task is performed on the Ontonotes corpus (Hovy et al., 2006) with the CoNLL 2012 split.",
"cite_spans": [
{
"start": 180,
"end": 199,
"text": "(Hovy et al., 2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark tasks 3.1. NLP tasks",
"sec_num": "3."
},
{
"text": "The description of the data split for each benchmark is summarized in table 1. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark tasks 3.1. NLP tasks",
"sec_num": "3."
},
{
"text": "In this study, we are interested as well on the analogical reasoning task for the purpose of testing the space substructures of the word embeddings. The tool provided by word2vec 1 and the Google analogy dataset (Mikolov et al., 2013a ) are used for this task. The evaluation set is composed of five types of semantic questions such as capital cities (Athens:Greece \u2192 Tehran:?) and family (boy:girl \u2192 brother:?), and nine types of syntactic questions such as adjective-to-adverb (amazing:amazingly \u2192 calm:?) and comparative (bad:worse \u2192 big:?). Overall, there are 8,869 semantic and 10,675 syntactic questions. A question is correctly answered if the proposed word is exactly the same as the correct one. The question is answered using Mikolov (Mikolov et al., 2013a) approach named 3CosAdd (addition and subtruction) in the literature. Finally, we want to evaluate the different word embeddings on a variety of word similarity tasks, based on corpora WordSim353 (Finkelstein et al., 2001) , rare words (RW) (Luong et al., 2013) and, MEN (Bruni et al., 2012) . These datasets contain word pairs with human similarity ratings. The evaluation of the word representations is performed by ranking the pairs according to their cosine similarities and measuring the Spearman's rank correlation coefficient with the human judgment.",
"cite_spans": [
{
"start": 212,
"end": 234,
"text": "(Mikolov et al., 2013a",
"ref_id": "BIBREF15"
},
{
"start": 744,
"end": 767,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF15"
},
{
"start": 963,
"end": 989,
"text": "(Finkelstein et al., 2001)",
"ref_id": "BIBREF3"
},
{
"start": 1008,
"end": 1028,
"text": "(Luong et al., 2013)",
"ref_id": "BIBREF11"
},
{
"start": 1034,
"end": 1058,
"text": "MEN (Bruni et al., 2012)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic tasks",
"sec_num": "3.2."
},
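The two evaluation protocols can be summarized in a short sketch (an added illustration with toy random vectors; three_cos_add implements the 3CosAdd rule and similarity_eval the cosine/Spearman procedure described above):

```python
import numpy as np
from scipy.stats import spearmanr

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def three_cos_add(a, a_star, b, emb):
    """Answer 'a is to a_star as b is to ?' with argmax_x cos(x, a_star - a + b)."""
    target = emb[a_star] - emb[a] + emb[b]
    candidates = {w: v for w, v in emb.items() if w not in (a, a_star, b)}
    return max(candidates, key=lambda w: cos(candidates[w], target))

def similarity_eval(pairs, human_scores, emb):
    """Spearman correlation between cosine similarities and human ratings."""
    sims = [cos(emb[w1], emb[w2]) for w1, w2 in pairs]
    return spearmanr(sims, human_scores).correlation

rng = np.random.default_rng(1)
emb = {w: rng.normal(size=50) for w in ["athens", "greece", "tehran", "iran", "paris"]}
print(three_cos_add("athens", "greece", "tehran", emb))  # ideally "iran" with real vectors
```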
{
"text": "The word embeddings described in section 2. are estimated on the annotated Gigaword corpus, which is composed of over 4 billion words. It contains dependency parses used for training w2vf-deps embeddings, and the unlabeled version is used to train the other embeddings. Note that words occurring less than 100 times have been discarded, resulting in a vocabulary size of 239K words. The parameter settings used in our experiments are summarized in Table2, their values have been selected based on previous studies (Levy and Goldberg, 2014; Ji et al., 2015; Gao et al., 2014; Levy et al., 2015) . ",
"cite_spans": [
{
"start": 514,
"end": 539,
"text": "(Levy and Goldberg, 2014;",
"ref_id": "BIBREF9"
},
{
"start": 540,
"end": 556,
"text": "Ji et al., 2015;",
"ref_id": "BIBREF7"
},
{
"start": 557,
"end": 574,
"text": "Gao et al., 2014;",
"ref_id": "BIBREF4"
},
{
"start": 575,
"end": 593,
"text": "Levy et al., 2015)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.1."
},
{
"text": "Figure 1: Architecture of the NN used for experiment on NLP tasks",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.1."
},
{
"text": "The first hidden layer has 300, 100 and 300 units for H1left, H1-current and H1-right respectively. The second hidden layer has 300 units. Activation functions are rectified linear units (relu) for the first layer and tanh for the second one. The hyper-parameters, learning rate and batch size, are tuned over the validation set available for each task. The CHK, NER and MENT tasks are evaluated by computing F1 scores over segments produced by our models. The POS task is evaluated by computing per-word accuracy. The conlleval script is used for evaluation (Mesnil, 2015) . Last, the significance of our results is measured using the 95% confidence interval. Experimental results are summarized in Table 3 . We observe that w2vf-deps embeddings reach the highest score for all tasks. This performance is related to the use of dependency based syntactic contexts, which capture different information more than the bag-of-word contexts. Nevertheless, the estimation of this embeddings require labeled data, which can be difficult to provide for resource-scarce languages which do not have dependency parsers.",
"cite_spans": [
{
"start": 559,
"end": 573,
"text": "(Mesnil, 2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 700,
"end": 707,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.1."
},
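For concreteness, a PyTorch sketch of this multi-stream classifier follows; the split of the concatenated 5-gram into left-context, current-word and right-context streams is an assumption based on Figure 1, and the label count (48, as for POS) and variable names are placeholders.

```python
import torch
import torch.nn as nn

class MultiStreamTagger(nn.Module):
    """Figure 1 sketch: three relu first-layer streams (300/100/300 units),
    a 300-unit tanh second layer, and an output layer over the task labels."""
    def __init__(self, dim=200, n_labels=48):
        super().__init__()
        self.left = nn.Sequential(nn.Linear(2 * dim, 300), nn.ReLU())    # H1-left
        self.curr = nn.Sequential(nn.Linear(dim, 100), nn.ReLU())        # H1-current
        self.right = nn.Sequential(nn.Linear(2 * dim, 300), nn.ReLU())   # H1-right
        self.h2 = nn.Sequential(nn.Linear(700, 300), nn.Tanh())          # H2
        self.out = nn.Linear(300, n_labels)

    def forward(self, x):                      # x: (batch, 5 * dim) concatenated 5-gram
        h1 = torch.cat([self.left(x[:, :400]),
                        self.curr(x[:, 400:600]),
                        self.right(x[:, 600:])], dim=1)
        return self.out(self.h2(h1))

print(MultiStreamTagger()(torch.zeros(2, 1000)).shape)   # torch.Size([2, 48])
```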
{
"text": "Considering the simple embeddings, we observe that Skipgrams performs significantly better than CBOW and GloVe on POS and MENT tasks. However, for the other tasks CBOW achieves the best results. Lastly, these embeddings outperforms CSLM for all tasks. Table 4 : % Accuracy of various word embeddings on the evaluation set of analogical reasoning tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 252,
"end": 259,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.1."
},
{
"text": "We observe in table 4 that the different word embeddings yield a large range of accuracy on this task. The word embeddings ranking obtained in the previous evaluation task is not preserved. Globally, GloVe achieves the best accuracy, followed by Skip-gram and CBOW embeddings. They achieve 65.5%, 62.3% and 57.2% of accuracy respectively. Thus, this result match those presented by (Pennington et al., 2014; Levy et al., 2015) . While w2vf-deps and CSLM have respectively 43.1% and 27.4% of accuracy. Table 5 : Performance of word embeddings on word similarity tasks. Table 5 summarizes the performance of word embeddings on similarity tasks. As we can see, the results are in favor of Skip-grams. In fact, it reaches the best results in two tasks, and based on confidence interval evaluation, it achieves nearly the same results as CBOW in WS353 task.",
"cite_spans": [
{
"start": 382,
"end": 407,
"text": "(Pennington et al., 2014;",
"ref_id": "BIBREF17"
},
{
"start": 408,
"end": 426,
"text": "Levy et al., 2015)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 501,
"end": 508,
"text": "Table 5",
"ref_id": null
},
{
"start": 568,
"end": 575,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.1."
},
{
"text": "The evaluation of the different word embeddings reported in section 4.2., shows that the best embbedings are w2vfdeps, Skip-gram and GloVe. Each of them is efficient on one task. However, building an effective word embedding remains an ultimate goal, which can be achieved by the combination of embeddings. Based on state of-the-art studies, the combination of different word embeddings takes advantage of their complementarity and yields an improvement on different tasks: chunking, and named entity recognition as in (Turian et al., 2010) . For instance, as shown above , the simple concatenation of Brown clusters and word embeddings resulted in an improvement on chunking and named entity recognition. Moreover, in (Ghannay et al., 2015) , we have investigated the use of different approaches to combine 100dimensional word embeddings: concatenation (Concat), PCA and auto-encoders (AutoE). In that work, we have shown that the combination with auto-encoders yields significant improvement on the ASR error detection task.",
"cite_spans": [
{
"start": 519,
"end": 540,
"text": "(Turian et al., 2010)",
"ref_id": "BIBREF22"
},
{
"start": 719,
"end": 741,
"text": "(Ghannay et al., 2015)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of combined word embeddings",
"sec_num": "4.3."
},
{
"text": "Here, we propose to combine the simple word embeddings (CBOW, Skip-gram and GloVe) and the ones achieving the best results reported in section 4.2. (w2vf-deps, Skip-gram and GloVe), using the same approaches as in (Ghannay et al., 2015) . The two combination sets are called Simple and Best respectively in the remainder of the paper. The combination approaches are briefly detailed as follow:",
"cite_spans": [
{
"start": 214,
"end": 236,
"text": "(Ghannay et al., 2015)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of combined word embeddings",
"sec_num": "4.3."
},
{
"text": "Concat: For the first approach, we simply use the concatenation of the three word embeddings types from each combination set. As a consequence, each word is represented by a 600-dimensional vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of combined word embeddings",
"sec_num": "4.3."
},
{
"text": "PCA: For the second approach, the PCA technique is applied to Concat embeddings. According to these embeddings, the matrix composed of all words is first mean centering using Z-scores. The new coordinate system is then obtained computing PCA using the correlation method. The data is then projected onto the new basis considering only the first 200 components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of combined word embeddings",
"sec_num": "4.3."
},
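Applying PCA to standardized (Z-scored) data is equivalent to the correlation-method PCA mentioned above; a scikit-learn sketch (an added illustration with a random stand-in for the Concat matrix) is:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Random stand-in for the concatenated (Concat) embeddings: N words x 600 dims.
concat = np.random.default_rng(2).normal(size=(1000, 600))

z = StandardScaler().fit_transform(concat)         # mean-center and scale to Z-scores
pca_emb = PCA(n_components=200).fit_transform(z)   # keep the first 200 components
print(pca_emb.shape)                               # (1000, 200)
```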
{
"text": "AutoE: Lastly, we investigate the use of ordinary autoencoder (Vincent et al., 2008) . This auto-encoder is composed of one hidden layer with 200 hidden units each. It takes as input the Concat embeddings and as output a vector of 600 nodes. For each word, the vector of numerical values produced by the hidden layer will be used as the combined word embedding.",
"cite_spans": [
{
"start": 62,
"end": 84,
"text": "(Vincent et al., 2008)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of combined word embeddings",
"sec_num": "4.3."
},
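A PyTorch sketch of this combiner follows (an added illustration: the tanh activation and the mean-squared-error reconstruction loss are assumptions, since the paper does not specify them):

```python
import torch
import torch.nn as nn

class AutoECombiner(nn.Module):
    """Ordinary autoencoder for combination: 600-dim Concat embedding in,
    one 200-unit hidden layer (the combined embedding), 600-dim output."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(600, 200), nn.Tanh())
        self.decoder = nn.Linear(200, 600)

    def forward(self, x):
        code = self.encoder(x)                   # the combined 200-dim embedding
        return self.decoder(code), code

model = AutoECombiner()
x = torch.randn(4, 600)
recon, code = model(x)
loss = nn.functional.mse_loss(recon, x)          # trained to reconstruct the input
print(code.shape)                                # torch.Size([4, 200])
```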
{
"text": "The performance of the combined word embeddings are compared to the individual embeddings that it contains. Furthermore, the autoencoder is tuned on the dev corpus of NER task. In the following sections, the improvements are indicated in bold, whereas, based on confidence interval evaluation the significant ones are underlined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of combined word embeddings",
"sec_num": "4.3."
},
{
"text": "As shown in table 6, the combination of word embeddings is helpful and yields significant improvement in CHK, NER and MENT tasks in almost all cases. For the POS task, the Best-Concat and Best-AutoE combined embeddings results are nearly the same as the best individual ones. Moreover, they achieve the best results on most NLP tasks. Table 6 : Performance of combined word embeddings on the NLP tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 335,
"end": 342,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "NLP tasks",
"sec_num": "4.3.1."
},
{
"text": "As shown in table 7, the significant improvements for this task are achieved by the combination of the best embeddings, through the concatenation and PCA. These embeddings achieve respectively 71.4% and 70.7% of overall accuracy. However, this is not the case for the Autoencoder combined word embeddings, which, achieve the lowest accuracy. Table 7 : % Accuracy of various combined word embeddings on the evaluation set of analogical reasoning tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 342,
"end": 349,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Linguistic task 1: Analogical reasoning task",
"sec_num": "4.3.2."
},
{
"text": "Results on this task, as shown in table 8, are again in favor of the combination of the best embeddings. The concatenation and PCA combined embeddings yield results as good as the individual embeddings they contain on both WS353 and MEN tasks. However, among the combinations of the simple ones, the concatenation and PCA combined embed-dings achieve an improvement respectively on WS353 and MEN tasks. As in the analogical reasoning task, autoencoders result in lower performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic task 2: Similarity task",
"sec_num": "4.3.3."
},
{
"text": "We have yet to find a definitive explanation to this behavior, but one conjecture is that the combination through autoencoders do not preserve the linear structure of the embeddings which allow translations to represent linguistic and semantic properties. Table 8 : Performance of combined word embeddings on word similarity tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 256,
"end": 263,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Linguistic task 2: Similarity task",
"sec_num": "4.3.3."
},
{
"text": "In this paper, we perform a systematic comparison of major word embeddings impact on typical NLP tasks, as well as semantic and syntactic similarity tasks. The evaluation results reported in this paper match those in the literature, since improvements achieved by one word embedding in a specific task are not observed in other tasks. We have confirmed that embeddings trained given dependency parses give the best performance on the NLP tasks. Thus, it is interesting to evaluate the performance of such embedding on the ASR error detection task. For the linguistic tasks, the results are in favor of the basic embeddings especially Skip-gram and GloVe. More, the basic embeddings outperform CSLM on all tasks. Furthermore, we have proven, that the combination of the embeddings yields significant improvement. This result corroborates a previous observation made in recent work on embeddings combination for ASR error detection (Ghannay et al., 2015) . In addition, results obtained by Best-PCA show that building an effective word embedding that achieve good performance in almost all tasks, can be reached by the combination of the efficient embeddings in each task through PCA. Finally, such combination performs poorly on intrinsic analogical reasoning tasks. This peculiar aspect, which seems to indicate that NLP systems do not make use of semantic regularities presented in embeddings, remains to be explored in future work.",
"cite_spans": [
{
"start": 930,
"end": 952,
"text": "(Ghannay et al., 2015)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
},
{
"text": "Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2014a. Tailoring continuous word representations for dependency parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 809-815. Association for Computational Linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "References",
"sec_num": "6."
},
{
"text": "https://code.google.com/p/word2vec/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Tailoring continuous word representations for dependency parsing",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL (2)",
"volume": "",
"issue": "",
"pages": "809--815",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2014b. Tailoring continuous word representations for depen- dency parsing. In ACL (2), pages 809-815.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Distributional semantics in technicolor",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R\u00e9jean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Janvin",
"suffix": ""
},
{
"first": ";",
"middle": [],
"last": "Jmlr",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Org",
"suffix": ""
},
{
"first": "",
"middle": [
"Elia"
],
"last": "March",
"suffix": ""
},
{
"first": "Gemma",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Boleda",
"suffix": ""
},
{
"first": "Nam Khanh",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tran",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "136--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. volume 3, pages 1137-1155. JMLR.org, March. Elia Bruni, Gemma Boleda, Marco Baroni, and Nam Khanh Tran. 2012. Distributional semantics in technicolor. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 136-145, Jeju Island, Korea, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Natural Language Processing",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural Language Processing (Almost) from Scratch. volume 12, pages 2493-2537. JMLR.org.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Placing search in context: The concept revisited",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Finkelstein",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Yossi",
"middle": [],
"last": "Matias",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Rivlin",
"suffix": ""
},
{
"first": "Zach",
"middle": [],
"last": "Solan",
"suffix": ""
},
{
"first": "Gadi",
"middle": [],
"last": "Wolfman",
"suffix": ""
},
{
"first": "Eytan",
"middle": [],
"last": "Ruppin",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 10th international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "406--414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th international conference on World Wide Web, pages 406-414. ACM.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Wordrep: A benchmark for research on learning word representations. CoRR",
"authors": [
{
"first": "Bin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jiang",
"middle": [],
"last": "Bian",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bin Gao, Jiang Bian, and Tie-Yan Liu. 2014. Wordrep: A benchmark for research on learning word representa- tions. CoRR, abs/1407.1640.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Combining continous word representation and prosodic features for asr error prediction",
"authors": [
{
"first": "Sahar",
"middle": [],
"last": "Ghannay",
"suffix": ""
},
{
"first": "Yannick",
"middle": [],
"last": "Est\u00e8ve",
"suffix": ""
},
{
"first": "Nathalie",
"middle": [],
"last": "Camelin",
"suffix": ""
},
{
"first": "Camille",
"middle": [],
"last": "Dutrey",
"suffix": ""
},
{
"first": "Fabian",
"middle": [],
"last": "Santiago",
"suffix": ""
},
{
"first": "Martine",
"middle": [],
"last": "Adda-Decker",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Statistical Language and Speech Processing",
"volume": "",
"issue": "",
"pages": "24--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sahar Ghannay, Yannick Est\u00e8ve, Nathalie Camelin, Camille Dutrey, Fabian Santiago, and Martine Adda- Decker. 2015. Combining continous word representa- tion and prosodic features for asr error prediction. In 3rd International Conference on Statistical Language and Speech Processing (SLSP 2015), Budapest (Hungary), November 24-26.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Ontonotes: the 90% solution",
"authors": [
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the human language technology conference of the NAACL, Companion Volume: Short Papers",
"volume": "",
"issue": "",
"pages": "57--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: the 90% solution. In Proceedings of the human language technology conference of the NAACL, Companion Vol- ume: Short Papers, pages 57-60. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Wordrank: Learning word embeddings via robust ranking. CoRR",
"authors": [
{
"first": "Shihao",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Hyokun",
"middle": [],
"last": "Yun",
"suffix": ""
},
{
"first": "Pinar",
"middle": [],
"last": "Yanardag",
"suffix": ""
},
{
"first": "Shin",
"middle": [],
"last": "Matsushima",
"suffix": ""
},
{
"first": "S",
"middle": [
"V N"
],
"last": "Vishwanathan",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shihao Ji, Hyokun Yun, Pinar Yanardag, Shin Mat- sushima, and S. V. N. Vishwanathan. 2015. Wordrank: Learning word embeddings via robust ranking. CoRR, abs/1506.02761.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Word emdeddings through hellinger pca",
"authors": [
{
"first": "R\u00e9mi",
"middle": [],
"last": "Lebret",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1312.5542"
]
},
"num": null,
"urls": [],
"raw_text": "R\u00e9mi Lebret and Ronan Collobert. 2013. Word emdeddings through hellinger pca. arXiv preprint arXiv:1312.5542.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Dependencybased word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "302--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Dependencybased word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguis- tics, volume 2, pages 302-308.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improving distributional similarity with lessons learned from word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "211--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Im- proving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211-225.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Better word representations with recursive neural networks for morphology",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Richard Socher, and Christopher D Manning. 2013. Better word representations with recur- sive neural networks for morphology. CoNLL-2013, 104.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Building a large annotated corpus of english: The penn treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313-330.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Using Recurrent Neural Networks for Slot Filling in Spoken Language Understanding. Audio, Speech, and Language Processing",
"authors": [
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Mesnil",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
},
{
"first": "Kaisheng",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Larry",
"middle": [],
"last": "Heck",
"suffix": ""
},
{
"first": "Gokhan",
"middle": [],
"last": "Tur",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE/ACM Transactions on",
"volume": "23",
"issue": "3",
"pages": "530--539",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gr\u00e9goire Mesnil, Yann Dauphin, Kaisheng Yao, Yoshua Bengio, Li Deng, Dilek Hakkani-Tur, Xiaodong He, Larry Heck, Gokhan Tur, Dong Yu, et al. 2015. Us- ing Recurrent Neural Networks for Slot Filling in Spoken Language Understanding. Audio, Speech, and Language Processing, IEEE/ACM Transactions on, 23(3):530-539.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Recurrent Neural Networks with Word Embeddings DeepLearning 0.1 documentation",
"authors": [
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Mesnil",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gr\u00e9goire Mesnil. 2015. Recurrent Neural Networks with Word Embeddings DeepLearning 0.1 documentation. http://www.deeplearning.net/tutorial/rnnslu.html#rnnslu.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Workshop at ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representa- tions in vector space. In Proceedings of Workshop at ICLR.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Ad- vances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Empiricial Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the Empiricial Methods in Natural Language Processing (EMNLP 2014), vol- ume 12.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Continuous space language models",
"authors": [
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
}
],
"year": 2007,
"venue": "Computer Speech & Language",
"volume": "21",
"issue": "3",
"pages": "492--518",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Holger Schwenk. 2007. Continuous space language mod- els. Computer Speech & Language, 21(3):492-518.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "CSLM-a modular open-source continuous space language modeling toolkit",
"authors": [
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
}
],
"year": 2013,
"venue": "INTER-SPEECH",
"volume": "",
"issue": "",
"pages": "1198--1202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Holger Schwenk. 2013. CSLM-a modular open-source continuous space language modeling toolkit. In INTER- SPEECH, pages 1198-1202.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Introduction to the conll-2000 shared task: Chunking",
"authors": [
{
"first": "Erik F Tjong Kim",
"middle": [],
"last": "Sang",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Buchholz",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 2nd workshop on Learning language in logic and the 4th conference on Computational natural language learning",
"volume": "7",
"issue": "",
"pages": "127--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F Tjong Kim Sang and Sabine Buchholz. 2000. In- troduction to the conll-2000 shared task: Chunking. In Proceedings of the 2nd workshop on Learning language in logic and the 4th conference on Computational natu- ral language learning-Volume 7, pages 127-132. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Introduction to the conll-2003 shared task: Languageindependent named entity recognition",
"authors": [
{
"first": "Erik F Tjong Kim",
"middle": [],
"last": "Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003",
"volume": "4",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F Tjong Kim Sang and Fien De Meulder. 2003. In- troduction to the conll-2003 shared task: Language- independent named entity recognition. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4, pages 142-147. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Word representations: a simple and general method for semi-supervised learning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th annual meeting of the association for computational linguistics",
"volume": "",
"issue": "",
"pages": "384--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th an- nual meeting of the association for computational lin- guistics, pages 384-394. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Extracting and composing robust features with denoising autoencoders",
"authors": [
{
"first": "P",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Larochelle",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "P",
"middle": [
"A"
],
"last": "Manzagol",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 25th international conference on Machine learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Vincent, H. Larochelle, Y. Bengio, and P.A. Manzagol. 2008. Extracting and composing robust features with de- noising autoencoders. In Proceedings of the 25th inter- national conference on Machine learning.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"content": "<table/>",
"type_str": "table",
"text": "Data split for each benchmark.",
"num": null,
"html": null
},
"TABREF2": {
"content": "<table><tr><td colspan=\"3\">16 hours and 30 minutes on a computer equipped with a</td></tr><tr><td colspan=\"3\">NVIDIA Tesla K40 GPU card, while 8h was necessary for</td></tr><tr><td colspan=\"3\">GloVe embeddings, 7h for Skip-gram, and about 3 hours</td></tr><tr><td colspan=\"2\">and 30 minutes for CBOW.</td><td/></tr><tr><td colspan=\"3\">4.2. Experimental results of individual word</td></tr><tr><td>embeddings</td><td/><td/></tr><tr><td>4.2.1. NLP tasks</td><td/><td/></tr><tr><td colspan=\"3\">In this section, we report the performance of the different</td></tr><tr><td colspan=\"3\">word embeddings on the four NLP tasks. A neural net-</td></tr><tr><td colspan=\"3\">work classifier based on a multi-stream strategy is used to</td></tr><tr><td colspan=\"3\">train the models. This architecture depicted in figure 1, was</td></tr><tr><td colspan=\"3\">introduced by (Ghannay et al., 2015) for the ASR error</td></tr><tr><td>detection task.</td><td/><td/></tr><tr><td/><td>output</td><td/></tr><tr><td/><td>H2</td><td/></tr><tr><td>H1-left</td><td>H1-current</td><td>H1-right</td></tr></table>",
"type_str": "table",
"text": "The 5-gram NNLM used to compute the CSLM word embeddings is composed of a projection layer of 800 units, corresponding to 200-dimensional word embeddings, two hidden layers of 1024 units each, and an output layer providing probabilities for a short-list composed of the 16,384 most frequent words. The CSLM training process needed",
"num": null,
"html": null
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"text": "Performance of word embeddings on the NLP tasks.",
"num": null,
"html": null
}
}
}
}