| { |
| "paper_id": "W17-0212", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T04:24:04.484103Z" |
| }, |
| "title": "Linear Ensembles of Word Embedding Models", |
| "authors": [ |
| { |
| "first": "Avo", |
| "middle": [], |
| "last": "Murom\u00e4gi", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Tartu Tartu", |
| "location": { |
| "country": "Estonia" |
| } |
| }, |
| "email": "avom@ut.ee" |
| }, |
| { |
| "first": "Kairit", |
| "middle": [], |
| "last": "Sirts", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Tartu Tartu", |
| "location": { |
| "country": "Estonia" |
| } |
| }, |
| "email": "kairit.sirts@ut.ee" |
| }, |
| { |
| "first": "Sven", |
| "middle": [], |
| "last": "Laur", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Tartu Tartu", |
| "location": { |
| "country": "Estonia" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper explores linear methods for combining several word embedding models into an ensemble. We construct the combined models using an iterative method based on either ordinary least squares regression or the solution to the orthogonal Procrustes problem. We evaluate the proposed approaches on Estonian-a morphologically complex language, for which the available corpora for training word embeddings are relatively small. We compare both combined models with each other and with the input word embedding models using synonym and analogy tests. The results show that while using the ordinary least squares regression performs poorly in our experiments, using orthogonal Procrustes to combine several word embedding models into an ensemble model leads to 7-10% relative improvements over the mean result of the initial models in synonym tests and 19-47% in analogy tests.", |
| "pdf_parse": { |
| "paper_id": "W17-0212", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper explores linear methods for combining several word embedding models into an ensemble. We construct the combined models using an iterative method based on either ordinary least squares regression or the solution to the orthogonal Procrustes problem. We evaluate the proposed approaches on Estonian-a morphologically complex language, for which the available corpora for training word embeddings are relatively small. We compare both combined models with each other and with the input word embedding models using synonym and analogy tests. The results show that while using the ordinary least squares regression performs poorly in our experiments, using orthogonal Procrustes to combine several word embedding models into an ensemble model leads to 7-10% relative improvements over the mean result of the initial models in synonym tests and 19-47% in analogy tests.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Word embeddings-dense low-dimensional vector representations of words-have become very popular in recent years in the field of natural language processing (NLP). Various methods have been proposed to train word embeddings from unannoted text corpora (Mikolov et al., 2013b; Pennington et al., 2014; Al-Rfou et al., 2013; Turian et al., 2010; Levy and Goldberg, 2014) , most well-known of them being perhaps Word2Vec (Mikolov et al., 2013b) . Embedding learning systems essentially train a model from a corpus of text and the word embeddings are the model parameters. These systems contain a randomized component and so the trained models are not directly comparable, even when they have been trained on exactly the same data. This random behaviour provides an opportunity to combine several embedding models into an ensemble which, hopefully, results in a better set of word embeddings. Although model ensembles have been often used in various NLP systems to improve the overall accuracy, the idea of combining several word embedding models into an ensemble has not been explored before.", |
| "cite_spans": [ |
| { |
| "start": 250, |
| "end": 273, |
| "text": "(Mikolov et al., 2013b;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 274, |
| "end": 298, |
| "text": "Pennington et al., 2014;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 299, |
| "end": 320, |
| "text": "Al-Rfou et al., 2013;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 321, |
| "end": 341, |
| "text": "Turian et al., 2010;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 342, |
| "end": 366, |
| "text": "Levy and Goldberg, 2014)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 416, |
| "end": 439, |
| "text": "(Mikolov et al., 2013b)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The main contribution of this paper is to show that word embeddings can benefit from ensemble learning, too. We study two methods for combining word embedding models into an ensemble. Both methods use a simple linear transformation. First of them is based on the standard ordinary least squares solution (OLS) for linear regression, the second uses the solution to the orthogonal Procrustes problem (OPP) (Sch\u00f6nemann, 1966) , which essentially also solves the OLS but adds the orthogonality constraint that keeps the angles between vectors and their distances unchanged.", |
| "cite_spans": [ |
| { |
| "start": 405, |
| "end": 423, |
| "text": "(Sch\u00f6nemann, 1966)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "There are several reasons why using an ensemble of word embedding models could be useful. First is the typical ensemble learning argumentthe ensemble simply is better because it enables to cancel out random noise of individual models and reinforce the useful patterns expressed by several input models. Secondly, word embedding systems require a lot of training data to learn reliable word representations. While there is a lot of textual data available for English, there are many smaller languages for which even obtaining enough plain unannotated text for training reliable embeddings is a problem. Thus, an ensemble approach that would enable to use the available data more effectively would be beneficial.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "According to our knowledge, this is the first work that attempts to leverage the data by combining several word embedding models into a new improved model. Linear methods for combin-ing two embedding models for some task-specific purpose have been used previously. Mikolov et al. (2013a) optimized the linear regression with stochastic gradient descent to learn linear transformations between the embeddings in two languages for machine translation. Mogadala and Rettinger (2016) used OPP to translate embeddings between two languages to perform cross-lingual document classification. Hamilton et al. (2016) aligned a series of embedding models with OPP to detect changes in word meanings over time. The same problem was addressed by Kulkarni et al. (2015) who aligned the embedding models using piecewise linear regression based on a set of nearest neighboring words for each word.", |
| "cite_spans": [ |
| { |
| "start": 265, |
| "end": 287, |
| "text": "Mikolov et al. (2013a)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 450, |
| "end": 479, |
| "text": "Mogadala and Rettinger (2016)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 585, |
| "end": 607, |
| "text": "Hamilton et al. (2016)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 734, |
| "end": 756, |
| "text": "Kulkarni et al. (2015)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Recently, Yin and Sch\u00fctze (2016) experimented with several methods to learn meta-embeddings by combining different word embedding sets. Our work differs from theirs in two important aspects. First, in their work each initial model is trained with a different word embedding system and on a different data set, while we propose to combine the models trained with the same system and on the same dataset, albeit using different random initialisation. Secondly, although the 1toN model proposed in (Yin and Sch\u00fctze, 2016) is very similar to the linear models studied in this paper, it doesn't involve the orthogonality constraint included in the OPP method, which in our experiments, as shown later, proves to be crucial.", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 32, |
| "text": "Yin and Sch\u00fctze (2016)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 495, |
| "end": 518, |
| "text": "(Yin and Sch\u00fctze, 2016)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We conduct experiments on Estonian and construct ensembles from ten different embedding models trained with Word2Vec. We compare the initial and combined models in synonym and analogy tests and find that the ensemble embeddings combined with orthogonal Procrustes method indeed perform significantly better in both tests, leading to a relative improvement of 7-10% over the mean result of the initial models in synonym tests and 19-47% in analogy tests.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A word embedding model is a matrix W \u2208 R |V |\u00d7d , where |V | is the number of words in the model lexicon and d is the dimensionality of the vectors. Each row in the matrix W is the continuous representation of a word in a vector space.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combining word embedding models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Given r embedding models W 1 , . . . ,W r we want to combine them into a target model Y . We define a linear objective function that is the sum of r linear regression optimization goals:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combining word embedding models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "J = r \u2211 i=1 Y \u2212W i P i 2 ,", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Combining word embedding models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where P 1 , . . . , P r are transformation matrices that translate W 1 , . . . ,W r , respectively, into the common vector space containing Y . We use an iterative algorithm to find matrices P 1 , . . . , P r and Y . During each iteration the algorithm performs two steps:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combining word embedding models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "1. Solve r linear regression problems with respect to the current target model Y , which results in updated values for matrices P 1 , . . . P r ;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combining word embedding models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "2. Update Y to be the mean of the translations of all r models:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combining word embedding models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "Y = 1 r r \u2211 i=1 W i P i .", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Combining word embedding models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "This procedure is continued until the change in the average normalised residual error, computed as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combining word embedding models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "1 r r \u2211 i=0 Y \u2212W i P i |V | \u2022 d ,", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Combining word embedding models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "will become smaller than a predefined threshold value.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combining word embedding models", |
| "sec_num": "2" |
| }, |
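As a concrete illustration of the two-step procedure above, the sketch below implements it in NumPy. This is not the authors' code: the function names (`combine_models`, `solve_p`), the random initialisation of Y, and the exact stopping test are our own assumptions; `solve_p` stands for either the OLS or the orthogonal Procrustes solver described in the following subsections.

```python
import numpy as np

def combine_models(models, solve_p, threshold=0.001, seed=0):
    """Iteratively combine r aligned embedding matrices into a target Y.

    models  : list of (|V|, d) arrays with rows aligned by word
    solve_p : callable (W, Y) -> P returning one (d, d) translation matrix
    """
    rng = np.random.default_rng(seed)
    V, d = models[0].shape
    Y = rng.normal(size=(V, d))  # random initial target model (our assumption)
    prev_err = np.inf
    while True:
        # Step 1: solve r regression problems against the current Y.
        Ps = [solve_p(W, Y) for W in models]
        # Step 2: Y becomes the mean of the translated models (Equation 2).
        Y = np.mean([W @ P for W, P in zip(models, Ps)], axis=0)
        # Average normalised residual error (Equation 3).
        err = np.mean([np.linalg.norm(Y - W @ P)
                       for W, P in zip(models, Ps)]) / (V * d)
        if prev_err - err < threshold:  # change fell below the threshold
            return Y, Ps
        prev_err = err
```

The same driver works for both solvers, so the two methods differ only in the `solve_p` callback.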
| { |
| "text": "We experiment with two different methods for computing the translation matrices P 1 , . . . , P r . The first is based on the standard least squares solution to the linear regression problem, the second method is known as solution to the Orthogonal Procrustes problem (Sch\u00f6nemann, 1966) .", |
| "cite_spans": [ |
| { |
| "start": 268, |
| "end": 286, |
| "text": "(Sch\u00f6nemann, 1966)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combining word embedding models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The analytical solution for a linear regression problem Y = PW for finding the transformation matrix P, given the input data matrix W and the result Y is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Solution with the ordinary least squares (SOLS)", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P = (W T W ) \u22121 W T Y", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Solution with the ordinary least squares (SOLS)", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We can use this formula to update all matrices P i at each iteration. The problem with this approach is that because Y is also unknown and will be updated repeatedly in the second step of the iterative algorithm, the OLS might lead to solutions where both W i P i and Y are optimized towards 0 which is not a useful solution. In order to counteract this effect we rescale Y at the start of each iteration. This is done by scaling the elements of Y so that the variance of each column of Y would be equal to 1. Table 1 : Final errors and the number of iterations until convergence for both SOLS and SOPP. The first column shows the embedding size. .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 510, |
| "end": 517, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Solution with the ordinary least squares (SOLS)", |
| "sec_num": "2.1" |
| }, |
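In NumPy, Equation (4) and the column-variance rescaling described above take only a few lines each. A minimal sketch (the helper names are ours, not from the paper):

```python
import numpy as np

def solve_ols(W, Y):
    """Equation (4): P = (W^T W)^{-1} W^T Y."""
    return np.linalg.solve(W.T @ W, W.T @ Y)

def rescale_columns(Y):
    """Scale Y so that every column has unit variance, counteracting the
    degenerate solutions where W_i P_i and Y both shrink towards 0."""
    return Y / Y.std(axis=0, keepdims=True)
```

In practice `np.linalg.lstsq` is numerically safer than forming W^T W explicitly; the closed form is shown here to mirror the equation.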
| { |
| "text": "Orthogonal Procrustes is a linear regression problem of transforming the input matrix W to the output matrix Y using an orthogonal transformation matrix P (Sch\u00f6nemann, 1966) . The orthogonality constraint is specified as", |
| "cite_spans": [ |
| { |
| "start": 155, |
| "end": 173, |
| "text": "(Sch\u00f6nemann, 1966)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Solution to the Orthogonal Procrustes problem (SOPP)", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "PP T = P T P = I", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Solution to the Orthogonal Procrustes problem (SOPP)", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The solution to the Orthogonal Procrustes can be computed analytically using singular value decomposition (SVD). First compute:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Solution to the Orthogonal Procrustes problem (SOPP)", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "S = W T Y", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Solution to the Orthogonal Procrustes problem (SOPP)", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Then diagonalize using SVD:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Solution to the Orthogonal Procrustes problem (SOPP)", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "S T S = V D S V T SS T = UD S U T", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Solution to the Orthogonal Procrustes problem (SOPP)", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Finally compute:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Solution to the Orthogonal Procrustes problem (SOPP)", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "P = UV T", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Solution to the Orthogonal Procrustes problem (SOPP)", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "This has to be done for each P i during each iteration.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Solution to the Orthogonal Procrustes problem (SOPP)", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "This approach is very similar to SOLS. The only difference is the additional orthogonality constraint that gives a potential advantage to this method as in the translated word embeddings W i P i the lengths of the vectors and the angles between the vectors are preserved. Additionally, we no longer need to worry about the trivial solution where P 1 , . . . , P r and Y all converge towards 0.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Solution to the Orthogonal Procrustes problem (SOPP)", |
| "sec_num": "2.2" |
| }, |
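The SVD recipe above collapses to the well-known closed form P = U V^T, where U and V come from the SVD of S = W^T Y. A sketch (the function name is ours):

```python
import numpy as np

def solve_procrustes(W, Y):
    """Orthogonal Procrustes: the orthogonal P minimising ||Y - W P||.
    With the SVD S = W^T Y = U D V^T, the solution is P = U V^T."""
    U, _, Vt = np.linalg.svd(W.T @ Y)
    return U @ Vt
```

SciPy ships the same solver as `scipy.linalg.orthogonal_procrustes`, which can be used as a drop-in alternative.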
| { |
| "text": "We tested both methods on a number of Word2Vec models (Mikolov et al., 2013b) Corpus is the largest text corpus available for Estonian. Its size is approximately 240M word tokens, which may seem like a lot but compared to for instance English Gigaword corpus, which is often used to train word embeddings for English words and which contains more than 4B words, it is quite small. All models were trained using a window size 10 and the skip-gram architecture.", |
| "cite_spans": [ |
| { |
| "start": 54, |
| "end": 77, |
| "text": "(Mikolov et al., 2013b)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We experimented with models of 6 different embedding sizes: 50, 100, 150, 200, 250 and 300. For each dimensionality we had 10 models available. The number of distinct words in each model is 816757. During training the iterative algorithm was run until the convergence threshold th = 0.001 was reached. The number of iterations needed for convergence for both methods and for models with different embedding size are given in Table 1. It can be seen that the convergence with SOPP took significantly fewer iterations than with SOLS. This difference is probably due to two aspects: 1) SOPP has the additional orthogonality constraint which reduces the space of feasible solutions; 2) although SOLS uses the exact analytical solutions for the least squares problem, the final solution for Y does not move directly to the direction pointed to by the analytical solutions due to the variance rescaling.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We evaluate the goodness of the combined models using synonym and analogy tests.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "One of the common ways to evaluate word embeddings is to use relatedness datasets to measure the correlation between the human and model judge- ments (Schnabel et al., 2015) . In those datasets, there are word pairs and each pair is human annotated with a relatedness score. The evaluation is then performed by correlating the cosine similarities between word pairs with the relatedness scores. As there are no annotated relatedness datasets for Estonian, we opted to use a synonym test instead. We rely on the assumption that the relatedness between a pair of synonyms is high and thus we expect the cosine similarity between the synonymous words to be high as well.", |
| "cite_spans": [ |
| { |
| "start": 150, |
| "end": 173, |
| "text": "(Schnabel et al., 2015)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Synonym ranks", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We obtained the synonyms from the Estonian synonym dictionary. 2 We queried each word in our vocabulary and when the exact match for this word was found then we looked at the first synonym offered by the dictionary. If this synonym was present in our vocabulary then the synonym pair was stored. In this manner we obtained a total of 7579 synonym pairs. We ordered those pairs according to the frequency of the first word in the pair and chose the 1000 most frequent words with their synonyms for the synonym test.", |
| "cite_spans": [ |
| { |
| "start": 63, |
| "end": 64, |
| "text": "2", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Synonym ranks", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For each first word in the synonym pair, we computed its cosine similarity with every other word in the vocabulary, ordered those similarities in the descending order and found the rank of the second word of the synonym pair in this resulting list. Then we computed the mean rank over all 1000 synonym pairs. We performed these steps on both types of combined models-Y SOLS and Y SOPP -and also on all input models W i . Finally we also computed the mean of the mean ranks of all 10 input models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Synonym ranks", |
| "sec_num": "4.1" |
| }, |
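The ranking procedure above can be sketched as follows. This is an illustrative reimplementation, not the authors' evaluation code; words are represented by row indices, and `mean_synonym_rank` is our own name:

```python
import numpy as np

def mean_synonym_rank(emb, pairs):
    """Mean rank of the second word of each synonym pair among all words
    ordered by cosine similarity to the first word (rank 1 = most similar).

    emb   : (|V|, d) embedding matrix
    pairs : list of (i, j) row-index pairs of synonyms
    """
    X = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    ranks = []
    for i, j in pairs:
        sims = X @ X[i]          # cosine similarity of every word to word i
        sims[i] = -np.inf        # exclude the query word itself
        # rank of j = 1 + number of words strictly more similar than j
        ranks.append(1 + int(np.sum(sims > sims[j])))
    return float(np.mean(ranks))
```

A lower mean rank means the model places synonyms closer together.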
| { |
| "text": "The results as shown in Table 2 reveal that the synonym similarities tend to be ranked lower in the combined model obtained with SOLS when compared to the input models. SOPP, on the other hand, produces a combined model where the synonym similarities are ranked higher than in initial models. This means that the SOPP combined models pull the synonymous words closer together than they were in the initial models. The differences in mean ranks were tested using paired Wilcoxon signed-rank test at 95% confidence level and the differences were statistically significant with p-value being less than 2.2\u202210 \u221216 in all cases. In overall, the SOPP ranks are on average 10% lower than the mean ranks of the initial models. The absolute improvement on average between SOPP and mean of W is 3476.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 24, |
| "end": 31, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Synonym ranks", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Although we assumed that the automatically extracted synonym pairs should be ranked closely together, looking at the average mean ranks in Table 2 reveals that it is not necessarily the casethe average rank of the best-performing SOPP model is over 31K. In order to understand those results better we looked at the rank histogram of the SOPP model and one of the initial models, shown on Figure 1 . Although the first bin covering the rank range from 1 to 10 contains the most words for both models and the number of synonym pairs falling to further rank bins decreases the curve is not steep and close to 100 words (87 in case of SOPP and 94 in case of the initial model) belong to the last bin counting ranks higher than 100000. Looking at the farthest synonym pairs revealed that one word in these pairs is typically polysemous and its sense in the synonym pair is a relatively rarely used sense of this word, while there are other more common senses of this word with a completely different meaning. We give some examples of such synonym pairs:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 388, |
| "end": 396, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Synonym ranks", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 kaks (two) -puudulik (insufficient): the sense of this pair is the insufficient grade in high school, while the most common sense of the word kaks is the number two;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Synonym ranks", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 ida (east) -ost (loan word from German also meaning east): the most common sense of the word ost is purchase;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Synonym ranks", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 rubla (rouble) -kull (bank note in slang): the most common sense of the word kull is hawk.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Synonym ranks", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Analogy tests are another common intrinsic method for evaluating word embeddings (Mikolov et al., 2013c) . A famous and typical example of an analogy question is \"a man is to a king like a woman is to a ?\". The correct answer to this question is \"queen\". For an analogy tuple a : b, x : y (a is to b as x is to y) the following is expected in an embedding space to hold:", |
| "cite_spans": [ |
| { |
| "start": 81, |
| "end": 104, |
| "text": "(Mikolov et al., 2013c)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analogy tests", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "w b \u2212 w a + w x \u2248 w y ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analogy tests", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "where the vectors w are word embeddings. For the above example with \"man\", \"king\", \"woman\" and \"queen\" this would be computed as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analogy tests", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "w king \u2212 w man + w woman \u2248 w queen", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analogy tests", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Given the vector representations for the three words in the analogy question-w a , w b and w xthe goal is to maximize (Mikolov et al., 2013b) ", |
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 141, |
| "text": "(Mikolov et al., 2013b)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analogy tests", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "cos(w y , w b \u2212 w a + w x )", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Analogy tests", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "over all words y in the vocabulary. We used an Estonian analogy data set with 259 word quartets. Each quartet contains two pairs of words. The word pairs in the data set belong into three different groups where the two pairs contain either:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analogy tests", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 a positive and a comparative adjective form, e.g. pime : pimedam, j\u00f5ukas : j\u00f5ukam (in English dark : darker, wealthy : wealthier);", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analogy tests", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 the nominative singular and plural forms of a noun, e.g. vajadus : vajadused, v\u00f5istlus : v\u00f5istlused (in English need : needs , competition : competitions);", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analogy tests", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 The lemma and the 3rd person past form of a verb, e.g. aitama : aitas, katsuma : katsus (in English help : helped, touch : touched).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analogy tests", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We evaluate the results of the analogy test using prediction accuracy. A prediction is considered correct if and only if the vector w y that maximizes (5) represents the word expected by the test case. We call this accuracy Hit@1. Hit@1 can be quite a noisy measurement as there could be several word vectors in a very close range to each other competing for the highest rank. Therefore, we also compute Hit@10, which considers the prediction correct if the word expected by the test case is among the ten closest words. As a common practice, the question words represented by the vectors w a , w b and w x were excluded from the set of possible predictions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analogy tests", |
| "sec_num": "4.2" |
| }, |
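The Hit@1 and Hit@10 measures above can be sketched as follows (an illustrative implementation with our own naming; quartets are given as row indices into the embedding matrix):

```python
import numpy as np

def analogy_hits(emb, quartets, top=1):
    """Hit@k accuracy for analogy tuples a : b, x : y (Equation 5).

    emb      : (|V|, d) embedding matrix
    quartets : list of (a, b, x, y) row indices
    The three question words a, b, x are excluded from the candidates.
    """
    X = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    hits = 0
    for a, b, x, y in quartets:
        target = X[b] - X[a] + X[x]
        sims = X @ (target / np.linalg.norm(target))  # cosine to the target
        sims[[a, b, x]] = -np.inf                     # exclude question words
        if y in np.argsort(sims)[::-1][:top]:
            hits += 1
    return hits / len(quartets)
```

Calling it with `top=1` gives Hit@1 and with `top=10` gives Hit@10.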
| { |
| "text": "The Hit@1 and Hit@10 results in Table 3 show similar dynamics: combining models with SOPP is much better than SOLS in all cases. The SOPP combined model is better than the mean of the initial models in all six cases. Furthermore, it is consistently above the maximum of the best initial models. The average accuracy of SOPP is better than the average of the mean accuracies of initial models by 41%, relatively (7.7% in absolute) in terms of Hit@1 and 27% relatively (10.5% in absolute) in terms of Hit@10.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 32, |
| "end": 39, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Analogy tests", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In order to gain more understanding how the words are located in the combined model space in comparison to the initial models we performed two additional analyses. First, we computed the distribution of mean squared errors of the words to see how the translated embeddings scatter around the word embedding of the combined model. Secondly, we looked at how both of the methods affect the pairwise similarities of words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We computed the squared Euclidean distance for each word in vocabulary between the combined model Y and all the input embedding models. The distance e i j for a jth word and the ith input model is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distribution of mean squared distances", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "d i j = Y j \u2212 T i j 2 ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distribution of mean squared distances", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "where T i = W i P i is the ith translated embedding model. Then we found the mean squared distance for the jth word by calculating:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distribution of mean squared distances", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "d j = 1 r r \u2211 i=0 d i j", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distribution of mean squared distances", |
| "sec_num": "5.1" |
| }, |
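The quantities d_ij and d_j defined above vectorise directly; a small sketch (the function name is ours):

```python
import numpy as np

def mean_squared_distances(Y, translated):
    """Mean squared Euclidean distance per word between the combined
    model Y and the translated input models T_i = W_i P_i.

    Y          : (|V|, d) combined model
    translated : list of (|V|, d) arrays T_i
    Returns the vector of d_j values, one per word.
    """
    sq = [np.sum((Y - T) ** 2, axis=1) for T in translated]  # rows of d_ij
    return np.mean(sq, axis=0)                               # mean over models
```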
| { |
| "text": "These distances are plotted on Figure 2 . The words on the horizontal axis are ordered by their frequency-the most frequent words coming first. We show these results for models with 100 dimensions but the results with other embedding sizes were similar.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 31, |
| "end": 39, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Distribution of mean squared distances", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Notice that the distances for less frequent words are similarly small for both SOLS and SOPP methods. However, the distribution of distances for frequent words is quite different-while the distances go up with both methods, the frequent words are much more scattered when using the SOPP approach. Figure 3 shows the mean squared distances of a random sample of 1000 words. These plots reveal another difference between the SOLS and SOPP methods. While for SOPP, the distances tend to decrease monotonically with the increase in word frequency rank, with SOLS the distances first increase and only then they start to get smaller.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 297, |
| "end": 305, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Distribution of mean squared distances", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Our vocabulary also includes punctuation marks and function words, which are among the most frequent tokens and which occur in many different contexts. Thus, the individual models have a lot of freedom to position them in the word embedding space. The SOLS combined model is able to bring those words more close to each other in the aligned space, while SOPP has less freedom to do that because of the orthogonality constraint. When looking at the words with largest distances under SOPP in the 1000 word random sample then we see that the word with the highest mean squared distance refers to the proper name of a well-known Estonian politician who has been probably mentioned often and in various contexts in the training corpus. Other words with a large distance in this sample include for instance a name of a month and a few quantifying modifiers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distribution of mean squared distances", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "In this analysis we looked at how the cosine similarities between pairs of words change in the combined model compared to their similarities in the input embedding models. For that, we chose a total of 1000 word pairs randomly from the vocabulary. For each pair we calculated the following values:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word pair similarities", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "\u2022 cosine similarity under the combined model;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word pair similarities", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "\u2022 maximum and minimum cosine similarity in the initial models W i ;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word pair similarities", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "\u2022 mean cosine similarity over the initial models W i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word pair similarities", |
| "sec_num": "5.2" |
| }, |
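The three quantities listed above can be collected per word pair as in the following NumPy sketch. The data and all names (`cosine`, `pair_similarity_stats`) are hypothetical stand-ins, not taken from the paper.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def pair_similarity_stats(pairs, combined, models):
    # For each (j, k) word-index pair, record the similarity in the
    # combined model and the max / mean / min over the input models.
    stats = []
    for j, k in pairs:
        sims = [cosine(W[j], W[k]) for W in models]
        stats.append({
            "combined": cosine(combined[j], combined[k]),
            "max": max(sims),
            "mean": sum(sims) / len(sims),
            "min": min(sims),
        })
    return stats

# Synthetic example: 2 input models and a combined model for 5 words.
rng = np.random.default_rng(1)
models = [rng.normal(size=(5, 4)) for _ in range(2)]
combined = sum(models) / len(models)  # simple stand-in for the learned Y
stats = pair_similarity_stats([(0, 1), (2, 3)], combined, models)
```

Sorting the pairs by their `combined` similarity gives the ordering used on the horizontal axis of the plots discussed next.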
| { |
| "text": "The results are plotted in Figure 4 . These results are obtained using the word embeddings with size 100, using different embedding sizes revealed the same patterns. In figures, the word pairs are ordered on the horizontal axis in the ascending order of their similarities in the combined model Y .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 27, |
| "end": 35, |
| "text": "Figure 4", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Word pair similarities", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The plots reveal that 1) the words that are similar in initial models W i are even more similar in the combined model Y ; and 2) distant words in initial models become even more distant in the combined model. Although these trends are visible in cases of both SOLS and SOPP, this behaviour of the combined models to bring more similar words closer together and place less similar words farther away is more emphasized in the combined model obtained with SOLS.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word pair similarities", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "In Figure 4 , the red, green and blue \"bands\", representing the maximum, mean and minimum similarities of the initial models, respectively, are wider on the SOLS plot. This indicates that SOPP preserves more the original order of word pairs in terms of their similarities. However, some of this difference may be explained by the fact that SOPP has an overall smaller effect on the similarity compared to SOLS, which is due to the property of SOPP to preserve the angles and distances between the vectors during the transformation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 11, |
| "text": "Figure 4", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Word pair similarities", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "From the two linear methods used to combine the models, SOPP was performing consistently better in both synonym and analogy tests. Although, as shown in Figures 2 and 3 , the word embeddings of the aligned initial models were more closely clus-tered around the embeddings of the SOLS combined model, this seemingly better fit is obtained at the cost of distorting the relations between the individual word embeddings. Thus, we have provided evidence that adding the orthogonality constraint to the linear transformation objective is important to retain the quality of the translated word embeddings. This observation is relevant both in the context of producing model ensembles as well as in other contexts where translating one embedding space to another could be relevant, such as when working with semantic time series or multilingual embeddings.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 153, |
| "end": 168, |
| "text": "Figures 2 and 3", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion and future work", |
| "sec_num": "6" |
| }, |
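The two alignment objectives contrasted above can be sketched for a single alignment step. This is a minimal NumPy illustration under our own naming (`ols_transform`, `sopp_transform`), not the authors' implementation; the orthogonal solution follows the classical SVD construction of Schonemann (1966).

```python
import numpy as np

def ols_transform(W, Y):
    # Unconstrained least-squares P minimizing ||W P - Y||_F (the SOLS step).
    P, *_ = np.linalg.lstsq(W, Y, rcond=None)
    return P

def sopp_transform(W, Y):
    # Orthogonal P minimizing ||W P - Y||_F (the SOPP step),
    # obtained from the SVD of W^T Y.
    U, _, Vt = np.linalg.svd(W.T @ Y)
    return U @ Vt

# Synthetic check: if Y is an exact orthogonal rotation of W,
# the Procrustes step recovers that rotation.
rng = np.random.default_rng(2)
W = rng.normal(size=(10, 4))
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # random orthogonal matrix
P = sopp_transform(W, W @ Q)
```

Because `P` is constrained to be orthogonal, it preserves the angles and distances between the transformed vectors, which is exactly the property argued above to matter for embedding quality.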
| { |
| "text": "In addition to combining several models trained on the same dataset with the same configuration as demonstrated in this paper, there are other possible use cases for the model ensembles which could be explored in future work. For instance, currently all our input models had the same dimensionality and the same embedding size was also used in the combined model. In future it would be interesting to experiment with combining models with different dimensionality, in this way marginalising out the embedding size hyperparameter.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and future work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Our experiments showed that the SOPP approach performs well in both synonym and analogy tests when combining the models trained on the relatively small Estonian corpus. In future we plan to conduct similar experiments on more languages that, similar to Estonian, have limited resources for training reliable word embeddings.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and future work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Another idea would be to combine embeddings trained with different models. As all word embedding systems learn slightly different embeddings, combining for instance Word2Vec (Mikolov et al., 2013b) , Glove (Pennington et al., 2014) and dependency based vectors (Levy and Goldberg, 2014) could lead to a model that combines the strengths of all the input models. Yin and Sch\u00fctze (2016) demonstrated that the combination of different word embeddings can be useful. However, their results showed that the model combination is less beneficial when one of the input models (Glove vectors in their example) is trained on a huge text corpus. Thus, we predict that the ensemble of word embeddings constructed based on different embedding models also has the most effect in the setting of limited training resources.", |
| "cite_spans": [ |
| { |
| "start": 174, |
| "end": 197, |
| "text": "(Mikolov et al., 2013b)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 206, |
| "end": 231, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 261, |
| "end": 286, |
| "text": "(Levy and Goldberg, 2014)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 362, |
| "end": 384, |
| "text": "Yin and Sch\u00fctze (2016)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and future work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Finally, it would be interesting to explore the domain adaptation approach by combining for instance the embeddings learned from the large gen-eral domain with the embeddings trained on a smaller domain specific corpus. This could be of interest because there are many pretrained word embedding sets available for English that can be freely downloaded from the internet, while the corpora they were trained on (English Gigaword, for instance) are not freely available. The model combination approach would enable to adapt those embeddings to the domain data by making use of the pretrained models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and future work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Although model ensembles have been often used to improve the results of various natural language processing tasks, the ensembles of word embedding models have been rarely studied so far. Our main contribution in this paper was to combine several word embedding models trained on the same dataset via linear transformation into an ensemble and demonstrate the usefulness of this approach experimentally.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We experimented with two linear methods to combine the input embedding models-the ordinary least squares solution to the linear regression problem and the orthogonal Procrustes which adds an additional orthogonality constraint to the least squares objective function. Experiments on synonym and analogy tests on Estonian showed that the combination with orthogonal Procrustes was consistently better than the ordinary least squares, meaning that preserving the distances and angles between vectors with the orthogonality constraint is crucial for model combination. Also, the orthogonal Procrustes combined model performed better than the average of the individual initial models in all synonym tests and analogy tests suggesting that combining several embedding models is a simple and useful approach for improving the quality of the word embeddings.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "7" |
| }, |
| { |
| "text": "The Institute of the Estonian Language, http://www. eki.ee/dict/sys/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank Alexander Tkachenko for providing the pretrained input models and the analogy test questions. We also thank the anonymous reviewers for their helpful suggestions and comments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Polyglot: Distributed word representations for multilingual NLP", |
| "authors": [ |
| { |
| "first": "Rami", |
| "middle": [], |
| "last": "Al-Rfou", |
| "suffix": "" |
| }, |
| { |
| "first": "Bryan", |
| "middle": [], |
| "last": "Perozzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Skiena", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "183--192", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual NLP. In Proceedings of the Seven- teenth Conference on Computational Natural Lan- guage Learning, pages 183-192.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Diachronic word embeddings reveal statistical laws of semantic change", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [ |
| "L" |
| ], |
| "last": "Hamilton", |
| "suffix": "" |
| }, |
| { |
| "first": "Jure", |
| "middle": [], |
| "last": "Leskovec", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1489--1501", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016. Diachronic word embeddings reveal statisti- cal laws of semantic change. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1489-1501.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Statistically significant detection of linguistic change", |
| "authors": [ |
| { |
| "first": "Vivek", |
| "middle": [], |
| "last": "Kulkarni", |
| "suffix": "" |
| }, |
| { |
| "first": "Rami", |
| "middle": [], |
| "last": "Al-Rfou", |
| "suffix": "" |
| }, |
| { |
| "first": "Bryan", |
| "middle": [], |
| "last": "Perozzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Skiena", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 24th International Conference on World Wide Web", |
| "volume": "", |
| "issue": "", |
| "pages": "625--635", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2015. Statistically significant de- tection of linguistic change. In Proceedings of the 24th International Conference on World Wide Web, pages 625-635.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Dependencybased word embeddings", |
| "authors": [ |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "302--308", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Omer Levy and Yoav Goldberg. 2014. Dependency- based word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computa- tional Linguistics (Volume 2: Short Papers), pages 302-308.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Exploiting similarities among languages for machine translation", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Quoc", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Le", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. CoRR, abs/1309.4168.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "26", |
| "issue": "", |
| "pages": "3111--3119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in Neural Information Processing Systems 26, pages 3111-3119.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Linguistic regularities in continuous space word representations", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Scott", |
| "middle": [], |
| "last": "Wen-Tau", |
| "suffix": "" |
| }, |
| { |
| "first": "Yih", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Zweig", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "746--751", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Scott Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 746-751.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Bilingual word embeddings from parallel and nonparallel corpora for cross-language text classification", |
| "authors": [ |
| { |
| "first": "Aditya", |
| "middle": [], |
| "last": "Mogadala", |
| "suffix": "" |
| }, |
| { |
| "first": "Achim", |
| "middle": [], |
| "last": "Rettinger", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "692--702", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aditya Mogadala and Achim Rettinger. 2016. Bilin- gual word embeddings from parallel and non- parallel corpora for cross-language text classifica- tion. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 692-702.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Glove: Global vectors for word representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1532--1543", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1532-1543.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Evaluation methods for unsupervised word embeddings", |
| "authors": [ |
| { |
| "first": "Tobias", |
| "middle": [], |
| "last": "Schnabel", |
| "suffix": "" |
| }, |
| { |
| "first": "Igor", |
| "middle": [], |
| "last": "Labutov", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Mimno", |
| "suffix": "" |
| }, |
| { |
| "first": "Thorsten", |
| "middle": [], |
| "last": "Joachims", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "298--307", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evaluation methods for unsupervised word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Nat- ural Language Processing, pages 298-307.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "A generalized solution of the orthogonal procrustes problem", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Peter", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Sch\u00f6nemann", |
| "suffix": "" |
| } |
| ], |
| "year": 1966, |
| "venue": "Psychometrika", |
| "volume": "31", |
| "issue": "1", |
| "pages": "1--10", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter H. Sch\u00f6nemann. 1966. A generalized solution of the orthogonal procrustes problem. Psychometrika, 31(1):1-10.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Word representations: A simple and general method for semi-supervised learning", |
| "authors": [ |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Turian", |
| "suffix": "" |
| }, |
| { |
| "first": "Lev", |
| "middle": [], |
| "last": "Ratinov", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "384--394", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Compu- tational Linguistics, pages 384-394.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Learning Word Meta-Embeddings", |
| "authors": [ |
| { |
| "first": "Wenpeng", |
| "middle": [], |
| "last": "Yin", |
| "suffix": "" |
| }, |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1351--1360", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wenpeng Yin and Hinrich Sch\u00fctze. 2016. Learn- ing Word Meta-Embeddings. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics, pages 1351-1360.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "Histogram of the synonym ranks of the 100 dimensional vectors. Dark left columns show the rank frequencies of the SOPP model, light right columns present the rank frequencies of one of the initial models." |
| }, |
| "FIGREF1": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "Mean squared distances describing the scattering of the translated word embeddings around the combined model embedding for every word in the vocabulary. The words in the horizontal axis are ordered by the frequency with most frequent words plotted first." |
| }, |
| "FIGREF2": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "Mean squared distances describing the scattering of the translated word embeddings around the combined model embedding for a random sample of 1000 words. The words in the horizontal axis are ordered by the frequency with most frequent words plotted first." |
| }, |
| "FIGREF3": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "Cosine similarities of 1000 randomly chosen word pairs ordered by their similarity in the combined model Y . Red, blue and green bands represent the maximum, mean and minimum similarities in the initial models, respectively." |
| }, |
| "TABREF2": { |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "text": "Average mean ranks of the synonym test, smaller values are better. The best result in each row is in bold. All differences are statistically significant: with p < 2.2 \u2022 10 \u221216 for all cases.", |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "content": "<table><tr><td>Hit@1 Dim SOLS SOPP Mean W 50 0.058 0.193 0.144 100 0.116 0.255 0.185 150 0.085 0.278 0.198 200 0.066 0.290 0.197 250 0.093 0.282 0.200 300 0.069 0.286 0.197 Avg 0.081 0.264 0.187</td><td>0.124 0.170 0.170 0.178 0.181 0.162</td><td>0.170 0.197 0.228 0.224 0.224 0.228</td><td>0.158 0.390 0.239 0.475 0.224 0.502 0.205 0.541 0.193 0.517 0.212 0.533 0.205 0.493</td><td>Hit@10 0.329 0.388 0.398 0.408 0.406 0.401 0.388</td><td>0.305 0.371 0.378 0.390 0.394 0.359</td><td>0.347 0.409 0.417 0.425 0.421 0.440</td></tr></table>", |
| "html": null, |
| "num": null, |
| "text": "Min W Max W SOLS SOPP Mean W Min W Max W", |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "text": "Hit@1 and Hit@10 accuracies of the analogy test. SOLS and SOPP columns show the accuracies of the combined models. Mean W , Min W and Max W show the mean, minimum and maximum accuracies of the initial models W i , respectively. The best accuracy among the combined models and the mean of the initial models is given in bold. The last row shows the average accuracies over all embedding sizes.", |
| "type_str": "table" |
| } |
| } |
| } |
| } |