| { |
| "paper_id": "D17-1033", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T16:17:17.469979Z" |
| }, |
| "title": "Word Re-Embedding via Manifold Dimensionality Retention", |
| "authors": [ |
| { |
| "first": "Souleiman", |
| "middle": [], |
| "last": "Hasan", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Lero-The Irish Software Research Centre", |
| "institution": "National University of Ireland", |
| "location": { |
| "settlement": "Galway" |
| } |
| }, |
| "email": "souleiman.hasan@lero.ie" |
| }, |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Curry", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Lero-The Irish Software Research Centre", |
| "institution": "National University of Ireland", |
| "location": { |
| "settlement": "Galway" |
| } |
| }, |
| "email": "edward.curry@lero.ie" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Word embeddings seek to recover a Euclidean metric space by mapping words into vectors, starting from word co-occurrences in a corpus. Word embeddings may underestimate the similarity between nearby words, and overestimate it between distant words, in the Euclidean metric space. In this paper, we re-embed pre-trained word embeddings with a stage of manifold learning which retains dimensionality. We show that this approach is theoretically founded in the metric recovery paradigm, and empirically show that it can improve on state-of-the-art embeddings in word similarity tasks by 0.5\u22125.0 percentage points, depending on the original space.", |
| "pdf_parse": { |
| "paper_id": "D17-1033", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Word embeddings seek to recover a Euclidean metric space by mapping words into vectors, starting from word co-occurrences in a corpus. Word embeddings may underestimate the similarity between nearby words, and overestimate it between distant words, in the Euclidean metric space. In this paper, we re-embed pre-trained word embeddings with a stage of manifold learning which retains dimensionality. We show that this approach is theoretically founded in the metric recovery paradigm, and empirically show that it can improve on state-of-the-art embeddings in word similarity tasks by 0.5\u22125.0 percentage points, depending on the original space.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Concepts have been hypothesized in the cognitive psychometric literature as points in a Euclidean metric space, with empirical support from human judgement experiments (Rumelhart and Abrahamson, 1973; Sternberg and Gardner, 1983) . Word embeddings, such as GloVe (Pennington et al., 2014a) and Word2Vec (Mikolov et al., 2013) , harvest observed features of the latent Euclidean space such as words co-occurrence counts in a corpus and turn words into dense vectors of a few hundred dimensions. Word embeddings have proved useful in downstream NLP tasks such as Part of Speech Tagging (Collobert, 2011) , Named Entity Recognition (Turian et al., 2010) , and Machine Translation (Devlin et al., 2014) . However, the full potential of word embeddings, and how to improve them further, remains an open research question.", |
| "cite_spans": [ |
| { |
| "start": 168, |
| "end": 200, |
| "text": "(Rumelhart and Abrahamson, 1973;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 201, |
| "end": 229, |
| "text": "Sternberg and Gardner, 1983)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 263, |
| "end": 289, |
| "text": "(Pennington et al., 2014a)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 303, |
| "end": 325, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 584, |
| "end": 601, |
| "text": "(Collobert, 2011)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 629, |
| "end": 650, |
| "text": "(Turian et al., 2010)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 677, |
| "end": 698, |
| "text": "(Devlin et al., 2014)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "When comparing word-pair similarities obtained from word embeddings to those obtained from human judgement, it is observed that word embeddings slightly underestimate the similarity between similar words, and overestimate the similarity between distant words. For example, in the WS353 (Finkelstein et al., 2001 ) word similarity ground truth: sim(\"shore\", \"woodland\") = 3.08 < sim(\"physics\", \"proton\") = 8.12", |
| "cite_spans": [ |
| { |
| "start": 307, |
| "end": 332, |
| "text": "(Finkelstein et al., 2001", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "However, using the GloVe 42B 300d embedding with cosine similarity (see Section 4) yields the opposite order: sim(\"shore\", \"woodland\") = 0.36 > sim(\"physics\", \"proton\") = 0.33", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Re-embedding the space using a manifold learning stage can rectify this. Manifold learning estimates the distance between nearby words by direct similarity assignment within a local neighbourhood, while distances between faraway words are approximated by chaining multiple neighbourhoods along the manifold shape. This observation forms the basis for the rest of this paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "For instance, using Locally Linear Embedding (LLE) (Roweis and Saul, 2000) on top of GloVe, as described in this paper, can recover the correct pair order, yielding: sim(\"shore\", \"woodland\") = 0.08 < sim(\"physics\", \"proton\") = 0.25. Hashimoto et al. (2016) put word embeddings under a paradigm which seeks to recover the underlying Euclidean metric semantic space. In this paradigm, word embeddings land in a space where a Euclidean metric can be used. They show that co-occurrence counts are the result of random-walk sequences in the metric space, corresponding to sentences in a corpus.", |
| "cite_spans": [ |
| { |
| "start": 233, |
| "end": 256, |
| "text": "Hashimoto et al. (2016)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Hashimoto et al. link this to manifold learning, which also seeks to recover a Euclidean space but starts from local neighbourhoods of objects, such as images or words. Global distances are built by adding up small local neighbourhoods. The authors show that word embedding algorithms can be used to solve manifold learning by generating random walks, i.e. sentences, on the manifold neighbourhood graph, and then embedding them. In this work we follow a methodology which adheres to this paradigm but adopts a different angle, as per Figure 1 . We start from an off-the-shelf word embedding, take a sample of it, and feed it into manifold learning, which leverages the local word neighbourhoods formed in the original embedding space, learns the manifold, and embeds it into a new Euclidean space. The resulting re-embedding space is a recovery of a Euclidean metric space that is empirically better than the original word embedding when tested on word similarity tasks.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 753, |
| "end": 761, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "These results show that word embeddings can be improved in estimating the latent metric. Such an approach can provide new opportunities to improve our understanding of embedding methods, their properties, and limits. It also allows us to reuse and re-embed off-the-shelf pre-trained embeddings, saving time on training, while aiming at improved results in downstream NLP tasks, and other data processing tasks (Hasan and Curry, 2014; Hasan, 2017; Freitas and Curry, 2014) .", |
| "cite_spans": [ |
| { |
| "start": 410, |
| "end": 433, |
| "text": "(Hasan and Curry, 2014;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 434, |
| "end": 446, |
| "text": "Hasan, 2017;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 447, |
| "end": 471, |
| "text": "Freitas and Curry, 2014)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Section 2 discusses the literature related to this work. Section 3 details the proposed approach. Sections 4 and 5 discuss the experiments and results. The paper concludes with Section 6.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The relationship to related work is depicted in Figure 1. Word embeddings are unsupervised methods based on word co-occurrence counts which can be directly observed in a corpus. Mikolov et al. present a neural network-based architecture which learns a word representation by learning to predict its context words (Mikolov et al., 2013) . Pennington et al. propose GloVe, which directly leverages nonzero word-word co-occurrences in a global manner (Pennington et al., 2014a) .", |
| "cite_spans": [ |
| { |
| "start": 314, |
| "end": 336, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 339, |
| "end": 356, |
| "text": "Pennington et al.", |
| "ref_id": null |
| }, |
| { |
| "start": 450, |
| "end": 476, |
| "text": "(Pennington et al., 2014a)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 48, |
| "end": 54, |
| "text": "Figure", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The idea of embedding objects from a high-dimensional space, e.g. images, into a lower-dimensional space constitutes the area of manifold learning. For instance, Roweis and Saul present the Locally Linear Embedding (LLE) algorithm and show that pixel-based distance between images is meaningful only at a local neighbourhood scale. Reconstructions can capture the underlying manifold of the data, and can embed the high-dimensional objects into a lower-dimensional Euclidean space while preserving neighbourhoods. Other methods exist, such as Isomap (Balasubramanian and Schwartz, 2002) and t-SNE (Maaten and Hinton, 2008) .", |
| "cite_spans": [ |
| { |
| "start": 552, |
| "end": 588, |
| "text": "(Balasubramanian and Schwartz, 2002)", |
| "ref_id": null |
| }, |
| { |
| "start": 599, |
| "end": 624, |
| "text": "(Maaten and Hinton, 2008)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Hashimoto et al. show that word embeddings and manifold learning are both methods to recover a Euclidean metric, using co-occurrence counts and high-dimensional features respectively (Hashimoto et al., 2016) . They show that word embeddings can be used to solve manifold learning when starting from a high-dimensional space. In this paper we start from a trained word embedding space, and learn a manifold from it to improve results. We do not use manifold learning to reduce dimensionality, but to transform between two equally-dimensional coordinate systems.", |
| "cite_spans": [ |
| { |
| "start": 182, |
| "end": 206, |
| "text": "(Hashimoto et al., 2016)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Other related work comes from word embedding post-processing. Labutov and Lipson use a supervised model to re-embed words for a target task (Labutov and Lipson, 2013) . Lee et al. filter out abnormal dimensions from a GloVe space according to their histograms and show a slight improvement in performance (Lee et al., 2016) . Mu et al. perform similar post-processing through removal of the mean vector and re-projection of the vectors (Mu et al., 2017) . We see manifold learning as a generic, unsupervised, nonlinear, and theoretically founded model for post-processing that can subsume linear post-processing such as PCA and vector normalization.", |
| "cite_spans": [ |
| { |
| "start": 140, |
| "end": 166, |
| "text": "(Labutov and Lipson, 2013)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 305, |
| "end": 323, |
| "text": "(Lee et al., 2016)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 433, |
| "end": 450, |
| "text": "(Mu et al., 2017)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Figure 2 illustrates our re-embedding method. We start from an original embedding space whose vectors are ordered by word frequency. In step (a), we pick a sample window of vectors from this space to be used for learning the manifold. In step (b), we fit the manifold learning model to the selected sample using an algorithm such as LLE, retaining the dimensionality at this stage. In step (c), an arbitrary test vector is selected from the original space. In step (d), the fitted model serves as a transformation which maps the test vector into a vector that lives in the new re-embedding space and can be used in downstream tasks.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In step (a), a sample subset of the words is used, chosen by word frequency rank. The rationale is that word embedding attempts to recover a metric space, and the co-occurrences of frequent words, owing to their frequent usage, provide a better sampling of the underlying space than other points, and can thus better recover the manifold shape. Experimenting with subsets drawn from the whole vocabulary or from non-frequent words may yield no improvement. Additionally, manifold learning on all points is computationally expensive. The sampling used here follows a sliding sample window, allowing us to study the effect of its start position and size. Other ways to choose a sample, e.g. random sampling, can be followed, but word frequency should remain a factor in where the sample is taken from.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In step (b), the sample is used to fit a manifold. For LLE, this is done by learning the weights which can reconstruct each word vector in the sample X from its K-nearest neighbours in the sample, by minimizing the error function:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "E(W) = \\sum_i \\| X_i - \\sum_j W_{ij} X_j \\|^2 (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "such that W_{ij} = 0 if X_j is not among the K-nearest neighbours of X_i. The weights are then used to construct a new embedding Y of the sample X via a neighbourhood-preserving mapping, by minimizing the cost function:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\\Phi(Y) = \\sum_i \\| Y_i - \\sum_j W_{ij} Y_j \\|^2 (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In steps (c) and (d), to transform an arbitrary vector x, the weights are first constructed from only the K-nearest neighbours of x in the sample X, by minimizing the function:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "E(W^x) = \\| x - \\sum_j W^x_j X_j \\|^2", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "such that W^x_j = 0 if X_j is not among the K-nearest neighbours of x. The weights are then used along with the new embedding Y to transform x into y, which lives in the new embedding space, through the equation:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "y = \\sum_j W^x_j Y_j", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "where Y_j is the transform, from step (b), of the X_j that is among the K-nearest neighbours of x.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Original Embedding Spaces. The original word embeddings used are pre-trained GloVe models: Wikipedia 2014 + Gigaword 5 (6B tokens, 400K vocab, 50d, 100d, 200d, & 300d vectors) , and Common Crawl (42B tokens, 1.9M vocab, 300d vectors) (Pennington et al., 2014b) . The vectors are ordered by the frequency of their corresponding words, so the vector representing the word 'the' comes first in the space. Task. We use similarity tasks WS353 (Finkelstein et al., 2001 ) and RG65 (Rubenstein and Goodenough, 1965) . Baseline. We use the performance of the original word embeddings on the tasks. For each original space, we normalize features to [\u22121, +1] using their minimum and maximum values, and then normalize vectors to unit norms. For each pair of words in the similarity task, we take the normalized vectors and measure the cosine similarity. We finally compute the Spearman rank correlation with human judgements.", |
| "cite_spans": [ |
| { |
| "start": 119, |
| "end": 175, |
| "text": "(6B tokens, 400K vocab, 50d, 100d, 200d, & 300d vectors)", |
| "ref_id": null |
| }, |
| { |
| "start": 234, |
| "end": 260, |
| "text": "(Pennington et al., 2014b)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 438, |
| "end": 463, |
| "text": "(Finkelstein et al., 2001", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 475, |
| "end": 508, |
| "text": "(Rubenstein and Goodenough, 1965)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Approach. For a given original embedding, we normalize vectors to unit norms, then conduct Manifold (Mfd) Re-Embedding using LLE as explained in Section 3. For each similarity task, we transform the vectors of the test words into the re-embedding space before computing the cosine similarity and the final Spearman score. We vary the relevant parameters and observe their effect on performance, in order to understand the effectiveness of the approach and its limits.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Average Performance. Table 1 shows that the re-embedding method outperforms the baseline in most cases, with improvements from 0.5% to 5.0%. These results are achieved for effective manifold training windows which start anywhere between positions 5000 and 15000. The table also shows that the improvements occur over spaces with bigger underlying corpora and larger vectors, i.e. good-quality vectors which facilitate the re-embedding.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 21, |
| "end": 28, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Manifold Dimensionality Retention. Figure 3 shows that, for a given window, the re-embedding performs better when the dimensionality of the learned manifold is chosen to be closer to the original space dimensionality. In other words, dimensionality reduction on the original space will bear a cost in performance. Manifold learning typically starts from a high-dimensional raw space, such as pixels, and aims to reduce the dimensionality. In our method we start from a word embedding which is already a good embedding of the raw word co-occurrences. So, dimensionality should be retained, as suggested by Figure 3 , or otherwise information can be lost during eigenvector computation and selection in the manifold learning.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 35, |
| "end": 43, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 599, |
| "end": 607, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Effect of Window Length. Figure 4 shows that the best window length is as close as possible to the number of local neighbours used by the manifold learning. Performance drops slightly at higher window lengths, but becomes stable after the initial drop.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 25, |
| "end": 33, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Effect of Window Start. Figure 5 shows that performance is at first modest when the manifold is trained on the most frequent word vectors (i.e. stop words), but then picks up and outperforms the baseline in most cases. Performance drops gradually as the manifold is trained on relatively less frequent word vectors.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 24, |
| "end": 32, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Effect of the Number of Local Neighbours. Figure 6 shows that performance is generally stable under variation in the number of local neighbours the manifold is learned upon. Lower numbers of local neighbours generally mean faster manifold learning.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 42, |
| "end": 50, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Discussion. The above results show that word re-embedding based on manifold learning can help the original space recover the Euclidean metric, and thus improves performance on word similarity tasks. The ability of re-embedding to achieve improved results depends on the quality of the vectors in the original space. It also depends on the choice of the window used to learn the manifold. The window start is the most influential variable, and it should be chosen just after the stop words in the original space. The choice of the other parameters is relatively easier: the length of the window should be close to or equal to the number of local neighbours, which in turn can be chosen from a wide range with no significant difference. The dimensionality of the original embedding space should be retained and used for learning the manifold to guarantee the best re-embedding.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In this paper we presented a new method to re-embed words from off-the-shelf embeddings based on manifold learning. We showed that this approach is theoretically founded in the metric recovery paradigm and can empirically improve the performance of state-of-the-art embeddings in word similarity tasks. In future work we intend to extend the experiments to other original pre-trained embeddings and other algorithms for manifold learning. We also intend to extend the experiments to NLP tasks beyond word similarity, such as word analogies.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "6" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work was supported with the financial support of the Science Foundation Ireland grant 13/RC/2094 and co-funded under the European Regional Development Fund through the Southern & Eastern Regional Operational Programme to Lero -the Irish Software Research Centre (www.lero.ie).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Deep learning for efficient discriminative parsing", |
| "authors": [ |
| { |
| "first": "Ronan", |
| "middle": [], |
| "last": "Collobert", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "AISTATS", |
| "volume": "15", |
| "issue": "", |
| "pages": "224--232", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ronan Collobert. 2011. Deep learning for efficient dis- criminative parsing. In AISTATS, volume 15, pages 224-232.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Fast and robust neural network joint models for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Rabih", |
| "middle": [], |
| "last": "Zbib", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhongqiang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Lamar", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Richard", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Schwartz", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Makhoul", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "1370--1380", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard M Schwartz, and John Makhoul. 2014. Fast and robust neural network joint mod- els for statistical machine translation. In ACL, pages 1370-1380. Citeseer.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Placing search in context: The concept revisited", |
| "authors": [ |
| { |
| "first": "Lev", |
| "middle": [], |
| "last": "Finkelstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Evgeniy", |
| "middle": [], |
| "last": "Gabrilovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Yossi", |
| "middle": [], |
| "last": "Matias", |
| "suffix": "" |
| }, |
| { |
| "first": "Ehud", |
| "middle": [], |
| "last": "Rivlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Zach", |
| "middle": [], |
| "last": "Solan", |
| "suffix": "" |
| }, |
| { |
| "first": "Gadi", |
| "middle": [], |
| "last": "Wolfman", |
| "suffix": "" |
| }, |
| { |
| "first": "Eytan", |
| "middle": [], |
| "last": "Ruppin", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the 10th international conference on World Wide Web", |
| "volume": "", |
| "issue": "", |
| "pages": "406--414", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th inter- national conference on World Wide Web, pages 406- 414. ACM.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Natural language queries over heterogeneous linked data graphs: A distributional-compositional semantics approach", |
| "authors": [ |
| { |
| "first": "Andr\u00e9", |
| "middle": [], |
| "last": "Freitas", |
| "suffix": "" |
| }, |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Curry", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 19th international conference on Intelligent User Interfaces", |
| "volume": "", |
| "issue": "", |
| "pages": "279--288", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andr\u00e9 Freitas and Edward Curry. 2014. Natural language queries over heterogeneous linked data graphs: A distributional-compositional semantics approach. In Proceedings of the 19th international conference on Intelligent User Interfaces, pages 279-288. ACM.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Nosym: Non-symbolic databases for data decoupling", |
| "authors": [ |
| { |
| "first": "Souleiman", |
| "middle": [], |
| "last": "Hasan", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "the Conference on Innovative Data Systems Research (CIDR)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Souleiman Hasan. 2017. Nosym: Non-symbolic databases for data decoupling. In the Conference on Innovative Data Systems Research (CIDR).", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Thematic event processing", |
| "authors": [ |
| { |
| "first": "Souleiman", |
| "middle": [], |
| "last": "Hasan", |
| "suffix": "" |
| }, |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Curry", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 15th International Middleware Conference, Middleware '14", |
| "volume": "", |
| "issue": "", |
| "pages": "109--120", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Souleiman Hasan and Edward Curry. 2014. Thematic event processing. In Proceedings of the 15th Inter- national Middleware Conference, Middleware '14, pages 109-120, New York, NY, USA. ACM.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Word embeddings as metric recovery in semantic spaces", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Tatsunori", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Hashimoto", |
| "suffix": "" |
| }, |
| { |
| "first": "Tommi", |
| "middle": [ |
| "S" |
| ], |
| "last": "Alvarez-Melis", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Jaakkola", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "4", |
| "issue": "", |
| "pages": "273--286", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tatsunori B Hashimoto, David Alvarez-Melis, and Tommi S Jaakkola. 2016. Word embeddings as met- ric recovery in semantic spaces. Transactions of the Association for Computational Linguistics, 4:273- 286.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Re-embedding words", |
| "authors": [ |
| { |
| "first": "Igor", |
| "middle": [], |
| "last": "Labutov", |
| "suffix": "" |
| }, |
| { |
| "first": "Hod", |
| "middle": [], |
| "last": "Lipson", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "489--493", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Igor Labutov and Hod Lipson. 2013. Re-embedding words. In ACL, pages 489-493.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Less is more: Filtering abnormal dimensions in GloVe", |
| "authors": [ |
| { |
| "first": "Yang-Yin", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Hao", |
| "middle": [], |
| "last": "Ke", |
| "suffix": "" |
| }, |
| { |
| "first": "Hen-Hsen", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Hsin-Hsi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 25th International Conference Companion on World Wide Web", |
| "volume": "", |
| "issue": "", |
| "pages": "71--72", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yang-Yin Lee, Hao Ke, Hen-Hsen Huang, and Hsin- Hsi Chen. 2016. Less is more: Filtering abnormal dimensions in glove. In Proceedings of the 25th In- ternational Conference Companion on World Wide Web, pages 71-72. International World Wide Web Conferences Steering Committee.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Visualizing data using t-SNE", |
| "authors": [ |
| { |
| "first": "Laurens", |
| "middle": [], |
| "last": "Van Der Maaten", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "9", |
| "issue": "", |
| "pages": "2579--2605", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of Machine Learning Research, 9(Nov):2579-2605.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "3111--3119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "All-but-the-top: Simple and effective postprocessing for word representations", |
| "authors": [ |
| { |
| "first": "Jiaqi", |
| "middle": [], |
| "last": "Mu", |
| "suffix": "" |
| }, |
| { |
| "first": "Suma", |
| "middle": [], |
| "last": "Bhat", |
| "suffix": "" |
| }, |
| { |
| "first": "Pramod", |
| "middle": [], |
| "last": "Viswanath", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1702.01417" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiaqi Mu, Suma Bhat, and Pramod Viswanath. 2017. All-but-the-top: Simple and effective postprocess- ing for word representations. arXiv preprint arXiv:1702.01417.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "GloVe: Global vectors for word representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher D", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "EMNLP", |
| "volume": "14", |
| "issue": "", |
| "pages": "1532--1543", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014a. Glove: Global vectors for word representation. In EMNLP, volume 14, pages 1532- 1543.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "GloVe resources", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher D", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [ |
| "http://nlp.stanford.edu/projects/glove/" |
| ], |
| "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014b. Glove resources. Available at: http://nlp.stanford.edu/projects/glove/.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Nonlinear dimensionality reduction by locally linear embedding", |
| "authors": [ |
| { |
| "first": "Sam", |
| "middle": [ |
| "T" |
| ], |
| "last": "Roweis", |
| "suffix": "" |
| }, |
| { |
| "first": "Lawrence", |
| "middle": [ |
| "K" |
| ], |
| "last": "Saul", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Science", |
| "volume": "290", |
| "issue": "", |
| "pages": "2323--2326", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sam T Roweis and Lawrence K Saul. 2000. Nonlin- ear dimensionality reduction by locally linear em- bedding. science, 290(5500):2323-2326.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Contextual correlates of synonymy", |
| "authors": [ |
| { |
| "first": "Herbert", |
| "middle": [], |
| "last": "Rubenstein", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [ |
| "B" |
| ], |
| "last": "Goodenough", |
| "suffix": "" |
| } |
| ], |
| "year": 1965, |
| "venue": "Communications of the ACM", |
| "volume": "8", |
| "issue": "10", |
| "pages": "627--633", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Herbert Rubenstein and John B Goodenough. 1965. Contextual correlates of synonymy. Communica- tions of the ACM, 8(10):627-633.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "A model for analogical reasoning", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [ |
| "E" |
| ], |
| "last": "Rumelhart", |
| "suffix": "" |
| }, |
| { |
| "first": "Adele", |
| "middle": [ |
| "A" |
| ], |
| "last": "Abrahamson", |
| "suffix": "" |
| } |
| ], |
| "year": 1973, |
| "venue": "Cognitive Psychology", |
| "volume": "5", |
| "issue": "1", |
| "pages": "1--28", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David E Rumelhart and Adele A Abrahamson. 1973. A model for analogical reasoning. Cognitive Psy- chology, 5(1):1-28.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Unities in inductive reasoning", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [ |
| "J" |
| ], |
| "last": "Sternberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "K" |
| ], |
| "last": "Gardner", |
| "suffix": "" |
| } |
| ], |
| "year": 1983, |
| "venue": "Journal of Experimental Psychology: General", |
| "volume": "112", |
| "issue": "1", |
| "pages": "80", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert J Sternberg and Michael K Gardner. 1983. Uni- ties in inductive reasoning. Journal of Experimental Psychology: General, 112(1):80.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Word representations: a simple and general method for semi-supervised learning", |
| "authors": [ |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Turian", |
| "suffix": "" |
| }, |
| { |
| "first": "Lev", |
| "middle": [], |
| "last": "Ratinov", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 48th annual meeting of the association for computational linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "384--394", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th annual meeting of the association for compu- tational linguistics, pages 384-394. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "num": null, |
| "type_str": "figure", |
| "text": "Methodology and Related Work." |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "num": null, |
| "type_str": "figure", |
| "text": "Re-Embedding via Manifold Learning." |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "num": null, |
| "type_str": "figure", |
| "text": "Accuracy on WS353 similarity task as a function of manifold dimensionality. (Space is GloVe 42B 300d. Window start = 7000, LLE local neighbours =1000, Window length = 1001.) Accuracy on WS353 as a function of window length. (GloVe 42B 300d, LLE local neighbours =1000. Manifold dimensions =300.)" |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "num": null, |
| "type_str": "figure", |
| "text": "Accuracy on similarity tasks as a function of window start. (a) Original space GloVe 42B 300d, with WS353. (b) 42B 300d, with RG65. (c) 6B 300d, with WS353. (LLE local neighbours =1000, Window length = 1001, Manifold dimensionality = 300.) Accuracy on WS353 as a function of the number of manifold local neighbours. (42B 300d, Window start = 7000, Manifold dimensionality = 300, Window length = local neighbours+1.)" |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "content": "<table><tr><td>(Window start \u2208 [5000, 15000], Number of LLE</td></tr><tr><td>local neighbours =1000, Window length = 1001,</td></tr><tr><td>Manifold dimensionality = Space dimensionality.)</td></tr></table>", |
| "html": null, |
| "text": "Average performance on similarity tasks.", |
| "num": null |
| } |
| } |
| } |
| } |