| { |
| "paper_id": "D17-1031", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T16:14:10.044533Z" |
| }, |
| "title": "Word Embeddings based on Fixed-Size Ordinally Forgetting Encoding", |
| "authors": [ |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Sanu", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "York University", |
| "location": { |
| "addrLine": "4700 Keele Street", |
| "postCode": "M3J 1P3", |
| "settlement": "Toronto", |
| "region": "Ontario", |
| "country": "Canada" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Mingbin", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "York University", |
| "location": { |
| "addrLine": "4700 Keele Street", |
| "postCode": "M3J 1P3", |
| "settlement": "Toronto", |
| "region": "Ontario", |
| "country": "Canada" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Hui", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "York University", |
| "location": { |
| "addrLine": "4700 Keele Street", |
| "postCode": "M3J 1P3", |
| "settlement": "Toronto", |
| "region": "Ontario", |
| "country": "Canada" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Quan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Science and Technology of China", |
| "location": { |
| "settlement": "Hefei", |
| "country": "China" |
| } |
| }, |
| "email": "quanliu@mail.ustc.edu.cn" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In this paper, we propose to learn word embeddings based on the recent fixed-size ordinally forgetting encoding (FOFE) method, which can almost uniquely encode any variable-length sequence into a fixed-size representation. We use FOFE to fully encode the left and right context of each word in a corpus to construct a novel word-context matrix, which is further weighted and factorized using truncated SVD to generate low-dimensional word embedding vectors. We evaluate this alternative method of encoding word-context statistics and show that the new FOFE method has a notable effect on the resulting word embeddings. Experimental results on several popular word similarity tasks demonstrate that the proposed method outperforms many recently popular neural prediction methods as well as the conventional SVD models that use canonical count-based techniques to generate word-context matrices.", |
| "pdf_parse": { |
| "paper_id": "D17-1031", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In this paper, we propose to learn word embeddings based on the recent fixed-size ordinally forgetting encoding (FOFE) method, which can almost uniquely encode any variable-length sequence into a fixed-size representation. We use FOFE to fully encode the left and right context of each word in a corpus to construct a novel word-context matrix, which is further weighted and factorized using truncated SVD to generate low-dimensional word embedding vectors. We evaluate this alternative method of encoding word-context statistics and show that the new FOFE method has a notable effect on the resulting word embeddings. Experimental results on several popular word similarity tasks demonstrate that the proposed method outperforms many recently popular neural prediction methods as well as the conventional SVD models that use canonical count-based techniques to generate word-context matrices.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Low-dimensional vectors as word representations are very popular in NLP tasks such as inferring semantic similarity and relatedness. Most of these representations are based on either matrix factorization or context sampling, described by (Baroni et al., 2014) as count or predict models. The basis for both models is the distributional hypothesis (Harris, 1954), which states that words that appear in similar contexts have similar meaning. Traditional context representations have been obtained by capturing co-occurrences of words within a fixed-size window relative to the focus word. This representation, however, does not encompass the entirety of the context surrounding the focus word, so the distributional hypothesis is not exploited to its fullest extent. In this work, we seek to capture these contexts through the fixed-size ordinally forgetting encoding (FOFE) method, recently proposed in (Zhang et al., 2015b). Beyond merely capturing word co-occurrences, we use FOFE to encode the full context of each focus word, including the order information of the context sequences. We believe the full encoding of contexts can enhance the resulting word embedding vectors, derived by factorizing the corresponding word-context matrix. As argued in (Zhang et al., 2015b), the FOFE method can almost uniquely encode discrete sequences of varying lengths into a fixed-size code, and this encoding method was used to address the challenges of a limited-size window when using deep neural networks for language modeling. The resulting algorithm keeps long-term dependencies while remaining fast. The word order in a sequence is modeled by FOFE through an ordinally-forgetting mechanism that encodes the position of every word in the sequence.", |
| "cite_spans": [ |
| { |
| "start": 237, |
| "end": 258, |
| "text": "(Baroni et al., 2014)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 346, |
| "end": 360, |
| "text": "(Harris, 1954)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 922, |
| "end": 943, |
| "text": "(Zhang et al., 2015b)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 1294, |
| "end": 1315, |
| "text": "(Zhang et al., 2015b)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we elaborate on how to use FOFE to fully encode the context information of each focus word in text corpora, and present a new method to construct the word-context matrix for word embedding, which may be weighted and factorized as in traditional vector space models (Turney and Pantel, 2010). Next, we report our experimental results on several popular word similarity tasks, which demonstrate that the proposed FOFE-based approach leads to significantly better performance on these tasks compared with the conventional vector space models as well as the popular neural prediction methods, such as word2vec, GloVe and the more recent Swivel. Finally, this paper concludes with an analysis and the prospects of combining this approach with other methods.", |
| "cite_spans": [ |
| { |
| "start": 289, |
| "end": 302, |
| "text": "Pantel, 2010)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "There has been some debate as to what the optimal length of a text should be for measuring word similarity. Word occurrences from a fixed context window of words can be used to represent a context (Lund and Burgess, 1996). The word co-occurrence frequencies are based on fixed windows spanning both directions from the focus word. They are then used to create a word-context matrix whose row vectors can be used to measure word similarity. A weighting step is usually applied to highlight words with close association in the co-occurrence matrix, and truncated SVD is used to factorize the weighted matrix to generate low-dimensional word vectors. Recently, (Mikolov et al., 2013a) introduced an alternative way to generate word embeddings using the skip-gram model trained with stochastic gradient descent and negative sampling, known as SGNS. SGNS tries to maximize the dot product w \u2022 c where a word w and a context c are obtained from observed word-context pairs, and meanwhile it tries to minimize the dot product w \u2022 c where c is a negative sample representing contexts that are not observed in the corpus. More recently, (Levy and Goldberg, 2014) showed that the objective function of SGNS essentially seeks to minimize the difference between the model's estimate and the log of the co-occurrence count. Their finding shows that the optimal solution is a weighted factorization of a pointwise mutual information matrix shifted by the log of the number of negative samples. SGNS and GloVe (Pennington et al., 2014) select a fixed window of usually 5 words or fewer around a focus word to encode its context, and the word order information within the window is completely ignored. Other attempts to fully capture the contexts have been successful with the use of recurrent neural networks (RNNs), but these methods are much more expensive to run over large corpora compared with the FOFE method proposed in this paper. Some previous approaches to encode order information, such as BEAGLE (Jones and Mewhort, 2007) and Random Permutations (Sahlgren et al., 2008), typically require expensive operations such as convolution and permutation over all n-grams within a context window to memorize order information for a given word. In contrast, the FOFE method uses only a simple recursion that processes a sentence once to memorize both context and order information for all words in the sentence.", |
| "cite_spans": [ |
| { |
| "start": 197, |
| "end": 221, |
| "text": "(Lund and Burgess, 1996)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 668, |
| "end": 691, |
| "text": "(Mikolov et al., 2013a)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 1172, |
| "end": 1197, |
| "text": "(Levy and Goldberg, 2014)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 1546, |
| "end": 1571, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 2055, |
| "end": 2080, |
| "text": "(Jones and Mewhort, 2007)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 2105, |
| "end": 2128, |
| "text": "(Sahlgren et al., 2008)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "To capture the full essence of the distributional hypothesis, we need to fully encode the left and right context of each focus word in the text, and further take into account that words closer to the focus word should play a bigger role in representing its context than words located much farther away. Traditional co-occurrence word-context matrices fail to address these concerns of context representation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "FOFE based Embedding", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In this work, we propose to make use of the fixed-size ordinally-forgetting encoding (FOFE) method, proposed in (Zhang et al., 2015b) as a unique encoding method for any variable-length sequence of discrete words.", |
| "cite_spans": [ |
| { |
| "start": 112, |
| "end": 133, |
| "text": "(Zhang et al., 2015b)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "FOFE based Embedding", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Given a vocabulary of size K, FOFE uses 1-of-K one-hot representation to represent each word. To encode any variable-length sequence of words, FOFE generates the code using a simple recursive formula from the first word (w 1 ) to the last one (w T ) of the sequence: (assume z 0 = 0)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "FOFE based Embedding", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "z t = \u03b1 \u2022 z t\u22121 + e t (1 \u2264 t \u2264 T )", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "FOFE based Embedding", |
| "sec_num": "3" |
| }, |
| { |
| "text": "where z t denotes the FOFE code for the partial sequence up to word w t , \u03b1 is a constant forgetting factor, and e t denotes the one-hot vector representation of word w t . In this case, the code z T may be viewed as a fixed-size representation of any sequence of ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "FOFE based Embedding", |
| "sec_num": "3" |
| }, |
| { |
| "text": "{w 1 , w 2 , \u2022 \u2022 \u2022 , w T }.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "FOFE based Embedding", |
| "sec_num": "3" |
| }, |
| { |
| "text": "is [\u03b1 4 , \u03b1 + \u03b1 3 , 1 + \u03b1 2 ].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "FOFE based Embedding", |
| "sec_num": "3" |
| }, |
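The FOFE recursion of Eq. (1) is simple enough to sketch directly. The snippet below is an illustrative implementation under our own assumptions, not code from the paper; the function name `fofe_encode` is ours, and words are assumed to be integer ids into a vocabulary of size K. With α = 0.5, encoding the sequence {w1, w2, w3, w2, w3} over a 3-word vocabulary reproduces a code of the form [α^4, α + α^3, 1 + α^2], matching the fragment quoted above.

```python
import numpy as np

def fofe_encode(word_ids, vocab_size, alpha=0.5):
    """Sketch of the FOFE recursion z_t = alpha * z_{t-1} + e_t,
    where e_t is the one-hot vector of the t-th word and z_0 = 0."""
    z = np.zeros(vocab_size)
    for wid in word_ids:
        z *= alpha      # forget older context by the factor alpha
        z[wid] += 1.0   # add the one-hot vector of the current word
    return z

# Sequence {w1, w2, w3, w2, w3} as 0-based word ids over a 3-word vocabulary.
alpha = 0.5
code = fofe_encode([0, 1, 2, 1, 2], vocab_size=3, alpha=alpha)
# code == [alpha**4, alpha + alpha**3, 1 + alpha**2]
```

Because each word's contribution decays by α per position, the final vector records, for every vocabulary entry, a weighted sum over all positions where that word occurred, which is what makes the code (almost) invertible.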
| { |
| "text": "A FOFE code is unique if the original sequence can be unequivocally recovered from it. According to (Zhang et al., 2015b), FOFE codes have some nice theoretical properties that ensure this uniqueness, as stated in the following two theorems 1 :", |
| "cite_spans": [ |
| { |
| "start": 143, |
| "end": 164, |
| "text": "(Zhang et al., 2015b)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "FOFE based Embedding", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Theorem 1 If the forgetting factor \u03b1 satisfies 0 < \u03b1 \u2264 0.5, FOFE is unique for any K and T .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "FOFE based Embedding", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Theorem 2 For 0.5 < \u03b1 < 1, given any finite values of K and T , FOFE is almost unique everywhere for \u03b1 \u2208 (0.5, 1.0), except only a finite set of countable choices of \u03b1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "FOFE based Embedding", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Finally, for \u03b1 values greater than 0 and less than or equal to 0.5, the FOFE code is unique for any sequence; for \u03b1 values greater than 0.5, the chance of collision is extremely low, and the FOFE code is unique in almost all cases. To find out more about the theoretical correctness of FOFE, please refer to (Zhang et al., 2015b). In other words, FOFE codes can almost uniquely encode any sequence, serving as a fixed-size but theoretically lossless representation of any variable-length sequence.", |
| "cite_spans": [ |
| { |
| "start": 302, |
| "end": 323, |
| "text": "(Zhang et al., 2015b)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "FOFE based Embedding", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In this work, we propose to use FOFE to encode the full context in which each focus word appears in the text. As shown in Figure 1, the left context of a focus word, i.e., bank, may be viewed as a sequence and encoded as a FOFE code L from left to right, while its right context is encoded as another FOFE code R from right to left. When a proper forgetting factor \u03b1 is chosen, the two FOFE codes can almost fully represent the context of the focus word. If the focus word appears multiple times in the text, a pair of FOFE codes [L, R] is generated for each occurrence. Next, a mean vector is calculated for each word from all of its occurrences in the text. Finally, as shown in Figure 1, we may line up these mean vectors (one word per row) to form a new word-context matrix, called the FOFE matrix here.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 115, |
| "end": 123, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 669, |
| "end": 677, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "FOFE based Embedding", |
| "sec_num": "3" |
| }, |
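The construction described above, one [L, R] pair per occurrence followed by a per-word mean, can be sketched as follows. This is an illustrative reading of the method under our own assumptions (dense vectors, sentences as lists of word ids); the helper names `fofe_context_row` and `fofe_matrix` are hypothetical.

```python
import numpy as np

def fofe_context_row(sentence, pos, vocab_size, alpha=0.7):
    """Encode the context of the focus word at `pos`: the left context is
    encoded left-to-right (code L) and the right context right-to-left
    (code R), so in both codes the nearest words carry the largest weight."""
    L = np.zeros(vocab_size)
    for wid in sentence[:pos]:                # left context, farthest word first
        L *= alpha
        L[wid] += 1.0
    R = np.zeros(vocab_size)
    for wid in reversed(sentence[pos + 1:]):  # right context, farthest word first
        R *= alpha
        R[wid] += 1.0
    return np.concatenate([L, R])             # the pair [L, R] for this occurrence

def fofe_matrix(sentences, vocab_size, alpha=0.7):
    """Average the [L, R] codes over all occurrences of each word to build
    the word-context (FOFE) matrix, one word per row."""
    rows = np.zeros((vocab_size, 2 * vocab_size))
    counts = np.zeros(vocab_size)
    for sent in sentences:
        for pos, wid in enumerate(sent):
            rows[wid] += fofe_context_row(sent, pos, vocab_size, alpha)
            counts[wid] += 1
    return rows / np.maximum(counts, 1)[:, None]
```

A real implementation over an 80,000-word vocabulary would use sparse vectors rather than dense arrays, but the averaging logic is the same.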
| { |
| "text": "We further weight the above FOFE matrix using the standard positive pointwise mutual information (PMI) (Church and Hanks, 1990), which has been shown to benefit regular word-context matrices (Pantel and Lin, 2002). PMI is used as a measure of association between a word and a context, computed from their co-occurrence frequencies. Positive pointwise mutual information is a commonly adopted variant in which all negative values in the PMI matrix are replaced with zero. The PMI-based weighting function is critical here since it helps to highlight the more surprising events in the original word-context matrix. There are significant benefits to working with low-dimensional dense vectors, as noted by (Deerwester et al., 1990) with the use of truncated singular value decomposition (SVD). Here, we also use truncated SVD to factorize the above weighted FOFE matrix as the product of three matrices U, \u03a3, V T , where U and V T have orthonormal columns and \u03a3 is a diagonal matrix consisting of singular values. If we truncate \u03a3 to rank d, its diagonal values are the top d singular values, and U d can be used to represent all word embeddings with d dimensions, where each row represents a word vector.", |
| "cite_spans": [ |
| { |
| "start": 199, |
| "end": 221, |
| "text": "(Pantel and Lin, 2002)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 746, |
| "end": 772, |
| "text": "(Deerwester et al., 1990)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PMI-based Weighting and SVD-based Matrix Factorization", |
| "sec_num": "4" |
| }, |
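The weighting-plus-factorization pipeline of this section can be sketched as below. This is a minimal dense-matrix illustration under our own assumptions, not the paper's code: `ppmi` and `embed` are hypothetical names, and we use `scipy.sparse.linalg.svds` for the truncated SVD since the paper reports using scipy.

```python
import numpy as np
from scipy.sparse.linalg import svds

def ppmi(M):
    """Positive pointwise mutual information weighting of a nonnegative
    word-context matrix: negative PMI values are clipped to zero."""
    total = M.sum()
    row = M.sum(axis=1, keepdims=True)
    col = M.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(M * total / (row @ col))
    pmi[~np.isfinite(pmi)] = 0.0  # cells with zero counts contribute nothing
    return np.maximum(pmi, 0.0)

def embed(M, d=300):
    """Truncated SVD of the PPMI-weighted matrix; the rows of U_d, scaled
    here by the singular values, serve as d-dimensional word vectors."""
    U, s, Vt = svds(ppmi(M), k=d)
    return U * s
```

Using `U * s` is one common choice; `U` alone, or `U * s**p` for some exponent p (the eigenvalue weighting mentioned later in the paper), are alternatives with the same factorization.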
| { |
| "text": "We conducted experiments on several popular word similarity data sets and compared our FOFE method with other existing word embedding models on these tasks. In this work, we opt to use five data sets: WordSim353 (Finkelstein et al., 2001), MEN (Bruni et al., 2012), Mechanical Turk (Radinsky et al., 2011), Rare Words (Luong et al., 2013) and SimLex-999 (Hill et al., 2015). The word similarity performance is evaluated by the Spearman rank correlation coefficient obtained by comparing the cosine distance between word vectors with human-assigned similarity scores.", |
| "cite_spans": [ |
| { |
| "start": 211, |
| "end": 236, |
| "text": "(Finkelstein et al., 2001", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 244, |
| "end": 264, |
| "text": "(Bruni et al., 2012)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 283, |
| "end": 306, |
| "text": "(Radinsky et al., 2011)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 320, |
| "end": 340, |
| "text": "(Luong et al., 2013)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 356, |
| "end": 375, |
| "text": "(Hill et al., 2015)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
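The evaluation protocol described above can be sketched as follows. This is an illustrative helper under our own assumptions (a vector matrix plus a word-to-row index; pairs with out-of-vocabulary words are skipped); the function name `evaluate` is hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate(vectors, word_index, triples):
    """Spearman rank correlation between cosine similarities of word
    vectors and human similarity scores on (word1, word2, score) triples."""
    model_scores, human_scores = [], []
    for w1, w2, score in triples:
        if w1 in word_index and w2 in word_index:
            v1 = vectors[word_index[w1]]
            v2 = vectors[word_index[w2]]
            cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
            model_scores.append(cos)
            human_scores.append(score)
    rho, _ = spearmanr(model_scores, human_scores)
    return rho
```

Because Spearman correlation depends only on ranks, any monotone transform of the cosine scores leaves the reported number unchanged.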
| { |
| "text": "For our training data, we use the standard enwik9 corpus, which contains 130 million words. The pre-processing stage includes discarding extremely long sentences, tokenizing, lowercasing and treating each sentence as a context. Our vocabulary consists of the 80,000 most frequent words in the corpus; all words not in the vocabulary are replaced with the token <unk>. In this work, we use a Python-based library called scipy 2 to perform truncated SVD to factorize all word-context matrices.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Our first baseline is the conventional vector space model (VSM) (Turney and Pantel, 2010), relying on the PMI-weighted co-occurrence matrix with dimensionality reduction performed using truncated SVD. The dimension of word vectors is chosen to be 300, and this number is kept the same for all models examined in this paper. Our main goal is to outperform VSM, as the model proposed in this paper also uses SVD-based matrix factorization; this allows for appropriate comparisons between the different word encoding methods. For completeness, the other non-SVD-based embedding models, mainly the more recent neural prediction methods, are also compared in our experiments. As a result, we build the second baseline using the skip-gram model provided by the word2vec software package (Mikolov et al., 2013a), denoted as SGNS. The word embeddings are generated using the recommended hyper-parameters from (Levy et al., 2015). Their findings show that a larger number of negative samples is preferable and that increasing the window size yields minimal improvements on word similarity tasks. In our experiments the number of negative samples is set to 5 and the window size is set to 5. In addition, we set the subsampling rate to 10\u22124 and run 3 iterations for training. Besides SGNS, we also obtained results for the CBOW, GloVe (Pennington et al., 2014) and Swivel (Shazeer et al., 2016) models using similar recommended settings. While the window size has a fixed limit in the baseline models, our model has no window size parameter: when generating the FOFE codes, the entire sentence is fully captured, and the distinction between left and right contexts is preserved. The impact of closer context words is further highlighted by the forgetting factor, which is unique to the FOFE-based word embedding.", |
| "cite_spans": [ |
| { |
| "start": 76, |
| "end": 89, |
| "text": "Pantel, 2010)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 795, |
| "end": 818, |
| "text": "(Mikolov et al., 2013a)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 916, |
| "end": 935, |
| "text": "(Levy et al., 2015)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 1332, |
| "end": 1357, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 1369, |
| "end": 1390, |
| "text": "(Shazeer et al., 2016", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Finally, we use the FOFE codes to construct the word-context matrix and generate word embeddings as described in sections 3 and 4. Throughout our experiments, we have chosen a constant forgetting factor \u03b1 = 0.7; experimenting with different \u03b1 values within [0.6, 0.9] when generating FOFE codes produced no significant difference in word similarity scores.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We applied the same hyperparameters to both the VSM and FOFE methods and fine-tuned them based on the recommended settings provided in (Levy et al., 2015). Although it has been previously reported that context distribution smoothing (Mikolov et al., 2013b) can provide a net positive effect, it did not yield significant gains in our experiments. On the other hand, eigenvalue weighting (Caron, 2001) proved to be highly effective for some data sets but ineffectual for others. The net benefit, however, is palpable, and we include it for both the VSM and FOFE methods.", |
| "cite_spans": [ |
| { |
| "start": 135, |
| "end": 154, |
| "text": "(Levy et al., 2015)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 234, |
| "end": 257, |
| "text": "(Mikolov et al., 2013b)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 382, |
| "end": 395, |
| "text": "(Caron, 2001)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "The best results of all word embedding models are summarized in Table 1 for all five examined data sets, which include the traditional count-based VSM with SVD alongside SGNS using word2vec and our proposed FOFE word embeddings. The most discernible piece of information from the table is that the FOFE method significantly outperforms the traditional count-based VSM method on most of these word similarity tasks. The results in Table 1 show that substantial gains are obtained by FOFE on the WordSim353, MEN and Rare Words data sets. The MEN data set shows a 7% relative improvement over the conventional VSM. Among these five data sets, the proposed FOFE word embedding significantly outperforms VSM on four tasks while yielding similar performance to VSM on the last data set, i.e., SimLex-999. FOFE also outperforms all the other models except Swivel on the Mech Turk data set. It is important to note that this paper does not claim that SVD is obligatory to obtain the best model. The FOFE method can be combined with other models such as Swivel in place of count-based encoding methods. It is also theoretically guaranteed that the original sentence is recoverable from its FOFE code. This theoretical guarantee is clearly missing in previous methods to encode word order information, such as BEAGLE and Random Permutations. It is evident that, overall, the FOFE encoding method achieves significant gains in performance on these word similarity tests over the traditional VSM method that applies the same factorization method. This is substantial, as (Levy et al., 2015) demonstrates that larger window sizes do not pay off when using SVD and that the optimal context window is 2. We establish that we can indeed encode more information into our embedding with the FOFE codes.", |
| "cite_spans": [ |
| { |
| "start": 1585, |
| "end": 1604, |
| "text": "(Levy et al., 2015)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 64, |
| "end": 71, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 434, |
| "end": 441, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "In summary, our experimental results show great promise in using the FOFE encoding to represent word contexts for traditional matrix factorization methods. As for future work, the FOFE encoding method may be combined with other popular algorithms, such as Swivel, to replace the cooccurrence statistics based on a fixed window size.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The ability to capture the full context without restriction can be a crucial factor in generating superior word embeddings that excel in NLP tasks. The fixed-size ordinally forgetting encoding (FOFE) can capture large contexts while discounting contexts that are farther away as less significant. Conventional embeddings are derived from ambiguous co-occurrence statistics that fail to adequately discriminate context words even within the fixed-size window. The FOFE encoding technique outperforms other approaches, achieving state-of-the-art results on several word similarity tasks when combined with prominent factorization techniques.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "See (Zhang et al., 2015a) for the proof of these two theorems.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "See http://docs.scipy.org/doc/scipy/reference/.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work is partially supported by a Discovery Grant from Natural Sciences and Engineering Research Council (NSERC) of Canada, and a research donation from iFLYTEK Co., Hefei, China.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| }, |
| { |
| "first": "Georgiana", |
| "middle": [], |
| "last": "Dinu", |
| "suffix": "" |
| }, |
| { |
| "first": "Germ\u00e1n", |
| "middle": [], |
| "last": "Kruszewski", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "ACL (1)", |
| "volume": "", |
| "issue": "", |
| "pages": "238--247", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco Baroni, Georgiana Dinu, and Germ\u00e1n Kruszewski. 2014. Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In ACL (1), pages 238-247.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Distributional semantics in technicolor", |
| "authors": [ |
| { |
| "first": "Elia", |
| "middle": [], |
| "last": "Bruni", |
| "suffix": "" |
| }, |
| { |
| "first": "Gemma", |
| "middle": [], |
| "last": "Boleda", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| }, |
| { |
| "first": "Nam-Khanh", |
| "middle": [], |
| "last": "Tran", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "136--145", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Elia Bruni, Gemma Boleda, Marco Baroni, and Nam- Khanh Tran. 2012. Distributional semantics in tech- nicolor. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguis- tics, pages 136-145. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Experiments with lsa scoring: Optimal rank and basis", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Caron", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the SIAM Computational Information Retrieval Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "157--169", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Caron. 2001. Experiments with lsa scoring: Optimal rank and basis. In Proceedings of the SIAM Computational Information Retrieval Work- shop, pages 157-169.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Word association norms, mutual information, and lexicography", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Kenneth", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Church", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hanks", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Computational linguistics", |
| "volume": "16", |
| "issue": "1", |
| "pages": "22--29", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kenneth W. Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicog- raphy. Computational linguistics, 16(1):22-29.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Journal of the American society for information science", |
| "authors": [ |
| { |
| "first": "Scott", |
| "middle": [], |
| "last": "Deerwester", |
| "suffix": "" |
| }, |
| { |
| "first": "Susan", |
| "middle": [ |
| "T" |
| ], |
| "last": "Dumais", |
| "suffix": "" |
| }, |
| { |
| "first": "George", |
| "middle": [ |
| "W" |
| ], |
| "last": "Furnas", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [ |
| "K" |
| ], |
| "last": "Landauer", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Harshman", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "", |
| "volume": "41", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Scott Deerwester, Susan T Dumais, George W Fur- nas, Thomas K Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Jour- nal of the American society for information science, 41(6):391.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Placing search in context: The concept revisited", |
| "authors": [ |
| { |
| "first": "Lev", |
| "middle": [], |
| "last": "Finkelstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Evgeniy", |
| "middle": [], |
| "last": "Gabrilovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Yossi", |
| "middle": [], |
| "last": "Matias", |
| "suffix": "" |
| }, |
| { |
| "first": "Ehud", |
| "middle": [], |
| "last": "Rivlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Zach", |
| "middle": [], |
| "last": "Solan", |
| "suffix": "" |
| }, |
| { |
| "first": "Gadi", |
| "middle": [], |
| "last": "Wolfman", |
| "suffix": "" |
| }, |
| { |
| "first": "Eytan", |
| "middle": [], |
| "last": "Ruppin", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the 10th international conference on World Wide Web", |
| "volume": "", |
| "issue": "", |
| "pages": "406--414", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th inter- national conference on World Wide Web, pages 406- 414. ACM.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Distributional structure. Word", |
| "authors": [ |
| { |
| "first": "Zellig", |
| "middle": [ |
| "S" |
| ], |
| "last": "Harris", |
| "suffix": "" |
| } |
| ], |
| "year": 1954, |
| "venue": "", |
| "volume": "10", |
| "issue": "", |
| "pages": "146--162", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zellig S Harris. 1954. Distributional structure. Word, 10(2-3):146-162.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation", |
| "authors": [ |
| { |
| "first": "Felix", |
| "middle": [], |
| "last": "Hill", |
| "suffix": "" |
| }, |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with (gen- uine) similarity estimation. Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Representing word meaning and order information in a composite holographic lexicon", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mewhort", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Psychological Review", |
| "volume": "114", |
| "issue": "1", |
| "pages": "1--37", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M.N Jones and D.J.K Mewhort. 2007. Represent- ing word meaning and order information in a com- posite holographic lexicon. Psychological Review, 114(1):1-37.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Neural word embedding as implicit matrix factorization", |
| "authors": [ |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "2177--2185", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In Ad- vances in Neural Information Processing Systems, pages 2177-2185.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Improving distributional similarity with lessons learned from word embeddings", |
| "authors": [ |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Ido", |
| "middle": [], |
| "last": "Dagan", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "3", |
| "issue": "", |
| "pages": "211--225", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Im- proving distributional similarity with lessons learned from word embeddings. Transactions of the Associ- ation for Computational Linguistics, 3:211-225.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Producing high-dimensional semantic spaces from lexical cooccurrence", |
| "authors": [ |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Lund", |
| "suffix": "" |
| }, |
| { |
| "first": "Curt", |
| "middle": [], |
| "last": "Burgess", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Behavior Research Methods, Instruments, and Computers", |
| "volume": "28", |
| "issue": "", |
| "pages": "203--208", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kevin Lund and Curt Burgess. 1996. Producing high-dimensional semantic spaces from lexical co- occurrence. Behavior Research Methods, Instru- ments, and Computers, 28(2):203-208.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Better word representations with recursive neural networks for morphology", |
| "authors": [ |
| { |
| "first": "Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher D", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "104--113", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thang Luong, Richard Socher, and Christopher D Manning. 2013. Better word representations with recursive neural networks for morphology. In CoNLL, pages 104-113. Citeseer.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Efficient estimation of word representations in vector space", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1301.3781" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "3111--3119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Discovering word senses from text", |
| "authors": [ |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| }, |
| { |
| "first": "Dekang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining", |
| "volume": "", |
| "issue": "", |
| "pages": "613--619", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Patrick Pantel and Dekang Lin. 2002. Discovering word senses from text. In Proceedings of the eighth ACM SIGKDD international conference on Knowl- edge discovery and data mining, pages 613-619.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Glove: Global vectors for word representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1532--1543", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "A word at a time: computing word relatedness using temporal semantic analysis", |
| "authors": [ |
| { |
| "first": "Kira", |
| "middle": [], |
| "last": "Radinsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Agichtein", |
| "suffix": "" |
| }, |
| { |
| "first": "Evgeniy", |
| "middle": [], |
| "last": "Gabrilovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Shaul", |
| "middle": [], |
| "last": "Markovitch", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 20th international conference on World wide web", |
| "volume": "", |
| "issue": "", |
| "pages": "337--346", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, and Shaul Markovitch. 2011. A word at a time: computing word relatedness using temporal semantic analysis. In Proceedings of the 20th international conference on World wide web, pages 337-346. ACM.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Permutations as a means to encode order in word space", |
| "authors": [ |
| { |
| "first": "Magnus", |
| "middle": [], |
| "last": "Sahlgren", |
| "suffix": "" |
| }, |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "Holst", |
| "suffix": "" |
| }, |
| { |
| "first": "Kanerva", |
| "middle": [], |
| "last": "Pentti", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 30th Annual Conference of the Cognitive Science Society", |
| "volume": "", |
| "issue": "", |
| "pages": "1300--1305", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Magnus Sahlgren, Anders Holst, and Kanerva Pentti. 2008. Permutations as a means to encode order in word space. In In Proceedings of the 30th Annual Conference of the Cognitive Science Society, pages 1300-1305.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Swivel: Improving embeddings by noticing whats missing", |
| "authors": [ |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Doherty", |
| "suffix": "" |
| }, |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Evans", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Waterson", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1602.02215" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noam Shazeer, Ryan Doherty, Colin Evans, and Chris Waterson. 2016. Swivel: Improving embed- dings by noticing whats missing. arXiv preprint arXiv:1602.02215.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "From frequency to meaning: Vector space models of semantics", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [ |
| "D" |
| ], |
| "last": "Turney", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Journal of Artificial Intelligence Research", |
| "volume": "37", |
| "issue": "", |
| "pages": "141--188", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter D. Turney and Patrick Pantel. 2010. From fre- quency to meaning: Vector space models of se- mantics. Journal of Artificial Intelligence Research, 37:141-188.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "A fixed-size encoding method for variable-length sequences with its application to neural network language models", |
| "authors": [ |
| { |
| "first": "Shiliang", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Hui", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mingbin", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Junfeng", |
| "middle": [], |
| "last": "Hou", |
| "suffix": "" |
| }, |
| { |
| "first": "Lirong", |
| "middle": [], |
| "last": "Dai", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1505.01504" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shiliang Zhang, Hui Jiang, Mingbin Xu, Junfeng Hou, and Lirong Dai. 2015a. A fixed-size encoding method for variable-length sequences with its ap- plication to neural network language models. arXiv preprint arXiv:1505.01504.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "The fixed-size ordinallyforgetting encoding method for neural network language models", |
| "authors": [ |
| { |
| "first": "Shiliang", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Hui", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mingbin", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Junfeng", |
| "middle": [], |
| "last": "Hou", |
| "suffix": "" |
| }, |
| { |
| "first": "Lirong", |
| "middle": [], |
| "last": "Dai", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "495--500", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shiliang Zhang, Hui Jiang, Mingbin Xu, Junfeng Hou, and Lirong Dai. 2015b. The fixed-size ordinally- forgetting encoding method for neural network lan- guage models. In Proceedings of the 53rd Annual Meeting of the Association for Computational Lin- guistics and the 7th International Joint Conference on Natural Language Processing, pages 495-500, Beijing, China. Association for Computational Lin- guistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "text": "i) encoding left and right contexts of each focus word with FOFE and ii) forming the FOFE word-context matrix.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>For ex-</td></tr><tr><td>ample, assume we have three symbols in vocabu-</td></tr><tr><td>lary, e.g., A, B, C, whose 1-of-K codes are [1, 0, 0],</td></tr><tr><td>[0, 1, 0] and [0, 0, 1]</td></tr></table>", |
| "num": null, |
| "text": "respectively. When calculating from left to right, the FOFE code for the sequence {ABC} is [\u03b1 2 , \u03b1, 1], and that of {ABCBC}" |
| }, |
| "TABREF1": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td/><td>left FOFE code L</td><td>right FOFE code R</td></tr><tr><td colspan=\"3\">i) encoding left and right context for one occurrence of the focus word, i.e. bank</td></tr><tr><td>w 1</td><td>left FOFE code L w1</td><td>right FOFE code R w1</td></tr><tr><td>w 2</td><td>left FOFE code L w2</td><td>right FOFE code R w2</td></tr><tr><td>w K</td><td>left FOFE code L wK</td><td>right FOFE code R wK</td></tr><tr><td/><td/><td>K x 2K</td></tr><tr><td/><td colspan=\"2\">ii) forming the FOFE word-context matrix for all words</td></tr></table>", |
| "num": null, |
| "text": "Back in the day, we had an entire bank of computers devoted to this problem." |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Method</td><td>WordSim353</td><td>MEN</td><td colspan=\"3\">Mech Turk Rare Words SimLex-999</td></tr><tr><td>VSM+SVD</td><td>0.7109</td><td>0.7130</td><td>0.6258</td><td>0.4813</td><td>0.3866</td></tr><tr><td>CBOW</td><td>0.6763</td><td>0.6768</td><td>0.6621</td><td>0.4280</td><td>0.3549</td></tr><tr><td>GloVe</td><td>0.5873</td><td>0.6350</td><td>0.5831</td><td>0.3934</td><td>0.2883</td></tr><tr><td>SGNS</td><td>0.7028</td><td>0.6689</td><td>0.6187</td><td>0.4360</td><td>0.3709</td></tr><tr><td>Swivel</td><td>0.7303</td><td>0.7246</td><td>0.7024</td><td>0.4430</td><td>0.3323</td></tr><tr><td>FOFE+SVD</td><td>0.7580</td><td>0.7637</td><td>0.6525</td><td>0.5002</td><td>0.3866</td></tr><tr><td colspan=\"2\">weighting parameter tuning</td><td/><td/><td/><td/></tr></table>", |
| "num": null, |
| "text": "The best achieved performance of various word embedding models on all five examined word similarity tasks." |
| } |
| } |
| } |
| } |