{
"paper_id": "D17-1037",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:16:08.058542Z"
},
"title": "Incremental Skip-gram Model with Negative Sampling",
"authors": [
{
"first": "Nobuhiro",
"middle": [],
"last": "Kaji",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Yahoo Japan Corporation",
"location": {}
},
"email": "nkaji@yahoo-corp.jp"
},
{
"first": "Hayato",
"middle": [],
"last": "Kobayashi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Yahoo Japan Corporation",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper explores an incremental training strategy for the skip-gram model with negative sampling (SGNS) from both empirical and theoretical perspectives. Existing methods of neural word embeddings, including SGNS, are multi-pass algorithms and thus cannot perform incremental model update. To address this problem, we present a simple incremental extension of SGNS and provide a thorough theoretical analysis to demonstrate its validity. Empirical experiments demonstrated the correctness of the theoretical analysis as well as the practical usefulness of the incremental algorithm.",
"pdf_parse": {
"paper_id": "D17-1037",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper explores an incremental training strategy for the skip-gram model with negative sampling (SGNS) from both empirical and theoretical perspectives. Existing methods of neural word embeddings, including SGNS, are multi-pass algorithms and thus cannot perform incremental model update. To address this problem, we present a simple incremental extension of SGNS and provide a thorough theoretical analysis to demonstrate its validity. Empirical experiments demonstrated the correctness of the theoretical analysis as well as the practical usefulness of the incremental algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Existing methods of neural word embeddings are typically designed to go through the entire training data multiple times. For example, negative sampling (Mikolov et al., 2013b) needs to precompute the noise distribution from the entire training data before performing Stochastic Gradient Descent (SGD). It thus needs to go through the training data at least twice. Similarly, hierarchical soft-max (Mikolov et al., 2013b) has to determine the tree structure and GloVe (Pennington et al., 2014) has to count co-occurrence frequencies before performing SGD.",
"cite_spans": [
{
"start": 152,
"end": 175,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF11"
},
{
"start": 397,
"end": 420,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF11"
},
{
"start": 467,
"end": 492,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The fact that those existing methods are multipass algorithms means that they cannot perform incremental model update when additional training data is provided. Instead, they have to re-train the model on the old and new training data from scratch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, the re-training is obviously inefficient since it has to process the entire training data received thus far whenever new training data is provided. This is especially problematic when the amount of the new training data is relatively smaller than the old one. One such situation is that the embedding model is updated on a small amount of training data that includes newly emerged words for instantly adding them to the vocabulary set. Another situation is that the word embeddings are learned from ever-evolving data such as news articles and microblogs (Peng et al., 2017) and the embedding model is periodically updated on newly generated data (e.g., once in a week or month).",
"cite_spans": [
{
"start": 564,
"end": 583,
"text": "(Peng et al., 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper investigates an incremental training method of word embeddings with a focus on the skip-gram model with negative sampling (SGNS) (Mikolov et al., 2013b) for its popularity. We present a simple incremental extension of SGNS, referred to as incremental SGNS, and provide a thorough theoretical analysis to demonstrate its validity. Our analysis reveals that, under a mild assumption, the optimal solution of incremental SGNS agrees with the original SGNS when the training data size is infinitely large. See Section 4 for the formal and strict statement. Additionally, we present techniques for the efficient implementation of incremental SGNS.",
"cite_spans": [
{
"start": 140,
"end": 163,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Three experiments were conducted to assess the correctness of the theoretical analysis as well as the practical usefulness of incremental SGNS. The first experiment empirically investigates the validity of the theoretical analysis result. The second experiment compares the word embeddings learned by incremental SGNS and the original SGNS across five benchmark datasets, and demonstrates that those word embeddings are of comparable quality. The last experiment explores the training time of incremental SGNS, demonstrating that it is able to save much training time by avoiding expensive re-training when additional training data is provided.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As a preliminary, this section provides a brief overview of SGNS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SGNS Overview",
"sec_num": "2"
},
{
"text": "Given a word sequence, w 1 , w 2 , . . . , w n , for training, the skip-gram model seeks to minimize the following objective to learn word embeddings:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SGNS Overview",
"sec_num": "2"
},
{
"text": "L SG = \u2212 1 n n \u2211 i=1 \u2211 |j|\u2264c j\u0338 =0 log p(w i+j | w i ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SGNS Overview",
"sec_num": "2"
},
{
"text": "where w i is a target word and w i+j is a context word within a window of size c. p(w i+j | w i ) represents the probability that w i+j appears within the neighbor of w i , and is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SGNS Overview",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(w i+j | w i ) = exp(t w i \u2022 c w i+j ) \u2211 w\u2208W exp(t w i \u2022 c w ) ,",
"eq_num": "(1)"
}
],
"section": "SGNS Overview",
"sec_num": "2"
},
{
"text": "where t w and c w are w's embeddings when it behaves as a target and context, respectively. W represents the vocabulary set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SGNS Overview",
"sec_num": "2"
},
{
"text": "Since it is too expensive to optimize the above objective, Mikolov et al. (2013b) proposed negative sampling to speed up skip-gram training. This approximates Eq. (1) using sigmoid functions and k randomly-sampled words, called negative samples. The resulting objective is given as",
"cite_spans": [
{
"start": 59,
"end": 81,
"text": "Mikolov et al. (2013b)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SGNS Overview",
"sec_num": "2"
},
{
"text": "L SGNS = \u2212 1 n n \u2211 i=1 \u2211 |j|\u2264c j\u0338 =0 \u03c8 + w i ,w i+j +kE v\u223cq(v) [\u03c8 \u2212 w i ,v ],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SGNS Overview",
"sec_num": "2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SGNS Overview",
"sec_num": "2"
},
{
"text": "\u03c8 + w,v = log \u03c3(t w \u2022 c v ), \u03c8 \u2212 w,v = log \u03c3(\u2212t w \u2022 c v )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SGNS Overview",
"sec_num": "2"
},
{
"text": ", and \u03c3(x) is the sigmoid function. The negative sample v is drawn from a smoothed unigram probability distribution referred to as noise distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SGNS Overview",
"sec_num": "2"
},
{
"text": "q(v) \u221d f (v) \u03b1 , where f (v)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SGNS Overview",
"sec_num": "2"
},
{
"text": "represents the frequency of a word v in the training data and \u03b1 is a smoothing parameter (0 < \u03b1 \u2264 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SGNS Overview",
"sec_num": "2"
},
{
"text": "The objective is optimized by SGD. Given a target-context word pair (w i and w i+j ) and k negative samples",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SGNS Overview",
"sec_num": "2"
},
{
"text": "(v 1 , v 2 , . . . , v k ) drawn from the noise distribution, the gradient of \u2212\u03c8 + w i ,w i+j \u2212 kE v\u223cq(v) [\u03c8 \u2212 w i ,v ] \u2248 \u2212\u03c8 + w i ,w i+j \u2212 \u2211 k k \u2032 =1 \u03c8 \u2212 w i ,v k \u2032",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SGNS Overview",
"sec_num": "2"
},
{
"text": "is computed. Then, the gradient descent is performed to update t w i , c w i+j , and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SGNS Overview",
"sec_num": "2"
},
{
"text": "c v 1 , . . . , c v k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SGNS Overview",
"sec_num": "2"
},
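The SGD step just described can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `sgns_step` and the fixed learning rate are hypothetical (the paper's incremental variant uses AdaGrad instead of a fixed rate).

```python
import numpy as np

def sgns_step(t, c, wi, ctx, negs, lr=0.025):
    """One SGD step of skip-gram with negative sampling.

    t, c : target/context embedding matrices, shape (|W|, d)
    wi   : target word id; ctx : context word id
    negs : ids of k negative samples drawn from the noise distribution
    """
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    grad_t = np.zeros_like(t[wi])
    # positive pair: ascend on log sigma(t_wi . c_ctx)
    g = 1.0 - sigmoid(t[wi] @ c[ctx])
    grad_t += g * c[ctx]
    c[ctx] += lr * g * t[wi]
    # negative samples: ascend on log sigma(-t_wi . c_v)
    for v in negs:
        g = -sigmoid(t[wi] @ c[v])
        grad_t += g * c[v]
        c[v] += lr * g * t[wi]
    t[wi] += lr * grad_t
```

Repeating this step on the same positive pair drives the pair's score toward 1 while pushing the negative samples' scores toward 0.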
{
"text": "SGNS training needs to go over the entire training data to pre-compute the noise distribution q(v) before performing SGD. This makes it difficult to perform incremental model update when additional training data is provided.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SGNS Overview",
"sec_num": "2"
},
{
"text": "This section explores incremental training of SGNS. The incremental training algorithm (Section 3.1), its efficient implementation (Section 3.2), and the computational complexity (Section 3.3) are discussed in turn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental SGNS",
"sec_num": "3"
},
{
"text": "Algorithm 1 presents incremental SGNS, which goes through the training data in a single-pass to update word embeddings incrementally. Unlike the original SGNS, it does not pre-compute the noise distribution. Instead, it reads the training data word by word 1 to incrementally update the word frequency distribution and the noise distribution while performing SGD. Hereafter, the original SGNS (c.f., Section 2) is referred to as batch SGNS to emphasize that the noise distribution is computed in a batch fashion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.1"
},
{
"text": "The learning rate for SGD is adjusted by using AdaGrad (Duchi et al., 2011) . Although the linear decay function has widely been used for training batch SGNS (Mikolov, 2013) , adaptive methods such as AdaGrad are more suitable for the incremental training since the amount of training data is unknown in advance or can increase unboundedly.",
"cite_spans": [
{
"start": 55,
"end": 75,
"text": "(Duchi et al., 2011)",
"ref_id": "BIBREF3"
},
{
"start": 158,
"end": 173,
"text": "(Mikolov, 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.1"
},
{
"text": "It is straightforward to extend the incremental SGNS to the mini-batch setting by reading a subset of the training data (or mini-batch), rather than a single word, at a time to update the noise distribution and perform SGD (Algorithm 2). Although this paper primarily focuses on the incremental SGNS, the mini-batch algorithm is also important in practical terms because it is easier to be multithreaded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.1"
},
{
"text": "Alternatives to Algorithms 2 might be possible. Other possible approaches include computing the noise distribution separately on each subset of the training data, fixing the noise distribution after computing it from the first (possibly large) subset, and so on. We exclude such alternatives from our investigation because it is considered difficult to provide them with theoretical justification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.1"
},
{
"text": "Although the incremental SGNS is conceptually simple, implementation issues are involved. 1 In practice, Algorithm 1 buffers a sequence of words wi\u2212c, . . . , wi+c (rather than a single word wi) at each step, as it requires an access to the context words wi+j in line 7. This is not a practical problem because the window size c is usually small and independent from the training data size n.",
"cite_spans": [
{
"start": 90,
"end": 91,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient implementation",
"sec_num": "3.2"
},
{
"text": "Algorithm 1 Incremental SGNS 1: f (w) \u2190 0 for all w \u2208 W 2: for i = 1, . . . , n do 3: f (wi) \u2190 f (wi) + 1 4: q(w) \u2190 f (w) \u03b1 \u2211 w \u2032 \u2208W f (w \u2032 ) \u03b1 for all w \u2208 W 5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient implementation",
"sec_num": "3.2"
},
{
"text": "for j = \u2212c, . . . , \u22121, 1, . . . , c do 6: draw k negative samples from q(w): v1, . . . , v k 7: use SGD to update tw i , cw i+j , and cv 1 , . . . , cv k 8: end for 9: end for Algorithm 2 Mini-batch SGNS 1: for each subset D of the training data do 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient implementation",
"sec_num": "3.2"
},
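As a rough illustration of Algorithm 1's single-pass structure, the sketch below interleaves frequency counting, recomputation of the smoothed noise distribution, and negative sampling. It is a simplified reading of the pseudocode, not the authors' code; recomputing the sampling weights at every word costs O(|W|) per step and is done here only for clarity (the paper's adaptive unigram table avoids exactly this cost), and `update` is a hypothetical callback standing in for the SGD step of line 7.

```python
import random
from collections import Counter

def incremental_sgns(words, window=2, k=2, alpha=0.75, update=None):
    """Single-pass sketch of Algorithm 1: maintain word frequencies and the
    smoothed noise distribution while visiting each target-context pair."""
    freq = Counter()
    for i, w in enumerate(words):
        freq[w] += 1  # line 3: incremental frequency update
        # line 4: noise distribution q(v) ∝ f(v)^alpha over words seen so far
        vocab = list(freq)
        weights = [freq[v] ** alpha for v in vocab]
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if j == i:
                continue
            negs = random.choices(vocab, weights=weights, k=k)  # line 6
            if update is not None:
                update(w, words[j], negs)  # line 7: gradient step
    return freq
```

Note that the noise distribution a pair sees depends only on the first i words, which is precisely the q_i(v) analyzed in Section 4.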
{
"text": "update the noise distribution using D 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient implementation",
"sec_num": "3.2"
},
{
"text": "perform SGD over D 4: end for 3.2.1 Dynamic vocabulary One problem that arises when training incremental SGNS is how to maintain the vocabulary set. Since new words emerge endlessly in the training data, the vocabulary set can grow unboundedly and exhaust a memory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient implementation",
"sec_num": "3.2"
},
{
"text": "We address this problem by dynamically changing the vocabulary set. The Misra-Gries algorithm (Misra and Gries, 1982) is used to approximately keep track of top-m frequent words during training, and those words are used as the dynamic vocabulary set. This method allows the maximum vocabulary size to be explicitly limited to m, while being able to dynamically change the vocabulary set.",
"cite_spans": [
{
"start": 94,
"end": 117,
"text": "(Misra and Gries, 1982)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient implementation",
"sec_num": "3.2"
},
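The Misra-Gries summary used above for the dynamic vocabulary can be sketched as follows. This is the generic heavy-hitters algorithm (any item occurring more than n/(m+1) times in a stream of length n is guaranteed to survive), not the paper's exact vocabulary-management code.

```python
def misra_gries(stream, m):
    """Misra-Gries summary: keeps at most m counters and approximately
    tracks the most frequent items in a single pass over the stream."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < m:
            counters[x] = 1
        else:
            # decrement every counter; evict those that reach zero
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters
```

Using the surviving keys as the vocabulary caps its size at m while letting newly frequent words displace fading ones.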
{
"text": "Another problem is how to generate negative samples efficiently. Since k negative samples per target-context pair have to be generated by the noise distribution, the sampling speed has a significant effect on the overall training efficiency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive unigram table",
"sec_num": "3.2.2"
},
{
"text": "Let us first examine how negative samples are generated in batch SGNS. In a popular implementation (Mikolov, 2013) , a word array (referred to as a unigram table) is constructed such that the number of a word w in it is proportional to q(w). See Table 1 for an example. Using the unigram table, negative samples can be efficiently generated by sampling the table elements uniformly at random. It takes only O(1) time to generate one negative sample.",
"cite_spans": [
{
"start": 99,
"end": 114,
"text": "(Mikolov, 2013)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 246,
"end": 253,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Adaptive unigram table",
"sec_num": "3.2.2"
},
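The batch unigram table can be sketched as below. The function names and the table size are illustrative assumptions, not word2vec's actual code; the point is that after the table is built, one negative sample is a single uniform index lookup.

```python
import random

def build_unigram_table(freq, alpha=0.75, table_size=1000):
    """Batch unigram table: word w fills a share of the table proportional
    to f(w)^alpha, so a uniform draw returns w with probability q(w)."""
    total = sum(f ** alpha for f in freq.values())
    table = []
    for w, f in freq.items():
        table.extend([w] * int(round(table_size * (f ** alpha) / total)))
    return table

def draw_negative(table):
    # one negative sample = one uniform index lookup, O(1)
    return table[random.randrange(len(table))]
```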
{
"text": "The above method assumes that the noise distribution is fixed and thus cannot be used directly for the incremental training. One simple solution is to reconstruct the unigram table whenever new training data is provided. However, such a method 1: f (w) \u2190 0 for all w \u2208 W 2: z \u2190 0 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive unigram table",
"sec_num": "3.2.2"
},
{
"text": "for i = 1, . . . , n do 4: f (wi) \u2190 f (wi) + 1 5: F \u2190 f (wi) \u03b1 \u2212 (f (wi) \u2212 1) \u03b1 6: z \u2190 z + F 7: if |T | < \u03c4 then 8:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive unigram table",
"sec_num": "3.2.2"
},
{
"text": "add F copies of wi to T 9: else 10:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive unigram table",
"sec_num": "3.2.2"
},
{
"text": "for j = 1, . . . , \u03c4 do 11: We propose a reservoir-based algorithm for efficiently updating the unigram table (Vitter, 1985; Efraimidis, 2015) (Algorithm 3). The algorithm incrementally update the unigram table T while limiting its maximum size to \u03c4 . In case |T | < \u03c4 , it can be easily confirmed that the number of a word w in T is f (w) \u03b1 (\u221d q(w)). In case |T | = \u03c4 , since z = \u2211 w\u2208W f (w) \u03b1 is equal to the normalization factor of the noise distribution, it can be proven by induction that, for all j, T [j] is a word w with probability q(w). See (Vitter, 1985; Efraimidis, 2015) for reference.",
"cite_spans": [
{
"start": 110,
"end": 124,
"text": "(Vitter, 1985;",
"ref_id": "BIBREF19"
},
{
"start": 125,
"end": 125,
"text": "",
"ref_id": null
},
{
"start": 552,
"end": 566,
"text": "(Vitter, 1985;",
"ref_id": "BIBREF19"
},
{
"start": 567,
"end": 584,
"text": "Efraimidis, 2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive unigram table",
"sec_num": "3.2.2"
},
{
"text": "T [j] \u2190 wi",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive unigram table",
"sec_num": "3.2.2"
},
{
"text": "Note on implementation In line 8, F copies of w i are added to T . When F is not an integer, the copies are generated so that their expected number becomes F . Specifically, \u2308F \u2309 copies are added to T with probability F \u2212 \u230aF \u230b, and \u230aF \u230b copies are added otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive unigram table",
"sec_num": "3.2.2"
},
{
"text": "The loop from line 10 to 12 becomes expensive if implemented straightforwardly because the maximum table size \u03c4 is typically set large (e.g., \u03c4 = 10 8 in word2vec (Mikolov, 2013) ). For acceleration, instead of checking all elements in the unigram table, randomly chosen \u03c4 F z elements are substituted with w i . Note that \u03c4 F z is the expected number of table elements to be substituted in the original algorithm. This approximation achieves great speed-up because we usually have F \u226a z.",
"cite_spans": [
{
"start": 163,
"end": 178,
"text": "(Mikolov, 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive unigram table",
"sec_num": "3.2.2"
},
{
"text": "In fact, it can be proven that it takes O(1) time when \u03b1 = 1.0. See Appendix 3 A for more discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive unigram table",
"sec_num": "3.2.2"
},
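One step of the adaptive unigram table can be sketched in the spirit of Algorithm 3. This is an interpretation of the pseudocode under stated assumptions, not the authors' implementation, and it uses the straightforward per-slot loop rather than the accelerated τF/z substitution described above.

```python
import random

def adaptive_table_update(table, tau, freq, w, z, alpha=0.75):
    """Reservoir-style update of the unigram table for one occurrence of w.
    Returns the updated normalizer z = sum over v of f(v)^alpha.

    F is the increase of f(w)^alpha caused by this occurrence. While the
    table is not full, an expected F copies of w are appended; once it is
    full, each slot is overwritten by w with probability F / z."""
    freq[w] = freq.get(w, 0) + 1
    F = freq[w] ** alpha - (freq[w] - 1) ** alpha
    z += F
    if len(table) < tau:
        # round F stochastically so the expected number of copies equals F
        copies = int(F) + (1 if random.random() < F - int(F) else 0)
        table.extend([w] * min(copies, tau - len(table)))
    else:
        for j in range(tau):
            if random.random() < F / z:
                table[j] = w
    return z
```

The table size never exceeds τ, and frequent words come to dominate its slots in proportion to f(w)^α.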
{
"text": "Both incremental and batch SGNS have the same space complexity, which is independent of the training data size n. Both require O(|W|) space to store the word embeddings and the word frequency counts, and O(|T |) space to store the unigram table.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational complexity",
"sec_num": "3.3"
},
{
"text": "The two algorithms also have the same time complexity. Both require O(n) training time when the training data size is n. Although incremental SGNS requires extra time for updating the dynamic vocabulary and adaptive unigram table, these costs are practically negligible, as will be demonstrated in Section 5.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational complexity",
"sec_num": "3.3"
},
{
"text": "Although the extension from batch to incremental SGNS is simple and intuitive, it is not readily clear whether incremental SGNS can learn word embeddings as well as the batch counterpart. To answer this question, in this section we examine incremental SGNS from a theoretical point of view.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Analysis",
"sec_num": "4"
},
{
"text": "The analysis begins by examining the difference between the objectives optimized by batch and incremental SGNS (Section 4.1). Then, probabilistic properties of their difference are investigated to demonstrate the relationship between batch and incremental SGNS (Sections 4.2 and 4.3). We shortly touch the mini-batch SGNS at the end of this section (Section 4.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Analysis",
"sec_num": "4"
},
{
"text": "As discussed in Section 2, batch SGNS optimizes the following objective:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective difference",
"sec_num": "4.1"
},
{
"text": "L B (\u03b8) = \u2212 1 n n \u2211 i=1 \u2211 |j|\u2264c j\u0338 =0 \u03c8 + w i ,w i+j +kE v\u223cqn(v) [\u03c8 \u2212 w i ,v ],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective difference",
"sec_num": "4.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective difference",
"sec_num": "4.1"
},
{
"text": "\u03b8 = (t 1 , t 2 , . . . , t |W| , c 1 , c 2 , .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective difference",
"sec_num": "4.1"
},
{
"text": ". . , c |W| ) collectively represents the model parameters 4 (i.e., word embeddings) and q n (v) represents the noise distribution. Note that the noise distribution is represented in a different notation than Section 2 to make its dependence on the whole training data explicit. The function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective difference",
"sec_num": "4.1"
},
{
"text": "q i (v) is defined as q i (v) = f i (v) \u03b1 \u2211 v \u2032 \u2208W f i (v \u2032 ) \u03b1 , where f i (v)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective difference",
"sec_num": "4.1"
},
{
"text": "represents the word frequency in the first i words of the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective difference",
"sec_num": "4.1"
},
{
"text": "In contrast, incremental SGNS computes the gradient of \u2212\u03c8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective difference",
"sec_num": "4.1"
},
{
"text": "+ w i ,w i+j \u2212 kE v\u223cq i (v) [\u03c8 \u2212 w i ,v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective difference",
"sec_num": "4.1"
},
{
"text": "] at each step to perform gradient descent. Note that the noise distribution does not depend on n but rather on i. Because it can be seen as a sample approximation of the gradient of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective difference",
"sec_num": "4.1"
},
{
"text": "L I (\u03b8) = \u2212 1 n n \u2211 i=1 \u2211 |j|\u2264c j\u0338 =0 \u03c8 + w i ,w i+j +kE v\u223cq i (v) [\u03c8 \u2212 w i ,v ],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective difference",
"sec_num": "4.1"
},
{
"text": "incremental SGNS can be interpreted as optimizing L I (\u03b8) with SGD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective difference",
"sec_num": "4.1"
},
{
"text": "Since the expectation terms in the objectives can be rewritten as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective difference",
"sec_num": "4.1"
},
{
"text": "E v\u223cq i (v) [\u03c8 \u2212 w i ,v ] = \u2211 v\u2208W q i (v)\u03c8 \u2212 w i ,v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective difference",
"sec_num": "4.1"
},
{
"text": ", the difference between the two objectives can be given as ) is the delta function.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 61,
"text": ")",
"ref_id": null
}
],
"eq_spans": [],
"section": "Objective difference",
"sec_num": "4.1"
},
{
"text": "\u2206L(\u03b8) = L B (\u03b8) \u2212 L I (\u03b8) = 1 n n \u2211 i=1 \u2211 |j|\u2264c j\u0338 =0 k \u2211 v\u2208W (q i (v)\u2212q n (v))\u03c8 \u2212 w i ,v = 2ck n n \u2211 i=1 \u2211 v\u2208W (q i (v) \u2212 q n (v))\u03c8 \u2212 w i ,v = 2ck n \u2211 w,v\u2208W n \u2211 i=1 \u03b4 w i ,w (q i (v) \u2212 q n (v))\u03c8 \u2212 w,v where \u03b4 w,v = \u03b4(w = v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective difference",
"sec_num": "4.1"
},
{
"text": "Let us begin by examining the objective difference \u2206L(\u03b8) in the unsmoothed case, \u03b1 = 1.0. The technical difficulty in analyzing \u2206L(\u03b8) is that it is dependent on the word order in the training data. To address this difficulty, we assume that the words in the training data are generated from some stationary distribution. This assumption allows us to investigate the property of \u2206L(\u03b8) from a probabilistic perspective. Regarding the validity of this assumption, we want to note that this assumption is already taken by the original SGNS: the probability that the target and context words co-occur is assumed to be independent of their position in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsmoothed case",
"sec_num": "4.2"
},
{
"text": "We below introduce some definitions and notations as the preparation of the analysis. Definition 1. Let X i,w be a random variable that represents \u03b4 w i ,w . It takes 1 when the i-th word in the training data is w \u2208 W and 0 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsmoothed case",
"sec_num": "4.2"
},
{
"text": "Remind that we assume that the words in the training data are generated from a stationary distribution. This assumption means that the expectation and (co)variance of X i,w do not depend on the index i. Hereafter, they are respectively de-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsmoothed case",
"sec_num": "4.2"
},
{
"text": "noted as E[X i,w ] = \u00b5 w and V[X i,w , X j,v ] = \u03c1 w,v .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsmoothed case",
"sec_num": "4.2"
},
{
"text": "Definition 2. Let Y i,w be a random variable that represents q i (w) when \u03b1 = 1.0. It is given as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsmoothed case",
"sec_num": "4.2"
},
{
"text": "Y i,w = 1 i \u2211 i i \u2032 =1 X i \u2032 ,w . 4.2.1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsmoothed case",
"sec_num": "4.2"
},
{
"text": "Convergence of the first and second order moments of \u2206L(\u03b8) It can be shown that the first order moment of \u2206L(\u03b8) has an analytical form.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsmoothed case",
"sec_num": "4.2"
},
{
"text": "Theorem 1. The first order moment of \u2206L(\u03b8) is given as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsmoothed case",
"sec_num": "4.2"
},
{
"text": "E[\u2206L(\u03b8)] = 2ck(H n \u2212 1) n \u2211 w,v\u2208W \u03c1 w,v \u03c8 \u2212 w,v ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsmoothed case",
"sec_num": "4.2"
},
{
"text": "where H n is the n-th harmonic number.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsmoothed case",
"sec_num": "4.2"
},
{
"text": "Notice that E[\u2206L(\u03b8)] can be written as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof.",
"sec_num": null
},
{
"text": "2ck n \u2211 w,v\u2208W n \u2211 i=1 ( E[X i,w Y i,v ] \u2212 E[X i,w Y n,v ] ) \u03c8 \u2212 w,v .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof.",
"sec_num": null
},
{
"text": "Because we have, for any i and j such that i \u2264 j,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof.",
"sec_num": null
},
{
"text": "E[X i,w Y j,v ] = j \u2211 j \u2032 =1 E[X i,w X j \u2032 ,v j ] = \u00b5 w \u00b5 v + \u03c1 w,v j ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof.",
"sec_num": null
},
{
"text": "plugging this into E[\u2206L(\u03b8)] proves the theorem. See Appendix B.1 for the complete proof.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof.",
"sec_num": null
},
{
"text": "Theorem 1 readily gives the convergence property of the first order moment of \u2206L(\u03b8):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof.",
"sec_num": null
},
{
"text": "Theorem 2. The first-order moment of \u2206L(\u03b8) decreases in the order of O( log(n) n ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof.",
"sec_num": null
},
{
"text": "E[\u2206L(\u03b8)] = O ( log(n) n ) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof.",
"sec_num": null
},
{
"text": "and thus converges to zero in the limit of infinity:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof.",
"sec_num": null
},
{
"text": "lim n\u2192\u221e E[\u2206L(\u03b8)] = 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof.",
"sec_num": null
},
{
"text": "Proof. We have H n = O(log(n)) from the upper integral bound, and thus Theorem 1 gives the proof.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof.",
"sec_num": null
},
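For completeness, the integral bound invoked here can be written out; this step is standard and only implicit in the proof, shown below combined with the closed form of Theorem 1:

```latex
H_n = \sum_{i=1}^{n} \frac{1}{i}
\;\le\; 1 + \int_{1}^{n} \frac{dx}{x}
\;=\; 1 + \log n,
\qquad\text{hence}\qquad
E[\Delta L(\theta)]
= \frac{2ck(H_n - 1)}{n} \sum_{w,v \in W} \rho_{w,v}\,\psi^{-}_{w,v}
= O\!\left(\frac{\log n}{n}\right).
```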
{
"text": "A similar result to Theorem 2 can be obtained for the second order moment of \u2206L(\u03b8) as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof.",
"sec_num": null
},
{
"text": "Theorem 3. The second-order moment of \u2206L(\u03b8) decreases in the order of O( log (n) n ):",
"cite_spans": [
{
"start": 77,
"end": 80,
"text": "(n)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof.",
"sec_num": null
},
{
"text": "E[\u2206L(\u03b8) 2 ] = O ( log(n) n ) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof.",
"sec_num": null
},
{
"text": "and thus converges to zero in the limit of infinity:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof.",
"sec_num": null
},
{
"text": "lim n\u2192\u221e E[\u2206L(\u03b8) 2 ] = 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof.",
"sec_num": null
},
{
"text": "Proof. Omitted. See Appendix B.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof.",
"sec_num": null
},
{
"text": "The above theorems reveal the relationship between the optimal solutions of the two objectives, as stated in the next lemma.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main result",
"sec_num": "4.2.2"
},
{
"text": "Lemma 4. Let \u03b8 * and\u03b8 be the optimal solutions of L B (\u03b8) and L I (\u03b8), respectively:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main result",
"sec_num": "4.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8 * = arg min \u03b8 L B (\u03b8) and\u03b8 = arg min \u03b8 L I (\u03b8). Then, lim n\u2192\u221e E[L B (\u03b8) \u2212 L B (\u03b8 * )] = 0,",
"eq_num": "(2)"
}
],
"section": "Main result",
"sec_num": "4.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "lim n\u2192\u221e V[L B (\u03b8\u0302) \u2212 L B (\u03b8 * )] = 0.",
"eq_num": "(3)"
}
],
"section": "Main result",
"sec_num": "4.2.2"
},
{
"text": "Proof. The proof follows from the squeeze theorem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main result",
"sec_num": "4.2.2"
},
{
"text": "Let l = L B (\u03b8\u0302) \u2212 L B (\u03b8 * ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main result",
"sec_num": "4.2.2"
},
{
"text": "The optimality of \u03b8 * gives 0 \u2264 l. Also, the optimality of \u03b8\u0302 gives",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main result",
"sec_num": "4.2.2"
},
{
"text": "l = L B (\u03b8\u0302) \u2212 L I (\u03b8 * ) + L I (\u03b8 * ) \u2212 L B (\u03b8 * ) \u2264 L B (\u03b8\u0302) \u2212 L I (\u03b8\u0302) + L I (\u03b8 * ) \u2212 L B (\u03b8 * ) = \u2206L(\u03b8\u0302) \u2212 \u2206L(\u03b8 * ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main result",
"sec_num": "4.2.2"
},
{
"text": "We thus have 0 \u2264 E[l] \u2264 E[\u2206L(\u03b8\u0302) \u2212 \u2206L(\u03b8 * )].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main result",
"sec_num": "4.2.2"
},
{
"text": "Since Theorem 2 implies that the right-hand side converges to zero when n \u2192 \u221e, the squeeze theorem gives Eq. (2). Next, we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main result",
"sec_num": "4.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "V[l] = E[l^2] \u2212 E[l]^2 \u2264 E[l^2] \u2264 E[(\u2206L(\u03b8\u0302) \u2212 \u2206L(\u03b8 * ))^2] \u2264 E[(\u2206L(\u03b8\u0302) \u2212 \u2206L(\u03b8 * ))^2] + E[(\u2206L(\u03b8\u0302) + \u2206L(\u03b8 * ))^2] = 2E[\u2206L(\u03b8\u0302)^2] + 2E[\u2206L(\u03b8 * )^2].",
"eq_num": "(4)"
}
],
"section": "Main result",
"sec_num": "4.2.2"
},
{
"text": "Theorem 3 suggests that Eq. (4) converges to zero when n \u2192 \u221e. Also, the non-negativity of the variance gives 0 \u2264 V[l]. Therefore, the squeeze theorem gives Eq. (3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main result",
"sec_num": "4.2.2"
},
{
"text": "We are now ready to provide the main result of the analysis. The next theorem shows the convergence of L B (\u03b8\u0302).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main result",
"sec_num": "4.2.2"
},
{
"text": "Theorem 5. L B (\u03b8\u0302) converges in probability to L B (\u03b8 * ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main result",
"sec_num": "4.2.2"
},
{
"text": "\u2200\u03f5 > 0, lim n\u2192\u221e Pr[ |L B (\u03b8\u0302) \u2212 L B (\u03b8 * )| \u2265 \u03f5 ] = 0. Sketch of proof. Again let l = L B (\u03b8\u0302) \u2212 L B (\u03b8 * ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main result",
"sec_num": "4.2.2"
},
{
"text": "Chebyshev's inequality gives, for any \u03f5_1 > 0,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main result",
"sec_num": "4.2.2"
},
{
"text": "lim n\u2192\u221e V[l]/\u03f5_1^2 \u2265 lim n\u2192\u221e Pr[ |l \u2212 E[l]| \u2265 \u03f5_1 ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main result",
"sec_num": "4.2.2"
},
{
"text": "Remember that Eq. (2) means that for any \u03f5_2 > 0, there exists n\u2032 such that if n\u2032 \u2264 n then |E[l]| < \u03f5_2. Therefore, we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main result",
"sec_num": "4.2.2"
},
{
"text": "lim n\u2192\u221e V[l]/\u03f5_1^2 \u2265 lim n\u2192\u221e Pr[ |l| \u2265 \u03f5_1 + \u03f5_2 ] \u2265 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main result",
"sec_num": "4.2.2"
},
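The inequality chain above can be written out in one display; the following is our LaTeX rendering of the argument (notation ours, combining Chebyshev's bound with the bound on |E[l]|), not a display from the paper:

```latex
% Chebyshev's inequality plus |E[l]| < \epsilon_2 for sufficiently large n:
\lim_{n\to\infty} \frac{\mathbb{V}[l]}{\epsilon_1^2}
  \;\ge\; \lim_{n\to\infty} \Pr\bigl[\,|l - \mathbb{E}[l]| \ge \epsilon_1\,\bigr]
  \;\ge\; \lim_{n\to\infty} \Pr\bigl[\,|l| \ge \epsilon_1 + \epsilon_2\,\bigr]
  \;\ge\; 0
% Eq. (3) drives the left-hand side to zero, so the middle probability
% must also vanish, which is the statement of Theorem 5.
```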
{
"text": "Since \u03f5_1 and \u03f5_2 are arbitrary, \u03f5_1 + \u03f5_2 can be rewritten as \u03f5. Also, Eq. (3) implies that the left-hand side converges to zero, which completes the proof. Informally, this theorem can be interpreted as suggesting that the optimal solutions of batch and incremental SGNS agree when n is infinitely large.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main result",
"sec_num": "4.2.2"
},
{
"text": "We next examine the smoothed case (0 < \u03b1 < 1). In this case, the noise distribution can be represented by using the one in the unsmoothed case:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed case",
"sec_num": "4.3"
},
{
"text": "q i (w) = f i (w)^\u03b1 / \u2211 w\u2032\u2208W f i (w\u2032)^\u03b1 = ( f i (w)/F i )^\u03b1 / \u2211 w\u2032\u2208W ( f i (w\u2032)/F i )^\u03b1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed case",
"sec_num": "4.3"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed case",
"sec_num": "4.3"
},
{
"text": "F i = \u2211 w\u2032\u2208W f i (w\u2032), and f i (w)/F i corresponds to the unsmoothed noise distribution. Definition 3. Let Z i,w be a random variable that represents q i (w) in the smoothed case. Then, it can be written by using Y i,w :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed case",
"sec_num": "4.3"
},
{
"text": "Z i,w = g w (Y i,1 , Y i,2 , . . . , Y i,|W| )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed case",
"sec_num": "4.3"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed case",
"sec_num": "4.3"
},
{
"text": "g w (x 1 , x 2 , . . . , x |W| ) = x w^\u03b1 / \u2211 w\u2032\u2208W x w\u2032^\u03b1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed case",
"sec_num": "4.3"
},
{
"text": "Because Z i,w is no longer a linear combination of X i,w , it becomes difficult to derive proofs similar to those for the unsmoothed case. To address this difficulty, Z i,w is approximated by the first-order Taylor expansion around",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed case",
"sec_num": "4.3"
},
{
"text": "E[(Y i,1 , Y i,2 , . . . , Y i,|W| )] = (\u00b5 1 , \u00b5 2 , . . . , \u00b5 |W| ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed case",
"sec_num": "4.3"
},
{
"text": "The first-order Taylor approximation gives",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed case",
"sec_num": "4.3"
},
{
"text": "Z i,w \u2248 g w (\u00b5) + \u2211 v\u2208W M w,v (Y i,v \u2212 \u00b5 v )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed case",
"sec_num": "4.3"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed case",
"sec_num": "4.3"
},
{
"text": "\u00b5 = (\u00b5 1 , \u00b5 2 , . . . , \u00b5 |W| ) and M w,v = \u2202g w (x)/\u2202x v | x=\u00b5 . Consequently, it can be shown that the first and second order moments of \u2206L(\u03b8) have the order of O(log(n)/n) in the smoothed case as well. See Appendix C for the details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed case",
"sec_num": "4.3"
},
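As a numerical illustration of Section 4.3, the sketch below computes the smoothed noise distribution q(w) ∝ f(w)^α and its standard first-order Taylor expansion around the mean relative frequencies µ. This is our own toy code under assumed inputs, not the authors' implementation; the Jacobian M follows from differentiating g_w(x) = x_w^α / Σ x_v^α.

```python
# Toy sketch (not the paper's code): smoothed noise distribution and its
# first-order Taylor approximation around mu (assumed relative frequencies).
import numpy as np

def smoothed_q(freqs, alpha=0.75):
    """q(w) = f(w)^alpha / sum_w' f(w')^alpha."""
    p = np.asarray(freqs, dtype=float) ** alpha
    return p / p.sum()

def taylor_q(y, mu, alpha=0.75):
    """First-order expansion of g_w around mu."""
    g_mu = smoothed_q(mu, alpha)
    s = (mu ** alpha).sum()
    # Jacobian M[w, v] = d g_w / d x_v evaluated at x = mu:
    # diag(alpha * mu^(alpha-1) / s) - alpha * outer(mu^alpha, mu^(alpha-1)) / s^2
    M = -alpha * np.outer(mu ** alpha, mu ** (alpha - 1)) / s ** 2
    M += np.diag(alpha * mu ** (alpha - 1) / s)
    return g_mu + M @ (y - mu)

mu = np.array([0.5, 0.3, 0.2])    # mean relative frequencies (made up)
y = np.array([0.48, 0.33, 0.19])  # observed relative frequencies near mu
exact = smoothed_q(y)
approx = taylor_q(y, mu)
print(np.abs(exact - approx).max())  # small, since y is close to mu
```

The approximation error shrinks as y approaches µ, which is why the moments of ΔL(θ) retain the O(log(n)/n) order under this linearization.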
{
"text": "The same analysis result can also be obtained for the mini-batch SGNS. We can prove Theorems 2 and 3 in the mini-batch case as well (see Appendix D for the proof). The other part of the analysis remains the same.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mini-batch SGNS",
"sec_num": "4.4"
},
{
"text": "Three experiments were conducted to investigate the correctness of the theoretical analysis (Section 5.1) and the practical usefulness of incremental SGNS (Sections 5.2 and 5.3). Details of the experimental settings that do not fit into the paper are presented in Appendix E.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "An empirical experiment was conducted to validate the result of the theoretical analysis. Since it is difficult to assess the main result in Section 4.2.2 directly, the theorems in Section 4.2.1, from which the main result is readily derived, were investigated. Specifically, the first and second order moments of \u2206L(\u03b8) were computed on datasets of increasing sizes to empirically investigate the convergence property.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Validation of theorems",
"sec_num": "5.1"
},
{
"text": "Datasets of various sizes were constructed from the English Gigaword corpus (Napoles et al., 2012). The datasets made up of n words were constructed by randomly sampling sentences from the Gigaword corpus. The value of n was varied over {10^3, 10^4, 10^5, 10^6, 10^7}. 10,000 different datasets were created for each size n to compute the first and second order moments. Figure 1 (top left) shows log-log plots of the first order moments of \u2206L(\u03b8) computed on the different sized datasets when \u03b1 = 1.0. The crosses and circles represent the empirical values and the theoretical values obtained by Theorem 1, respectively. Figure 1 (top right) similarly illustrates the second order moments of \u2206L(\u03b8). Since Theorem 3 suggests that the second order moment decreases in the order of O(log(n)/n), the graph y \u221d log(x)/x is also shown. The graph was fitted to the empirical data by minimizing the squared error.",
"cite_spans": [
{
"start": 76,
"end": 98,
"text": "(Napoles et al., 2012)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 378,
"end": 397,
"text": "Figure 1 (top left)",
"ref_id": "FIGREF2"
},
{
"start": 1046,
"end": 1054,
"text": "Figure 1",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Validation of theorems",
"sec_num": "5.1"
},
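Fitting a one-parameter curve y = c·log(n)/n by minimizing the squared error, as done for the graphs in Figure 1, has a closed-form solution. The sketch below uses stand-in moment values (the paper's measurements are not reproduced here) to show the computation:

```python
# Illustrative sketch with assumed data: least-squares fit of
# y = c * log(n) / n to empirical moment estimates.
import numpy as np

n = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
basis = np.log(n) / n
moments = 0.02 * basis  # stand-in for the measured moments of dL(theta)
# For a single-coefficient model, the least-squares solution is a ratio
# of inner products: c = <basis, moments> / <basis, basis>.
c = (basis @ moments) / (basis @ basis)
print(c)  # recovers 0.02 for this synthetic data
```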
{
"text": "The top left figure demonstrates that the empirical values of the first order moments fit the theoretical result very well, providing strong empirical evidence for the correctness of Theorem 1. In addition, the two figures show that the first and second order moments decrease almost in the order of O(log(n)/n), converging to zero as the data size increases. This result validates Theorems 2 and 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Validation of theorems",
"sec_num": "5.1"
},
{
"text": "Figure 1 (bottom left) and (bottom right) show similar results when \u03b1 = 0.75. Since we do not have theoretical estimates of the first order moment when \u03b1 \u2260 1.0, the graphs y \u221d log(n)/n are shown in both figures. From these, we can again observe that the first and second order moments decrease almost in the order of O(log(n)/n). This indicates the validity of the investigation in Section 4.3. The relatively larger deviations from the graphs y \u221d log(n)/n, compared with the top right figure, are considered to be attributable to the first-order Taylor approximation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Validation of theorems",
"sec_num": "5.1"
},
{
"text": "The next experiment investigates the quality of the word embeddings learned by incremental SGNS through comparison with the batch counterparts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of word embeddings",
"sec_num": "5.2"
},
{
"text": "The Gigaword corpus was used for the training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of word embeddings",
"sec_num": "5.2"
},
{
"text": "For the comparison, both our own implementation of batch SGNS and WORD2VEC (Mikolov et al., 2013c) were used (denoted as batch and w2v). The training configurations of the three methods were made as similar as possible, although it is impossible to do so perfectly. For example, incremental SGNS (denoted as incremental) utilized the dynamic vocabulary (c.f., Section 3.2.1), and thus we set the maximum vocabulary size m to control the vocabulary size. On the other hand, we set a frequency threshold to determine the vocabulary size of w2v. We set m = 240k for incremental, while setting the frequency threshold to 100 for w2v. This yields vocabulary sets of comparable sizes: 220,389 and 246,134.",
"cite_spans": [
{
"start": 82,
"end": 105,
"text": "(Mikolov et al., 2013c)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of word embeddings",
"sec_num": "5.2"
},
{
"text": "The learned word embeddings were assessed on five benchmark datasets commonly used in the literature (Levy et al., 2015): WordSim353 (Agirre et al., 2009), MEN (Bruni et al., 2013), SimLex-999 (Hill et al., 2015), the MSR analogy dataset (Mikolov et al., 2013c), and the Google analogy dataset (Mikolov et al., 2013a). The former three are for a semantic similarity task, and the remaining two are for a word analogy task. As evaluation measures, Spearman's \u03c1 and prediction accuracy were used in the two tasks, respectively.",
"cite_spans": [
{
"start": 101,
"end": 120,
"text": "(Levy et al., 2015)",
"ref_id": "BIBREF7"
},
{
"start": 134,
"end": 155,
"text": "(Agirre et al., 2009)",
"ref_id": "BIBREF0"
},
{
"start": 162,
"end": 182,
"text": "(Bruni et al., 2013)",
"ref_id": "BIBREF2"
},
{
"start": 196,
"end": 215,
"text": "(Hill et al., 2015)",
"ref_id": "BIBREF6"
},
{
"start": 242,
"end": 265,
"text": "(Mikolov et al., 2013c)",
"ref_id": "BIBREF12"
},
{
"start": 295,
"end": 318,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of word embeddings",
"sec_num": "5.2"
},
{
"text": "Figures 2 (a) and (b) represent the results on the similarity datasets and the analogy datasets. We see that the three methods (incremental, batch, and w2v) perform equally well on all of the datasets. This indicates that incremental SGNS can learn word embeddings as good as those of the batch counterparts, while being able to perform incremental model update. Although incremental performs slightly better than the batch methods on some datasets, the difference seems to be a product of chance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of word embeddings",
"sec_num": "5.2"
},
{
"text": "The figures also show the results of incremental SGNS when the maximum vocabulary size m was reduced to 150k and 100k (incremental-150k and incremental-100k). The resulting vocabulary sizes were 135,447 and 86,993, respectively. We see that incremental-150k and incremental-100k perform comparably to incremental, although relatively large performance drops are observed on some datasets (MEN and MSR). This demonstrates that the Misra-Gries algorithm can effectively control the vocabulary size.",
"cite_spans": [
{
"start": 400,
"end": 413,
"text": "(MEN and MSR)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of word embeddings",
"sec_num": "5.2"
},
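The Misra-Gries summary referenced above keeps at most m counters while processing the stream, evicting low-count words when the budget is exceeded. The following is a minimal illustration of the classic algorithm (Misra and Gries, 1982), not the paper's implementation of the dynamic vocabulary:

```python
# Minimal Misra-Gries sketch: bounds the number of tracked words by m
# while approximately preserving the frequent words in the stream.
def misra_gries(stream, m):
    counters = {}
    for word in stream:
        if word in counters:
            counters[word] += 1
        elif len(counters) < m:
            counters[word] = 1
        else:
            # Budget exceeded: decrement all counters and drop zeros.
            for w in list(counters):
                counters[w] -= 1
                if counters[w] == 0:
                    del counters[w]
    return counters

vocab = misra_gries("a b a c a b d a b e".split(), m=2)
print(sorted(vocab))  # → ['a']  (the most frequent word survives)
```

Any word occurring more than N/(m+1) times in a length-N stream is guaranteed to remain in the summary, which is why a modest m suffices to retain the high-frequency vocabulary.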
{
"text": "The last experiment investigates how much time incremental SGNS can save by avoiding retraining when updating the word embeddings. In this experiment, incremental was first trained on the initial training data of size n 1 (the number of sentences) and then updated on the new training data of size n 2 to measure the update time. For comparison, batch and w2v were re-trained on the combination of the initial and new training data. We fixed n 1 = 10^7 and varied n 2 over {1\u00d710^6, 2\u00d710^6, . . . , 5\u00d710^6}. The experiment was conducted on an Intel Xeon 2 GHz CPU. The update time was averaged over five trials. Figure 2 (c) compares the update time of the three methods across various values of n 2 . We see that incremental significantly reduces the update time: it achieves 10 and 7.3 times speed-ups compared with batch and w2v (when n 2 = 10^6). This represents the advantage of the incremental algorithm, as well as the time efficiency of the dynamic vocabulary and adaptive unigram table. We note that batch is slower than w2v because it uses AdaGrad, which maintains different learning rates for different dimensions of the parameter, while w2v uses the same learning rate for all dimensions.",
"cite_spans": [],
"ref_spans": [
{
"start": 594,
"end": 602,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Update time",
"sec_num": "5.3"
},
{
"text": "Word representations based on distributional semantics have been common (Turney and Pantel, 2010; Baroni and Lenci, 2010). The distributional methods typically begin by constructing a word-context matrix and then applying dimension reduction techniques such as SVD to obtain high-quality word meaning representations. Although some studies investigated incremental updating of the word-context matrix (Yin et al., 2015; Goyal",
"cite_spans": [
{
"start": 98,
"end": 121,
"text": "Baroni and Lenci, 2010)",
"ref_id": "BIBREF1"
},
{
"start": 402,
"end": 420,
"text": "(Yin et al., 2015;",
"ref_id": "BIBREF20"
},
{
"start": 421,
"end": 426,
"text": "Goyal",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "and Daume III, 2011), they did not explore the reduced representations. On the other hand, neural word embeddings have recently gained much popularity as an alternative. However, most previous studies have not explored incremental strategies (Mikolov et al., 2013a,b; Pennington et al., 2014). Peng et al. (2017) proposed an incremental learning method for hierarchical softmax. Because hierarchical softmax and negative sampling have different advantages (Peng et al., 2017), the incremental SGNS and their method are complementary to each other. Also, their updating method needs to scan not only new but also old training data, and thus is not an incremental algorithm in a strict sense. As a consequence, it potentially incurs the same time complexity as retraining. Another consequence is that their method has to retain the old training data and thus wastes space, while incremental SGNS can discard old training examples after processing them. Very recently, May et al. (2017) also proposed an incremental algorithm for SGNS. However, their work differs from ours in that their algorithm is not designed to use a smoothed noise distribution (i.e., the smoothing parameter \u03b1 is assumed fixed as \u03b1 = 1.0 in their method), which is key to learning high-quality word embeddings. Another difference is that they did not provide theoretical justification for their algorithm.",
"cite_spans": [
{
"start": 242,
"end": 267,
"text": "(Mikolov et al., 2013a,b;",
"ref_id": null
},
{
"start": 268,
"end": 292,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF16"
},
{
"start": 295,
"end": 313,
"text": "Peng et al. (2017)",
"ref_id": "BIBREF15"
},
{
"start": 458,
"end": 477,
"text": "(Peng et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 973,
"end": 990,
"text": "May et al. (2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "There are publicly available implementations for training SGNS, one of the most popular being WORD2VEC (Mikolov, 2013). However, it does not support an incremental training method. GENSIM (\u0158eh\u016f\u0159ek and Sojka, 2010) also offers SGNS training. Although GENSIM allows incremental updating of SGNS models, it is done in an ad-hoc manner. In GENSIM, the vocabulary set as well as the unigram table are fixed once trained, meaning that new words cannot be added. Also, they do not provide any theoretical account of the validity of their training method. Finally, we want to note that most of the existing implementations can easily be extended to support incremental (or mini-batch) SGNS by simply continuing to update the noise distribution.",
"cite_spans": [
{
"start": 103,
"end": 118,
"text": "(Mikolov, 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
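The minimal change described above — keeping the negative-sampling noise distribution in sync with the words seen so far instead of freezing it after a first pass — can be sketched as follows. The class name and structure are ours, chosen for illustration; a production system would use a precomputed unigram table rather than per-sample weight lists:

```python
# Hedged sketch of an adaptive noise distribution for negative sampling:
# counts are updated as new text arrives, and samples are drawn from
# q(w) proportional to count(w)^alpha (alpha = 0.75 as in SGNS).
import random

class AdaptiveNoise:
    def __init__(self, alpha=0.75):
        self.alpha = alpha
        self.counts = {}

    def observe(self, word):
        """Incremental update: one new token from the stream."""
        self.counts[word] = self.counts.get(word, 0) + 1

    def sample(self, rng=random):
        """Draw one negative sample from the current smoothed distribution."""
        words = list(self.counts)
        weights = [self.counts[w] ** self.alpha for w in words]
        return rng.choices(words, weights=weights, k=1)[0]

noise = AdaptiveNoise()
for w in "the cat sat on the mat the end".split():
    noise.observe(w)
print(noise.sample())  # drawn from q(w) proportional to count(w)^0.75
```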
{
"text": "This paper proposed incremental SGNS and provided a thorough theoretical analysis to demonstrate its validity. We also conducted experiments to empirically demonstrate its effectiveness. Although incremental model update is often required in practical machine learning applications, little attention has been paid to learning word embeddings incrementally. We consider that incremental SGNS successfully addresses this situation and serves as a useful tool for practitioners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "The success of this work suggests several research directions to be explored in the future. One possibility is to extend other embedding methods such as GloVe (Pennington et al., 2014) to incremental algorithms. Such studies would further extend the potential of word embedding methods.",
"cite_spans": [
{
"start": 170,
"end": 195,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "This overhead is amortized in mini-batch SGNS if the mini-batch size is sufficiently large. Our discussion here is dedicated to performing the incremental training efficiently, irrespective of the mini-batch size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The appendices are in the supplementary material. We treat words as integers and thus W = {1, 2, . . . , |W|}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A study on similarity and relatedness using distributional and wordnet-based approaches",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Alfonseca",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Kravalova",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Pasca",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Soroa",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "19--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pasca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and wordnet-based approaches. In Proceedings of NAACL, pages 19-27.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Distributional memory: A general framework for corpus-based semantics",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Linguistics",
"volume": "36",
"issue": "",
"pages": "673--721",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni and Alessandro Lenci. 2010. Distributional memory: A general framework for corpus-based semantics. Computational Linguistics, 36:673-721.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Multimodal distributional semantics",
"authors": [
{
"first": "E",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "N",
"middle": [
"K"
],
"last": "Tran",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Artificial Intelligence Research",
"volume": "49",
"issue": "",
"pages": "1--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Bruni, N. K. Tran, and M. Baroni. 2013. Multimodal distributional semantics. Journal of Artificial Intelligence Research, 49:1-49.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Adaptive subgradient methods for online learning and stochastic optimization",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Hazan",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2121--2159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Weighted random sampling over data streams",
"authors": [
{
"first": "Pavlos",
"middle": [
"S."
],
"last": "Efraimidis",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pavlos S. Efraimidis. 2015. Weighted random sampling over data streams. ArXiv:1012.0256.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Approximate scalable bounded space sketch for large data nlp",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daume III",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "250--261",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Goyal and Hal Daume III. 2011. Approximate scalable bounded space sketch for large data NLP. In Proceedings of EMNLP, pages 250-261.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Linguistics",
"volume": "41",
"issue": "",
"pages": "665--695",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41:665-695.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improving distributional similarity with lessons learned from word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "211--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211-225.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Streaming word embeddings with the space-saving algorithm",
"authors": [
{
"first": "Chandler",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Ashwin",
"middle": [],
"last": "Lall",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chandler May, Kevin Duh, Benjamin Van Durme, and Ashwin Lall. 2017. Streaming word embeddings with the space-saving algorithm. ArXiv:1704.07463.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Workshop at ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In Workshop at ICLR.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in NIPS",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in NIPS, pages 3111-3119.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Linguistic regularities in continuous space word representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Wen-Tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In Proceedings of NAACL, pages 746-751.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Finding repeated elements",
"authors": [
{
"first": "Jayadev",
"middle": [],
"last": "Misra",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Gries",
"suffix": ""
}
],
"year": 1982,
"venue": "Science of Computer Programming",
"volume": "2",
"issue": "2",
"pages": "143--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jayadev Misra and David Gries. 1982. Finding repeated elements. Science of Computer Programming, 2(2):143-152.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Annotated english gigaword ldc2012t21",
"authors": [
{
"first": "Courtney",
"middle": [],
"last": "Napoles",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Gormley",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated English Gigaword LDC2012T21.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Incrementally learning the hierarchical softmax function for neural language models",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Jianxin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yangqiu",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Yaopeng",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "3267--3273",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Peng, Jianxin Li, Yangqiu Song, and Yaopeng Liu. 2017. Incrementally learning the hierarchical softmax function for neural language models. In Proceedings of AAAI, pages 3267-3273.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP, pages 1532-1543.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Software framework for topic modelling with large corpora",
"authors": [
{
"first": "Radim",
"middle": [],
"last": "\u0158eh\u016f\u0159ek",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radim \u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software framework for topic modelling with large corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "From frequency to meaning: Vector space models of semantics",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Artificial Intelligence Research",
"volume": "37",
"issue": "",
"pages": "141--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37:141-188.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Random sampling with a reservoir",
"authors": [
{
"first": "Jeffrey",
"middle": [
"S"
],
"last": "Vitter",
"suffix": ""
}
],
"year": 1985,
"venue": "ACM Transactions on Mathematical Software",
"volume": "11",
"issue": "",
"pages": "37--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey S. Vitter. 1985. Random sampling with a reservoir. ACM Transactions on Mathematical Software, 11:37-57.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Online updating of word representations for part-of-speech tagging",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Schnabel",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1329--1334",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin, Tobias Schnabel, and Hinrich Sch\u00fctze. 2015. Online updating of word representations for part-of-speech tagging. In Proceedings of EMNLP, pages 1329-1334.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "a, a, a, a, a, b, b, b, c, c)",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "This completes the proof. See Appendix B.3 for the detailed proof.",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Log-log plots of the first and second order moments of \u2206L(\u03b8) on the different sized datasets when \u03b1 = 1.0 (top left and top right) and \u03b1 = 0.75 (bottom left and bottom right).",
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"text": "(a): Spearman's \u03c1 on the word similarity datasets. (b): Accuracy on the analogy datasets. (c): Update time when new training data is provided.",
"uris": null
},
"TABREF0": {
"html": null,
"text": "Example noise distribution q(w) for the vocabulary set W = {a, b, c} (left) and the corresponding unigram table T of size 10 (right).",
"content": "<table/>",
"type_str": "table",
"num": null
}
}
}
}