{
"paper_id": "Q16-1034",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:06:45.398221Z"
},
"title": "Fast, Small and Exact: Infinite-order Language Modelling with Compressed Suffix Trees",
"authors": [
{
"first": "Ehsan",
"middle": [],
"last": "Shareghi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {}
},
"email": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Petri",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {}
},
"email": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {}
},
"email": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Efficient methods for storing and querying are critical for scaling high-order m-gram language models to large corpora. We propose a language model based on compressed suffix trees, a representation that is highly compact and can be easily held in memory, while supporting queries needed in computing language model probabilities on-the-fly. We present several optimisations which improve query runtimes up to 2500\u00d7, despite only incurring a modest increase in construction time and memory usage. For large corpora and high Markov orders, our method is highly competitive with the state-of-the-art KenLM package. It imposes much lower memory requirements, often by orders of magnitude, and has runtimes that are either similar (for training) or comparable (for querying).",
"pdf_parse": {
"paper_id": "Q16-1034",
"_pdf_hash": "",
"abstract": [
{
"text": "Efficient methods for storing and querying are critical for scaling high-order m-gram language models to large corpora. We propose a language model based on compressed suffix trees, a representation that is highly compact and can be easily held in memory, while supporting queries needed in computing language model probabilities on-the-fly. We present several optimisations which improve query runtimes up to 2500\u00d7, despite only incurring a modest increase in construction time and memory usage. For large corpora and high Markov orders, our method is highly competitive with the state-of-the-art KenLM package. It imposes much lower memory requirements, often by orders of magnitude, and has runtimes that are either similar (for training) or comparable (for querying).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Language models (LMs) are fundamental to many NLP tasks, including machine translation and speech recognition. Statistical LMs are probabilistic models that assign a probability to a sequence of words w N 1 , indicating how likely the sequence is in the language. m-gram LMs are popular, and prove to be accurate when estimated using large corpora. In these LMs, the probabilities of m-grams are often precomputed and stored explicitly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although widely successful, current m-gram LM approaches are impractical for learning high-order LMs on large corpora, due to their poor scaling properties in both training and query phases. Prevailing methods (Heafield, 2011; Stolcke et al., 2011) precompute all m-gram probabilities, and consequently need to store and access as many as hundreds of billions of m-grams for a typical moderate-order LM.",
"cite_spans": [
{
"start": 210,
"end": 226,
"text": "(Heafield, 2011;",
"ref_id": "BIBREF13"
},
{
"start": 227,
"end": 248,
"text": "Stolcke et al., 2011)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent research has attempted to tackle scalability issues through the use of efficient data structures such as tries and hash-tables (Heafield, 2011; Stolcke et al., 2011) , lossy compression (Talbot and Osborne, 2007; Levenberg and Osborne, 2009; Guthrie and Hepple, 2010; Pauls and Klein, 2011; Church et al., 2007) , compact data structures (Germann et al., 2009; Watanabe et al., 2009; Sorensen and Allauzen, 2011) , and distributed computation (Brants et al., 2007) . Fundamental to all the widely used methods is the precomputation of all probabilities, hence they do not provide an adequate trade-off between space and time for high m, both during training and querying. Exceptions are Kennington et al. (2012) and Zhang and Vogel (2006) , who use a suffix-tree or suffix-array over the text for computing the sufficient statistics on-the-fly.",
"cite_spans": [
{
"start": 134,
"end": 150,
"text": "(Heafield, 2011;",
"ref_id": "BIBREF13"
},
{
"start": 151,
"end": 172,
"text": "Stolcke et al., 2011)",
"ref_id": "BIBREF27"
},
{
"start": 193,
"end": 219,
"text": "(Talbot and Osborne, 2007;",
"ref_id": "BIBREF29"
},
{
"start": 220,
"end": 248,
"text": "Levenberg and Osborne, 2009;",
"ref_id": "BIBREF18"
},
{
"start": 249,
"end": 274,
"text": "Guthrie and Hepple, 2010;",
"ref_id": "BIBREF11"
},
{
"start": 275,
"end": 297,
"text": "Pauls and Klein, 2011;",
"ref_id": "BIBREF22"
},
{
"start": 298,
"end": 318,
"text": "Church et al., 2007)",
"ref_id": "BIBREF6"
},
{
"start": 345,
"end": 367,
"text": "(Germann et al., 2009;",
"ref_id": "BIBREF8"
},
{
"start": 368,
"end": 390,
"text": "Watanabe et al., 2009;",
"ref_id": "BIBREF31"
},
{
"start": 391,
"end": 419,
"text": "Sorensen and Allauzen, 2011)",
"ref_id": "BIBREF26"
},
{
"start": 450,
"end": 470,
"text": "Brants et al., 2007)",
"ref_id": "BIBREF0"
},
{
"start": 693,
"end": 717,
"text": "Kennington et al. (2012)",
"ref_id": "BIBREF16"
},
{
"start": 722,
"end": 744,
"text": "Zhang and Vogel (2006)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In our previous work (Shareghi et al., 2015) , we extended this line of research using a Compressed Suffix Tree (CST) , which provides a considerably more compact searchable means of storing the corpus than an uncompressed suffix array or suffix tree. This approach showed favourable scaling properties with m and had only a modest memory requirement. However, the method only supported Kneser-Ney smoothing, not its modified variant (Chen and Goodman, 1999) which overall performs better and has become the de-facto standard. Additionally, querying was significantly slower than for leading LM toolkits, making the method impractical for widespread use.",
"cite_spans": [
{
"start": 21,
"end": 44,
"text": "(Shareghi et al., 2015)",
"ref_id": "BIBREF25"
},
{
"start": 434,
"end": 458,
"text": "(Chen and Goodman, 1999)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we extend Shareghi et al. (2015) to support modified Kneser-Ney smoothing and 477 Transactions of the Association for Computational Linguistics, vol. 4, pp. 477-490, 2016. Action Editor: Brian Roark.",
"cite_spans": [
{
"start": 24,
"end": 46,
"text": "Shareghi et al. (2015)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Submission batch: 1/2016; Revision batch: 6/2016; Published 9/2016. \u00a9 2016 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license. present new optimisation methods for fast construction and querying. 1 Critical to our approach are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Precomputation of several modified counts, which would be very expensive to compute at query time. To orchestrate this, a subset of the CST nodes is selected based on the cost of computing their modified counts (which relates to the branching factor of a node). The precomputed counts are then stored in a compressed data structure supporting efficient memory usage and lookup.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Re-use of CST nodes within m-gram probability computation as a sentence gets scored left-to-right, thus saving many expensive lookups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Empirical comparison against our earlier work (Shareghi et al., 2015) shows the significance of each of these optimisations. The strengths of our method are apparent when applied to very large training datasets (\u2265 16 GiB) and for high order models, m \u2265 5. In this setting, while our approach is more memory efficient than the leading KenLM model, both in the construction (training) and querying (testing) phases, we are highly competitive in terms of runtimes of both phases. When memory is a limiting factor at query time, our approach is orders of magnitude faster than the state of the art. Moreover, our method allows for efficient querying with an unlimited Markov order, m \u2192 \u221e, without resorting to approximations or heuristics.",
"cite_spans": [
{
"start": 46,
"end": 69,
"text": "(Shareghi et al., 2015)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In an m-gram language model, the probability of a sentence is decomposed into \u220f^{N}_{i=1} P(w_i | w^{i\u22121}_{i\u2212m+1}), where P(w_i | w^{i\u22121}_{i\u2212m+1})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified Kneser-Ney Language Model",
"sec_num": "2"
},
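The chain-rule decomposition above can be made concrete with a toy maximum-likelihood bigram model (m = 2). This is an illustrative sketch, not code from the paper; the corpus and the prob helper are invented for the example:

```python
from collections import Counter

# Toy corpus with <s>/</s> sentence markers (illustrative only).
corpus = "<s> a b </s> <s> a b </s> <s> a c </s>".split()

# MLE bigram estimates: P(w_i | w_{i-1}) = c(w_{i-1} w_i) / c(w_{i-1}).
bigram_counts = Counter(zip(corpus, corpus[1:]))
unigram_counts = Counter(corpus)

def prob(sentence):
    """Chain rule: P(w_1 .. w_N) = prod_i P(w_i | w_{i-1})."""
    p = 1.0
    for prev, cur in zip(sentence, sentence[1:]):
        p *= bigram_counts[(prev, cur)] / unigram_counts[prev]
    return p

print(prob("<s> a b </s>".split()))  # 1.0 * (2/3) * 1.0
```

A real toolkit would of course smooth these estimates (the point of the section that follows) rather than use raw MLE ratios.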
{
"text": "is the conditional probability of the next word given its finite history. Smoothing techniques are employed to deal with sparsity when estimating the parameters of P (w i |w i\u22121 i\u2212m+1 ). A comprehensive comparison of different smoothing techniques is provided in (Chen and Goodman, 1999) . We focus on interpolated Modified Kneser-Ney (MKN) smoothing, which is widely regarded as a state-of-the-art technique and is supported by popular language modelling toolkits, e.g. SRILM (Stolcke, 2002) and KenLM (Heafield, 2011 ).",
"cite_spans": [
{
"start": 263,
"end": 287,
"text": "(Chen and Goodman, 1999)",
"ref_id": "BIBREF5"
},
{
"start": 477,
"end": 492,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF28"
},
{
"start": 503,
"end": 518,
"text": "(Heafield, 2011",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modified Kneser-Ney Language Model",
"sec_num": "2"
},
{
"text": "1 https://github.com/eehsan/cstlm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified Kneser-Ney Language Model",
"sec_num": "2"
},
{
"text": "Pm(w|ux) = [c(uxw) \u2212 Dm(c(uxw))]+ / c(ux) + \u03b3m(ux) P\u0304m\u22121(w|x) / c(ux); P\u0304k(w|ux) = [N1+( \u2022 uxw) \u2212 Dk(N1+( \u2022 uxw))]+ / N1+( \u2022 ux \u2022 ) + \u03b3k(ux) P\u0304k\u22121(w|x) / N1+( \u2022 ux \u2022 ); P\u03040(w|\u03b5) = [N1+( \u2022 w) \u2212 D1(N1+( \u2022 w))]+ / N1+( \u2022\u2022 ) + \u03b3(\u03b5) / N1+( \u2022\u2022 ) \u00d7 1/\u03c3; \u03b3k(ux) = \u03a3_{j\u2208{1,2,3+}} Dk(j) Nj(ux \u2022 ) if k = m, and \u03a3_{j\u2208{1,2,3+}} Dk(j) N'j(ux \u2022 ) if k < m; Dk(j) = 0 if j = 0, 1 \u2212 2 (n2(k)/n1(k)) Yk if j = 1, 2 \u2212 3 (n3(k)/n2(k)) Yk if j = 2, 3 \u2212 4 (n4(k)/n3(k)) Yk if j \u2265 3, where Yk = n1(k)/(n1(k) + 2 n2(k)); ni(k) = |{\u03b1 s.t. |\u03b1| = k, c(\u03b1) = i}| if k = m, and |{\u03b1 s.t. |\u03b1| = k, N1+( \u2022 \u03b1) = i}| if k < m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified Kneser-Ney Language Model",
"sec_num": "2"
},
{
"text": "Figure 1: The quantities and formula needed for modified Kneser-Ney smoothing, where x is a k-gram, u and w are words, and [a]+ def= max{0, a}. We use m to refer to the order of the language model, and k \u2208 [1, m] to the level of smoothing. The recursion stops at the unigram level P\u03040(w|\u03b5) where the probability is smoothed by the uniform distribution over the vocabulary, 1/\u03c3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified Kneser-Ney Language Model",
"sec_num": "2"
},
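The discount estimator in Figure 1 can be sanity-checked with a small sketch. The function below is a hypothetical helper (not from the paper's codebase) mapping the counts-of-counts n_i(k) to the three discounts D_k(1), D_k(2) and D_k(3+):

```python
def mkn_discounts(n):
    """Map counts-of-counts n[i] = n_i(k) (number of k-grams/contexts seen
    exactly i times) to the MKN discounts D_k(j) of Figure 1.
    Hypothetical helper, not code from the paper."""
    y = n[1] / (n[1] + 2 * n[2])
    return {
        0: 0.0,
        1: 1 - 2 * y * n[2] / n[1],
        2: 2 - 3 * y * n[3] / n[2],
        3: 3 - 4 * y * n[4] / n[3],  # used for all counts j >= 3
    }

# Example counts-of-counts (invented numbers).
d = mkn_discounts({1: 100, 2: 50, 3: 30, 4: 20})
print(d[1], d[2], d[3])
```

Note how the discounts grow with j, reflecting MKN's distinguishing idea of discounting frequent and rare events differently.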
{
"text": "MKN is a recursive smoothing technique which uses lower order k-gram language models to smooth higher order models. Figure 1 describes the recursive smoothing formula employed in MKN. It is distinguished from Kneser-Ney (KN) smoothing in its use of adaptive discount parameters (denoted as D k (j) in Figure 1 ) based on the k-gram counts. Importantly, MKN is based on not just m-gram frequencies, c(x), but also several modified counts based on numbers of unique contexts, namely",
"cite_spans": [],
"ref_spans": [
{
"start": 116,
"end": 124,
"text": "Figure 1",
"ref_id": null
},
{
"start": 301,
"end": 309,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Modified Kneser-Ney Language Model",
"sec_num": "2"
},
{
"text": "N i+ (\u03b1 \u2022 ) = |{w s.t. c(\u03b1w) \u2265 i}|; N i+ ( \u2022 \u03b1) = |{w s.t. c(w\u03b1) \u2265 i}|; N i+ ( \u2022 \u03b1 \u2022 ) = |{(w1, w2) s.t. c(w1\u03b1w2) \u2265 i}|; N' i+ (\u03b1 \u2022 ) = |{w s.t. N 1+ ( \u2022 \u03b1w) \u2265 i}|. N i+ ( \u2022 \u03b1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified Kneser-Ney Language Model",
"sec_num": "2"
},
{
"text": "and N i+ (\u03b1 \u2022 ) are the number of words with frequency at least i that come before and after a pattern \u03b1, respectively. N i+ ( \u2022 \u03b1 \u2022 ) is the number of word-pairs with frequency at least i which surround \u03b1. N' i+ (\u03b1 \u2022 ) is the number of words coming after \u03b1 to form a pattern \u03b1w for which the number of unique left contexts is at least i; it is specific to MKN and not needed in KN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified Kneser-Ney Language Model",
"sec_num": "2"
},
{
"text": "[Table residue: the different types of quantities required for computing an example 4-gram MKN probability; for each level k, the columns list N 1+ ( \u2022 \u03b1), N 1+ ( \u2022 \u03b1 \u2022 ) and N {1,2,3+} (\u03b1 \u2022 ) for the suffixes \u03b1 of \"is strong with\"]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified Kneser-Ney Language Model",
"sec_num": "2"
},
{
"text": "Efficient computation of these quantities is challenging with limited memory and time resources, particularly when the order of the language model m is high and/or the training corpus is large. In this paper, we make use of advanced data structures to efficiently obtain the required quantities to answer probabilistic queries as they arrive. Our solution involves precomputing and caching the expensive quantities N 1+ ( \u2022 \u03b1 \u2022 ), N 1+ ( \u2022 \u03b1), N {1,2,3+} ( \u2022 \u03b1) and N' {1,2,3+} (\u03b1 \u2022 ), which we will explain in \u00a74. We start in \u00a73 by providing a review of the approach in Shareghi et al. (2015) upon which we base our work.",
"cite_spans": [
{
"start": 571,
"end": 593,
"text": "Shareghi et al. (2015)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modified Kneser-Ney Language Model",
"sec_num": "2"
},
{
"text": "Shareghi et al. (2015) proposed a method for Kneser-Ney (KN) language modelling based on on-the-fly probability computation from a compressed suffix tree (CST). The CST emulates the functionality of the Suffix Tree (ST) (Weiner, 1973) using substantially less space. The suffix tree is a classical search index consisting of a rooted labelled search tree constructed from a text T of length n drawn from an alphabet of size \u03c3. Each root-to-leaf path in the suffix tree corresponds to a suffix of T. The leaves, considered in left-to-right order, define the suffix array (SA) (Manber and Myers, 1993) ",
"cite_spans": [
{
"start": 220,
"end": 234,
"text": "(Weiner, 1973)",
"ref_id": "BIBREF32"
},
{
"start": 574,
"end": 598,
"text": "(Manber and Myers, 1993)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compressed Data Structures",
"sec_num": "3.1"
},
{
"text": "such that the suffix T [SA[i], n \u2212 1] is lexicographically smaller than T [SA[i+1], n\u22121] for i \u2208 [0, n \u2212 2].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compressed Data Structures",
"sec_num": "3.1"
},
{
"text": "Searching for a pattern \u03b1 of length m in T can be achieved by finding the \"highest\" node v in the ST such that the path from the root to v is prefixed by \u03b1. All leaf nodes in the subtree starting at v correspond to the locations of \u03b1 in T . This is translated to finding the specific range SA [lb, rb] such that",
"cite_spans": [
{
"start": 293,
"end": 301,
"text": "[lb, rb]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compressed Data Structures",
"sec_num": "3.1"
},
{
"text": "T [SA[j], SA[j] + m \u2212 1] = \u03b1 for j \u2208 [lb, rb]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compressed Data Structures",
"sec_num": "3.1"
},
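The SA[lb, rb] range can be made concrete with a naive Python sketch (our own toy text, not the example from Figure 2; a production index would never materialise the suffixes like this):

```python
from bisect import bisect_left, bisect_right

def suffix_array(t):
    """Naive suffix array: suffix start positions in lexicographic order."""
    return sorted(range(len(t)), key=lambda i: t[i:])

def sa_range(t, sa, pattern):
    """Return [lb, rb] such that T[SA[j]:] starts with `pattern`
    exactly for j in [lb, rb] (rb < lb means no occurrence)."""
    suffixes = [t[i:] for i in sa]  # materialised only for this toy sketch
    lb = bisect_left(suffixes, pattern)
    rb = bisect_right(suffixes, pattern + "\xff") - 1  # \xff sorts after any text symbol here
    return lb, rb

t = "abracadabra$"
sa = suffix_array(t)
lb, rb = sa_range(t, sa, "abra")
print(lb, rb, rb - lb + 1)  # range size = occurrence count of "abra"
```

The size of the range, rb − lb + 1, is exactly the occurrence frequency c(α), which is why so many LM quantities reduce to range computations.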
{
"text": "as illustrated in the ST and SA of Figure 2 (left).",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 43,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Compressed Data Structures",
"sec_num": "3.1"
},
{
"text": "While searching using the ST or the SA is efficient in theory, it requires large amounts of main memory. A CST reduces the space requirements of ST by utilizing the compressibility of the Burrows-Wheeler transform (BWT) (Burrows and Wheeler, 1994) . The BWT corresponds to a reversible permutation of the text used in data compression tools such as BZIP2 to increase the compressibility of the input. The transform is defined as",
"cite_spans": [
{
"start": 220,
"end": 247,
"text": "(Burrows and Wheeler, 1994)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compressed Data Structures",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "BWT[i] = T[SA[i] \u2212 1 mod n]",
"eq_num": "(1)"
}
],
"section": "Compressed Data Structures",
"sec_num": "3.1"
},
{
"text": "and is the core component of the FM-Index (Ferragina and Manzini, 2000) which is a subcomponent of a CST to provide efficient search for locating arbitrary length patterns (m-grams), determining occurrence frequencies etc. The key functionality provided by the FM-Index is the ability to efficiently determine the range SA[lb, rb] matching a given pattern \u03b1 described above without the need to store the ST or SA explicitly. This is achieved by iteratively processing \u03b1 in reverse order using the BWT, which is usually referred to as backward-search. The backward-search procedure utilizes the duality between the BWT and SA to iteratively determine SA[lb, rb] for suffixes of \u03b1. Operation RANK(BWT, i, c) (and its inverse operation SELECT(BWT, i, c)) can be performed efficiently using a wavelet tree (Grossi et al., 2003) representation of the BWT. A wavelet tree is a versatile, space-efficient representation of a sequence which can efficiently support a variety of operations (Navarro, 2014). The structure of the wavelet tree is derived by recursively decomposing the alphabet into subsets. At each level the alphabet is split into two subsets based on which symbols in the sequence are assigned to the left and right child nodes respectively. Using compressed bitvector representations and Huffman codes to define the alphabet partitioning, the space usage of the wavelet tree and associated RANK structures of the BWT is bound by H_k(T)n + o(n log \u03c3) bits (Grossi et al., 2003). Thus the space usage is proportional to the order-k entropy (H_k(T)) of the text. Figure 2 (right) shows a sample wavelet tree representation. Using the wavelet tree structure, RANK over a sequence drawn from an alphabet of size \u03c3 can be reduced to log \u03c3 binary RANK operations which can be answered efficiently in constant time (Jacobson, 1989). The range SA[lb, rb] corresponding to a pattern \u03b1 can be determined in O(m log \u03c3) time using a wavelet tree of the BWT.",
"cite_spans": [
{
"start": 33,
"end": 71,
"text": "FM-Index (Ferragina and Manzini, 2000)",
"ref_id": null
},
{
"start": 803,
"end": 824,
"text": "(Grossi et al., 2003)",
"ref_id": "BIBREF10"
},
{
"start": 982,
"end": 997,
"text": "(Navarro, 2014)",
"ref_id": "BIBREF20"
},
{
"start": 1467,
"end": 1488,
"text": "(Grossi et al., 2003)",
"ref_id": "BIBREF10"
},
{
"start": 1821,
"end": 1837,
"text": "(Jacobson, 1989)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 1574,
"end": 1582,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Compressed Data Structures",
"sec_num": "3.1"
},
{
"text": "Suppose SA[sp_j, ep_j] corresponds to all suffixes in T matching \u03b1[j, m\u22121]. The range SA[sp_{j\u22121}, ep_{j\u22121}] matching \u03b1[j\u22121, m\u22121], with c def= \u03b1[j\u22121], can be expressed as sp_{j\u22121} = C[c] + RANK(BWT, sp_j, c) and ep_{j\u22121} = C[c] + RANK(BWT, ep_j + 1, c) \u2212 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compressed Data Structures",
"sec_num": "3.1"
},
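The backward-search recurrence can be checked with a small self-contained sketch, in which plain Python lists stand in for the wavelet tree and C is the usual cumulative-count array (the helper names are ours, not the paper's):

```python
from collections import Counter

def bwt_backward_search(t, pattern):
    """SA range [sp, ep] for `pattern` via backward search over the BWT,
    mirroring the recurrence in the text. rank(i, c) counts occurrences
    of c in BWT[0:i]; a wavelet tree would answer this in O(log sigma)."""
    n = len(t)
    sa = sorted(range(n), key=lambda i: t[i:])
    bwt = [t[(sa[i] - 1) % n] for i in range(n)]
    # C[c] = number of symbols in t strictly smaller than c.
    c_arr, total = {}, 0
    for sym, cnt in sorted(Counter(t).items()):
        c_arr[sym] = total
        total += cnt
    rank = lambda i, c: bwt[:i].count(c)
    sp, ep = 0, n - 1
    for c in reversed(pattern):  # process the pattern right-to-left
        sp = c_arr[c] + rank(sp, c)
        ep = c_arr[c] + rank(ep + 1, c) - 1
        if ep < sp:
            break  # pattern does not occur in t
    return sp, ep

sp, ep = bwt_backward_search("abracadabra$", "bra")
print(sp, ep, ep - sp + 1)  # range size = occurrence count of "bra"
```

Each iteration narrows the range for a one-symbol-longer suffix of the pattern, so m symbols cost m rank computations, matching the O(m log σ) bound quoted above.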
{
"text": "In addition to the FM-index, a CST efficiently stores the tree topology of the ST to emulate tree operations efficiently. Shareghi et al. (2015) showed how the requisite counts for a KN-LM, namely c(\u03b1),",
"cite_spans": [
{
"start": 128,
"end": 150,
"text": "Shareghi et al. (2015)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compressed Data Structures",
"sec_num": "3.1"
},
{
"text": "N 1+ ( \u2022 \u03b1), N 1+ ( \u2022 \u03b1 \u2022 ) and N 1+ (\u03b1 \u2022 ), can be",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing KN modified counts",
"sec_num": "3.2"
},
{
"text": "computed directly from the CST. Consider the example in Figure 2: the number of occurrences of b corresponds to counting the number of leaves, size(v), in the subtree rooted at v. This can be computed in O(1) time by computing the size of the range [lb, rb] implicitly associated with each node. The number of unique right contexts of b can be determined using degree(v) (again O(1), but requires bit operations on the succinct tree representation of the ST). That",
"cite_spans": [
{
"start": 246,
"end": 254,
"text": "[lb, rb]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 52,
"end": 60,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Computing KN modified counts",
"sec_num": "3.2"
},
{
"text": "is, N 1+ (b \u2022 ) = degree(v) = 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing KN modified counts",
"sec_num": "3.2"
},
{
"text": "Determining the number of left-contexts and surrounding contexts is more involved. Computing N 1+ ( \u2022 \u03b1) relies on the BWT. Recall that BWT[i] corresponds to the symbol preceding the suffix starting at SA[i]. For example, computing N 1+ ( \u2022 b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing KN modified counts",
"sec_num": "3.2"
},
{
"text": "first requires finding the interval of suffixes starting with b in SA, namely lb = 6 and rb = 10, and then counting the number of unique symbols in BWT[6, 10] = {d, b, a, a, a}, i.e., 3. Determining all unique symbols in BWT[i, j] can be performed efficiently (independently of the size of the range) using the wavelet tree encoding of the BWT. The set of symbols preceding pattern \u03b1, denoted by P(\u03b1), can be computed in O(|P(\u03b1)| log \u03c3) by visiting all unique leaves in the wavelet tree which correspond to symbols in BWT[i, j]. This is usually referred to as the interval-symbols (Schnattinger et al., 2010) procedure and uses RANK operations to find the set of symbols s \u2208 P(\u03b1) and corresponding ranges for s\u03b1 in SA. In the above example, identifying the SA range of ab requires finding the lb, rb in the SA for suffixes starting with a (SA[3, 5]) and then adjusting the bounds to cover only the suffixes starting with ab. This last step is done via computing the rank of three a symbols",
"cite_spans": [
{
"start": 522,
"end": 528,
"text": "[i, j]",
"ref_id": null
},
{
"start": 583,
"end": 610,
"text": "(Schnattinger et al., 2010)",
"ref_id": "BIBREF24"
},
{
"start": 846,
"end": 849,
"text": "[3,",
"ref_id": null
},
{
"start": 850,
"end": 852,
"text": "5]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing KN modified counts",
"sec_num": "3.2"
},
{
"text": "[Figure 3 residue: per-sentence query time (0\u201320s) for m \u2208 {2, 3, 5, 8, \u221e}, broken down into the components N'{1,2,3+}(\u03b1 \u2022 ), N{1,2,3+}(\u03b1 \u2022 ), N 1+ ( \u2022 \u03b1 \u2022 ), N 1+ ( \u2022 \u03b1), N 1+ (\u03b1 \u2022 ) and backward-search]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing KN modified counts",
"sec_num": "3.2"
},
{
"text": "in BWT[8, 10] using the wavelet tree; see Figure 2 (right) for RANK(BWT, 8, a). As illustrated, answering RANK(BWT, 8, a) corresponds to processing the first digit of the code word at the root level, which translates into RANK(WT_root, 8, 0) = 4, followed by a RANK(WT_1, 4, 1) = 1 on the left branch. Once the ranks are computed, lb and rb are refined accordingly to SA[3 + (1 \u2212 1), 3 + (3 \u2212 1)]. Finally, for N 1+ ( \u2022 \u03b1 \u2022 ) all patterns which can follow \u03b1 are enumerated, and for each of these extended patterns, the number of preceding symbols is computed using interval-symbols. This proved to be the most expensive operation in their approach.",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 60,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Computing KN modified counts",
"sec_num": "3.2"
},
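The modified counts have simple brute-force definitions that are useful as a reference point when testing a CST-based implementation. The following sketch (our own helper, a linear scan instead of a suffix tree) computes N_1+(α ·), N_1+(· α) and N_1+(· α ·) directly from a token stream:

```python
def modified_counts(tokens, alpha):
    """Brute-force N_1+(alpha .), N_1+(. alpha) and N_1+(. alpha .) for a
    pattern `alpha` (tuple of words), by linear scan over the token stream.
    Reference implementation only; a CST answers these without scanning."""
    m = len(alpha)
    right, left, both = set(), set(), set()
    # Only occurrences with both a left and a right neighbour are counted.
    for i in range(1, len(tokens) - m):
        if tuple(tokens[i:i + m]) == alpha:
            left.add(tokens[i - 1])
            right.add(tokens[i + m])
            both.add((tokens[i - 1], tokens[i + m]))
    return len(right), len(left), len(both)

tokens = "the cat sat on the mat the cat ran on the mat".split()
print(modified_counts(tokens, ("on", "the")))  # (1, 2, 2)
```

Note that N_1+(· α ·) counts unique (left, right) pairs, not the product of unique lefts and rights, which is why it is the most expensive quantity to derive from the index.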
{
"text": "Given these quantities, Shareghi et al. (2015) show how m-gram probabilities can be computed on demand using an iterative algorithm to search for matching nodes in the suffix tree for the required kgram (k \u2264 m) patterns in the numerator and denominator of the KN recursive equations, which are then used to compute the probabilities. We refer the reader to Shareghi et al. (2015) for further details. Overall their approach showed promise, in that it allowed for unlimited order KN-LMs to be evaluated with a modest memory footprint, however it was many orders of magnitude slower for smaller m than leading LM toolkits.",
"cite_spans": [
{
"start": 24,
"end": 46,
"text": "Shareghi et al. (2015)",
"ref_id": "BIBREF25"
},
{
"start": 357,
"end": 379,
"text": "Shareghi et al. (2015)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing KN modified counts",
"sec_num": "3.2"
},
{
"text": "To illustrate the cost of querying, see Figure 3 (top), which shows per-sentence query time for KN,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing KN modified counts",
"sec_num": "3.2"
},
{
"text": "Algorithm 1 N {1,2,3+} (\u03b1 \u2022 ) or N' {1,2,3+} (\u03b1 \u2022 ). 1: function N123PFRONT(t, v, \u03b1, is-prime) \u22b2 t is a CST, v is the node matching pattern \u03b1 2: N1, N2, N3+ \u2190 0 3: for u \u2190 children(v) do 4: if is-prime then 5: f \u2190 interval-symbols(t, [lb(u), rb(u)]) 6: else 7: f \u2190 size(u) 8: if 1 \u2264 f \u2264 2 then 9: N_f \u2190 N_f + 1 10: N3+ \u2190 degree(v) \u2212 N1 \u2212 N2 11: return N1, N2, N3+",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing KN modified counts",
"sec_num": "3.2"
},
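Algorithm 1's logic can be mimicked over raw text, with a linear scan standing in for the CST operations children, size and interval-symbols (the corpus and function name here are illustrative, not from the paper's toolkit):

```python
from collections import Counter

def n123_front(tokens, alpha, is_prime):
    """Mimics Algorithm 1 over raw text: each word w with c(alpha w) > 0
    plays the role of a child of the CST node for alpha. f is either the
    plain count c(alpha w), or N_1+(. alpha w) when is_prime is set."""
    m = len(alpha)
    ext_counts = Counter()  # c(alpha w), emulating size(u)
    left_ctx = {}           # unique left contexts of alpha w, emulating interval-symbols
    for i in range(1, len(tokens) - m):
        if tuple(tokens[i:i + m]) == alpha:
            w = tokens[i + m]
            ext_counts[w] += 1
            left_ctx.setdefault(w, set()).add(tokens[i - 1])
    n1 = n2 = 0
    for w in ext_counts:
        f = len(left_ctx[w]) if is_prime else ext_counts[w]
        if f == 1:
            n1 += 1
        elif f == 2:
            n2 += 1
    degree = len(ext_counts)  # N_1+(alpha .), the number of children
    return n1, n2, degree - n1 - n2

tokens = "a b c a b d a b c e b c".split()
print(n123_front(tokens, ("b",), is_prime=False))  # (1, 0, 1)
print(n123_front(tokens, ("b",), is_prime=True))   # (1, 1, 0)
```

As in line 10 of the algorithm, N3+ falls out for free as degree(v) − N1 − N2, so only the 1- and 2-count events need explicit tallying.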
{
"text": "based on the approach of Shareghi et al. (2015) (also shown is MKN, through an extension of their method as described in \u00a74). It is clear that the runtimes for KN are much too long for practical use: about 5 seconds per sentence, with the majority of this time spent computing N 1+ ( \u2022 \u03b1 \u2022 ). Clearly optimisation is warranted, and the gains from it are spectacular (see Figure 3 bottom, which uses the precomputation method described in \u00a74.2).",
"cite_spans": [
{
"start": 25,
"end": 47,
"text": "Shareghi et al. (2015)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 371,
"end": 379,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Computing KN modified counts",
"sec_num": "3.2"
},
{
"text": "A central requirement for extending Shareghi et al. (2015) to support MKN is an algorithm for computing N {1,2,3+} (\u03b1 \u2022 ) and N' {1,2,3+} (\u03b1 \u2022 ), which we now expound upon. Algorithm 1 computes both of these quantities, taking as input a CST t, a node v matching the pattern \u03b1, the pattern and a flag is-prime, denoting which of the N and N' variants is required. This method enumerates the children of the node (line 3) and calculates either the frequency of each child (line 7) or the modified count N 1+ ( \u2022 \u03b1",
"cite_spans": [
{
"start": 36,
"end": 58,
"text": "Shareghi et al. (2015)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extending to MKN 4.1 Computing MKN modified counts",
"sec_num": "4"
},
{
"text": "x), for each child u where x is the first symbol on the edge vu (line 5). Lines 8 and 9 accumulate the number of these values equal to one or two, and finally in line 10, N 3+ is computed as the difference between N 1+ (\u03b1 \u2022 ) = degree(v) and the already counted events N 1 + N 2 . For example, consider computing N {1,2,3+} (b \u2022 ) in Figure 2 , which again enumerates over child nodes (whose path labels start with symbols b, c and d) and computes the number of preceding symbols for the extended patterns. 3 Accordingly N 1 (b \u2022 ) = 2, N 2 (b \u2022 ) = 1 and N 3+ (b \u2022 ) = 0.",
"cite_spans": [],
"ref_spans": [
{
"start": 334,
"end": 342,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Extending to MKN 4.1 Computing MKN modified counts",
"sec_num": "4"
},
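The counting logic of Algorithm 1 can be sketched outside the CST machinery. The snippet below is illustrative only: it operates on a plain list of per-child counts rather than on suffix-tree nodes, and all names are ours, but it mirrors how N1, N2 and N3+ are derived from the children of the node matching the pattern:

```python
def modified_counts(child_counts):
    """child_counts[i] is the count associated with the i-th child edge
    of the node matching alpha (frequency, or N1+ for the primed variant)."""
    n1 = sum(1 for c in child_counts if c == 1)
    n2 = sum(1 for c in child_counts if c == 2)
    # N1+(alpha .) equals the number of children (the degree of the node),
    # so N3+ is the degree minus the events already counted as N1 and N2.
    n3p = len(child_counts) - n1 - n2
    return n1, n2, n3p

# Mirrors the paper's b-node example: child values 1, 2, 1 give N1=2, N2=1, N3+=0.
assert modified_counts([1, 2, 1]) == (2, 1, 0)
```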
{
"text": "While roughly similar in approach, computing N {1,2,3+} (\u03b1 \u2022 ) is in practice slower than N {1,2,3+} (\u03b1 \u2022 ) since it requires calling intervalsymbols (line 7) instead of calling the constant time size operation (line 5). This gives rise to a time",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending to MKN 4.1 Computing MKN modified counts",
"sec_num": "4"
},
{
"text": "complexity of O(d|P (\u03b1)| log \u03c3) for N {1,2,3+} (\u03b1 \u2022 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending to MKN 4.1 Computing MKN modified counts",
"sec_num": "4"
},
{
"text": "where d is the number of children of v.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending to MKN 4.1 Computing MKN modified counts",
"sec_num": "4"
},
{
"text": "As illustrated in Figure 3 (top) , the modified counts ( \u00a72) combined are responsible for 99% of the query time. Moreover the already expensive runtime of KN is considerably worsened in MKN due to the additional counts required. These facts motivate optimisation, which we achieve by precomputing values, resulting in a 2500\u00d7 speed up in query time as shown in Figure 3 (bottom).",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 32,
"text": "Figure 3 (top)",
"ref_id": "FIGREF3"
},
{
"start": 361,
"end": 369,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Extending to MKN 4.1 Computing MKN modified counts",
"sec_num": "4"
},
{
"text": "Language modelling toolkits such as KenLM and SRILM precompute real valued probabilities and backoff-weights at training time, such that querying becomes largely a problem of retrieval. We might consider taking a similar route in optimising our language model, however we would face the problem that floating point numbers cannot be compressed very effectively. Even with quantisation, which can have a detrimental effect on modelling perplexity, we would not expect good compression and thus this technique would limit the scaling potential of our approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient Precomputation",
"sec_num": "4.2"
},
{
"text": "For these reasons, instead we store the most expensive count data, targeting those counts which have the greatest effect on runtime (see Figure 3 top). We expect these integer values to compress well: as highlighted by Figure 4 most counts will have low values, and accordingly a variable byte compression scheme will be able to realise high compression rates. Removing the need for computing these values at query time leaves only pattern search and a few floating point operations in order to compute language model probabilities (see \u00a74.3) which can be done cheaply.",
"cite_spans": [],
"ref_spans": [
{
"start": 137,
"end": 145,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 219,
"end": 227,
"text": "Figure 4",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Efficient Precomputation",
"sec_num": "4.2"
},
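To see why small counts compress well under byte-aligned variable-length codes, consider classic variable-byte coding, a simpler relative of the DAC scheme used in the paper (DACs additionally support efficient random access). This sketch, with names of our own choosing, only illustrates the size argument:

```python
def vbyte_encode(n):
    """Encode a non-negative integer, 7 payload bits per byte."""
    out = bytearray()
    while n >= 128:
        out.append((n & 0x7F) | 0x80)  # continuation bit set
        n >>= 7
    out.append(n)                      # final byte, continuation bit clear
    return bytes(out)

def vbyte_decode(b):
    n, shift = 0, 0
    for byte in b:
        n |= (byte & 0x7F) << shift
        shift += 7
    return n

assert vbyte_decode(vbyte_encode(5)) == 5
assert len(vbyte_encode(5)) == 1       # a typical small count: one byte
assert len(vbyte_encode(300000)) == 3  # a rare large count: a few bytes
```

Since the overwhelming majority of the precomputed counts are small, most entries pay only the one-byte cost.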
{
"text": "3 That is N1+( \u2022 bb) = 1, N1+( \u2022 bc) = 2, N1+( \u2022 bd) = 1. Storage Threshold % of total space usage Our first consideration is how to structure the cache. Given that each precomputed value is computed using a CST node, v, (with the pattern as its path-label), we structure the cache as a mapping between unique node identifiers id(v) and the precomputed values. 4 Next we consider which values to cache: while it is possible to precompute values for every node in the CST, many nodes are unlikely to be accessed at query time. Moreover, these rare patterns are likely to be cheap to process using the onthe-fly methods, given they occur in few contexts. Consequently precomputing these values will bring minimal speed benefits, while still incurring a memory cost. For this reason we precompute the values only for nodes corresponding to k-grams up to lengthm (for our word-level experimentsm = 10), which are most likely to be accessed at runtime. 5 The precomputation method is outlined in Algorithm 2, showing how a compressed cache is created for the quantities x \u2208",
"cite_spans": [
{
"start": 361,
"end": 362,
"text": "4",
"ref_id": null
},
{
"start": 948,
"end": 949,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient Precomputation",
"sec_num": "4.2"
},
{
"text": "{N 1+ ( \u2022 \u03b1), N 1+ ( \u2022 \u03b1 \u2022 ), N 12 (\u03b1 \u2022 ), N 12 (\u03b1 \u2022 )}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient Precomputation",
"sec_num": "4.2"
},
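The rule for deciding which nodes to cache, keeping internal nodes whose string depth does not exceed the limit m̄, can be sketched as follows (hypothetical names; real node objects would come from the CST during its DFS traversal):

```python
def select_for_cache(nodes, m_bar=10):
    """nodes: iterable of (is_leaf, string_depth) pairs in DFS order.
    Returns the indices of nodes whose counts should be precomputed:
    internal nodes with string depth (pattern length) at most m_bar."""
    return [i for i, (is_leaf, depth) in enumerate(nodes)
            if not is_leaf and depth <= m_bar]

# Leaves and deep nodes are skipped; shallow internal nodes are cached.
nodes = [(False, 1), (True, 12), (False, 11), (False, 3)]
assert select_for_cache(nodes) == [0, 3]
```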
{
"text": "Algorithm 2 Precomputing expensive counts N {1,2} (\u03b1 \u2022 ), N 1+ ( \u2022 \u03b1 \u2022 ), N 1+ ( \u2022 \u03b1), N\u2032 {1,2} (\u03b1 \u2022 ). 1: function PRECOMPUTE(t, m\u0304) 2: bv l \u2190 0 \u2200l \u2208 [0, nodes(t) \u2212 1] 3: i (x) l \u2190 0 \u2200l \u2208 [0, nodes(t) \u2212 1], x \u2208 count types 4: j \u2190 0 5: for v \u2190 descendants(root(t)) do DFS 6: d \u2190 string-depth(v) 7: if not is-leaf(v) \u2227 d \u2264 m\u0304 then 8: l \u2190 id(v) unique id 9: bv l \u2190 1 10: Call N1PFRONTBACK1(t, v, \u2022) 11: Call N123PFRONT(t, v, \u2022, 0) 12: Call N123PFRONT(t, v, \u2022, 1) 13: i (x) j \u2190 counts from above, for each output x 14: j \u2190 j + 1 15: bvrrr \u2190 compress-rrr(bv) 16: i \u2190 compress-dac({i (x) \u2200x}) 17: write-to-disk(bvrrr, i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient Precomputation",
"sec_num": "4.2"
},
{
"text": "The algorithm visits the suffix tree nodes in depth-first-search (DFS) order, and selects a subset of nodes for precomputation (line 7), such that the remaining nodes are either rare or trivial to handle on-the-fly (i.e., leaf nodes). A node included in the cache is marked by storing a 1 in the bit vector bv (lines 8-9) at index l, where l is the node identifier. For each selected node we precompute the expensive counts in lines 10-12,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient Precomputation",
"sec_num": "4.2"
},
{
"text": "N 1+ ( \u2022 \u03b1 \u2022 ), N 1+ ( \u2022 \u03b1) via 6 N1PFRONTBACK1(t, v, \u2022), N {1,2} (\u03b1 \u2022 ) via N123PFRONT(t, v, \u2022, 0), N {1,2} (\u03b1 \u2022 ) via N123PFRONT(t, v, \u2022, 1),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient Precomputation",
"sec_num": "4.2"
},
{
"text": "which are stored into integer vectors i (x) for each count type x (line 13). The integer vectors are streamed to disk and then compressed (lines 15-17) in order to limit memory usage. The final steps in lines 15 and 16 compress the integer and bit-vectors. The integer vectors i (x) are compressed using a variable length encoding, namely Directly Addressable Variable-Length Codes (DAC; Brisaboa et al. (2009)) which allows for efficient storage of integers while providing efficient random access. As the overwhelming majority of our precomputed values are small (see Figure 4 left), this gives rise to a dramatic compression rate of only \u2248 5.2 bits per integer. The bit vector bv of size O(n) where n is the number of nodes in the suffix tree, is compressed using the scheme of Raman et al. (2002) which supports constant time rank operation over very large bit vectors. This encoding allows for efficient retrieval of the precomputed counts at query time. The compressed vectors are loaded into memory and when an expensive count is required for node v, the precomputed quantities can be fetched in constant time via LOOKUP(v, bv, i (x) ) = i (x) [RANK(bv, id(v), 1)]. We use RANK to determine the number of 1s preceding v's position in the bit vector bv. This corresponds to v's index in the compressed integer vectors i (x) , from which its precomputed count can be fetched. This strategy only applies for precomputed nodes; for other nodes, the values are computed on-the-fly. Figure 3 compares the query time breakdown for on-the-fly count computation (top) versus precomputation (bottom), for both KN and MKN and with different Markov orders, m. Note that query speed improves dramatically, by a factor of about 2500\u00d7, for precomputed cases. This improvement comes at a modest cost in construction space. Precomputing for CST nodes with m \u2264 10 resulted in 20% of the nodes being selected for precomputation. The space used by the precomputed values accounts for 20% of the total space usage (see Figure 4 right). Index construction time increased by 70%.",
"cite_spans": [
{
"start": 781,
"end": 800,
"text": "Raman et al. (2002)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 570,
"end": 578,
"text": "Figure 4",
"ref_id": "FIGREF6"
},
{
"start": 1483,
"end": 1491,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 2004,
"end": 2012,
"text": "Figure 4",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Efficient Precomputation",
"sec_num": "4.2"
},
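The bit-vector-plus-rank lookup just described can be sketched in plain Python. In a real implementation (e.g. with SDSL's rrr_vector) rank is answered in constant time over the compressed bit vector; here a precomputed prefix-sum array stands in for it, and all names are ours:

```python
class CountCache:
    """Maps node ids to precomputed counts via a mark bit vector and rank."""

    def __init__(self, marked, values):
        self.bv = marked          # bv[l] == 1 iff node l has cached counts
        self.values = values      # values[j] belongs to the j-th marked node
        self.rank = [0]           # rank[l] = number of 1s in bv[0:l]
        for bit in marked:
            self.rank.append(self.rank[-1] + bit)

    def lookup(self, node_id):
        if not self.bv[node_id]:
            return None           # not cached: caller computes on-the-fly
        # rank maps the sparse node id to its dense slot in the value array
        return self.values[self.rank[node_id]]

cache = CountCache(marked=[1, 0, 1, 1, 0], values=[10, 20, 30])
assert cache.lookup(0) == 10
assert cache.lookup(2) == 20
assert cache.lookup(1) is None    # unmarked node: fall back to on-the-fly
```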
{
"text": "Having established a means of computing the requisite counts for MKN and an efficient precomputation strategy, we now turn to the algorithm for computing the language model probability. This is presented in Algorithm 3, which is based on Shareghi et al. (2015)'s single CST approach for computing the KN probability (reported in their paper as Algorithm 4.) Similar to their method, our approach implements the recursive m-gram probability formulation as an iterative loop (here using MKN). The core of the algorithm are the two nodes v full and v which correspond to nodes matching the full k-gram and its (k \u2212 1)-gram context, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing MKN Probability",
"sec_num": "4.3"
},
{
"text": "Although similar to Shareghi et al. (2015) 's method, which also features a similar right-to-left pattern lookup, in addition we optimise the computation of a full sentence probability by sliding a window of width m over the sequence from left-to-right, adding one new word at a time. 7 This allows for the re-use of nodes in one window matching the full k-",
"cite_spans": [
{
"start": 20,
"end": 42,
"text": "Shareghi et al. (2015)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing MKN Probability",
"sec_num": "4.3"
},
{
"text": "Algorithm 3 MKN probability P (w i | w i\u22121 i\u2212(m\u22121) ) 1: function PROBMKN(t, w i i\u2212m+1 , m, [v k ] m\u22121 k=0 ) 2: Assumption: v k is the matching node for w i\u22121 i\u2212k 3: v full 0 \u2190 root(t) tracks match for w i i\u2212k 4: p \u2190 1/|\u03c3| 5: for k \u2190 1 to m do 6: if v k\u22121 does not match then 7: break out of loop 8: v full k \u2190 back-search([lb(v full k\u22121 ), rb(v full k\u22121 )], w i\u2212k+1 ) 9: D k (1), D k (2), D k (3+) \u2190 discounts for k-grams 10: if k = m then 11: c \u2190 size(v full k ) 12: d \u2190 size(v k\u22121 ) 13: N1,2,3+ \u2190 N123PFRONT(t, v k\u22121 , w i\u22121 i\u2212k+1 , 0) 14: else 15: c \u2190 N1PBACK1(t, v full k , w i\u22121 i\u2212k+1 ) 16: d \u2190 N1PFRONTBACK1(t, v k\u22121 , w i\u22121 i\u2212k+1 ) 17: N1,2,3+ \u2190 N123PFRONT(t, v k\u22121 , w i\u22121 i\u2212k+1 , 1) 18: if 1 \u2264 c \u2264 2 then 19: c \u2190 c \u2212 D k (c) 20: else 21: c \u2190 c \u2212 D k (3+) 22: \u03b3 \u2190 D k (1)N1 + D k (2)N2 + D k (3+)N3+ 23: p \u2190 (1/d)(c + \u03b3p) 24: return p, [v full k ] m\u22121 k=0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing MKN Probability",
"sec_num": "4.3"
},
{
"text": "grams, v full , as the nodes matching the context in the subsequent window, denoted v.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing MKN Probability",
"sec_num": "4.3"
},
{
"text": "For example, in the sentence \"The Force is strong with this one.\", computing the 4-gram probability of \"The Force is strong\" requires matches into the CST for \"strong\", \"is strong\", etc. As illustrated in Table 1 , for the next 4-gram resulting from sliding the window to include \"with\", the denominator terms require exactly these nodes, see Figure 5 . Practically, this is achieved by storing the matching v full nodes computed in line 8, and passing this vector as the input argument [v k ] m\u22121 k=0 to the next call to PROBMKN (line 1). This saves half the calls to backward-search, which, as shown in Figure 3 , represent a significant fraction of the querying cost, resulting in a 30% improvement in query runtime.",
"cite_spans": [],
"ref_spans": [
{
"start": 205,
"end": 212,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 343,
"end": 351,
"text": "Figure 5",
"ref_id": null
},
{
"start": 605,
"end": 613,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Computing MKN Probability",
"sec_num": "4.3"
},
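The saving from reusing the previous window's full-gram nodes can be approximated with a simple call-count model. This is schematic only: it charges one unit per matched k-gram, ignores the per-step cost of backward-search, and the names are ours, not the paper's actual CST accounting:

```python
def search_steps(n_words, m, reuse):
    """Count simulated backward-search matches for scoring a sentence."""
    steps = 0
    for i in range(n_words):
        k = min(m, i + 1)      # window width at position i
        steps += k             # matches for the full k-grams of this window
        if not reuse:
            steps += k - 1     # extra matches to re-search the contexts
    return steps

naive = search_steps(20, 4, reuse=False)   # -> 128
shared = search_steps(20, 4, reuse=True)   # -> 74
assert shared < naive  # context matches come for free when nodes are reused
```

Under this model the context searches vanish entirely, consistent with the halving of backward-search calls described above.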
{
"text": "The algorithm starts by considering the unigram probability, and grows the context to its left by one word at a time until the m-gram is fully covered (line 5). This best suits the use of backward-search in a CST, which proceeds from right-to-left over the search pattern. At each stage the search for v full k uses the span from the previous match,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing MKN Probability",
"sec_num": "4.3"
},
{
"text": "v full k\u22121 , S IS FIS TFIS w SW ISW FISW S IS FIS N 1+ (\u2022W) N 1+ (\u2022\u2022) N 1+ (\u2022SW) N 1+ (\u2022S\u2022) N' 123+ (S\u2022) N 1+ (\u2022 ISW) N 1+ (\u2022IS \u2022) N' 123+ (IS \u2022) c (FISW) c (FIS) N 123+ (FIS\u2022) N' 123+ (\u03b5\u2022) Root : [0,n] Root : [0,n]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing MKN Probability",
"sec_num": "4.3"
},
{
"text": "Figure 5: Example MKN probability computation for a 4-gram LM applied to \"The Force is strong with\" (each word abbreviated to its first character), showing in the two left columns the suffix matches required for the 4gram FISW and elements which can be reused from previous 4-gram computation (gray shading), TFIS. Elements on the right denote the count and occurrence statistics derived from the suffix matches, as linked by blue lines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing MKN Probability",
"sec_num": "4.3"
},
{
"text": "along with the BWT to efficiently locate the matching node. Once the nodes matching the full sequence and its context are retrieved, the procedure is fairly straightforward: the discounts are loaded on line 9 and applied in lines 18-21, while the numerator, denominator and smoothing quantities as required for computing P andP are calculated in lines 10-13 and 15-17, respectively. 8 Note that the calls for functions N123PFRONT, N1PBACK1, and N1PFRONTBACK1 are avoided if the corresponding node is amongst the selected nodes in the precomputation step; instead the LOOKUP function is called. Finally, the smoothing weight \u03b3 is computed in line 22 and the conditional probability computed on line 23. The loop terminates when we reach the length limit k = m or we cannot match the context, i.e., w i\u22121 i\u2212k is not in the training corpus, in which case the probability value p for the longest match is returned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing MKN Probability",
"sec_num": "4.3"
},
{
"text": "We now turn to the discount parameters, D k (j) , k \u2264 m, j \u2208 1, 2, 3+, which are functions of the corpus statistics as outlined in Figure 1 . While these could be computed based on raw m-gram statistics, this approach is very inefficient for large m \u2265 5; instead these values can be computed efficiently from the compressed data structures. Algorithm 4 outlines how the D k (i) values can be computed directly from the CST. This method iterates",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 139,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Computing MKN Probability",
"sec_num": "4.3"
},
{
"text": "Algorithm 4 Compute discounts 1: function COMPUTEDISCOUNTS(t, m\u0304, bv\u2032, SA) 2: ni(k) \u2190 0, n\u0304i(k) \u2190 0 \u2200i \u2208 [1, 4], k \u2208 [1, m\u0304] 3: N1+( \u2022\u2022 ) \u2190 0 4: for v \u2190 descendants(root(t)) do DFS 5: dP \u2190 string-depth(parent(v)) 6: d \u2190 string-depth(v) 7: dS \u2190 depth-next-sentinel(SA, bv\u2032, lb(v)) 8: i \u2190 size(v) frequency 9: c \u2190 interval-symbols(t, [lb(v), rb(v)]) left occ. 10: for k \u2190 dP + 1 to min(d, m\u0304, dS \u2212 1) do 11: if k = 2 then 12: N1+( \u2022\u2022 ) \u2190 N1+( \u2022\u2022 ) + 1 13: if 1 \u2264 i \u2264 4 then 14: ni(k) \u2190 ni(k) + 1 15: if 1 \u2264 c \u2264 4 then 16: n\u0304c(k) \u2190 n\u0304c(k) + 1 17: D k (i) \u2190 computed using formula in Figure 1 18: return D k (i), k \u2208 [1, m\u0304], i \u2208 {1, 2, 3+}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing MKN Probability",
"sec_num": "4.3"
},
{
"text": "over the nodes in the suffix tree, and for each node considers the k-grams encoded in the edge label, where each k-gram is taken to start at the root node (to avoid duplicate counting, we consider k-grams only contained on the given edge but not in the parent edges, i.e., by bounding k based on the string depth of the parent and current nodes,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing MKN Probability",
"sec_num": "4.3"
},
{
"text": "d P \u2264 k \u2264 d).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing MKN Probability",
"sec_num": "4.3"
},
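From the accumulated statistics n_i(k), the discounts themselves follow the standard modified Kneser-Ney estimates of Chen and Goodman, which we take to be the formulas referenced as Figure 1. A sketch for a single order k, with hypothetical names and assuming all the n_i are positive:

```python
def mkn_discounts(n1, n2, n3, n4):
    """Modified KN discounts for one order k from the counts-of-counts
    n_i = number of k-grams occurring exactly i times (all assumed > 0)."""
    y = n1 / (n1 + 2 * n2)
    d1 = 1 - 2 * y * n2 / n1
    d2 = 2 - 3 * y * n3 / n2
    d3p = 3 - 4 * y * n4 / n3
    return d1, d2, d3p

# Typical Zipfian counts-of-counts give three increasing discounts.
d1, d2, d3p = mkn_discounts(n1=1000, n2=400, n3=200, n4=100)
assert 0 < d1 < d2 < d3p
```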
{
"text": "For each k-gram we record its count, i (line 8), and the number of unique symbols to the left, c (line 9), which are accumulated in an array for each kgram size for values between 1 and 4 (lines 13-14 and 15-16, respectively). We also record the number of unique bigrams by incrementing a counter during the traversal (lines 11-12). Special care is required to exclude edge labels that span sentence boundaries, by detecting special sentinel symbols (line 8) that separate each sentence or conclude the corpus. This check could be done by repeatedly calling edge(v, k) to find the k th symbol on the given edge to check for sentinels, however this is a slow operation as it requires multiple backward search calls. Instead we precalculate a bit vector, bv , of size equal to the number of tokens in the corpus, n, in which sentinel locations in the text are marked by 1 bits. Coupled with this, we use the suffix array SA, such that depth-next-sentinel(SA, bv , ) =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing MKN Probability",
"sec_num": "4.3"
},
{
"text": "SELECT(bv , RANK(bv , SA , 1) + 1, 1) \u2212 SA ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing MKN Probability",
"sec_num": "4.3"
},
{
"text": "where SA returns the offset into the text for index , and the SA is stored uncompressed to avoid the expensive cost of recovering these values. 9 This function can be understood as finding the first occurrence of the pattern in the text (using SA ) then finding the location of the next 1 in the bit vector, using constant time RANK and SELECT operations. This locates the next sentinel in the text, after which it computes the distance to the start of the pattern. Using this method in place of explicit edge calls improved the training runtime substantially up to 41\u00d7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing MKN Probability",
"sec_num": "4.3"
},
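The depth-next-sentinel operation can be sketched in plain Python. A sorted list of sentinel positions with binary search is the simple analogue of the constant-time RANK/SELECT combination over the compressed bit vector; the names here are ours:

```python
import bisect

def depth_next_sentinel(sentinel_positions, text_offset):
    """Distance from text_offset to the next sentinel at or after it.
    sentinel_positions must be sorted and contain a final sentinel, so the
    search always finds one (the corpus-concluding sentinel in the paper)."""
    j = bisect.bisect_left(sentinel_positions, text_offset)
    return sentinel_positions[j] - text_offset

# Text "a b . c d e ." with sentinels at token offsets 2 and 6:
assert depth_next_sentinel([2, 6], 0) == 2  # two symbols before the sentinel
assert depth_next_sentinel([2, 6], 3) == 3
```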
{
"text": "We precompute the discount values for k \u2264mgrams. For querying with m >m (including \u221e) we reuse the discounts for the largestm-grams. 10",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing MKN Probability",
"sec_num": "4.3"
},
{
"text": "To evaluate our approach we measure memory and time usage, along with the predictive perplexity score of word-level LMs on a number of different corpora varying in size and domain. For all of our word-level LMs, we usem,m \u2264 10. We also demonstrate the positive impact of increasing the set limit onm,m from 10 to 50 on improving characterlevel LM perplexity. The SDSL library (Gog et al., 2014 ) is used to implement our data structures. The benchmarking experiments were run on a single core of a Intel Xeon E5-2687 v3 3.10GHz server with 500GiB of RAM.",
"cite_spans": [
{
"start": 376,
"end": 393,
"text": "(Gog et al., 2014",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "In our word-level experiments, we use the German subset of the Europarl (Koehn, 2005) as a small corpus, which is 382 MiB in size measuring the raw uncompressed text. We also evaluate on much larger corpora, training on 32GiB subsets of the deduplicated English, Spanish, German, and French Common Crawl corpus (Buck et al., 2014) . As test sets, we used newstest-2014 for all languages except Spanish, for which we used newstest-2013. benchmarking experiments we used the bottom 1M sentences (not used in training) of the German Comman Crawl corpus. We used the preprocessing script of Buck et al. (2014) , then removed sentences with \u2264 2 words, and replaced rare words 12 c \u2264 9 in the training data with a special token. In our characterlevel experiments, we used the training and test data of the benchmark 1-billion-words corpus (Chelba et al., 2013) .",
"cite_spans": [
{
"start": 72,
"end": 85,
"text": "(Koehn, 2005)",
"ref_id": "BIBREF17"
},
{
"start": 311,
"end": 330,
"text": "(Buck et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 587,
"end": 605,
"text": "Buck et al. (2014)",
"ref_id": "BIBREF2"
},
{
"start": 833,
"end": 854,
"text": "(Chelba et al., 2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Small data: German Europarl First, we compare the time and memory consumption of both the SRILM and KenLM toolkits, and the CST on the small German corpus. Figure 6 shows the memory usage for construction and querying for CST-based methods w/o precomputation is independent of m, but becomes substantially with m for the SRILM and KenLM benchmarks. To make our results comparable to those reported in (Shareghi et al., 2015) for query time measurements we reported the loading and query time combined. The construction cost is modest, requiring less memory than the benchmark systems for m \u2265 3, and running in a similar time 13 (despite our method supporting queries 12 Running with the full vocabulary increased the memory requirement by 40% for construction and 5% for querying with our model, and 10% and 30%, resp. for KenLM. Construction times for both approaches were 15% slower, but query runtime was 20% slower for our model versus 80% for KenLM.",
"cite_spans": [
{
"start": 401,
"end": 424,
"text": "(Shareghi et al., 2015)",
"ref_id": "BIBREF25"
},
{
"start": 667,
"end": 669,
"text": "12",
"ref_id": null
}
],
"ref_spans": [
{
"start": 156,
"end": 164,
"text": "Figure 6",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "13 For all timings reported in the paper we manually flushed the system cache between each operation (both for construction of unlimited size). Precomputation adds to the construction time, which rose from 173 to 299 seconds, but yielded speed improvements of several orders of magnitude for querying (218k to 98 seconds for 10gram). In querying, the CST-precompute method is 2-4\u00d7 slower than both SRILM and KenLM for large m \u2265 5, with the exception of m = 10 where it outperforms SRILM. A substantial fraction of the query time is loading the structures from disk; when this cost is excluded, our approach is between 8-13\u00d7 slower than the benchmark toolkits. Note that perplexity computed by the CST closely matched KenLM (differences \u2264 0.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Big Data: Common Crawl Table 2 reports the perplexity results for training on 32GiB subsets of the English, Spanish, French, and German Common Crawl corpus. Note that with such large datasets, perplexity improves with increasing m, with substantial gains available moving above the widely used m = 5. This highlights the importance of our approach being independent from m, in that we can evaluate for any m, including \u221e, at low cost.",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 30,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Heterogeneous Data To illustrate the effects of domain shift, corpus size and language model capacity on modelling accuracy, we now evaluate the system using a variety of different training corpora. Table 3 reports the perplexity for German when training over datasets ranging from the small Europarl up to 32GiB of the Common Crawl corpus. Note that the test set is from the same domain as the News and querying) to remove the effect of caching on runtime. To query KenLM, we used the speed optimised populate method. (We also compare the memory optimised lazy method in Figure 7. ) To train and query SRILM we used the default method which is optimised for speed, but had slightly worse memory usage than the compact method. m, roughly matching KenLM for m = 10. 15 For the 32GiB dataset, the CST model took 14 hours to build, compared to KenLM's 13.5 and 4 hours for the 10-gram and 5-gram models, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 572,
"end": 581,
"text": "Figure 7.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Query Cost. As shown in Figure 7b , the memory requirements for querying with the CST method were consistently lower than KenLM for m \u2265 4: for m = 10 the memory consumption of KenLM was 277GiB compared to our 27GiB, a 10\u00d7 improvement. This closely matches the file sizes of the stored models on disk. Figure 7d reports the query runtimes, showing that KenLM becomes substantially slower with increasing dataset size and increasing language model order. In contrast, the runtime of our CST approach is much less affected by data size or model order. Our approach is faster than KenLM with the memory optimised lazy option for m \u2265 3, often by several orders of magnitude. For the faster KenLM populate, our model is still highly competitive, growing to 4\u00d7 faster for the largest data size. 16 The loading time is still a significant part of the runtime; without this cost, our model is 5\u00d7 slower than KenLM populate for m = 10 on the largest dataset. Running our model with m = \u221e on the largest data size did not change the memory usage and only had a minor effect on runtime, taking 645s.",
"cite_spans": [
{
"start": 788,
"end": 790,
"text": "16",
"ref_id": null
}
],
"ref_spans": [
{
"start": 24,
"end": 33,
"text": "Figure 7b",
"ref_id": null
},
{
"start": 301,
"end": 310,
"text": "Figure 7d",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Character-level modelling To demonstrate the full potential of our approach, we now consider character based language modelling, evaluated on the large benchmark 1-billion-words language modelling corpus, a 3.9GiB (training) dataset with 768M words and 4 billion characters. 17 Table 4 shows the test perplexity results for our models, using the full training vocabulary. Note that perplexity improves with m for the character based model, but plateaus at m = 10 for the word based model; one reason for this is the limited discount computation,m \u2264 10, Figure 7 : Memory and runtime statistics for CST and KenLM for construction and querying with different amounts of German Common Crawl training data and different Markov orders, m. We compare the query runtimes against the optimised version of KenLM for memory (lazy) and speed (populate). For clarity, in the figure we only show CST numbers for m = 10; the results for other settings of m are very similar. KenLM was trained to match the construction memory requirements of the CST-precompute method. for the word model, which may not be a good parameterisation for m >m.",
"cite_spans": [
{
"start": 275,
"end": 277,
"text": "17",
"ref_id": null
}
],
"ref_spans": [
{
"start": 278,
"end": 285,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 553,
"end": 561,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Despite the character based model (implicitly) having a massive parameter space, estimating this model was tractable with our approach: the construction time was a modest 5 hours (and 2.3 hours for the word based model.) For the same dataset, Chelba et al. (2013) report that training a MKN 5gram model took 3 hours using a cluster of 100 CPUs; our algorithm is faster than this, despite only using a single CPU core. 18 Queries were also fast: 0.72-0.87ms and 15ms per sentence for word and character based models, respectively.",
"cite_spans": [
{
"start": 243,
"end": 263,
"text": "Chelba et al. (2013)",
"ref_id": "BIBREF4"
},
{
"start": 418,
"end": 420,
"text": "18",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We proposed a language model based on compressed suffix trees, a representation that is highly 18 Chelba et al. (2013) report a better perplexity of 67.6, but they pruned the training vocabulary, whereas we did not. Also we use a stringent treatment of OOV, following Heafield (2013) . compact and can be easily held in memory, while supporting queries needed in computing language model probabilities on the fly. We presented several optimisations to accelerate this process, with only a modest increase in construction time and memory usage, yet improving query runtimes up to 2500\u00d7. In benchmarking against the state-of-the-art KenLM package on large corpora, our method has superior memory usage and highly competitive runtimes for both querying and training. Our approach allows easy experimentation with high order language models, and our results provide evidence that such high orders are most useful when using large training sets.",
"cite_spans": [
{
"start": 95,
"end": 97,
"text": "18",
"ref_id": null
},
{
"start": 98,
"end": 118,
"text": "Chelba et al. (2013)",
"ref_id": "BIBREF4"
},
{
"start": 268,
"end": 283,
"text": "Heafield (2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "We posit that further perplexity gains can be realised using richer smoothing techniques, such as a non-parametric Bayesian prior (Teh, 2006; Wood et al., 2011) . Our ongoing work will explore this avenue, as well as integrating our language model into the Moses machine translation system, and improving the querying time by caching the lower order probabilities (e.g., m < 4) which we believe can improve query time substantially while maintaining a modest memory footprint.",
"cite_spans": [
{
"start": 130,
"end": 141,
"text": "(Teh, 2006;",
"ref_id": "BIBREF30"
},
{
"start": 142,
"end": 160,
"text": "Wood et al., 2011)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "SELECT(BWT,i,c) returns the position of the ith occurrence of symbol c in BWT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Each node can uniquely be identified by the order which it is visited in a DFS traversal of the suffix tree. This corresponds to the RANK of the opening parenthesis of the node in the balanced parenthesis representation of the tree topology of the CST which can be determined in O(1) time.5 We did not test other selection criteria. Other methods may be more effective, such as selecting nodes for precomputation based on the frequency of their corresponding patterns in the training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The function N1PFRONTBACK1 is defined as Algorithm 5 inShareghi et al. (2015).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Pauls and Klein (2011) propose a similar algorithm for triebased LMs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "N1PBACK1 and N1PFRONTBACK1 are defined inShareghi et al. (2015); see also \u00a73 for an overview.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Although the SA can be very large, we need not store it in memory. The DFS traversal in Algorithm 4 (lines 4-16) means that the calls to SA occur in increasing order of . Hence, we use on-disk storage for the SA with a small memory mapped buffer, thereby incurring a negligible memory overhead.10 It is possible to compute the discounts for all patterns of the text using our algorithm with complexity linear in the length of the text. However, the discounts appear to converge by pattern lengthm = 10. This limit also helps to avoid problems of wild fluctuations in discounts for very long patterns arising from noise for low count events.11 http://www.statmt.org/wmt{13,14}/test.tgz",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Using the memory budget option, -S. Note that KenLM often used more memory than specified. Allowing KenLM use of 80% of the available RAM reduced training time by a factor of between 2 and 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The CST method uses a single thread for construction, while KenLM uses several threads. Most stages of construction for our method could be easily parallelised.16 KenLM benefits significantly from caching which can occur between runs or as more queries are issued (from m-gram repetition in our large 1 million sentence test set), whereas the CST approach does not benefit noticeably (as it does not incorporate any caching functionality).17 http://www.statmt.org/lm-benchmark/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported by the Australian Research Council (FT130101105), National ICT Australia (NICTA) and a Google Faculty Research Award.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "Crawl, which explains the vast difference in perplexities. The domain effect is strong enough to eliminate the impact of using much larger corpora, compare 10-gram perplexities for training on the smaller News Crawl 2007 corpus versus Europarl. However 'big data' is still useful: in all cases the perplexity improves as we provide more data from the same source. Moreover, the magnitude of the gain in perplexity when increasing m is influenced by the data size: with more training data higher order m-grams provide richer models; therefore, the scalability of our method to large datasets is crucially important.Benchmarking against KenLM Next we compare our model against the state-of-the-art method, KenLM trie. The perplexity difference between CST and KenLM was less than 0.003 in all experiments.Construction Cost. Figure 7a compares the peak memory usage of our CST models and KenLM. KenLM is given a target memory usage of the peak usage of our CST models. 14 The construction phase for the CST required more time for lower order models (see Figure 7c ) but was comparable for larger",
"cite_spans": [],
"ref_spans": [
{
"start": 822,
"end": 831,
"text": "Figure 7a",
"ref_id": null
},
{
"start": 1051,
"end": 1060,
"text": "Figure 7c",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Large language models in machine translation",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Ashok",
"middle": [
"C"
],
"last": "Popat",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants, Ashok C Popat, Peng Xu, Franz J Och, and Jeffrey Dean. 2007. Large language models in machine translation. In Proceedings of the Joint Conference on Empirical Methods in Natural Lan- guage Processing and Computational Natural Lan- guage Learning.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Directly addressable variable-length codes",
"authors": [
{
"first": "Nieves",
"middle": [
"R"
],
"last": "Brisaboa",
"suffix": ""
},
{
"first": "Susana",
"middle": [],
"last": "Ladra",
"suffix": ""
},
{
"first": "Gonzalo",
"middle": [],
"last": "Navarro",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the International Symposium on String Processing and Information Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nieves R Brisaboa, Susana Ladra, and Gonzalo Navarro. 2009. Directly addressable variable-length codes. In Proceedings of the International Symposium on String Processing and Information Retrieval.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "N-gram counts and language models from the common crawl",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Buck",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Bas",
"middle": [],
"last": "Van Ooyen",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Buck, Kenneth Heafield, and Bas van Ooyen. 2014. N-gram counts and language models from the common crawl. In Proceedings of the Language Re- sources and Evaluation Conference.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A block sorting lossless data compression algorithm",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Burrows",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Wheeler",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Burrows and David Wheeler. 1994. A block sorting lossless data compression algorithm. Techni- cal Report 124, Digital Equipment Corporation Sys- tems Research Center.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "One billion word benchmark for measuring progress in statistical language modeling",
"authors": [
{
"first": "Ciprian",
"middle": [],
"last": "Chelba",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Phillipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Robinson",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1312.3005"
]
},
"num": null,
"urls": [],
"raw_text": "Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robin- son. 2013. One billion word benchmark for measur- ing progress in statistical language modeling. arXiv preprint arXiv:1312.3005.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An empirical study of smoothing techniques for language modeling",
"authors": [
{
"first": "Stanley",
"middle": [
"F"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1999,
"venue": "Computer Speech & Language",
"volume": "13",
"issue": "4",
"pages": "359--393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanley F Chen and Joshua Goodman. 1999. An empir- ical study of smoothing techniques for language mod- eling. Computer Speech & Language, 13(4):359-393.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Compressing trigram language models with Golomb coding",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Hart",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Church, Ted Hart, and Jianfeng Gao. 2007. Compressing trigram language models with Golomb coding. In Proceedings of the Conference on Empiri- cal Methods in Natural Language Processing.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Opportunistic data structures with applications",
"authors": [
{
"first": "Paolo",
"middle": [],
"last": "Ferragina",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Manzini",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Annual Symposium on Foundations of Computer Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paolo Ferragina and Giovanni Manzini. 2000. Oppor- tunistic data structures with applications. In Proceed- ings of the Annual Symposium on Foundations of Com- puter Science.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Tightly packed tries: How to fit large models into memory, and make them load fast, too",
"authors": [
{
"first": "Ulrich",
"middle": [],
"last": "Germann",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Joanis",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Larkin",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Software Engineering, Testing, and Quality Assurance for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ulrich Germann, Eric Joanis, and Samuel Larkin. 2009. Tightly packed tries: How to fit large models into memory, and make them load fast, too. In Proceedings of the Workshop on Software Engineering, Testing, and Quality Assurance for Natural Language Processing.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "From theory to practice: Plug and play with succinct data structures",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Gog",
"suffix": ""
},
{
"first": "Timo",
"middle": [],
"last": "Beller",
"suffix": ""
},
{
"first": "Alistair",
"middle": [],
"last": "Moffat",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Petri",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the International Symposium on Experimental Algorithms",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Gog, Timo Beller, Alistair Moffat, and Matthias Petri. 2014. From theory to practice: Plug and play with succinct data structures. In Proceedings of the In- ternational Symposium on Experimental Algorithms.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "High-order entropy-compressed text indexes",
"authors": [
{
"first": "R",
"middle": [],
"last": "Grossi",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "J",
"middle": [
"S"
],
"last": "Vitter",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the ACM-SIAM symposium on Discrete algorithms",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Grossi, A. Gupta, and J. S. Vitter. 2003. High-order entropy-compressed text indexes. In Proceedings of the ACM-SIAM symposium on Discrete algorithms.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Storing the web in memory: Space efficient language models with constant time retrieval",
"authors": [
{
"first": "David",
"middle": [],
"last": "Guthrie",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Hepple",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Guthrie and Mark Hepple. 2010. Storing the web in memory: Space efficient language models with con- stant time retrieval. In Proceedings of the Conference on Empirical Methods in Natural Language Process- ing.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Scalable modified Kneser-Ney language model estimation",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Pouzyrevsky",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"H"
],
"last": "Clark",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser- Ney language model estimation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "KenLM: Faster and smaller language model queries",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Heafield. 2011. KenLM: Faster and smaller lan- guage model queries. In Proceedings of the Workshop on Statistical Machine Translation.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Efficient Language Modeling Algorithms with Applications to Statistical Machine Translation",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Heafield. 2013. Efficient Language Modeling Algorithms with Applications to Statistical Machine Translation. Ph.D. thesis, Carnegie Mellon University.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Space-efficient static trees and graphs",
"authors": [
{
"first": "Guy",
"middle": [],
"last": "Jacobson",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the Annual Symposium on Foundations of Computer Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guy Jacobson. 1989. Space-efficient static trees and graphs. In Proceedings of the Annual Symposium on Foundations of Computer Science.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Suffix trees as language models",
"authors": [
{
"first": "Casey",
"middle": [],
"last": "Redd Kennington",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Annemarie",
"middle": [],
"last": "Friedrich",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Casey Redd Kennington, Martin Kay, and Annemarie Friedrich. 2012. Suffix trees as language models. In Proceedings of the Conference on Language Re- sources and Evaluation.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Europarl: A parallel corpus for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Machine Translation summit",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of the Machine Translation summit.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Streambased randomised language models for SMT",
"authors": [
{
"first": "Abby",
"middle": [],
"last": "Levenberg",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abby Levenberg and Miles Osborne. 2009. Stream- based randomised language models for SMT. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Suffix arrays: A new method for on-line string searches",
"authors": [
{
"first": "Udi",
"middle": [],
"last": "Manber",
"suffix": ""
},
{
"first": "Eugene",
"middle": [
"W"
],
"last": "Myers",
"suffix": ""
}
],
"year": 1993,
"venue": "SIAM Journal on Computing",
"volume": "22",
"issue": "5",
"pages": "935--948",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Udi Manber and Eugene W. Myers. 1993. Suffix arrays: A new method for on-line string searches. SIAM Jour- nal on Computing, 22(5):935-948.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Wavelet trees for all",
"authors": [
{
"first": "Gonzalo",
"middle": [],
"last": "Navarro",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Discrete Algorithms",
"volume": "25",
"issue": "",
"pages": "2--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gonzalo Navarro. 2014. Wavelet trees for all. Journal of Discrete Algorithms, 25:2-20.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Proceedings of the International Symposium on String Processing and Information Retrieval",
"authors": [
{
"first": "Enno",
"middle": [],
"last": "Ohlebusch",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Fischer",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Gog",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Enno Ohlebusch, Johannes Fischer, and Simon Gog. 2010. CST++. In Proceedings of the International Symposium on String Processing and Information Re- trieval.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Faster and smaller ngram language models",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Pauls",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Pauls and Dan Klein. 2011. Faster and smaller n- gram language models. In Proceedings of the Annual Meeting of the Association for Computational Linguis- tics: Human Language Technologies.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Succinct indexable dictionaries with applications to encoding k-ary trees and multisets",
"authors": [
{
"first": "Rajeev",
"middle": [],
"last": "Raman",
"suffix": ""
},
{
"first": "Venkatesh",
"middle": [],
"last": "Raman",
"suffix": ""
},
{
"first": "S Srinivasa",
"middle": [],
"last": "Rao",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the thirteenth annual ACM-SIAM Symposium on Discrete algorithms",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajeev Raman, Venkatesh Raman, and S Srinivasa Rao. 2002. Succinct indexable dictionaries with applica- tions to encoding k-ary trees and multisets. In Pro- ceedings of the thirteenth annual ACM-SIAM Sympo- sium on Discrete algorithms.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Bidirectional search in a string with wavelet trees",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Schnattinger",
"suffix": ""
},
{
"first": "Enno",
"middle": [],
"last": "Ohlebusch",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Gog",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Annual Symposium on Combinatorial Pattern Matching",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Schnattinger, Enno Ohlebusch, and Simon Gog. 2010. Bidirectional search in a string with wavelet trees. In Proceedings of the Annual Symposium on Combinatorial Pattern Matching.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Compact, efficient and unlimited capacity: Language modeling with compressed suffix trees",
"authors": [
{
"first": "Ehsan",
"middle": [],
"last": "Shareghi",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Petri",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehsan Shareghi, Matthias Petri, Gholamreza Haffari, and Trevor Cohn. 2015. Compact, efficient and unlimited capacity: Language modeling with compressed suffix trees. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Unary data structures for language models",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Allauzen",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of IN-TERSPEECH",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Sorensen and Cyril Allauzen. 2011. Unary data structures for language models. In Proceedings of IN- TERSPEECH.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "SRILM at sixteen: Update and outlook",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Wen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Abrash",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of IEEE Automatic Speech Recognition and Understanding Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke, Jing Zheng, Wen Wang, and Victor Abrash. 2011. SRILM at sixteen: Update and outlook. In Proceedings of IEEE Automatic Speech Recognition and Understanding Workshop.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "SRILM-an extensible language modeling toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the International Conference of Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 2002. SRILM-an extensible language modeling toolkit. In Proceedings of the International Conference of Spoken Language Processing.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Randomised language modelling for statistical machine translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Talbot",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Talbot and Miles Osborne. 2007. Randomised language modelling for statistical machine translation. In Proceedings of the Annual Meeting of the Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A hierarchical Bayesian language model based on Pitman-Yor processes",
"authors": [
{
"first": "Yee Whye",
"middle": [],
"last": "Teh",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yee Whye Teh. 2006. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proceedings of the Annual Meeting of the Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A succinct n-gram language model",
"authors": [
{
"first": "Taro",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taro Watanabe, Hajime Tsukada, and Hideki Isozaki. 2009. A succinct n-gram language model. In Pro- ceedings of the Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Linear pattern matching algorithms",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Weiner",
"suffix": ""
}
],
"year": 1973,
"venue": "Proceedings of the Annual Symposium Switching and Automata Theory",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Weiner. 1973. Linear pattern matching algorithms. In Proceedings of the Annual Symposium Switching and Automata Theory.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "The sequence memoizer",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Wood",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Gasthaus",
"suffix": ""
},
{
"first": "C\u00e9dric",
"middle": [],
"last": "Archambeau",
"suffix": ""
},
{
"first": "Lancelot",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "Yee Whye",
"middle": [],
"last": "Teh",
"suffix": ""
}
],
"year": 2011,
"venue": "Communications of the ACM",
"volume": "54",
"issue": "2",
"pages": "91--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Wood, Jan Gasthaus, C\u00e9dric Archambeau, Lancelot James, and Yee Whye Teh. 2011. The sequence memoizer. Communications of the ACM, 54(2):91-98.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Suffix array and its applications in empirical natural language processing",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Zhang and Stephan Vogel. 2006. Suffix array and its applications in empirical natural language process- ing. Technical report, CMU, Pittsburgh PA.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "where C[c] refers to the starting position of all suffixes prefixed by c in SA and RANK(BWT, sp j , c) determines the number of occurrences of symbol c in BWT[0, sp j ]."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "(a) Character-Level Suffix Tree, Suffix Array (SA), and Burrows-Wheeler Transform (BWT) for \"#abcabcabdbbc#$\" (as formulated in eq. 1). (b) The Wavelet Tree of BWT and the RANK(BWT, 8, a). The ordered alphabet symbols and their code words are {$:000, #:001, a:01, b:100, c:101, d:11}, and symbols \"#\" and \"$\" are to mark sentence and file boundaries. The red bounding boxes and digits signify the path for computing RANK(BWT, 8, a)."
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Time breakdown for querying average persentence, shown without runtime precomputation of expensive contextual counts (above) vs. with precomputation (below). The left and right bar in each group denote KN and MKN, respectively. Trained on the German portion of the Europarl corpus and tested over the first 10K sentences from the News Commentary corpus."
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Fig- ure 2 corresponds to enumerating over its three children. Two of v's children are leaf nodes {10, 8}, and one child has three leaf descendants {11, 2, 5}, hence N 1 and N 2 are 2 and 0 respectively, and N 3+ is 1. Further, consider computing N {1,2,3+} (b \u2022 ) in"
},
"FIGREF6": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Left: Distribution of values prestored for Europarl German; Right: Space usage of prestored values relative to total index size for Europarl German for different storage thresholds (m)."
},
"FIGREF7": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Memory consumption and total runtime for the CST with and without precomputation, KenLM (trie), and SRILM (default) with m \u2208 [2, 10], while we also include m = \u221e for CST methods. Trained on the Europarl German corpus and tested over the bottom 1M sentences from the German Common Crawl corpus."
},
"FIGREF8": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "size (M) perplexity tokens m = 2 m = 3 m = 5 m = 7 m = 10 m = \u221e"
},
"FIGREF9": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "unit time (s) mem (GiB) m = 5 m = 10 m = 20 m = \u221e"
},
"TABREF0": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>illustrates the</td></tr></table>",
"text": ""
},
"TABREF1": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "The main quantities required for computing P (with|Force, is, strong) under MKN."
},
"TABREF2": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "EN 6470 321.6 183.8 154.3 152.7 152.5 152.3 ES 6276 231.3 133.2 111.7 109.7 109.3 109.2 FR 6100 215.8 109.2 85.2 83.1 82.6 82.4 DE 5540 588.3 336.6 292.8 288.1 287.8 287.8"
},
"TABREF3": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Perplexities on English, French, German newstests 2014, and Spanish newstest 2013 when trained on 32GiB chunks of English, Spanish, French, and German Common Crawl corpus."
},
"TABREF6": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Perplexity results for the 1 billion word benchmark corpus, showing word based and character based MKN models, for different m. Timings and peak memory usage are reported for construction. The word model computed discounts and precomputed counts up tom,m = 10, while the character model used thresholds m,m = 50. Timings measured on a single core."
}
}
}
}