{
"paper_id": "D07-1021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:19:29.904022Z"
},
"title": "Compressing Trigram Language Models With Golomb Coding",
"authors": [
{
"first": "Ken",
"middle": [],
"last": "Church",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft One Microsoft Way Redmond",
"location": {
"region": "WA",
"country": "USA"
}
},
"email": "church@microsoft.com"
},
{
"first": "Ted",
"middle": [],
"last": "Hart",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft",
"location": {
"addrLine": "One Microsoft Way Redmond",
"region": "WA",
"country": "USA"
}
},
"email": "tedhar@microsoft.com"
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft",
"location": {
"addrLine": "One Microsoft Way Redmond",
"region": "WA",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Trigram language models are compressed using a Golomb coding method inspired by the original Unix spell program. Compression methods trade off space, time and accuracy (loss). The proposed HashTBO method optimizes space at the expense of time and accuracy. Trigram language models are normally considered memory hogs, but with HashTBO, it is possible to squeeze a trigram language model into a few megabytes or less. HashTBO made it possible to ship a trigram contextual speller in Microsoft Office 2007.",
"pdf_parse": {
"paper_id": "D07-1021",
"_pdf_hash": "",
"abstract": [
{
"text": "Trigram language models are compressed using a Golomb coding method inspired by the original Unix spell program. Compression methods trade off space, time and accuracy (loss). The proposed HashTBO method optimizes space at the expense of time and accuracy. Trigram language models are normally considered memory hogs, but with HashTBO, it is possible to squeeze a trigram language model into a few megabytes or less. HashTBO made it possible to ship a trigram contextual speller in Microsoft Office 2007.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper will describe two methods of compressing trigram language models: HashTBO and ZipTBO. ZipTBO is a baseline compression method that is commonly used in many applications such as the Microsoft IME (Input Method Editor) systems that convert Pinyin to Chinese and Kana to Japanese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Trigram language models have been so successful that they are beginning to be rolled out to applications with millions and millions of users: speech recognition, handwriting recognition, spelling correction, IME, machine translation and more. The EMNLP community should be excited to see their technology having so much influence and visibility with so many people. Walter Mossberg of the Wall Street Journal called out the contextual speller (the blue squiggles) as one of the most notable features in Office 2007:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are other nice additions. In Word, Outlook and PowerPoint, there is now contextual spell checking, which points to a wrong word, even if the spelling is in the dictionary. For example, if you type \"their\" instead of \"they're,\" Office catches the mistake. It really works. 1 The use of contextual language models in spelling correction has been discussed elsewhere: (Church and Gale, 1991) , (Mays et al, 1991) , (Kukich, 1992) and (Golding and Schabes, 1996) . This paper will focus on how to deploy such methods to millions and millions of users. Depending on the particular application and requirements, we need to make different tradeoffs among:",
"cite_spans": [
{
"start": 278,
"end": 279,
"text": "1",
"ref_id": null
},
{
"start": 371,
"end": 394,
"text": "(Church and Gale, 1991)",
"ref_id": "BIBREF1"
},
{
"start": 397,
"end": 415,
"text": "(Mays et al, 1991)",
"ref_id": "BIBREF9"
},
{
"start": 418,
"end": 432,
"text": "(Kukich, 1992)",
"ref_id": "BIBREF11"
},
{
"start": 437,
"end": 464,
"text": "(Golding and Schabes, 1996)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Space (for compressed language model), 2. Runtime (for n-gram lookup), and 3. Accuracy (losses for n-gram estimates). HashTBO optimizes space at the expense of the other two. We recommend HashTBO when space concerns dominate the other concerns; otherwise, use ZipTBO.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are many applications where space is extremely tight, especially on cell phones. HashTBO was developed for contextual spelling in Microsoft Office 2007, where space was the key challenge. The contextual speller probably would not have shipped without HashTBO compression.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We normally think of trigram language models as memory hogs, but with HashTBO, a few megabytes are more than enough to do interesting things with trigrams. Of course, more memory is always better, but it is surprising how much can be done with so little.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For English, the Office contextual speller started with a predefined vocabulary of 311k word types and a corpus of 6 billion word tokens. (About a third of the words in the vocabulary do not appear in the corpus.) The vocabularies for other languages tend to be larger, and the corpora tend to be smaller. Initially, the trigram language model is very large. We prune out small counts (8 or less) to produce a starting point of 51 million trigrams, 14 million bigrams and 311k unigrams (for English). With extreme Stolcke, we cut the 51+14+0.3 million n-grams down to a couple million. Using a Golomb code, each n-gram consumes about 3 bytes on average.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With so much Stolcke pruning and lossy compression, there will be losses in precision and recall. Our evaluation finds, not surprisingly, that compression matters most when space is tight. Although HashTBO outperforms ZipTBO on the spelling task over a wide range of memory sizes, the difference in recall (at 80% precision) is most noticeable at the low end (under 10MBs), and least noticeable at the high end (over 100 MBs). When there is plenty of memory (100+ MBs), the difference vanishes, as both methods asymptote to the upper bound (the performance of an uncompressed trigram language model with unlimited memory).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Both methods start with a TBO (trigrams with backoff) LM (language model) in the standard ARPA format. The ARPA format is used by many toolkits such as the CMU-Cambridge Statistical Language Modeling Toolkit. 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "No matter how much data we have, we never have enough. Nothing has zero probability. We will see n-grams in the test set that did not appear in the training set. To deal with this reality, Katz (1987) proposed backing off from trigrams to bigrams (and from bigrams to unigrams) when we don't have enough training data.",
"cite_spans": [
{
"start": 189,
"end": 200,
"text": "Katz (1987)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Katz Backoff",
"sec_num": "2.1"
},
{
"text": "Backoff doesn't have to do much for trigrams that were observed during training. In that case, the backoff estimate of ( | \u22122 \u22121 ) is simply a discounted probability ( | \u22122 \u22121 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Katz Backoff",
"sec_num": "2.1"
},
{
"text": "The discounted probabilities steal from the rich and give to the poor. They take some probability mass from the rich n-grams that have been seen in training and give it to poor unseen n-grams that 2 http://www.speech.cs.cmu.edu/SLM might appear in test. There are many ways to discount probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Katz Backoff",
"sec_num": "2.1"
},
{
"text": "Katz used Good-Turing smoothing, but other smoothing methods such as Kneser-Ney are more popular today.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Katz Backoff",
"sec_num": "2.1"
},
{
"text": "Backoff is more interesting for unseen trigrams. In that case, the backoff estimate is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Katz Backoff",
"sec_num": "2.1"
},
{
"text": "\u22122 \u22121 ( | \u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Katz Backoff",
"sec_num": "2.1"
},
{
"text": "The backoff alphas (\u03b1) are a normalization factor that accounts for the discounted mass. That is,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Katz Backoff",
"sec_num": "2.1"
},
{
"text": "\u22122 \u22121 = 1 \u2212 ( | \u22122 \u22121 ) : ( \u22122 \u22121 ) 1 \u2212 ( | \u22121 ) : ( \u22122 \u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Katz Backoff",
"sec_num": "2.1"
},
{
"text": "where \u22122 \u22121 > 0 simply says that the trigram was seen in training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Katz Backoff",
"sec_num": "2.1"
},
{
"text": "Both ZipTBO and HashTBO start with Stolcke pruning (1998). 3 We will refer to the trigram language model after backoff and pruning as a pruned TBO LM.",
"cite_spans": [
{
"start": 59,
"end": 60,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stolcke Pruning",
"sec_num": "3"
},
{
"text": "Stolcke pruning looks for n-grams that would receive nearly the same estimates via Katz backoff if they were removed. In a practical system, there will never be enough memory to explicitly materialize all n-grams that we encounter during training. In this work, we need to compress a large set of n-grams (that appear in a large corpus of 6 billion words) down to a relatively small language model of just a couple of megabytes. We prune as much as necessary to make the model fit into the memory allocation (after subsequent Hash-TBO/ZipTBO compression).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stolcke Pruning",
"sec_num": "3"
},
{
"text": "Pruning saves space by removing n-grams subject to a loss consideration:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stolcke Pruning",
"sec_num": "3"
},
{
"text": "1. Select a threshold \uf071. Stolcke pruning uses a loss function based on relative entropy. Formally, let P denote the trigram probabilities assigned by the original unpruned model, and let P' denote the probabilities in the pruned model. Then the relative entropy D(P||P') between the two models is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stolcke Pruning",
"sec_num": "3"
},
{
"text": "\u2212 , \u210e [log \u2032 \u210e \u2212 log ( , \u210e)] ,\u210e",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stolcke Pruning",
"sec_num": "3"
},
{
"text": "where h is the history. For trigrams, the history is the previous two words. Stolcke showed that this reduces to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stolcke Pruning",
"sec_num": "3"
},
{
"text": "\u2212 \u210e { ( |\u210e) [log \u210e \u2032 + log \u2032 \u210e \u2212 log ( |\u210e)] +[log \u2032 (\u210e) \u2212 log (\u210e)] \u210e : \u210e, >0 }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stolcke Pruning",
"sec_num": "3"
},
{
"text": "where \u2032 (\u210e) is the revised backoff weight after pruning and h' is the revised history after dropping the first word. The summation is over all the trigrams that were seen in training: \u210e, > 0. Stolcke pruning will remove n-grams as necessary, minimizing this loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stolcke Pruning",
"sec_num": "3"
},
{
"text": "After Stolcke pruning, we apply additional compression (either ZipTBO or HashTBO). ZipTBO uses a fairly straightforward data structure, which introduces relatively few additional losses on top of the pruned TBO model. A few small losses are introduced by quantizing the log likelihoods and the backoff alphas, but those losses probably don't matter much. More serious losses are introduced by restricting the vocabulary size, V, to the 64k most-frequent words. It is convenient to use byte aligned pointers. The actual vocabulary of more than 300,000 words for English (and more for other languages) would require 19-bit pointers (or more) without pruning. Byte operations are faster than bit operations. There are other implementations of ZipTBO that make different tradeoffs, and allow for larger V without pruning losses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compression on Top of Pruning",
"sec_num": "3.1"
},
{
"text": "HashTBO is more heroic. It uses a method inspired by McIlroy (1982) in the original Unix Spell Program, which squeezed a word list of N=32,000 words into a PDP-11 address space (64k bytes). That was just 2 bytes per word! HashTBO uses similar methods to compress a couple million n-grams into half a dozen mega-bytes, or about 3 bytes per n-gram on average (including log likelihoods and alphas for backing off). ZipTBO is faster, but takes more space (about 4 bytes per n-gram on average, as opposed to 3 bytes per n-gram). Given a fixed memory budget, ZipTBO has to make up the difference with more aggressive Stolcke pruning. More pruning leads to larger losses, as we will see, for the spelling application.",
"cite_spans": [
{
"start": 53,
"end": 67,
"text": "McIlroy (1982)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compression on Top of Pruning",
"sec_num": "3.1"
},
{
"text": "Losses will be reported in terms of performance on the spelling task. It would be nice if losses could be reported in terms of cross entropy, but the values output by the compressed language models cannot be interpreted as probabilities due to quantization losses and other compression losses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compression on Top of Pruning",
"sec_num": "3.1"
},
{
"text": "McIlroy's spell program started with a hash table. Normally, we store the clear text in the hash table, but he didn't have space for that, so he didn't. Hash collisions introduce losses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "McIlroy's Spell Program",
"sec_num": "4"
},
{
"text": "McIlroy then sorted the hash codes and stored just the interarrivals of the hash codes instead of the hash codes themselves. If the hash codes, h, are distributed by a Poisson process, then the interarrivals, t, are exponentially distributed:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "McIlroy's Spell Program",
"sec_num": "4"
},
{
"text": "Pr = \u2212 , where = .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "McIlroy's Spell Program",
"sec_num": "4"
},
{
"text": "Recall that the dictionary contains N=32,000 words. P is the one free parameter, the range of the hash function. McIlroy hashed words into a large integer mod P, where P is a large prime that trades off space and accuracy. Increasing P consumes more space, but also reduces losses (hash collisions).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "McIlroy's Spell Program",
"sec_num": "4"
},
{
"text": "McIlroy used a Golomb (1966) code to store the interarrivals. A Golomb code is an optimal Huffman code for an infinite alphabet of symbols with exponential probabilities.",
"cite_spans": [
{
"start": 15,
"end": 28,
"text": "Golomb (1966)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "McIlroy's Spell Program",
"sec_num": "4"
},
{
"text": "The space requirement (in bits per lexical entry) is close to the entropy of the exponential.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "McIlroy's Spell Program",
"sec_num": "4"
},
{
"text": "H = \u2212 \u03a3_{t=0}^{\u221e} Pr(t) log_2 Pr(t) \u2248 1/ln 2 + \u2308log_2 (1/\u03bb)\u2309",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "McIlroy's Spell Program",
"sec_num": "4"
},
{
"text": "The ceiling operator \u2308 \u2309 is introduced because Huffman codes use an integer number of bits to encode each symbol.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "= \u2212",
"sec_num": null
},
{
"text": "We could get rid of the ceiling operation if we replaced the Huffman code with an Arithmetic code, but it is probably not worth the effort.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "= \u2212",
"sec_num": null
},
{
"text": "Lookup time is relatively slow. Technically, lookup time is O(N), because one has to start at the beginning and add up the interarrivals to reconstruct the hash codes. McIlroy actually introduced a small table on the side with hash codes and offsets so one could seek to these offsets and avoid starting at the beginning every time. Even so, our experiments will show that HashTBO is an order of magnitude slower than ZipTBO.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "= \u2212",
"sec_num": null
},
{
"text": "Accuracy is also an issue. Fortunately, we don't have a problem with dropouts. If a word is in the dictionary, we aren't going to misplace it. But two words in the dictionary could hash to the same value. In addition, a word that is not in the dictionary could hash to the same value as a word that is in the dictionary. For McIlroy's application (detecting spelling errors), the only concern is the last possibility. McIlroy did what he could do to mitigate false positive errors by increasing P as much as he could, subject to the memory constraint (the PDP-11 address space of 64k bytes).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "= \u2212",
"sec_num": null
},
{
"text": "We recommend these heroics when space dominates other concerns (time and accuracy).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "= \u2212",
"sec_num": null
},
{
"text": "Golomb coding takes advantage of the sparseness in the interarrivals between hash codes. Let's start with a simple recipe. Let t be an interarrival. We will decompose t into a pair of a quotient (t q ) and a remainder (t r ). That is, let = + where = \u230a / \u230b and = mod . We choose m to be a power of two near \u2248 2 = 2 , where E[t] is the expected value of the interarrivals, defined below. Store t q in unary and t r in binary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Golomb Coding",
"sec_num": "5"
},
{
"text": "Binary codes are standard, but unary is not. To encode a number z in unary, simply write out a sequence of z-1 zeros followed by a 1. Thus, it takes z bits to encode the number z in unary, as opposed to log 2 bits in binary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Golomb Coding",
"sec_num": "5"
},
{
"text": "This recipe consumes + log 2 bits. The first term is for the unary piece and the second term is for the binary piece.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Golomb Coding",
"sec_num": "5"
},
{
"text": "Why does this recipe make sense? As mentioned above, a Golomb code is a Huffman code for an infinite alphabet with exponential probabilities. We illustrate Huffman codes for infinite alphabets by starting with a simple example of a small (very finite) alphabet with just three symbols: {a, b, c}. Assume that half of the time, we see a, and the rest of the time we see b or c, with equal probabilities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Golomb Coding",
"sec_num": "5"
},
{
"text": "Code Length Pr",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symbol",
"sec_num": null
},
{
"text": "A 0 1 50% B 10 2 25% C 11 2 25%",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symbol",
"sec_num": null
},
{
"text": "The Huffman code in the table above can be read off the binary tree below. We write out a 0 whenever we take a left branch and a 1 whenever we take a right branch. The Huffman tree is constructed so that the two branches are equally likely (or at least as close as possible to equally likely).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symbol",
"sec_num": null
},
{
"text": "Now, let's consider an infinite alphabet where Pr = 1 2 , Pr = 1 4 and the probability of the t+1 st symbol is Pr = (1 \u2212 ) where = 1 2 . In this case, we have the following code, which is simply t in unary. That is, we write out 1 \uf02d t zeros followed by a 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symbol",
"sec_num": null
},
{
"text": "Length Pr",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symbol Code",
"sec_num": null
},
{
"text": "A 1 1 2 \u22121 B 01 2 2 \u22122 C 001 3 2 \u22123",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symbol Code",
"sec_num": null
},
{
"text": "The Huffman code reduces to unary when the Huffman tree is left branching:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symbol Code",
"sec_num": null
},
{
"text": "In general, \u03b2 need not be \u00bd. Without loss of generality, assume Pr = 1 \u2212 where 1 2 \u2264 < 1 and \u2265 0. \u03b2 depends on E[t], the expected value of the interarrivals:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symbol Code",
"sec_num": null
},
{
"text": "= = 1 \u2212 \u21d2 = 1 +",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symbol Code",
"sec_num": null
},
{
"text": "Recall that the recipe above calls for expressing t as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symbol Code",
"sec_num": null
},
{
"text": "\u2022 + where = \u230a \u230b and = mod . We encode t q in unary and t r in binary. (The binary piece consumes log 2 bits, since t r ranges from 0 to m.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symbol Code",
"sec_num": null
},
{
"text": "How do we pick m? For convenience, let m be a power of 2. The unary encoding makes sense as a Huffman code if \u2248 1 2 . Thus, a reasonable choice 4 is \u2248 2 . If",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symbol Code",
"sec_num": null
},
{
"text": "= 1+ , then = 1+ \u2248 1 \u2212 . Set- ting \u2248 1 2 , means \u2248 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symbol Code",
"sec_num": null
},
{
"text": "The HashTBO format is basically the same as McIlroy's format, except that McIlroy was storing words and we are storing n-grams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HashTBO Format",
"sec_num": "6"
},
{
"text": "One could store all of the n-grams in a single table, though we actually store unigrams in a separate table. An ngram is represented as a key of n integers (offsets into the vocabulary) and two values, a log likelihood and, if appropriate, an alpha for backing off. We'll address the keys first.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HashTBO Format",
"sec_num": "6"
},
{
"text": "Trigrams consist of three integers (offsets into the Vocabulary): 1 2 3 . These three integers are mapped into a single hash between 0 and \u2212 1 in the obvious way:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HashTBO Keys",
"sec_num": "6.1"
},
{
"text": "\u210e \u210e = 3 0 + 2 1 + 1 2 mod",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HashTBO Keys",
"sec_num": "6.1"
},
{
"text": "where V is vocabulary size. Bigrams are hashed the same way, except that the vocabulary is padded with an extra symbol for NA (not applicable). In the bigram case, 3 is NA. We then follow a simple recipe for bigrams and trigrams:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HashTBO Keys",
"sec_num": "6.1"
},
{
"text": "1. Stolcke prune appropriately 2. Let N be the number of n-grams 3. Choose an appropriate P (hash range) 4. Hash the N n-grams 5. Sort the hash codes 6. Take the first differences (which are modeled as interarrivals of a Poisson process) 7. Golomb code the first differences",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HashTBO Keys",
"sec_num": "6.1"
},
{
"text": "We did not use this method for unigrams, since we assumed (perhaps incorrectly) that we will have explicit likelihoods for most of them and therefore there is little opportunity to take advantage of sparseness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HashTBO Keys",
"sec_num": "6.1"
},
{
"text": "Most of the recipe can be fully automated with a turnkey process, but two steps require appropriate hand intervention to meet the memory allocation for a particular application:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HashTBO Keys",
"sec_num": "6.1"
},
{
"text": "1. Stolcke prune appropriately, and 2. Choose an appropriate P Ideally, we'd like to do as little pruning as possible and we'd like to use as large a P as possible, subject to the memory allocation. We don't have a principled argument for how to balance Stolcke pruning losses with hashing losses; this can be arrived at empirically on an application-specific basis. For example, to fix the storage per n-gram at around 13 bits:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HashTBO Keys",
"sec_num": "6.1"
},
{
"text": "13 = 1 log 2 + log 2 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HashTBO Keys",
"sec_num": "6.1"
},
{
"text": "If we solve for \u03bb, we obtain 0000 , 20 / 1 \uf0bb \uf06c . In other words, set P to a prime near N 000 , 20 and then do as much Stolcke pruning as necessary to meet the memory constraint. Then measure your application's accuracy, and adjust accordingly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HashTBO Keys",
"sec_num": "6.1"
},
{
"text": "There are N log likelihood values, one for each key. These N values are quantized into a small number of distinct bins. They are written out as a sequence of N Huffman codes. If there are Katz backoff alphas, then they are also written out as a sequence of N Huffman codes. (Unigrams and bigrams have alphas, but trigrams don't.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HashTBO Values and Alphas",
"sec_num": "6.2"
},
{
"text": "The lookup process is given an n-gram, \u22122 \u22121 , and is asked to estimate a log likelihood, log Pr \u22122 \u22121 ) . Using the standard backoff model, this depends on the likelihoods for the unigrams, bigrams and trigrams, as well as the alphas.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HashTBO Lookup",
"sec_num": "6.3"
},
{
"text": "The lookup routine not only determines if the ngram is in the table, but also determines the offset within that table. Using that offset, we can find the appropriate log likelihood and alpha. Side tables are maintained to speed up random access.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HashTBO Lookup",
"sec_num": "6.3"
},
{
"text": "ZipTBO is a well-established representation of trigrams. Detailed descriptions can be found in (Clarkson and Rosenfeld 1997; Whittaker and Raj 2001) .",
"cite_spans": [
{
"start": 95,
"end": 124,
"text": "(Clarkson and Rosenfeld 1997;",
"ref_id": null
},
{
"start": 125,
"end": 148,
"text": "Whittaker and Raj 2001)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ZipTBO Format",
"sec_num": "7"
},
{
"text": "ZipTBO consumes 8 bytes per unigram, 5 bytes per bigram and 2.5 bytes per trigram. In practice, this comes to about 4 bytes per n-gram on average.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ZipTBO Format",
"sec_num": "7"
},
{
"text": "Note that there are some important interactions between ZipTBO and Stolcke pruning. ZipTBO is relatively efficient for trigrams, compared to bigrams. Unfortunately, aggressive Stolcke pruning generates bigram-heavy models, which don't compress well with ZipTBO. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ZipTBO Format",
"sec_num": "7"
},
{
"text": "probs & weights bounds BIGRAM ids probs & weights W[i-2]w[i-1] W[i-2]w[i-1]w[i]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ZipTBO Format",
"sec_num": "7"
},
{
"text": "The tree structure of the trigram model is implemented using three arrays. As shown in Figure 1 , from left to right, the first array (called unigram array) stores unigram nodes, each of which branches out into bigram nodes in the second array (bigram array). Each bigram node then branches out into trigram nodes in the third array (trigram array).",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 96,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "ZipTBO Keys",
"sec_num": "7.1"
},
{
"text": "The length of the unigram array is determined by the vocabulary size (V). The lengths of the other two arrays depend on the number of bigrams and the number of trigrams, which depends on how aggressively they were pruned. (We do not prune unigrams.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ZipTBO Keys",
"sec_num": "7.1"
},
{
"text": "We store a 2-byte word id for each unigram, bigram and trigram.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ZipTBO Keys",
"sec_num": "7.1"
},
{
"text": "The unigram nodes point to blocks of bigram nodes, and the bigram nodes point to blocks of trigram nodes. There are boundary symbols between blocks (denoted by the pointers in Figure 1 ). The boundary symbols consume 4 bytes for each unigram and 2 bytes for each bigram.",
"cite_spans": [],
"ref_spans": [
{
"start": 176,
"end": 184,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "ZipTBO Keys",
"sec_num": "7.1"
},
{
"text": "In each block, nodes are sorted by their word ids. Blocks are consecutive, so the boundary value of an n\u22121-gram node together with the boundary value of its previous n\u22121-gram node specifies, in the n-gram array, the location of the block containing all its child nodes. To locate a particular child node, a binary search of word ids is performed within the block. Figure 2 vanish if we adjust for prune size.",
"cite_spans": [],
"ref_spans": [
{
"start": 364,
"end": 372,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "ZipTBO Keys",
"sec_num": "7.1"
},
{
"text": "Like HashTBO, the log likelihood values and backoff alphas are quantized to a small number of quantization levels (256 levels for unigrams and 16 levels for bigrams and trigrams). Unigrams use a full byte for the log likelihoods, plus another full byte for the alphas. Bigrams use a half byte for the log likelihood, plus another half byte for the alphas. Trigrams use a half byte for the log likelihood. (There are no alphas for trigrams.) ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ZipTBO Values and Alphas",
"sec_num": "7.2"
},
{
"text": "We normally think of trigram language models as memory hogs, but Figure 2 shows that trigrams can be squeezed down to a megabyte in a pinch. Of course, more memory is always better, but it is surprising how much can be done (27% recall at 80% precision) with so little memory.",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 73,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "8"
},
{
"text": "Given a fixed memory budget, HashTBO outperforms ZipTBO which outperforms StdTBO, a baseline system with no compression. Compression matters more when memory is tight. The gap between methods is more noticeable at the low end (under 10 megabytes) and less noticeable at the high end (over 100 megabytes), where both methods asymptote to the performance of the StdTBO baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "8"
},
{
"text": "All methods start with Stolcke pruning. Figure 3 shows that the losses are largely due to pruning. Figure 2 . When there is plenty of memory, performance (recall @ 80% precision) asymptotes to the performance of baseline system with no compression (StdTBO). When memory is tight, HashTBO >> ZipTBO >> StdTBO. 14 0 500,000 1,000,000 1,500,000 2,000,000 2,500,000 3,000,000 3,500,000 4,000,000 4,500,000",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 49,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 100,
"end": 108,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "8"
},
{
"text": "All three methods perform about equally well, assuming the same amount of pruning. The difference is that HashTBO can store more n-grams in the same memory and therefore it doesn't have to do as much pruning. Figure 4 shows that HashTBO consumes 3 bytes per n-gram whereas ZipTBO consumes 4. Figure 4 combines unigrams, bigrams and trigrams into a single n-gram variable. Figure 5 drills down into this variable, distinguishing bigrams from trigrams. The axes here have been reversed so we can see that HashTBO can store more of both kinds in less space. Note that both HashTBO lines are above both ZipTBO lines. In addition, note that both bigram lines are above both trigram lines (triangles). Aggressively pruned models have more bigrams than trigrams!",
"cite_spans": [],
"ref_spans": [
{
"start": 209,
"end": 217,
"text": "Figure 4",
"ref_id": "FIGREF4"
},
{
"start": 292,
"end": 300,
"text": "Figure 4",
"ref_id": "FIGREF4"
},
{
"start": 372,
"end": 380,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Megabytes",
"sec_num": null
},
{
"text": "Linear regression on this data shows that Hash-TBO is no better than ZipTBO on trigrams (with the particular settings that we used), but there is a big difference on bigrams. The regressions below model M (memory in bytes) as a function of bi and tri, the number of bigrams and trigrams, respectively. (Unigrams are modeled as part of the intercept since all models have the same number of unigrams.) \u210e = 0.8 + 3.4 + 2.6 = 2.6 + 4.9 + 2.6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Megabytes",
"sec_num": null
},
{
"text": "As a sanity check, it is reassuring that ZipTBO's coefficients of 4.9 and 2.6 are close to the true values of 5 bytes per bigram and 2.5 bytes per trigram, as reported in Section 7.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Megabytes",
"sec_num": null
},
{
"text": "According to the regression, HashTBO is no better than ZipTBO for trigrams. Both models use roughly 2.6 bytes per trigram. When trigram models have relatively few trigrams, the other coefficients matter. HashTBO uses less space for bigrams (3.4 bytes/bigram << 4.9 bytes/bigram) and it has a better intercept (0.8 << 2.6).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Megabytes",
"sec_num": null
},
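The comparison above can be made concrete by evaluating the two fitted models directly. A hypothetical sketch; the coefficients are the regression values from the text (intercept, bytes per bigram, bytes per trigram), and the unit conventions follow the regression.

```python
# (intercept, bytes/bigram, bytes/trigram) from the fitted regressions
HASHTBO = (0.8, 3.4, 2.6)
ZIPTBO = (2.6, 4.9, 2.6)

def model_memory(bi, tri, coeffs):
    """Predicted memory for a model with bi bigrams and tri trigrams."""
    intercept, b_bi, b_tri = coeffs
    return intercept + b_bi * bi + b_tri * tri

def savings(bi, tri):
    """HashTBO's advantage over ZipTBO. Note it is independent of tri,
    since both models spend 2.6 bytes per trigram; the gain comes from
    the bigram coefficient (4.9 vs. 3.4) and the intercept (2.6 vs. 0.8)."""
    return model_memory(bi, tri, ZIPTBO) - model_memory(bi, tri, HASHTBO)
```

This makes the point in the text visible at a glance: the trigram terms cancel, so all of HashTBO's space advantage comes from bigrams and the intercept.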
{
"text": "We recommend HashTBO if space is so tight that it dominates other concerns. However, if there is plenty of space, or time is an issue, then the tradeoffs work out differently. Figure 6 shows that ZipTBO is an order of magnitude faster than HashTBO. The times are reported in microseconds per n-gram lookup on a dual Xeon PC with a 3.6 ghz clock and plenty of RAM (4GB). These times were averaged over a test set of 4 million lookups. The test process uses a cache. Turning off the cache increases the difference in lookup times. Figure 6 . HashTBO is slower than ZipTBO.",
"cite_spans": [],
"ref_spans": [
{
"start": 176,
"end": 184,
"text": "Figure 6",
"ref_id": null
},
{
"start": 529,
"end": 537,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Megabytes",
"sec_num": null
},
{
"text": "Trigram language models were compressed using HashTBO, a Golomb coding method inspired by McIlroy's original spell program for Unix. McIlroy used the method to compress a dictionary of 32,000 words into a PDP-11 address space of 64k bytes. That is just 2 bytes per word!",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "We started with a large corpus of 6 billion words of English. With HashTBO, we could compress the trigram language model into just a couple of megabytes using about 3 bytes per n-gram (compared to 4 bytes per n-gram for the ZipTBO baseline). The proposed HashTBO method is not fast, and it is not accurate (not lossless), but it is hard to beat if space is tight, which was the case for the contextual speller in Microsoft Office 2007. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
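The Golomb code behind HashTBO (and McIlroy's spell) works on the gaps between sorted hash codes: the quotient by the parameter m goes in unary, the remainder in truncated binary. A minimal bit-string encoder, as an illustrative sketch only; the shipped encoder is not given in the paper, and the bit-string representation here is for clarity, not efficiency.

```python
def golomb_encode(n, m):
    """Golomb code for n >= 0 with parameter m >= 1, as a bit string:
    unary quotient (q ones, then a zero), then the remainder in
    truncated binary (for m a power of two this is a Rice code)."""
    q, r = divmod(n, m)
    out = "1" * q + "0"            # unary quotient, 0-terminated
    b = m.bit_length()             # ceil(log2 m) for non-powers of two
    if m & (m - 1) == 0:           # power of two: remainder in log2(m) bits
        return out + format(r, "0{}b".format(b - 1)) if m > 1 else out
    cutoff = (1 << b) - m          # truncated binary: short codes first
    if r < cutoff:
        return out + format(r, "0{}b".format(b - 1))
    return out + format(r + cutoff, "0{}b".format(b))
```

For example, `golomb_encode(9, 4)` gives `"11001"`: quotient 2 as `"110"`, remainder 1 as `"01"`. Choosing m near 0.69 times the mean gap makes the expected code length close to the entropy of the (roughly geometric) gap distribution.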
{
"text": "http://www.nist.gov/speech/publications/darpa98/html/l m20/lm20.htm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This discussion follows slide 29 of http://www.stanford.edu/class/ee398a/handouts/lectures/ 01-EntropyLosslessCoding.pdf. See(Witten et al,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": ") and http://en.wikipedia.org/wiki/Golomb_coding, for similar discussion, though with slightly different notation. The primary reference is(Golomb, 1966).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Dong-Hui Zhang for his contributions to ZipTBO.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Probability Scoring for Spelling Correction",
"authors": [
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Gale",
"suffix": ""
}
],
"year": 1991,
"venue": "Statistics and Computing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Church, K., and Gale, W. 1991 Probability Scoring for Spelling Correction, Statistics and Computing.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Improved language modeling through better language model evaluation measures",
"authors": [
{
"first": "P",
"middle": [],
"last": "Clarkson",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Robinson",
"suffix": ""
}
],
"year": 2001,
"venue": "Computer Speech and Language",
"volume": "15",
"issue": "",
"pages": "39--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clarkson, P. and Robinson, T. 2001 Improved language modeling through better language model evaluation measures, Computer Speech and Language, 15:39- 53, 2001.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Algorithms on Strings, Trees and Sequences",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Gusfield",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Gusfield. 1997 Algorithms on Strings, Trees and Sequences. Cambridge University Press, Cambridge, UK",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improving language model size reduction using better pruning criteria",
"authors": [
{
"first": "J",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "176--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gao, J. and Zhang, M., 2002 Improving language model size reduction using better pruning criteria. ACL 2002: 176-182.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The use of clustering techniques for language modeling -application to Asian languages",
"authors": [
{
"first": "J",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Miao",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics and Chinese Language Processing",
"volume": "6",
"issue": "",
"pages": "27--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gao, J., Goodman, J., and Miao, J. 2001 The use of clustering techniques for language modeling -appli- cation to Asian languages. Computational Linguis- tics and Chinese Language Processing, 6:1, pp 27- 60.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Combining Trigram-based and feature-based methods for contextsensitive spelling correction",
"authors": [
{
"first": "A",
"middle": [
"R"
],
"last": "Golding",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Schabes",
"suffix": ""
}
],
"year": 1996,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "71--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Golding, A. R. and Schabes, Y. 1996 Combining Tri- gram-based and feature-based methods for context- sensitive spelling correction, ACL, pp. 71-78.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Run-length encodings",
"authors": [
{
"first": "S",
"middle": [
"W"
],
"last": "Golomb",
"suffix": ""
}
],
"year": 1966,
"venue": "IEEE Transactions on Information Theory",
"volume": "12",
"issue": "3",
"pages": "399--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Golomb, S.W. 1966 Run-length encodings IEEE Trans- actions on Information Theory, 12:3, pp. 399-40.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Language model size reduction by pruning and clustering",
"authors": [
{
"first": "J",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2000,
"venue": "International Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goodman, J. and Gao, J. 2000 Language model size reduction by pruning and clustering, ICSLP-2000, International Conference on Spoken Language Processing, Beijing, October 16-20, 2000.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Context based spelling correction",
"authors": [
{
"first": "E",
"middle": [],
"last": "Mays",
"suffix": ""
},
{
"first": "F",
"middle": [
"J"
],
"last": "Damerau",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1991,
"venue": "Inf. Process. Manage",
"volume": "27",
"issue": "",
"pages": "517--522",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mays, E., Damerau, F. J., and Mercer, R. L. 1991 Con- text based spelling correction. Inf. Process. Manage. 27, 5 (Sep. 1991), pp. 517-522.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Estimation of probabilities from sparse data for other language component of a speech recognizer",
"authors": [
{
"first": "Slava",
"middle": [],
"last": "Katz",
"suffix": ""
}
],
"year": 1987,
"venue": "IEEE transactions on Acoustics, Speech and Signal Processing",
"volume": "35",
"issue": "3",
"pages": "400--401",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katz, Slava, 1987 Estimation of probabilities from sparse data for other language component of a speech recognizer. IEEE transactions on Acoustics, Speech and Signal Processing, 35:3, pp. 400-401.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Techniques for automatically correcting words in text",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Kukich",
"suffix": ""
}
],
"year": 1992,
"venue": "Computing Surveys",
"volume": "24",
"issue": "4",
"pages": "377--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kukich, Karen, 1992 Techniques for automatically cor- recting words in text, Computing Surveys, 24:4, pp. 377-439.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Development of a spelling list",
"authors": [
{
"first": "M",
"middle": [
"D"
],
"last": "Mcilroy",
"suffix": ""
}
],
"year": 1982,
"venue": "IEEE Trans. on Communications",
"volume": "30",
"issue": "",
"pages": "91--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. D. McIlroy, 1982 Development of a spelling list, IEEE Trans. on Communications 30 pp. 91-99.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Scalable backoff language models",
"authors": [
{
"first": "K",
"middle": [],
"last": "Seymore",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. ICSLP",
"volume": "1",
"issue": "",
"pages": "232--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seymore, K., and Rosenfeld, R. 1996 Scalable backoff language models. Proc. ICSLP, Vol. 1, pp.232-235.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Entropy-based Pruning of Backoff Language Models",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. DARPA News Transcription and Understanding Workshop",
"volume": "",
"issue": "",
"pages": "270--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stolcke, A. 1998 Entropy-based Pruning of Backoff Lan- guage Models. Proc. DARPA News Transcription and Understanding Workshop, 1998, pp. 270--274, Lans- downe, VA.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Quantization-based language model compression",
"authors": [
{
"first": "E",
"middle": [],
"last": "Whittaker",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Ray",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. Eurospeech",
"volume": "",
"issue": "",
"pages": "33--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Whittaker, E. and Ray, B. 2001 Quantization-based lan- guage model compression. Proc. Eurospeech, pp. 33-36.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Compressing and Indexing Documents and Images",
"authors": [
{
"first": "I",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Moffat",
"suffix": ""
},
{
"first": "T",
"middle": [
"C"
],
"last": "Bell",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Witten, I. H., Moffat, A., and Bell, T. C. 1999 Manag- ing Gigabytes (2nd Ed.): Compressing and Indexing Documents and Images. Morgan Kaufmann Publish- ers Inc.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Compute the performance loss due to pruning each trigram and bigram individually using the pruning criterion. 3. Remove all trigrams with performance loss less than \uf071 4. Remove all bigrams with no child nodes (trigram nodes) and with performance loss less than \uf071 5. Re-compute backoff weights."
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Tree structure of n-grams in ZipTBO format, followingWhittaker and Ray (2001)"
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "The differences between the methods in"
},
"FIGREF4": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "On average, HashTBO consumes about 3 bytes per n-gram, whereas ZipTBO consumes 4."
},
"FIGREF5": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "HashTBO stores more bigrams and trigrams than ZipTBO in less space."
}
}
}
}