{
"paper_id": "N03-2002",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:06:56.879831Z"
},
"title": "Factored Language Models and Generalized Parallel Backoff",
"authors": [
{
"first": "Jeff",
"middle": [
"A"
],
"last": "Bilmes",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {}
},
"email": "bilmes@ssli.ee.washington.edu"
},
{
"first": "Katrin",
"middle": [],
"last": "Kirchhoff",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {}
},
"email": "katrin@ssli.ee.washington.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We introduce factored language models (FLMs) and generalized parallel backoff (GPB). An FLM represents words as bundles of features (e.g., morphological classes, stems, data-driven clusters, etc.), and induces a probability model covering sequences of bundles rather than just words. GPB extends standard backoff to general conditional probability tables where variables might be heterogeneous types, where no obvious natural (temporal) backoff order exists, and where multiple dynamic backoff strategies are allowed. These methodologies were implemented during the JHU 2002 workshop as extensions to the SRI language modeling toolkit. This paper provides initial perplexity results on both CallHome Arabic and on Penn Treebank Wall Street Journal articles. Significantly, FLMs with GPB can produce bigrams with significantly lower perplexity, sometimes lower than highly-optimized baseline trigrams. In a multi-pass speech recognition context, where bigrams are used to create first-pass bigram lattices or N-best lists, these results are highly relevant.",
"pdf_parse": {
"paper_id": "N03-2002",
"_pdf_hash": "",
"abstract": [
{
"text": "We introduce factored language models (FLMs) and generalized parallel backoff (GPB). An FLM represents words as bundles of features (e.g., morphological classes, stems, data-driven clusters, etc.), and induces a probability model covering sequences of bundles rather than just words. GPB extends standard backoff to general conditional probability tables where variables might be heterogeneous types, where no obvious natural (temporal) backoff order exists, and where multiple dynamic backoff strategies are allowed. These methodologies were implemented during the JHU 2002 workshop as extensions to the SRI language modeling toolkit. This paper provides initial perplexity results on both CallHome Arabic and on Penn Treebank Wall Street Journal articles. Significantly, FLMs with GPB can produce bigrams with significantly lower perplexity, sometimes lower than highly-optimized baseline trigrams. In a multi-pass speech recognition context, where bigrams are used to create first-pass bigram lattices or N-best lists, these results are highly relevant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The art of statistical language modeling (LM) is to create probability models over words and sentences that tradeoff statistical prediction with parameter variance. The field is both diverse and intricate (Rosenfeld, 2000; Chen and Goodman, 1998; Jelinek, 1997; Ney et al., 1994) , with many different forms of LMs including maximumentropy, whole-sentence, adaptive and cache-based, to name a small few. Many models are simply smoothed conditional probability distributions for a word given its preceding history, typically the two preceding words.",
"cite_spans": [
{
"start": 205,
"end": 222,
"text": "(Rosenfeld, 2000;",
"ref_id": "BIBREF8"
},
{
"start": 223,
"end": 246,
"text": "Chen and Goodman, 1998;",
"ref_id": "BIBREF2"
},
{
"start": 247,
"end": 261,
"text": "Jelinek, 1997;",
"ref_id": "BIBREF5"
},
{
"start": 262,
"end": 279,
"text": "Ney et al., 1994)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we introduce two new methods for language modeling: factored language model (FLM) and generalized parallel backoff (GPB) . An FLM considers a word as a bundle of features, and GPB is a technique that generalized backoff to arbitrary conditional probability tables. While these techniques can be considered in isolation, the two methods seem particularly suited to each other -in particular, the method of GPB can greatly facilitate the production of FLMs with better performance.",
"cite_spans": [
{
"start": 129,
"end": 134,
"text": "(GPB)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In a factored language model, a word is viewed as a vector of k factors, so that w t \u2261 {f 1 t , f 2 t , . . . , f K t }. Factors can be anything, including morphological classes, stems, roots, and other such features in highly inflected languages (e.g., Arabic, German, Finnish, etc.), or data-driven word classes or semantic features useful for sparsely inflected languages (e.g., English). Clearly, a two-factor FLM generalizes standard class-based language models, where one factor is the word class and the other is words themselves. An FLM is a model over factors, i.e., p(f 1:K t |f 1:K t\u22121:t\u2212n ), that can be factored as a product of probabilities of the form p(f |f 1 , f 2 , . . . , f N ). Our task is twofold: 1) find an appropriate set of factors, and 2) induce an appropriate statistical model over those factors (i.e., the structure learning problem in graphical models (Bilmes, 2003; Friedman and Koller, 2001) ).",
"cite_spans": [
{
"start": 883,
"end": 897,
"text": "(Bilmes, 2003;",
"ref_id": "BIBREF1"
},
{
"start": 898,
"end": 924,
"text": "Friedman and Koller, 2001)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Factored Language Models",
"sec_num": "2"
},
{
"text": "An individual FLM probability model can be seen as a directed graphical model over a set of N + 1 random variables, with child variable F and N parent variables F 1 through F N (if factors are words, then F = W t and F i = W t\u2212i ). Two features make an FLM distinct from a standard language model: 1) the variables {F, F 1 , . . . , F N } can be heterogeneous (e.g., words, word clusters, morphological classes, etc.); and 2) there is no obvious natural (e.g., temporal) backoff order as in standard wordbased language models. With word-only models, backoff proceeds by dropping first the oldest word, then the next oldest, and so on until only the unigram remains. In p(f |f 1 , f 2 , . . . , f N ), however, many of the parent variables might be the same age. Even if the variables have differing seniorities, it is not necessarily best to drop the oldest variable first.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Parallel Backoff",
"sec_num": "3"
},
{
"text": "F 1 F 2 F 3 F F F 1 F 2 F F 1 F 3 F F 2 F 3 F F 1 F F 3 F F 2 F A B C D E F G H Figure 1: A backoff graph for F with three parent vari- ables F 1 , F 2 , F 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Parallel Backoff",
"sec_num": "3"
},
{
"text": "The graph shows all possible singlestep backoff paths, where exactly one variable is dropped per backoff step. The SRILM-FLM extensions, however, also support multi-level backoff.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Parallel Backoff",
"sec_num": "3"
},
{
"text": "We introduce the notion of a backoff graph ( Figure 1 ) to depict this issue, which shows the various backoff paths from the all-parents case (top graph node) to the unigram (bottom graph node). Many possible backoff paths could be taken. For example, when all variables are words, the path A \u2212 B \u2212 E \u2212 H corresponds to trigram with standard oldest-first backoff order. The path A \u2212 D \u2212 G \u2212 H is a reverse-time backoff model. This can be seen as a generalization of lattice-based language modeling (Dupont and Rosenfeld, 1997) where factors consist of words and hierarchically derived word classes.",
"cite_spans": [
{
"start": 498,
"end": 526,
"text": "(Dupont and Rosenfeld, 1997)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 45,
"end": 53,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generalized Parallel Backoff",
"sec_num": "3"
},
{
"text": "In our GPB procedure, either a single distinct path is chosen for each gram or multiple parallel paths are used simultaneously. In either case, the set of backoff path(s) that are chosen are determined dynamically (at \"run-time\") based on the current values of the variables. For example, a path might consist of nodes A \u2212 (BCD) \u2212 (EF) \u2212 G where node A backs off in parallel to the three nodes BCD, node B backs off to nodes (EF), C backs off to (E), and D backs off to (F).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Parallel Backoff",
"sec_num": "3"
},
{
"text": "This can be seen as a generalization of the standard backoff equation. In the two parents case, this becomes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Parallel Backoff",
"sec_num": "3"
},
{
"text": "p GBO (f |f 1 , f 2 ) = d N (f,f 1 ,f 2 ) p M L (f |f 1 , f 2 ) if N (f, f 1 , f 2 ) > \u03c4 \u03b1(f 1 , f 2 )g(f, f 1 , f 2 ) otherwise where d N (f,f1,f2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Parallel Backoff",
"sec_num": "3"
},
{
"text": "is a standard discount (determining the smoothing method), p M L is the maximum likelihood distribution, \u03b1(f 1 , f 2 ) are backoff weights, and g(f, f 1 , f 2 ) is an arbitrary non-negative backoff function of its three factor arguments. Standard backoff occurs with g(f, f 1 , f 2 ) = p BO (f |f 1 ), but the GPB procedures can be obtained by using different g-functions. For example, g(f, f 1 , f 2 ) = p BO (f |f 2 ) corresponds to a different backoff path, and parallel backoff is obtained by using an appropriate g (see below). As long as g is non-negative, the backoff weights are defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Parallel Backoff",
"sec_num": "3"
},
{
"text": "\u03b1(f 1 , f 2 ) = 1 \u2212 f :N (f,f 1 ,f 2 )>\u03c4 d N (f,f 1 ,f 2 ) p M L (f |f 1 , f 2 ) f :N (f,f 1 ,f 2 )<=\u03c4 g(f, f 1 , f 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Parallel Backoff",
"sec_num": "3"
},
{
"text": "This equation is non-standard only in the denominator, where one may no longer sum over the factors f only with counts greater than \u03c4 . This is because g is not necessarily a distribution (i.e., does not sum to unity). Therefore, backoff weight computation can indeed be more expensive for certain g functions, but this appears not to be prohibitive as demonstrated in the next few sections. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Parallel Backoff",
"sec_num": "3"
},
{
"text": "w 1 , w 2 -/ temporal [2, 1] 173 FLM 3-gram w 1 , w 2 , m 1 , s 1 -/ [2, 1, 4, 3] 178 GPB-FLM 3-gram w 1 , w 2 , m 1 , s 1 g 1 / [2, 1, (3, 4), 3, 4] 166 2-gram w 1 -/ temporal [1] 175 FLM 2-gram w 1 , m 1 -/ [2, 1] 173 FLM 2-gram w 1 , m 1 , s 1 -/ [1, 2, 3] 179 GPB-FLM 2-gram w 1 , m 1 , s 1 g 1 / [1, (2, 3), 2, 3] 167",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Parallel Backoff",
"sec_num": "3"
},
{
"text": "During the recent 2002 JHU workshop (Kirchhoff et al., 2003) , significant extensions were made to the SRI language modeling toolkit (Stolcke, 2002) to support arbitrary FLMs and GPB procedures. This uses a graphicalmodel like specification language, and where many different backoff functions (19 in total) were implemented. Other features include: 1) all SRILM smoothing methods at every node in a backoff graph; 2) graph level skipping; and 3) up to 32 possible parents (e.g., 33-gram). Two of the backoff functions are (in the three parents case):",
"cite_spans": [
{
"start": 36,
"end": 60,
"text": "(Kirchhoff et al., 2003)",
"ref_id": "BIBREF6"
},
{
"start": 133,
"end": 148,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SRILM-FLM extensions",
"sec_num": "4"
},
{
"text": "g(f, f 1 , f 2 , f 3 ) = p GBO (f |f 1 , f 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SRILM-FLM extensions",
"sec_num": "4"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SRILM-FLM extensions",
"sec_num": "4"
},
{
"text": "( 1, 2) = argmax (m 1 ,m 2 )\u2208{(1,2),(1,3),(2,3)} pGBO(f |fm 1 , fm 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SRILM-FLM extensions",
"sec_num": "4"
},
{
"text": "(call this g 1 ) or alternatively, where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SRILM-FLM extensions",
"sec_num": "4"
},
{
"text": "( 1 , 2 ) = argmax (m 1 ,m 2 )\u2208{(1,2),(1,3),(2,3)} N (f, fm 1 , fm 2 ) |{f : N (f, fm 1 , fm 2 ) > 0}|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SRILM-FLM extensions",
"sec_num": "4"
},
{
"text": "(call this g 2 ) where N () is the count function. Implemented backoff functions include maximum/min (normalized) counts/backoff probabilities, products, sums, mins, maxs, (weighted) averages, and geometric means.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SRILM-FLM extensions",
"sec_num": "4"
},
{
"text": "GPB-FLMs were applied to two corpora and their perplexity was compared with standard optimized vanilla biand trigram language models. In the following, we consider as a \"bigram\" a language model with a temporal history that includes information from no longer than one previous time-step into the past. Therefore, if factors are deterministically derivable from words, a \"bigram\" might include both the previous words and previous factors as a history. From a decoding state-space perspective, any such bigram would be relatively cheap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In CallHome-Arabic, words are accompanied with deterministically derived factors: morphological class (M), [(1, 2, 3), (1, 2), (2, 3), (3, 1) [1, (2, 3) , 2, 3] 275(\u00b11.2) stems (S), roots (R), and patterns (P). Training data consisted of official training portions of the LDC CallHome ECA corpus plus the CallHome ECA supplement (100 conversations). For testing we used the official 1996 evaluation set. Results are given in Table 1 and show perplexity for: 1) the baseline 3-gram; 2) a FLM 3-gram using morphs and stems; 3) a GPB-FLM 3-gram using morphs, stems and backoff function g 1 ; 4) the baseline 2-gram; 5) an FLM 2-gram using morphs; 6) an FLM 2-gram using morphs and stems; and 7) an GPB-FLM 2-gram using morphs and stems. Backoff path(s) are depicted by listing the parent number(s) in backoff order. As can be seen, the FLM alone might increase perplexity, but the GPB-FLM decreases it. Also, it is possible to obtain a 2-gram with lower perplexity than the optimized baseline 3-gram. The Wall Street Journal (WSJ) data is from the Penn Treebank 2 tagged ('88-'89) WSJ collection. Word and POS tag information (T t ) was extracted. The sentence order was randomized to produce 5-fold crossvalidation results using (4/5)/(1/5) training/testing sizes. Other factors included the use of a simple deterministic tagger obtained by mapping a word to its most frequent tag (F t ), and word classes obtained using SRILM's ngram-class tool with 50 (C t ) and 500 (D t ) classes. Results are given in Table 2 . The table shows the baseline 3-gram and 2-gram perplexities, and three GPB-FLMs. Model A uses the true by-hand tag information from the Treebank. To simulate conditions during first-pass decoding, Model B shows the results using the most frequent tag, and Model C uses only the two data-driven word classes. As can be seen, the bigram perplexities are significantly reduced relative to the baseline, almost matching that of the baseline trigram. 
Note that none of these reduced perplexity bigrams were possible without using one of the novel backoff functions.",
"cite_spans": [],
"ref_spans": [
{
"start": 425,
"end": 432,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 1504,
"end": 1511,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "w 1 , w 2 -/ temporal [2, 1] 258(\u00b11.2) 2-gram w 1 -/ temporal [1] 320(\u00b11.3) GPB-FLM 2-gram A w 1 , d 1 , t 1 g 2 /",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": ", 1, 2, 3] 266(\u00b11.1) GPB-FLM 2-gram B w 1 , d 1 , f 1 g 2 / [2, 1] 276(\u00b11.3) GPB-FLM 2-gram C w 1 , d 1 , c 1 g 2 /",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "The improved perplexity bigram results mentioned above should ideally be part of a first-pass recognition step of a multi-pass speech recognition system. With a bigram, the decoder search space is not large, so any appreciable LM perplexity reductions should yield comparable word error reductions for a fixed set of acoustic scores in a firstpass. For N-best or lattice generation, the oracle error should similarly improve. The use of an FLM with GPB in such a first pass, however, requires a decoder that supports such language models. Therefore, FLMs with GPB will be incorporated into GMTK (Bilmes, 2002) , a general purpose graphical model toolkit for speech recognition and language processing. The authors thank Dimitra Vergyri, Andreas Stolcke, and Pat Schone for useful discussions during the JHU'02 workshop.",
"cite_spans": [
{
"start": 595,
"end": 609,
"text": "(Bilmes, 2002)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The GMTK documentation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Bilmes",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Bilmes. 2002. The GMTK docu- mentation. http://ssli.ee.washington.edu/ bilmes/gmtk.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Graphical models and automatic speech recognition",
"authors": [
{
"first": "J",
"middle": [
"A"
],
"last": "Bilmes",
"suffix": ""
}
],
"year": 2003,
"venue": "Mathematical Foundations of Speech and Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. A. Bilmes. 2003. Graphical models and au- tomatic speech recognition. In R. Rosenfeld, M. Osten- dorf, S. Khudanpur, and M. Johnson, editors, Mathematical Foundations of Speech and Language Processing. Springer- Verlag, New York.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An empirical study of smoothing techniques for language modeling",
"authors": [
{
"first": "]",
"middle": [
"S F"
],
"last": "Goodman1998",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Chen and Goodman1998] S. F. Chen and J. Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical Report Tr-10-98, Center for Research in Computing Technology, Harvard University, Cambridge, Massachusetts, August.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Lattice based language models",
"authors": [
{
"first": "P",
"middle": [],
"last": "Dupont",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Dupont and Rosenfeld1997] P. Dupont and R. Rosenfeld. 1997. Lattice based language models. Technical Report CMU-CS-97-173, Carnegie Mellon University, Pittsburgh, PA 15213, September.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning Bayesian networks from data",
"authors": [
{
"first": "Koller2001] N",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Koller",
"suffix": ""
}
],
"year": 2001,
"venue": "NIPS 2001 Tutorial Notes. Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Friedman and Koller2001] N. Friedman and D. Koller. 2001. Learning Bayesian networks from data. In NIPS 2001 Tuto- rial Notes. Neural Information Processing Systems, Vancou- ver, B.C. Canada.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Statistical Methods for Speech Recognition",
"authors": [
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Jelinek. 1997. Statistical Methods for Speech Recognition. MIT Press.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Novel approaches to arabic speech recognition: Report from the 2002 johns-hopkins summer workshop",
"authors": [
{
"first": "[",
"middle": [],
"last": "Kirchhoff",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Kirchhoff et al.2003] K. Kirchhoff et al 2003. Novel ap- proaches to arabic speech recognition: Report from the 2002 johns-hopkins summer workshop. In Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, Hong Kong.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "On structuring probabilistic dependencies in stochastic language modelling",
"authors": [
{
"first": "",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 1994,
"venue": "Computer Speech and Language",
"volume": "8",
"issue": "",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Ney et al.1994] H. Ney, U. Essen, and R. Kneser. 1994. On structuring probabilistic dependencies in stochastic language modelling. Computer Speech and Language, 8:1-38.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Two decades of statistical language modeling: Where do we go from here? Proceedings of the IEEE",
"authors": [
{
"first": "R",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "88",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Rosenfeld. 2000. Two decades of statistical language modeling: Where do we go from here? Proceed- ings of the IEEE, 88(8).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "SRILM-an extensible language modeling toolkit",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. Int. Conf. on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Stolcke. 2002. SRILM-an extensible lan- guage modeling toolkit. In Proc. Int. Conf. on Spoken Lan- guage Processing, Denver, Colorado, September.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td>LM</td><td>parents</td><td>backoff function/path(s)</td><td>ppl</td></tr><tr><td>3-gram</td><td/><td/><td/></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "CallHome Arabic Results."
},
"TABREF1": {
"content": "<table><tr><td>LM</td><td>parents</td><td>Backoff function/path(s)</td><td>ppl (\u00b1std. dev.)</td></tr><tr><td>3-gram</td><td/><td/><td/></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "Penn Treebank WSJ Results."
}
}
}
}