{
"paper_id": "N03-2003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:07:12.906934Z"
},
"title": "Getting More Mileage from Web Text Sources for Conversational Speech Language Modeling using Class-Dependent Mixtures",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Bulyko",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {
"postCode": "98195",
"settlement": "Seattle",
"region": "WA"
}
},
"email": "bulyko@ssli.ee.washington.edu"
},
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {
"postCode": "98195",
"settlement": "Seattle",
"region": "WA"
}
},
"email": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SRI International",
"location": {
"postCode": "94025",
"settlement": "Menlo Park",
"region": "CA"
}
},
"email": "stolcke@speech.sri.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Sources of training data suitable for language modeling of conversational speech are limited. In this paper, we show how training data can be supplemented with text from the web filtered to match the style and/or topic of the target recognition task, but also that it is possible to get bigger performance gains from the data by using class-dependent interpolation of N-grams.",
"pdf_parse": {
"paper_id": "N03-2003",
"_pdf_hash": "",
"abstract": [
{
"text": "Sources of training data suitable for language modeling of conversational speech are limited. In this paper, we show how training data can be supplemented with text from the web filtered to match the style and/or topic of the target recognition task, but also that it is possible to get bigger performance gains from the data by using class-dependent interpolation of N-grams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Language models constitute one of the key components in modern speech recognition systems. Training an N-gram language model, the most commonly used type of model, requires large quantities of text that is matched to the target recognition task both in terms of style and topic. In tasks involving conversational speech, the ideal training material, i.e. transcripts of conversational speech, is costly to produce, which limits the amount of training data currently available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Methods have been developed for the purpose of language model adaptation, i.e. the adaptation of an existing model to new topics, domains, or tasks for which little or no training material may be available. Since out-of-domain data can contain relevant as well as irrelevant information, various methods are used to identify the most relevant portions of the out-of-domain data prior to combination. Past work on pre-selection has been based on word frequency counts (Rudnicky, 1995), probability (or perplexity) of word or part-of-speech sequences (Iyer and Ostendorf, 1999), latent semantic analysis (Bellegarda, 1998), and information retrieval techniques (Mahajan et al., 1999; Iyer and Ostendorf, 1999). Perplexity-based clustering has also been used for defining topic-specific subsets of in-domain data (Clarkson and Robinson, 1997; Martin et al, 1997), and test set perplexity has been used to prune documents from a training corpus (Klakow, 2000). The most common method for using the additional text sources is to train separate language models on a small amount of in-domain and large amounts of out-of-domain data and to combine them by interpolation, also referred to as mixtures of language models. The technique was reported by IBM in 1995 (Liu et al, 1995), and has been used by many sites since then. An alternative approach involves decomposition of the language model into a class n-gram for interpolation (Iyer and Ostendorf, 1997; Ries, 1997), allowing content words to be interpolated with different weights than filled pauses, for example, which gives an improvement over standard mixture modeling for conversational speech.",
"cite_spans": [
{
"start": 467,
"end": 483,
"text": "(Rudnicky, 1995)",
"ref_id": "BIBREF13"
},
{
"start": 550,
"end": 576,
"text": "(Iyer and Ostendorf, 1999)",
"ref_id": "BIBREF5"
},
{
"start": 604,
"end": 622,
"text": "(Bellegarda, 1998)",
"ref_id": "BIBREF0"
},
{
"start": 662,
"end": 684,
"text": "(Mahajan et al., 1999;",
"ref_id": "BIBREF8"
},
{
"start": 685,
"end": 710,
"text": "Iyer and Ostendorf, 1999)",
"ref_id": "BIBREF5"
},
{
"start": 812,
"end": 841,
"text": "(Clarkson and Robinson, 1997;",
"ref_id": "BIBREF2"
},
{
"start": 842,
"end": 861,
"text": "Martin et al, 1997)",
"ref_id": "BIBREF9"
},
{
"start": 944,
"end": 958,
"text": "(Klakow, 2000)",
"ref_id": "BIBREF6"
},
{
"start": 1259,
"end": 1276,
"text": "(Liu et al, 1995)",
"ref_id": "BIBREF7"
},
{
"start": 1430,
"end": 1456,
"text": "(Iyer and Ostendorf, 1997;",
"ref_id": "BIBREF4"
},
{
"start": 1457,
"end": 1468,
"text": "Ries, 1997)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently researchers have turned to the World Wide Web as an additional source of training data for language modeling. For \"just-in-time\" language modeling (Berger and Miller, 1998), adaptation data is obtained by submitting words from initial hypotheses of user utterances as queries to a web search engine. Their queries, however, treated words as individual tokens and ignored function words. Such a search strategy typically generates text of a non-conversational style and is hence not ideally suited for ASR. In (Zhu and Rosenfeld, 2001), instead of downloading the actual web pages, the authors retrieved N-gram counts provided by the search engine. Such an approach generates valuable statistics but limits the set of N-grams to those occurring in the baseline model.",
"cite_spans": [
{
"start": 156,
"end": 181,
"text": "(Berger and Miller, 1998)",
"ref_id": "BIBREF1"
},
{
"start": 513,
"end": 537,
"text": "(Zhu and Rosenfeld, 2001",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present an approach to extracting additional training data from the web by searching for text that is better matched to a conversational speaking style. We also show how we can make better use of this new data by applying class-dependent interpolation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The amount of text available on the web is enormous (over 3 billion web pages are indexed via Google alone) and continues to grow. Most of the text on the web is non-conversational, but there is a fair amount of chat-like material that is similar to conversational speech, though often omitting disfluencies. This was our primary target when extracting data from the web. Queries submitted to Google were composed of N-grams that occur most frequently in the Switchboard training corpus, e.g. \"I never thought I would\", \"I would think so\", etc. We searched for an exact match to one or more of these N-grams within the text of the web pages. Web pages returned by Google for the most part consisted of conversational-style phrases like \"we were friends but we don't actually have a relationship\" and \"well I actually I I really haven't seen her for years.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collecting Text from the Web",
"sec_num": "2"
},
{
"text": "We used a slightly different search strategy when collecting topic-specific data. First we extended the baseline vocabulary with words from a small in-domain training corpus (Schwarm and Ostendorf, 2002), and then we used N-grams with these new words in our web queries, e.g. \"wireless mikes like\", \"I know that recognizer\" for a meeting transcription task (Morgan et al, 2001). Web pages returned by Google mostly contained technical material related to topics similar to what was discussed in the meetings, e.g. \"we were inspired by the weighted count scheme...\", \"for our experiments we used the Bellman-Ford algorithm...\", etc.",
"cite_spans": [
{
"start": 174,
"end": 203,
"text": "(Schwarm and Ostendorf, 2002)",
"ref_id": "BIBREF14"
},
{
"start": 358,
"end": 378,
"text": "(Morgan et al, 2001)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Collecting Text from the Web",
"sec_num": "2"
},
{
"text": "The retrieved web pages were filtered before their content could be used for language modeling. First we stripped the HTML tags and ignored any pages with a very high OOV rate. We then piped the text through a maximum entropy sentence boundary detector (Ratnaparkhi, 1996) and performed text normalization using NSW tools (Sproat et al, 2001).",
"cite_spans": [
{
"start": 253,
"end": 272,
"text": "(Ratnaparkhi, 1996)",
"ref_id": "BIBREF11"
},
{
"start": 322,
"end": 341,
"text": "(Sproat et al, 2001",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Collecting Text from the Web",
"sec_num": "2"
},
{
"text": "Linear interpolation is a standard approach to combining language models, where the probability of a word w_i given history h is computed as a linear combination of the corresponding N-gram probabilities from S different models: p(w_i|h) = \u03a3_{s\u2208S} \u03bb_s p_s(w_i|h). Depending on how much adaptation data is available, it may be beneficial to estimate a larger number of mixture weights \u03bb_s (more than one per data source) in order to handle source mismatch, specifically letting the mixture weight depend on the context h. One approach is to use a mixture weight corresponding to the source posterior probability \u03bb_s(h) = p(s|h) (Weintraub et al, 1996). Here, we instead choose to let the weight vary as a function of the previous word class, i.e. p(w_i|h) = \u03a3_{s\u2208S} \u03bb_s(c(w_{i-1})) p_s(w_i|h),",
"cite_spans": [
{
"start": 626,
"end": 649,
"text": "(Weintraub et al, 1996)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Class-dependent Mixture of LMs",
"sec_num": "3"
},
{
"text": "where classes c(w_{i-1}) are part-of-speech tags, except for the 100 most frequent words, which form their own individual classes. Such a scheme can generalize across domains by tapping into the syntactic structure (POS tags), already shown to be useful for cross-domain language modeling (Iyer and Ostendorf, 1997), and at the same time target conversational speech, since the top 100 words cover 70% of the tokens in the Switchboard training corpus.",
"cite_spans": [
{
"start": 286,
"end": 312,
"text": "(Iyer and Ostendorf, 1997)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Class-dependent Mixture of LMs",
"sec_num": "3"
},
{
"text": "Combining several N-grams can produce a model with a very large number of parameters, which is costly in decoding. In such cases N-grams are typically pruned. Here we use entropy-based pruning (Stolcke, 1998) after mixing unpruned models, and reduce the model aggressively to about 15% of its original size. The same pruning parameters were applied to all models in our experiments.",
"cite_spans": [
{
"start": 193,
"end": 208,
"text": "(Stolcke, 1998)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Class-dependent Mixture of LMs",
"sec_num": "3"
},
{
"text": "We evaluated on two tasks: 1) Switchboard (Godfrey et al., 1992), specifically the HUB5 eval 2001 set with a total of 60K words spoken by 120 speakers, and 2) an ICSI Meeting Recorder (Morgan et al, 2001) eval set with a total of 44K words spoken by 25 speakers. Both sets featured spontaneous conversational speech. There were 45K words of held-out data for each task.",
"cite_spans": [
{
"start": 42,
"end": 64,
"text": "(Godfrey et al., 1992)",
"ref_id": "BIBREF3"
},
{
"start": 187,
"end": 206,
"text": "(Morgan et al, 2001",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Text corpora of conversational telephone speech (CTS) available for training language models consisted of Switchboard, Callhome English, and Switchboard Cellular, a total of 3 million words. In addition, we used 150 million words of Broadcast News (BN) transcripts, and we collected 191 million words of \"conversational\" text from the web. For the Meetings task, there were 200K words of meeting transcripts available for training, and we collected 28 million words of \"topic-related\" text from the web.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "The experiments were conducted using the SRI large vocabulary speech recognizer (Stolcke et al, 2000) in N-best rescoring mode. A baseline bigram language model was used to generate N-best lists, which were then rescored with various trigram models. Table 1 shows word error rates (WER) on the HUB5 test set, comparing performance of the class-based mixture against standard (i.e. class-independent) interpolation. The class-based mixture gave better results in all cases except when only CTS sources were used, probably because these sources are similar to each other and the class-based mixture is mainly useful when data sources are more diverse. We also obtained a lower WER by using the web data instead of BN, which indicates that the web data is better matched to our task (i.e. it is more \"conversational\"). If training data is completely arbitrary, its benefits to the recognition task are minimal, as shown by an example of using a 66M-word corpus collected from random web pages. The baseline Switchboard model gave a test set perplexity of 96, which is reduced to 87 with a standard mixture of CTS and BN data, reduced further to 83 by adding the web data, and to a best case of 82 with class-dependent interpolation and the added web data.",
"cite_spans": [
{
"start": 80,
"end": 101,
"text": "(Stolcke et al, 2000)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 254,
"end": 261,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Increasing the amount of web training data from 61M to 191M words gave relatively small performance gains. We \"trimmed\" the 191M-word web corpus down to 61M words by choosing the documents with the lowest perplexity according to the combined CTS model, yielding the \"Web2\" data source. The model that used Web2 gave the same WER as the one trained on the original 61M-word web corpus. It could be that the web text obtained with \"Google\" filtering is fairly homogeneous, so little is gained by further perplexity filtering. Or, it could be that when choosing better matched data, we also exclude new N-grams that may occur only in testing. Results on the Meeting test set are shown in Table 2, where the baseline model was trained on CTS and BN sources. As in the HUB5 experiments, the class-based mixture outperformed standard interpolation. We achieved a lower WER by using the web data instead of the meeting transcripts, but the best results are obtained by using all data sources. Language model perplexity is reduced from 122 for the baseline to a best case of 95.",
"cite_spans": [],
"ref_spans": [
{
"start": 668,
"end": 676,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We also tried different class assignments for the class-based mixture on the HUB5 set, and we found that using automatically derived classes instead of part-of-speech tags does not lead to performance degradation as long as we allocate individual classes for the top 100 words. Automatic class mapping can make class-based mixtures feasible for other languages where part-of-speech tags are difficult to derive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "In summary, we have shown that, if filtered, web text can be successfully used for training language models of conversational speech, outperforming other out-of-domain (BN) and small domain-specific (Meetings) data sources. We have also found that by combining LMs from different domains with class-dependent interpolation (particularly when each of the top 100 words forms its own class), we achieve a lower WER than with the standard approach, where mixture weights depend only on the data source. Recognition experiments show a significant reduction in WER (1.3-2.3% absolute) due to the additional training data and class-based interpolation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Exploiting both local and global constraints for multispan statistical language modeling",
"authors": [
{
"first": "J",
"middle": [],
"last": "Bellegarda",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. ICASSP, pages II",
"volume": "",
"issue": "",
"pages": "677--680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Bellegarda. 1998. Exploiting both local and global con- straints for multispan statistical language modeling. In Proc. ICASSP, pages II:677-680.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Just-in-time language modeling",
"authors": [
{
"first": "A",
"middle": [],
"last": "Berger",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. ICASSP, pages II",
"volume": "",
"issue": "",
"pages": "705--708",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Berger and R. Miller. 1998. Just-in-time language modeling. In Proc. ICASSP, pages II:705-708.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Language model adaptation using mixtures and an exponentially decaying cache",
"authors": [
{
"first": "P",
"middle": [],
"last": "Clarkson",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Robinson",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. ICASSP, pages II",
"volume": "",
"issue": "",
"pages": "799--802",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Clarkson and A. Robinson. 1997. Language model adapta- tion using mixtures and an exponentially decaying cache. In Proc. ICASSP, pages II:799-802.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Switchboard: telephone speech corpus for research and development",
"authors": [
{
"first": "J",
"middle": [],
"last": "Godfrey",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Holliman",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mcdaniel",
"suffix": ""
}
],
"year": 1992,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "517--520",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Godfrey, E. Holliman, and J. McDaniel. 1992. Switchboard: telephone speech corpus for research and development. In Proc. ICASSP, pages I:517-520.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Transforming out-of-domain estimates to improve in-domain language models",
"authors": [
{
"first": "R",
"middle": [],
"last": "Iyer",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ostendorf",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. Eurospeech",
"volume": "4",
"issue": "",
"pages": "1975--1978",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Iyer and M. Ostendorf. 1997. Transforming out-of-domain estimates to improve in-domain language models. In Proc. Eurospeech, volume 4, pages 1975-1978.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Relevance weighting for combining multi-domain data for n-gram language modeling",
"authors": [
{
"first": "R",
"middle": [],
"last": "Iyer",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ostendorf",
"suffix": ""
}
],
"year": 1999,
"venue": "Computer Speech and Language",
"volume": "13",
"issue": "3",
"pages": "267--282",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Iyer and M. Ostendorf. 1999. Relevance weighting for combining multi-domain data for n-gram language model- ing. Computer Speech and Language, 13(3):267-282.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Selecting articles from the language model training corpus",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klakow",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. ICASSP, pages III",
"volume": "",
"issue": "",
"pages": "1695--1698",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Klakow. 2000. Selecting articles from the language model training corpus. In Proc. ICASSP, pages III:1695-1698.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "IBM Switchboard progress and evaluation site report",
"authors": [
{
"first": "F",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 1995,
"venue": "LVCSR Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Liu et al. 1995. IBM Switchboard progress and evaluation site report. In LVCSR Workshop, Gaithersburg, MD. Na- tional Institute of Standards and Technology.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Improved topic-dependent language modeling using information retrieval techniques",
"authors": [
{
"first": "M",
"middle": [],
"last": "Mahajan",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Beeferman",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "541--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Mahajan, D. Beeferman, and D. Huang. 1999. Improved topic-dependent language modeling using information re- trieval techniques. In Proc. ICASSP, pages I:541-544.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adaptive topic-dependent language modeling using word-based varigrams",
"authors": [
{
"first": "S",
"middle": [],
"last": "Martin",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. Eurospeech",
"volume": "3",
"issue": "",
"pages": "1447--1450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Martin et al. 1997. Adaptive topic-dependent language modeling using word-based varigrams. In Proc. Eurospeech, pages 3:1447-1450.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The meeting project at ICSI",
"authors": [
{
"first": "N",
"middle": [],
"last": "Morgan",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. Conf. on Human Language Technology",
"volume": "",
"issue": "",
"pages": "246--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Morgan et al. 2001. The meeting project at ICSI. In Proc. Conf. on Human Language Technology, pages 246-252.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A maximum entropy part-of-speech tagger",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. Empirical Methods in Natural Language Processing Conference",
"volume": "",
"issue": "",
"pages": "133--141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Ratnaparkhi. 1996. A maximum entropy part-of-speech tag- ger. In Proc. Empirical Methods in Natural Language Pro- cessing Conference, pages 133-141.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A class based approach to domain adaptation and constraint integration for empirical m-gram models",
"authors": [
{
"first": "K",
"middle": [],
"last": "Ries",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. Eurospeech",
"volume": "4",
"issue": "",
"pages": "1983--1986",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Ries. 1997. A class based approach to domain adaptation and constraint integration for empirical m-gram models. In Proc. Eurospeech, pages 4:1983-1986.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Language modeling with limited domain data",
"authors": [
{
"first": "A",
"middle": [],
"last": "Rudnicky",
"suffix": ""
}
],
"year": 1995,
"venue": "Proc. ARPA Spoken Language Technology Workshop",
"volume": "",
"issue": "",
"pages": "66--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Rudnicky. 1995. Language modeling with limited domain data. In Proc. ARPA Spoken Language Technology Work- shop, pages 66-69.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Text normalization with varied data sources for conversational speech language modeling",
"authors": [
{
"first": "S",
"middle": [],
"last": "Schwarm",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ostendorf",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "789--792",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Schwarm and M. Ostendorf. 2002. Text normalization with varied data sources for conversational speech language mod- eling. In Proc. ICASSP, pages I:789-792.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Normalization of non-standard words",
"authors": [
{
"first": "R",
"middle": [],
"last": "Sproat",
"suffix": ""
}
],
"year": 2001,
"venue": "Computer Speech and Language",
"volume": "15",
"issue": "3",
"pages": "287--333",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Sproat et al. 2001. Normalization of non-standard words. Computer Speech and Language, 15(3):287-333.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Hub-5 conversational speech transcription system",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. NIST Speech Transcription Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Stolcke et al. 2000. The SRI March 2000 Hub-5 conver- sational speech transcription system. In Proc. NIST Speech Transcription Workshop.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Entropy-based pruning of backoff language models",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. DARPA Broadcast News Transcription and Understanding Workshop",
"volume": "",
"issue": "",
"pages": "270--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Stolcke. 1998. Entropy-based pruning of backoff language models. In Proc. DARPA Broadcast News Transcription and Understanding Workshop, pages 270-274.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "LM95 Project Report: Fast training and portability",
"authors": [
{
"first": "M",
"middle": [],
"last": "Weintraub",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Weintraub et al. 1996. LM95 Project Report: Fast training and portability. Technical Report 1, Center for Language and Speech Processing, Johns Hopkins University, Baltimore.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving trigram language modeling with the world wide web",
"authors": [
{
"first": "X",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "533--536",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Zhu and R. Rosenfeld. 2001. Improving trigram language modeling with the world wide web. In Proc. ICASSP, pages I:533-536.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"text": "HUB5 (eval 2001) WER results using standard and class-based mixtures.",
"content": "<table><tr><td>LM Data Sources</td><td colspan=\"2\">Std. mix Class mix</td></tr><tr><td>Baseline CTS</td><td>38.9%</td><td>38.9%</td></tr><tr><td>+ 150M BN</td><td>37.9%</td><td>37.8%</td></tr><tr><td>+ 66M Web (Random)</td><td>38.6%</td><td>38.3%</td></tr><tr><td>+ 61M Web</td><td>37.7%</td><td>37.6%</td></tr><tr><td>+ 191M Web</td><td>37.6%</td><td>37.4%</td></tr><tr><td>+ 150M BN + 61M Web</td><td>37.7%</td><td>37.3%</td></tr><tr><td>+ 150M BN + 191M Web</td><td>37.5%</td><td>37.2%</td></tr><tr><td>+ 150M BN + 61M Web2</td><td>37.7%</td><td>37.3%</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF1": {
"html": null,
"text": "Meetings results (WER).",
"content": "<table><tr><td>LM Data Sources</td><td colspan=\"2\">Std. mix Class mix</td></tr><tr><td>Baseline</td><td/><td>38.2%</td></tr><tr><td>+ 0.2M Meetings</td><td>37.2%</td><td>36.9%</td></tr><tr><td>+ 28M Web (Topic)</td><td>36.9%</td><td>36.7%</td></tr><tr><td>+ Meetings + Web (Topic)</td><td>36.2%</td><td>35.9%</td></tr></table>",
"type_str": "table",
"num": null
}
}
}
}