{
"paper_id": "P98-1028",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:16:37.180199Z"
},
"title": "Beyond N-Grams: Can Linguistic Sophistication Improve Language Modeling?",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"postCode": "21218",
"region": "Md",
"country": "USA"
}
},
"email": "brill@cs.jhu.edu"
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"postCode": "21218",
"region": "Md",
"country": "USA"
}
},
"email": "rflorian@cs.jhu.edu"
},
{
"first": "John",
"middle": [
"C"
],
"last": "Henderson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"postCode": "21218",
"region": "Md",
"country": "USA"
}
},
"email": ""
},
{
"first": "Lidia",
"middle": [],
"last": "Mangu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"postCode": "21218",
"region": "Md",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "It seems obvious that a successful model of natural language would incorporate a great deal of both linguistic and world knowledge. Interestingly, state-of-the-art language models for speech recognition are based on a very crude linguistic model, namely conditioning the probability of a word on a small fixed number of preceding words. Despite many attempts to incorporate more sophisticated information into the models, the n-gram model remains the state of the art, used in virtually all speech recognition systems. In this paper we address the question of whether there is hope of improving language modeling by incorporating more sophisticated linguistic and world knowledge, or whether the n-grams are already capturing the majority of the information that can be employed.",
"pdf_parse": {
"paper_id": "P98-1028",
"_pdf_hash": "",
"abstract": [
{
"text": "It seems obvious that a successful model of natural language would incorporate a great deal of both linguistic and world knowledge. Interestingly, state-of-the-art language models for speech recognition are based on a very crude linguistic model, namely conditioning the probability of a word on a small fixed number of preceding words. Despite many attempts to incorporate more sophisticated information into the models, the n-gram model remains the state of the art, used in virtually all speech recognition systems. In this paper we address the question of whether there is hope of improving language modeling by incorporating more sophisticated linguistic and world knowledge, or whether the n-grams are already capturing the majority of the information that can be employed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "N-gram language models are very crude linguistic models that attempt to capture the constraints of language by simply conditioning the probability of a word on a small fixed number of predecessors. It is rather frustrating to language engineers that the n-gram model is the workhorse of virtually every speech recognition system. Over the years, there have been many attempts to improve language models by utilizing linguistic information, but these methods have not been able to achieve significant improvements over the n-gram.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "The insufficiency of Markov models has been known for many years (see Chomsky (1956)). It is easy to construct examples where a trigram model fails and a more sophisticated model could succeed. For instance, in the sentence: The dog on the hill barked, the word barked would be assigned a low probability by a trigram model. However, a linguistic model could determine that dog is the head of the noun phrase preceding barked and therefore assign barked a high probability, since P(barked|dog) is high.",
"cite_spans": [
{
"start": 70,
"end": 84,
"text": "Chomsky (1956)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Using different sources of rich linguistic information will help speech recognition if the phenomena they capture are prevalent and they involve instances where the recognizer makes errors. 1 In this paper we first give a brief overview of some recent attempts at incorporating linguistic information into language models. Then we discuss experiments which give some insight into what aspects of language hold most promise for improving the accuracy of speech recognizers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "There is a continuing push among members of the speech recognition community to remedy the weaknesses of linguistically impoverished n-gram language models. It is widely believed that incorporating linguistic concepts can lead to more accurate language models and more accurate speech recognizers. One of the first attempts at linguistically-based modeling used probabilistic context-free grammars (PCFGs) directly to compute language modeling probabilities (Jelinek(1992)). Another approach retrieved n-gram statistics from a handwritten PCFG and combined those statistics with traditional n-grams elicited from a corpus (Jurafsky(1995)). Research has been carried out in adaptively modifying language models using knowledge of the subject matter being discussed (Seymore(1997)). This research depends on the prevalence of jargon and domain-specific language. [Footnote 1: This is one of the problems with perplexity as a measure of language model quality: if the better model simply assigns higher probability to the elements the recognizer already gets correct, the model will look better in terms of perplexity, but will do nothing to improve recognizer accuracy.]",
"cite_spans": [
{
"start": 458,
"end": 472,
"text": "(Jelinek(1992)",
"ref_id": "BIBREF6"
},
{
"start": 622,
"end": 637,
"text": "(Jurafsky(1995)",
"ref_id": "BIBREF8"
},
{
"start": 764,
"end": 778,
"text": "(Seymore(1997)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistically-Based Models",
"sec_num": "1"
},
{
"text": "Linguistically motivated language models were investigated for two consecutive years at the Summer Speech Recognition Workshop, held at Johns Hopkins University. In 1995 experiments were run adding part-of-speech (POS) tags to the language models (Brill(1996)). In the 1996 Summer Speech Recognition Workshop, recognizer improvements were attempted by exploiting the long-distance dependencies provided by a dependency parse (Chelba(1997)). The goal was to exploit the predictive power of predicate-argument structures found in parse trees. In Della Pietra(1994) and Fong(1995), link grammars were used, again in an attempt to improve the language model by providing it with long-distance dependencies not captured in the n-gram statistics. 2",
"cite_spans": [
{
"start": 247,
"end": 259,
"text": "(Brill(1996)",
"ref_id": "BIBREF0"
},
{
"start": 425,
"end": 438,
"text": "(Chelba(1997)",
"ref_id": "BIBREF1"
},
{
"start": 550,
"end": 562,
"text": "Pietra(1994)",
"ref_id": "BIBREF3"
},
{
"start": 567,
"end": 577,
"text": "Fong(1995)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistically-Based Models",
"sec_num": "1"
},
{
"text": "Although much work has been done exploring how to create linguistically-based language models, improvement in speech recognizer accuracy has been elusive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistically-Based Models",
"sec_num": "1"
},
{
"text": "In an attempt to gain insight into what linguistic knowledge we should be exploring to improve language models for speech recognition, we ran experiments where people tried to improve the output of speech recognition systems and then recorded what types of knowledge they used in doing so. We hoped both to assess how much gain might be expected from very sophisticated models and to determine just what information sources could contribute to this gain. People were given the ordered list of the ten most likely hypotheses for an utterance according to the recognizer. [Footnote 2: For a more comprehensive review of the historical involvement of natural language parsing in language modeling, see Stolcke(1997).] They were then",
"cite_spans": [
{
"start": 699,
"end": 712,
"text": "Stolcke(1997)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Framework",
"sec_num": "2"
},
{
"text": "asked to choose from the ten-best list the hypothesis that they thought would have the lowest word error rate, in other words, to try to determine which hypothesis is closest to the truth. Often, the truth is not present in the 10-best list. An example 5-best list from the Wall Street Journal corpus is shown in Figure 1. Four subjects were used in this experiment, and each subject was presented with 75 10-best lists from three different speech recognition systems (225 instances total per subject).",
"cite_spans": [],
"ref_spans": [
{
"start": 313,
"end": 321,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Framework",
"sec_num": "2"
},
{
"text": "From this experiment, we hoped to gauge the upper bound on how much we could improve upon the state of the art by using very rich models. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Framework",
"sec_num": "2"
},
{
"text": "For our experiments, we used three different speech recognizers, trained respectively on Switchboard (spontaneous speech), Broadcast News (recorded news broadcasts) and Wall Street Journal data. 4 The word error rates of the recognizers for each corpus are shown in the first line of Table 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 284,
"end": 291,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Framework",
"sec_num": "2"
},
{
"text": "The human subjects were presented with the ten-best lists. Sentences within each ten-best list were aligned to make it easier to compare them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Framework",
"sec_num": "2"
},
{
"text": "In addition to choosing the most appropriate selection from the 10-best list, subjects were also allowed to posit a string not in the list by editing any of the strings in the 10-best list in any way they chose. For each sample, subjects were asked to determine what types of information were used in deciding. This was done by presenting the subjects with a set of check boxes, and asking them to check all that applied. A list of the options presented to the human can be found in Figure 2. Subjects were provided with a detailed explanation, as well as examples, for each of these options. 5",
"cite_spans": [],
"ref_spans": [
{
"start": 483,
"end": 491,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Experimental Framework",
"sec_num": "2"
},
{
"text": "The first question to ask is whether people are able to improve upon the speech recognizer's output by postprocessing the n-best lists. For each corpus we show (1) the recognizer's word error rate, (2) the oracle error rate, (3) the human error rate when choosing among the 10-best hypotheses (human selection) and (4) the human error rate when allowed to posit any word sequence (human edit).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Net Human Improvement",
"sec_num": "2"
},
{
"text": "The oracle error rate is the upper bound on how well anybody could do when restricted to choosing among the 10 best hypotheses: the oracle always chooses the string with the lowest word error rate. Note that if the human always picked the highest-ranking hypothesis, then her accuracy would be equivalent to that of the recognizer. Below we show the results for each corpus, averaged across the subjects. There are a number of interesting things to note about these results. First, they are quite encouraging, in that people are able to improve the output on all corpora. As the accuracy of the recognizer improves, the relative human improvement increases. While people can attain over three-quarters of the possible word error rate reduction over the recognizer on Wall Street Journal, they are only able to attain 25.9% of the possible reduction in Switchboard. This is probably attributable to two causes. First, the more varied the language in the corpus, the harder it is for a person to predict what was said. Second, the higher the recognizer word error rate, the less reliable the contextual cues on which the human relies to choose a lower error rate string. In Switchboard, over 40% of the words in the highest ranked hypothesis are wrong. The human is therefore basing her judgement on much less reliable contexts in Switchboard than in the much lower word error rate Wall Street Journal, resulting in less net improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Net Human Improvement",
"sec_num": "2"
},
{
"text": "For all three corpora, allowing the person to edit the output, as opposed to being limited to pick one of the ten highest ranked hypotheses, resulted in significant gains: over 50% for Switchboard and Broadcast News, and 30% for Wall Street Journal. This indicates that within the paradigm of n-best list postprocessing, one should strongly consider methods for editing, rather than simply choosing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Net Human Improvement",
"sec_num": "2"
},
{
"text": "In examining the relative gain over the recognizer that the human was able to achieve as a function of sentence length, for the three different corpora, we observed that the general trend is that the longer the sentence, the greater the net gain. This is because a longer sentence provides more cues, both syntactic and semantic, that can be used in choosing the highest quality word sequence. We also observed that, except in cases of very low oracle error rate, the more difficult the task, the lower the net human gain. So both across corpora and within each corpus, we find this relationship between the quality of recognizer output and the ability of a human to improve upon it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Net Human Improvement",
"sec_num": "2"
},
{
"text": "In discussions with the participants after they ran the experiment, it was determined that all participants essentially used the same strategy. When all hypotheses appeared to be equally bad, the highest-ranking hypothesis was chosen. This is a conservative strategy that will ensure that the person does no worse than the recognizer on these difficult cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Usefulness of Linguistic Information",
"sec_num": "3"
},
{
"text": "In other cases, people tried to use linguistic knowledge to pick a hypothesis they felt was better than the highest ranked hypothesis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Usefulness of Linguistic Information",
"sec_num": "3"
},
{
"text": "In Figure 2, we show the distribution of proficiencies used by the subjects. For each of the three corpora, we show the percentage of 10-best instances for which the person used each type of knowledge (along with the ranking of these percentages), as well as the net gain over the recognizer accuracy that people were able to achieve by using this information source. For all three corpora, the most common (and most useful) proficiency was closed class word choice, for example confusing the words 'in' and 'and', or confusing 'than' and 'that'. It is encouraging that although world knowledge was used frequently, there were many linguistic proficiencies that the person used as well. If only world knowledge accounted for the person's ability to improve upon the recognizer's output, then we might be faced with an AI-complete problem: speech recognizer improvements would be possible, but we would have to essentially solve AI before the benefit could be realized.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Usefulness of Linguistic Information",
"sec_num": "3"
},
{
"text": "One might conclude that although people were able to make significant improvements over the recognizer, we may still have to solve linguistics before these improvements could be realized by an actual computer system. However, we are encouraged that algorithms could be created that do quite well at mimicking a number of the proficiencies that contributed to the human's performance improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Usefulness of Linguistic Information",
"sec_num": "3"
},
{
"text": "For instance, determiner choice was a factor in roughly 25% of the examples for the Wall Street Journal. There already exist algorithms for choosing the proper determiner with fairly high accuracy (Knight(1994)). Many of the cases involved confusion between a relatively small set of choices: closed class word choice, determiner choice, and preposition choice. Methods already exist for choosing the proper word from a fixed set of possibilities based upon the context in which the word appears (e.g. Golding(1996)).",
"cite_spans": [
{
"start": 197,
"end": 210,
"text": "(Knight(1994)",
"ref_id": "BIBREF9"
},
{
"start": 502,
"end": 515,
"text": "Golding(1996)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Usefulness of Linguistic Information",
"sec_num": "3"
},
{
"text": "In this paper, we have shown that humans, by postprocessing speech recognizer output, can make significant improvements in accuracy over the recognizer. The improvements increase with the recognizer's accuracy, both within a particular corpus and across corpora. This demonstrates that there is still a great deal to gain without changing the recognizer's internal models, simply by operating on the recognizer's output. This is encouraging news, as it is typically a much simpler matter to do postprocessing than to attempt to integrate a knowledge source into the recognizer itself.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": null
},
{
"text": "We have presented a description of the proficiencies people used to make these improvements and how much each contributed to the person's success in improving over the recognizer accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": null
},
{
"text": "Many of the gains involved linguistic proficiencies that appear to be solvable (to a degree) using methods that have been developed recently in natural language processing. We hope that by homing in on the specific high-yield proficiencies that are amenable to being solved with current technology, we will finally advance beyond n-grams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": null
},
{
"text": "There are four primary foci of future work. First, we want to expand our study to include more people. Second, now that we have some picture of the proficiencies used, we would like to do a more refined study at a lower level of granularity by expanding the repertoire of proficiencies the person can choose from in describing her decision process. Third, we want to move from what to how: we now have some idea what proficiencies were used, and we would next like to establish, to the extent we can, how the human used them. Finally, we can only prove the validity of our claims by actually using what we have learned to improve speech recognition, which is our ultimate goal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A hidden tag model for language",
"authors": [
{
"first": "E",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Ristad",
"middle": [
"E"
],
"last": "",
"suffix": ""
},
{
"first": "Roukos",
"middle": [
"S"
],
"last": "",
"suffix": ""
}
],
"year": 1996,
"venue": "Research Notes",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brill E, Harris D, Lowe S, Luo X, Rao P, Ristad E and Roukos S. (1996). A hidden tag model for language. In \"Research Notes\", Center for Language and Speech Processing, The Johns Hopkins University. Chapter 2.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Structure and Performance of a Dependency Language Model",
"authors": [
{
"first": "C",
"middle": [],
"last": "Chelba",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Eagle",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Jimenez",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Mangu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Printz",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Ristad",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
},
{
"first": "Stolcke",
"middle": [
"A"
],
"last": "Wu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1997,
"venue": "Eurospeech '97",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chelba C, Eagle D, Jelinek F, Jimenez V, Khudanpur S, Mangu L, Printz H, Ristad E, Rosenfeld R, Stolcke A and Wu D. (1997) Structure and Performance of a Dependency Language Model. In Eurospeech '97. Rhodes, Greece.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Three models for the description of language",
"authors": [
{
"first": "N",
"middle": [],
"last": "Chomsky",
"suffix": ""
}
],
"year": 1956,
"venue": "IRE Trans. On Inform. Theory. IT",
"volume": "2",
"issue": "",
"pages": "113--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chomsky N. (1956) Three models for the description of language. IRE Trans. On Inform. Theory. IT-2, 113-124.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Inference and Estimation of a Long-Range Trigram Model",
"authors": [
{
"first": "Della",
"middle": [],
"last": "Pietra",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Della",
"middle": [],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Gillett",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Printz",
"middle": [
"H"
],
"last": "Tires",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the Second International Colloquium on Grammatical Inference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Della Pietra S, Della Pietra V, Gillett J, Lafferty J, Printz H and Ures L. (1994) Inference and Estimation of a Long-Range Trigram Model. In Proceedings of the Second International Colloquium on Grammatical Inference. Alicante, Spain.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning restricted probabilistic link grammars",
"authors": [
{
"first": "E",
"middle": [],
"last": "Fong",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1995,
"venue": "IJCAI Workshop on New Approaches to Learning for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fong E and Wu D. (1995) Learning restricted probabilistic link grammars. IJCAI Workshop on New Approaches to Learning for Natural Language Processing, Montreal.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Applying Winnow to Context-Sensitive Spelling Correction",
"authors": [
{
"first": "Golding",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of ICML '96",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Golding A and Roth D. (1996) Applying Winnow to Context-Sensitive Spelling Correction. In Proceedings of ICML '96.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Basic Methods of Probabilistic Context-Free Grammars",
"authors": [
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jelinek F, Lafferty J.D. and Mercer R.L. (1992) Basic Methods of Probabilistic Context-Free Grammars.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Speech Recognition and Understanding. Recent Advances, Trends, and Applications",
"authors": [],
"year": null,
"venue": "",
"volume": "75",
"issue": "",
"pages": "345--360",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In \"Speech Recognition and Understanding. Recent Advances, Trends, and Applications\", Volume F75, 345-360. Berlin: Springer-Verlag.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Using a stochastic context-free grammar as a language model for speech recognition",
"authors": [
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Wooters",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Segal",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Fosler",
"suffix": ""
},
{
"first": "Tajchman",
"middle": [
"G"
],
"last": "",
"suffix": ""
},
{
"first": "Morgan",
"middle": [
"N"
],
"last": "",
"suffix": ""
}
],
"year": 1995,
"venue": "ICASSP '95",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jurafsky D., Wooters C, Segal J, Stolcke A, Fosler E, Tajchman G and Morgan N. (1995) Using a stochastic context-free grammar as a language model for speech recognition. In ICASSP '95.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automated Postediting of Documents",
"authors": [
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Chandler",
"middle": [
"I"
],
"last": "",
"suffix": ""
}
],
"year": 1994,
"venue": "Twelfth National Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Knight K and Chandler I. (1994). Automated Postediting of Documents. Proceedings, Twelfth National Conference on Artificial Intelligence.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Using Story Topics for Language Model Adaptation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Seymore",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
}
],
"year": 1997,
"venue": "Eurospeech '97",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seymore K. and Rosenfeld R. (1997) Using Story Topics for Language Model Adaptation. In Eurospeech '97. Rhodes, Greece.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Linguistic Knowledge and Empirical Methods in Speech Recognition",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 1997,
"venue": "In AI Magazine",
"volume": "18",
"issue": "4",
"pages": "25--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stolcke A. (1997) Linguistic Knowledge and Empirical Methods in Speech Recognition. AI Magazine, Volume 18, No. 4, 25-31.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Analysis of Proficiencies Used and their Effectiveness",
"num": null
},
"TABREF1": {
"num": null,
"type_str": "table",
"text": "In the following table, we show the results as the percentage of the difference between the recognizer and oracle word error rates that the humans are able to attain. In other words, when the human is not restricted to the 10-best list, she is able to advance 75.5% of the way from the recognizer word error rate to the oracle word error rate on the Wall Street Journal.",
"html": null,
"content": "<table><tr><td/><td>Switchboard</td><td>Broadcast</td><td>Wall Street</td></tr><tr><td/><td/><td>News</td><td>Journal</td></tr><tr><td>Recognizer</td><td>43.9%</td><td>27.2%</td><td>13.2%</td></tr><tr><td>Oracle</td><td>32.7%</td><td>22.6%</td><td>7.9%</td></tr><tr><td>Human</td><td>42.0%</td><td>25.9%</td><td>10.1%</td></tr><tr><td>Selection</td><td/><td/><td/></tr><tr><td>Human Edit</td><td>41.0%</td><td>25.2%</td><td>9.2%</td></tr><tr><td>Table 1</td><td colspan=\"3\">Word Error Rate: Recognizer,</td></tr><tr><td colspan=\"2\">Oracle and Human</td><td/><td/></tr><tr><td/><td>Switchboard</td><td>Broadcast</td><td>Wall Street</td></tr><tr><td/><td/><td>News</td><td>Journal</td></tr><tr><td>Human</td><td>17.0%</td><td>28.3%</td><td>58.5%</td></tr><tr><td>Selection</td><td/><td/><td/></tr><tr><td>Human Edit</td><td>25.9%</td><td>43.5%</td><td>75.5%</td></tr></table>"
},
"TABREF2": {
"num": null,
"type_str": "table",
"text": "",
"html": null,
"content": "<table/>"
},
"TABREF3": {
"num": null,
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td>(1) people</td><td>consider</td><td>what</td><td>they</td><td>want</td><td>but</td><td>we</td><td>won't</td><td>comment</td><td>he</td><td>said</td></tr><tr><td>(2) people</td><td>to say</td><td>what</td><td>they</td><td>want</td><td>but</td><td>we</td><td>won't</td><td>comment</td><td>he</td><td>said</td></tr><tr><td>(3) people</td><td>can say</td><td>what</td><td>they</td><td>want</td><td>but</td><td>we</td><td>won't</td><td>comment</td><td>he</td><td>said</td></tr><tr><td>(4) people</td><td>consider</td><td>what</td><td>they</td><td>want</td><td>them</td><td>we</td><td>won't</td><td>comment</td><td>he</td><td>said</td></tr><tr><td>(5) people</td><td>to say</td><td>what</td><td>they</td><td>want</td><td>them</td><td>we</td><td>won't</td><td>comment</td><td>he</td><td>said</td></tr><tr><td colspan=\"4\">Figure 1 A sample 5-Switchboard</td><td/><td colspan=\"3\">Broadcast News</td><td colspan=\"3\">Wall Street Journal</td></tr><tr><td/><td/><td>% of time</td><td colspan=\"2\">Absolute</td><td colspan=\"2\">% of time</td><td>Absolute</td><td colspan=\"2\">% of time</td><td>Absolute</td></tr><tr><td/><td/><td>clicked</td><td colspan=\"2\">WER</td><td>clicked</td><td/><td>WER</td><td>clicked</td><td/><td>WER</td></tr><tr><td/><td/><td/><td colspan=\"2\">reduction</td><td/><td/><td>reduction</td><td/><td/><td>reduction</td></tr><tr><td/><td/><td/><td colspan=\"2\">using this</td><td/><td/><td>using this</td><td/><td/><td>using this</td></tr><tr><td>Argument Structure</td><td/><td>1.3 (14)</td><td colspan=\"2\">0.18 (10)</td><td>2.0(12)</td><td/><td>0.10(11)</td><td>5.3 (12)</td><td/><td>0.40(8)</td></tr><tr><td colspan=\"2\">Closed Class Word Choice</td><td>25.7 (1)</td><td colspan=\"2\">1.62 (1)</td><td>40.2 (1)</td><td/><td>1.14 (1)</td><td>46.4 (1)</td><td/><td>2.40 (1)</td></tr><tr><td colspan=\"2\">Complete Sent. Vs. 
Not</td><td>16.5 (2)</td><td colspan=\"2\">1.03 (2)</td><td>11.0 (6)</td><td/><td>0.32 (8)</td><td>29.1 (2)</td><td/><td>1.52 (2)</td></tr><tr><td>Determiner Choice</td><td/><td>1.7 (12)</td><td colspan=\"2\">0.06 (13)</td><td>17.6 (3)</td><td/><td>0.41 (5)</td><td>24.8 (3)</td><td/><td>0.93 (5)</td></tr><tr><td colspan=\"2\">IdiomsdCommonPhrases</td><td>3.5 (6)</td><td colspan=\"2\">0.19 (9)</td><td>6.6 (8)</td><td/><td>0.35 (6)</td><td>8.6 (8)</td><td/><td>0.57 (7)</td></tr><tr><td>Modal Structure</td><td/><td>2.6 (8)</td><td colspan=\"2\">0.13 (11)</td><td>3.0 (11)</td><td/><td>0.09 (12)</td><td>2.3 (15)</td><td/><td>0.04 (14)</td></tr><tr><td colspan=\"2\">Number Agreement</td><td>4.4 (5)</td><td colspan=\"2\">0.32 (8)</td><td>3.7 (10)</td><td/><td>0.22 (9)</td><td>4.0 (14)</td><td/><td>0.08 (13)</td></tr><tr><td colspan=\"2\">Open Class Word Choice</td><td>8.3 (3)</td><td colspan=\"2\">0.71 (3)</td><td>19.3 (2)</td><td/><td>0.60 (2)</td><td>9.6 (7)</td><td/><td>0.40 (8)</td></tr><tr><td>Parallel Structure</td><td/><td>0.9 (15)</td><td colspan=\"2\">0.39 (6)</td><td>0.7 (15)</td><td/><td>0.04 (15)</td><td>5.6 (10)</td><td/><td>0.25 (11)</td></tr><tr><td colspan=\"2\">Part of Speech Confusion</td><td>2.2 (9)</td><td colspan=\"2\">0.06 (13)</td><td>2.0 (12)</td><td/><td>0.07 (13)</td><td>7.6 (9)</td><td/><td>0.04 (15)</td></tr><tr><td colspan=\"2\">Pred-Argument/Semantic</td><td>2.2 (9)</td><td colspan=\"2\">0.13 (11)</td><td>2.0 (12)</td><td/><td>0.06 (14)</td><td>5.6 (10)</td><td/><td>0.34 (10)</td></tr><tr><td>Agreement</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Preposition Choice</td><td/><td>3.5 (6)</td><td colspan=\"2\">0.58 (5)</td><td>17.3 (4)</td><td/><td>0.44 (4)</td><td>15.9 (5)</td><td/><td>0.82 (6)</td></tr><tr><td>Tense Agreement</td><td/><td>1.7 (12)</td><td colspan=\"2\">0.06 (13)</td><td>4.0 (9)</td><td/><td>0.16 (10)</td><td>5.3 (12)</td><td/><td>0.13 (12)</td></tr><tr><td>Topic</td><td/><td>2.2 (9)</td><td 
colspan=\"2\">0.39 (6)</td><td>9.3 (7)</td><td/><td>0.34 (7)</td><td>15.2 (6)</td><td/><td>1.03 (4)</td></tr><tr><td>World Knowledge</td><td/><td>6.1 (4)</td><td colspan=\"2\">0.65 (4)</td><td>12.3 (5)</td><td/><td>0.57 (3)</td><td>19.5 (4)</td><td/><td>1.35 (3)</td></tr></table>"
}
}
}
}