{
"paper_id": "P02-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:30:33.559747Z"
},
"title": "Pronunciation Modeling for Improved Spelling Correction",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {
"settlement": "Stanford",
"postCode": "94305",
"region": "CA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Robert",
"middle": [
"C"
],
"last": "Moore",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {
"addrLine": "One Microsoft Way",
"settlement": "Redmond",
"postCode": "98052",
"region": "WA",
"country": "USA"
}
},
"email": ""
}
],
"year": "2002",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a method for incorporating word pronunciation information in a noisy channel model for spelling correction. The proposed method builds an explicit error model for word pronunciations. By modeling pronunciation similarities between words we achieve a substantial performance improvement over the previous best performing models for spelling correction.",
"pdf_parse": {
"paper_id": "P02-1019",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a method for incorporating word pronunciation information in a noisy channel model for spelling correction. The proposed method builds an explicit error model for word pronunciations. By modeling pronunciation similarities between words we achieve a substantial performance improvement over the previous best performing models for spelling correction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Spelling errors are generally grouped into two classes (Kukich, 1992): typographic and cognitive. Cognitive errors occur when the writer does not know how to spell a word. In these cases the misspelling often has the same pronunciation as the correct word (for example, writing latex as latecks). Typographic errors are mostly errors related to the keyboard, e.g., substitution or transposition of two letters because their keys are close on the keyboard. Damerau (1964) found that 80% of misspelled words that are non-word errors are the result of a single insertion, deletion, substitution, or transposition of letters. Many of the early algorithms for spelling correction are based on the assumption that the correct word differs from the misspelling by exactly one of these operations (Kernighan et al., 1990; Church and Gale, 1991; Mays et al., 1991).",
"cite_spans": [
{
"start": 55,
"end": 69,
"text": "(Kukich, 1992",
"ref_id": "BIBREF6"
},
{
"start": 458,
"end": 472,
"text": "Damerau (1964)",
"ref_id": "BIBREF3"
},
{
"start": 797,
"end": 821,
"text": "Kernighan et al., 1990;",
"ref_id": "BIBREF7"
},
{
"start": 822,
"end": 844,
"text": "Church and Gale, 1991;",
"ref_id": "BIBREF1"
},
{
"start": 845,
"end": 872,
"text": "Mays et al., 1991)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "By estimating probabilities or weights for the different edit operations, conditioning on the left and right context for insertions and deletions, and allowing multiple edit operations, high spelling correction accuracy has been achieved. At ACL 2000, Brill and Moore (2000) introduced a new error model, allowing generic string-to-string edits. This model reduced the error rate of the best previous model by nearly 50%. It proved advantageous to model substitutions of up to 5-letter sequences (e.g., ent being mistyped as ant, ph as f, al as le, etc.). This model deals with phonetic errors significantly better than previous models since it allows a much larger context size.",
"cite_spans": [
{
"start": 254,
"end": 276,
"text": "Brill and Moore (2000)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, this model makes residual errors, many of which have to do with word pronunciation. For example, the following are triples of misspelling, correct word, and (incorrect) guess that the Brill and Moore model made: edelvise / edelweiss / advise; bouncie / bouncy / bounce; latecks / latex / lacks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work we take the approach of modeling phonetic errors explicitly by building a separate error model for phonetic errors. More specifically, we build two different error models using the Brill and Moore learning algorithm. One of them is a letter-based model, which is exactly the Brill and Moore model trained on a similar dataset. The other is a phone-sequence-to-phone-sequence error model trained on the same data as the first model, but using the pronunciations of the correct words and the estimated pronunciations of the misspellings to learn phone-sequence-to-phone-sequence edits and estimate their probabilities. At classification time, N-best list predictions of the two models are combined using a log-linear model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A requirement for our model is the availability of a letter-to-phone model that can generate pronunciations for misspellings. We build a letter-to-phone model automatically from a dictionary. The rest of the paper is structured as follows: Section 2 describes the Brill and Moore model and briefly describes how we use it to build our error models. Section 3 presents our letter-to-phone model, which is the result of a series of improvements on a previously proposed N-gram letter-to-phone model (Fisher, 1999) . Section 4 describes the training and test phases of our algorithm in more detail and reports on experiments comparing the new model to the Brill and Moore model. Section 5 contains conclusions and ideas for future work.",
"cite_spans": [
{
"start": 496,
"end": 510,
"text": "(Fisher, 1999)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many statistical spelling correction methods can be viewed as instances of the noisy channel model. The misspelling of a word is viewed as the result of corruption of the intended word as it passes through a noisy communications channel. The task of spelling correction is a task of finding, for a misspelling w, a correct word r in D, where D is a given dictionary and r is the most probable word to have been garbled into w. Equivalently, the problem is to find a word r for which",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brill and Moore Noisy Channel Spelling Correction Model",
"sec_num": "2"
},
{
"text": "P(r)P(w|r) / P(w) is maximized. Since the denominator is constant, this is the same as maximizing P(r)P(w|r). In the terminology of noisy channel modeling, the distribution P(r) is referred to as the source model, and the distribution P(w|r) is the error or channel model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P(r|w)",
"sec_num": null
},
{
"text": "Typically, spelling correction models are not used for identifying misspelled words, only for proposing corrections for words that are not found in a dictionary. Notice, however, that the noisy channel model offers the possibility of correcting misspellings without a dictionary, as long as sufficient data is available to estimate the source model factors. For example, if r = Osama bin Laden and w = Ossama bin Laden, the model will predict that the correct spelling r is more likely than the incorrect spelling w, provided that P(w) / P(r) < P(w|r) / P(w|w),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P(r|w)",
"sec_num": null
},
{
"text": "where P(w|r) / P(w|w) would be approximately the odds of doubling the s in Osama. We do not pursue this here, however. Brill and Moore (2000) present an improved error model for noisy channel spelling correction that goes beyond single insertions, deletions, substitutions, and transpositions. The model has a set of parameters P(alpha -> beta) for letter sequences of lengths up to 5. An extension they presented has refined parameters P(alpha -> beta | PSN) which also depend on the position of the substitution in the source word. According to this model, the misspelling is generated by the correct word as follows: first, a person picks a partition of the correct word and then types each partition independently, possibly making some errors. The probability for the generation of the misspelling will then be the product of the substitution probabilities for each of the parts in the partition. For example, if a person chooses to type the word bouncy and picks the partition boun cy, the probability that she mistypes this word as boun cie will be P(boun -> boun)P(cy -> cie). The probability P(w|r) is estimated as the maximum over all partitions of r of the probability that w is generated from r given that partition.",
"cite_spans": [
{
"start": 118,
"end": 140,
"text": "Brill and Moore (2000)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "P(r|w)",
"sec_num": null
},
{
"text": "We use this method to build an error model for letter strings and a separate error model for phone sequences. Two models are learned: one model LTR (standing for \"letter\") has a set of substitution probabilities P(alpha -> beta) where alpha and beta are character strings, and another model PH (for \"phone\") has a set of substitution probabilities P(alpha -> beta) where alpha and beta are phone sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P(r|w)",
"sec_num": null
},
{
"text": "We learn these two models on the same data set of misspellings and correct words. For LTR, we use the training data as is and run the Brill and Moore training algorithm over it to learn the parameters of LTR. For PH, we convert the misspelling/correct-word pairs into pairs of pronunciations of the misspelling and the correct word, and run the Brill and Moore training algorithm over that.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P(r|w)",
"sec_num": null
},
{
"text": "For PH, we need word pronunciations for the correct words and the misspellings. As the misspellings are certainly not in the dictionary, we need a letter-to-phone converter that generates possible pronunciations for them. The next section describes our letter-to-phone model. Table 1 (Text-to-phone conversion data): NETtalk: Training 14,876 words, Test 4,964 words. MS Speech: Training 106,650 words, Test 30,003 words.",
"cite_spans": [],
"ref_spans": [
{
"start": 274,
"end": 366,
"text": "Set Words Set Words Training 14,876 Training 106,650 Test 4,964 Test 30,003 Table 1",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "P(r|w)",
"sec_num": null
},
{
"text": "3 Letter-to-Phone Model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MS Speech",
"sec_num": null
},
{
"text": "There has been a lot of research on machine learning methods for letter-to-phone conversion. High accuracy is achieved, for example, by using neural networks (Sejnowski and Rosenberg, 1987) , decision trees (Jiang et al., 1997) , and N-grams (Fisher, 1999) . We use a modified version of the method proposed by Fisher, incorporating several extensions that result in substantial gains in performance. In this section we first describe how we do alignment at the phone level, then describe Fisher's model, and finally present our extensions and the resulting letter-to-phone conversion accuracy. The machine learning algorithms for converting text to phones usually start off with training data in the form of a set of examples, consisting of letters in context and their corresponding phones (classifications). Pronunciation dictionaries are the major source of training data for these algorithms, but they do not contain information for correspondences between letters and phones directly; they have correspondences between sequences of letters and sequences of phones.",
"cite_spans": [
{
"start": 158,
"end": 189,
"text": "(Sejnowski and Rosenberg, 1987)",
"ref_id": "BIBREF9"
},
{
"start": 207,
"end": 227,
"text": "(Jiang et al., 1997)",
"ref_id": "BIBREF5"
},
{
"start": 243,
"end": 257,
"text": "(Fisher, 1999)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MS Speech",
"sec_num": null
},
{
"text": "A first step before running a machine learning algorithm on a dictionary is, therefore, alignment between individual letters and phones. The alignment algorithm is dependent on the phone set used. We experimented with two dictionaries, the NETtalk dataset and the Microsoft Speech dictionary. Statistics about them and how we split them into training and test sets are shown in Table 1 . The NETtalk dataset contains information for phone-level alignment, and we used it to test our algorithm for automatic alignment. The Microsoft Speech dictionary is not aligned at the phone level, but it is much bigger and is the dictionary we used for learning our final letter-to-phone model.",
"cite_spans": [],
"ref_spans": [
{
"start": 378,
"end": 385,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "MS Speech",
"sec_num": null
},
{
"text": "The NETtalk dictionary has been designed so that each letter corresponds to at most one phone, so a word is always longer than, or the same length as, its pronunciation. The alignment algorithm has to decide which of the letters correspond to phones and which ones correspond to nothing (i.e., are silent). For example, the entry in NETtalk (when we remove the empties, which contain information for phone-level alignment) for the word able is ABLE e b L. The correct alignment is A/e B/b L/L E/-, where - denotes the empty phone. In the Microsoft Speech dictionary, on the other hand, each letter can naturally correspond to 0, 1, or 2 phones. For example, the entry in that dictionary for able is ABLE ey b ax l. The correct alignment is A/ey B/b L/ax&l E/-. If we also allowed two letters as a group to correspond to two phones as a group, the correct alignment might be A/ey B/b LE/ax&l, but that would make it harder for the machine learning algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MS Speech",
"sec_num": null
},
{
"text": "Our alignment algorithm is an implementation of hard EM (Viterbi training) that starts off with heuristically estimated initial parameters for P(phones | letter) and, at each iteration, finds the most likely alignment for each word given the parameters and then re-estimates the parameters, collecting counts from the obtained alignments. Here phones ranges over sequences of 0 (empty), 1, and 2 phones for the Microsoft Speech dictionary and 0 or 1 phones for NETtalk. The parameters P(phones | letter) were initialized by a method similar to the one proposed in (Daelemans and van den Bosch, 1996) . Word frequencies were not taken into consideration here, as the dictionary contains no frequency information.",
"cite_spans": [
{
"start": 559,
"end": 594,
"text": "(Daelemans and van den Bosch, 1996)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MS Speech",
"sec_num": null
},
{
"text": "The method we started with was the N-gram model of Fisher (1999) . From training data, it learns rules that predict the pronunciation of a letter based on m letters of left and n letters of right context. The rules are of the following form:",
"cite_spans": [
{
"start": 51,
"end": 64,
"text": "Fisher (1999)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Initial Letter-to-Phone Model",
"sec_num": "3.1"
},
{
"text": "L_m T R_n -> p_1 P(p_1), p_2 P(p_2), ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initial Letter-to-Phone Model",
"sec_num": "3.1"
},
{
"text": "Here L_m stands for a sequence of m letters to the left of T, and R_n is a sequence of n letters to the right. The number of letters in the context to the left and right varies; we used from 0 to 3 letters on each side. For example, two rules learned for the letter B were:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initial Letter-to-Phone Model",
"sec_num": "3.1"
},
{
"text": "B T -> - 1.0, and B -> b ..., - 0...,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initial Letter-to-Phone Model",
"sec_num": "3.1"
},
{
"text": "meaning that in the first context the letter B is silent with probability \u00bd \u00bc, and in the second it is pronounced as with probability and is silent with probability \u00bc .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initial Letter-to-Phone Model",
"sec_num": "3.1"
},
{
"text": "Training this model consists of collecting counts for the contexts that appear in the data with the selected window size to the left and right. We collected counts for all configurations L_m T R_n for m in {0, 1, 2, 3} and n in {0, 1, 2, 3} that occurred in the data. The model is applied by choosing for each letter T the most probable translation as predicted by the most specific rule for the context of occurrence of the letter. For example, if we want to find how to pronounce the second b in abbot, we would choose the empty phone, because the first rule mentioned above is more specific than the second.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initial Letter-to-Phone Model",
"sec_num": "3.1"
},
{
"text": "We implemented five extensions to the initial model which together decreased the error rate of the letter-to-phone model by around 20%. The performance figures reported by Fisher (1999) are significantly higher than our figures using the basic model, which is probably due to the cleaner data used in their experiments and the differences in phone-set size.",
"cite_spans": [
{
"start": 171,
"end": 184,
"text": "Fisher (1999)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extensions",
"sec_num": "3.2"
},
{
"text": "The extensions we implemented are inspired largely by the work on letter-to-phone conversion using decision trees (Jiang et al., 1997) . The last extension, rescoring based on vowel fourgrams, has not been proposed previously. We tested the algorithms on the NETtalk and Microsoft Speech dictionaries, splitting them into training and test sets in the proportion 80%/20% training-set to test-set size. We trained the letter-to-phone models using the training splits and tested on the test splits. We report accuracy figures (Table 2 : Letter-to-phone accuracies) only on the NETtalk dataset, since this dataset has been used extensively in building letter-to-phone models, and because phone accuracy is hard to determine for the non-phonetically-aligned Microsoft Speech dictionary. For our spelling correction algorithm, however, we use a letter-to-phone model learned from the Microsoft Speech dictionary.",
"cite_spans": [
{
"start": 114,
"end": 134,
"text": "(Jiang et al., 1997)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 498,
"end": 505,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Extensions",
"sec_num": "3.2"
},
{
"text": "The results for phone accuracy and word accuracy of the initial model and extensions are shown in Table 2. The phone accuracy is the percentage correct of all phones proposed (excluding the empties) and the word accuracy is the percentage of words for which pronunciations were guessed without any error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "For our data we noticed that the most specific rule that matches is often not a sufficiently good predictor. By linearly interpolating the probabilities given by the five most specific matching rules, we decreased the word error rate by 14.3%. The weights for the individual rules in the top five were set to be equal. It seems reasonable to combine the predictions from several rules, especially because the choice of which of two rules is more specific is arbitrary when neither context is a substring of the other. For example, of two rules where the first has 0 letters of right context and the second has 0 letters of left context, one heuristic is to choose the latter as more specific, since right context seems more valuable than left (Fisher, 1999) . However, this choice may not always be the best, and it proves useful to combine predictions from several rules. In Table 2 the row labeled \"Interpolation of contexts\" refers to this extension of the basic model. Adding a symbol for the interior of a word produced a gain in accuracy. Prior to adding this feature, we had features for the beginning and end of a word. Explicitly modeling the interior proved helpful and further decreased our error rate by 4.3%. The results after this improvement are shown in the third row of Table 2 .",
"cite_spans": [
{
"start": 740,
"end": 754,
"text": "(Fisher, 1999)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 871,
"end": 878,
"text": "Table 2",
"ref_id": null
},
{
"start": 1266,
"end": 1273,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "After linearly combining the predictions from the top matching rules, we have a probability distribution over phones for each letter. It has been shown that modeling the probability of sequences of phones can greatly reduce the error (Jiang et al., 1997) . We learned a trigram phone sequence model and used it to re-score the N-best predictions from the basic model. We computed the score for a sequence of phones given a sequence of letters as follows:",
"cite_spans": [
{
"start": 233,
"end": 253,
"text": "(Jiang et al., 1997)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Score(p_1 p_2 ... p_n | l_1 l_2 ... l_n) = log prod_{i=1}^{n} P(p_i | l_1 l_2 ... l_n) + alpha * log prod_{i=1}^{n} P(p_i | p_{i-1} p_{i-2}) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Here the probabilities P(p_i | l_1 l_2 ... l_n) are the distributions over phones that we obtain for each letter from the combination of the matching rules. The weight alpha for the phone sequence model was estimated from a held-out set by a linear search. This model further improved our performance, and the results it achieves are in the fourth row of Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 339,
"end": 346,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "The final improvement is adding a term from a vowel fourgram language model to Equation 1 with a weight beta. The term is the log probability of the sequence of vowels in the word according to a fourgram model over vowel sequences learned from the data. The final accuracy we achieve is shown in the fifth row of the same table. As a comparison, the best accuracy achieved by Jiang et al. (1997) on NETtalk using a similar proportion of training and test set sizes was ...%. Their system uses more sources of information, such as phones in the left context as features in the decision tree. They also achieve a large performance gain by combining multiple decision trees trained on separate portions of the training data. The accuracy of our letter-to-phone model is comparable to state-of-the-art systems. Further improvements in this component may lead to higher spelling correction accuracy.",
"cite_spans": [
{
"start": 373,
"end": 392,
"text": "Jiang et al. (1997)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Our combined error model gives the probability P_CMB(w|r), where w is the misspelling and r is a word in the dictionary. The spelling correction algorithm selects for a misspelling w the word r in the dictionary for which the product P(r)P_CMB(w|r) is maximized. In our experiments we used a uniform source language model over the words in the dictionary. Therefore our spelling correction algorithm selects the word r that maximizes P_CMB(w|r). Brill and Moore (2000) showed that adding a source language model increases the accuracy significantly. They also showed that the addition of a language model does not obviate the need for a good error model, and that improvements in the error model lead to significant improvements in the full noisy channel model. We build two separate error models, LTR and PH (standing for \"letter\" model and \"phone\" model). The letter-based model estimates a probability distribution P_LTR(w|r) over words, and the phone-based model estimates a distribution P_PH(pron_w | pron_r) over pronunciations. Using the PH model and the letter-to-phone model, we derive a distribution P_PHL(w|r) in a way to be made precise shortly. We combine the two models to estimate scores as follows:",
"cite_spans": [
{
"start": 442,
"end": 464,
"text": "Brill and Moore (2000)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Pronunciation and Letter-Based Models",
"sec_num": "4"
},
{
"text": "S_CMB(w|r) = log P_LTR(w|r) + lambda * log P_PHL(w|r)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Pronunciation and Letter-Based Models",
"sec_num": "4"
},
{
"text": "The word r that maximizes this score will also maximize the probability P_CMB(w|r). The probabilities P_PHL(w|r) are computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Pronunciation and Letter-Based Models",
"sec_num": "4"
},
{
"text": "P_PHL(w|r) = sum_{pron_r} P(pron_r, w | r) = sum_{pron_r} P(pron_r | r) * P(w | pron_r, r)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Pronunciation and Letter-Based Models",
"sec_num": "4"
},
{
"text": "This equation is approximated by the expression for P_PHL shown in Figure 1 after several simplifying assumptions. (Figure 1 : Equation for approximation of P_PHL.) The probabilities P(pron_r | r) are taken to be equal for all possible pronunciations of r in the dictionary. Next we assume independence of the misspelling from the right word given the pronunciation of the right word, i.e. P(w | pron_r, r) = P(w | pron_r). By inversion of the conditional probability this is equal to P(pron_r | w) multiplied by P(w) / P(pron_r). Since we do not model these marginal probabilities, we drop the latter factor.",
"cite_spans": [],
"ref_spans": [
{
"start": 69,
"end": 77,
"text": "Figure 1",
"ref_id": null
},
{
"start": 117,
"end": 125,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Combining Pronunciation and Letter-Based Models",
"sec_num": "4"
},
{
"text": "Next, the probability P(pron_r | w) is expressed as sum_{pron_w} P(pron_w, pron_r | w), which is approximated by the maximum term in the sum. After the following decomposition:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Pronunciation and Letter-Based Models",
"sec_num": "4"
},
{
"text": "P(pron_w, pron_r | w) = P(pron_w | w) * P(pron_r | w, pron_w) ~ P(pron_w | w) * P(pron_r | pron_w)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Pronunciation and Letter-Based Models",
"sec_num": "4"
},
{
"text": "where the second part represents a final independence assumption, we get the expression in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 91,
"end": 99,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Combining Pronunciation and Letter-Based Models",
"sec_num": "4"
},
{
"text": "The probabilities P(pron_w | w) are given by the letter-to-phone model. In the following subsections, we first describe how we train and apply the individual error models, and then we show performance results for the combined model compared to the letter-based error model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Pronunciation and Letter-Based Models",
"sec_num": "4"
},
{
"text": "The error model LTR was trained exactly as described originally by Brill and Moore (2000) . Given a training set of pairs (w, r), the algorithm estimates a set of rewrite probabilities p(alpha -> beta), which are the basis for computing the probabilities P_LTR(w|r).",
"cite_spans": [
{
"start": 67,
"end": 89,
"text": "Brill and Moore (2000)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Individual Error Models",
"sec_num": "4.1"
},
{
"text": "We tested our system and compared it to the Brill and Moore model on a dataset of around 10,000 pairs of misspellings and corresponding correct words, split into training and test sets (roughly 80% of the pairs for training and 20% for testing). This set is slightly different from the dataset used in Brill and Moore's experiments, because we removed from the original dataset the pairs for which we did not have the correct word in the pronunciation dictionary. Both models LTR and PH were trained on the same training set. The interpolation weight that the combined model CMB uses is also set on the training set, to maximize the classification accuracy. At test time we do not search through all possible words r in the dictionary to find the one maximizing Score_CMB(w|r). Rather, we compute the combination score only for candidate words r that are in the top N according to P_LTR(w|r) or are in the top N according to P_PH(pron_w | pron_r) for any of the pronunciations of r from the dictionary and any of the pronunciations for w that were proposed by the letter-to-phone model. For the letter-to-phone model, we considered the top 3 pronunciations of w rather than a single most likely hypothesis. That is probably justified by the fact that the 3-best accuracy of the letter-to-phone model is significantly higher than its 1-best accuracy. Table 3 shows the spelling correction accuracy when using the model LTR, PH, or both in combination. The table shows N-best accuracy results. The N-best accuracy figures represent the percentage of test cases for which the correct word was in the top N words proposed by the model. We chose a context size of 3 for the LTR model, as this context size maximized test set accuracy. Larger context sizes neither helped nor hurt accuracy.",
"cite_spans": [],
"ref_spans": [
{
"start": 1370,
"end": 1377,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "As we can see from the table, the phone-based model alone produces respectable accuracy results considering that it is only dealing with word pronunciations. The error reduction of the combined model compared to the letters-only model is substantial: for 1-Best, the error reduction is over \u00be\u00bf\u00b1; for 2-Best, 3-Best, and 4-Best it is even higher, reaching over \u00b1 for 4-Best.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "As an example of the influence of pronunciation modeling, in Table 4 we list some misspellingcorrect word pairs where the LTR model made an incorrect guess and the combined model CMB guessed accurately.",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 68,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "We have presented a method for using word pronunciation information to improve spelling correction accuracy. The proposed method substantially reduces the error rate of the previous best spelling correction model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
{
"text": "A subject of future research is looking for a better way to combine the two error models or building ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An improved error model for noisy channel spelling correction",
"authors": [
{
"first": "E",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "R",
"middle": [
"C"
],
"last": "Moore",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of the 38th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "286--293",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Brill and R. C. Moore. 2000. An improved error model for noisy channel spelling correction. In Proc. of the 38th Annual Meeting of the ACL, pages 286- 293.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Probability scoring for spelling correction",
"authors": [
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Gale",
"suffix": ""
}
],
"year": 1991,
"venue": "Statistics and Computing",
"volume": "1",
"issue": "",
"pages": "93--103",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Church and W. Gale. 1991. Probability scoring for spelling correction. In Statistics and Computing, vol- ume 1, pages 93-103.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Languageindependent data-oriented grapheme-to-phoneme conversion",
"authors": [
{
"first": "W",
"middle": [],
"last": "Daelemans",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Van Den",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bosch",
"suffix": ""
}
],
"year": 1996,
"venue": "Progress in Speech Synthesis",
"volume": "",
"issue": "",
"pages": "77--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Daelemans and A. van den Bosch. 1996. Language- independent data-oriented grapheme-to-phoneme con- version. In Progress in Speech Synthesis, pages 77-90.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A technique for computer detection and correction of spelling errors",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Damerau",
"suffix": ""
}
],
"year": 1964,
"venue": "Communications of the ACM",
"volume": "7",
"issue": "",
"pages": "171--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. J. Damerau. 1964. A technique for computer detection and correction of spelling errors. In Communications of the ACM, volume 7(3), pages 171-176.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A statistical text-to-phone function using ngrams and rules",
"authors": [
{
"first": "W",
"middle": [
"M"
],
"last": "Fisher",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of the IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "649--652",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. M. Fisher. 1999. A statistical text-to-phone function using ngrams and rules. In Proc. of the IEEE Inter- national Conference on Acoustics, Speech and Signal Processing, pages 649-652.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Improvements on a trainable letter-to-sound converter",
"authors": [
{
"first": "L",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "H",
"middle": [
"W"
],
"last": "Hon",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 5th European Conference on Speech Communication and Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Jiang, H.W. Hon, and X. Huang. 1997. Improvements on a trainable letter-to-sound converter. In Proceed- ings of the 5th European Conference on Speech Com- munication and Technology.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Techniques for automatically correcting words in text",
"authors": [
{
"first": "K",
"middle": [],
"last": "Kuckich",
"suffix": ""
}
],
"year": 1992,
"venue": "ACM Computing Surveys, volume",
"volume": "24",
"issue": "4",
"pages": "377--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Kuckich. 1992. Techniques for automatically correct- ing words in text. In ACM Computing Surveys, volume 24(4), pages 377-439.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A spelling correction program based on a noisy channel model",
"authors": [
{
"first": "W",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "M",
"middle": [
"D"
],
"last": "Kernigan",
"suffix": ""
},
{
"first": "W",
"middle": [
"A"
],
"last": "Gale",
"suffix": ""
}
],
"year": 1990,
"venue": "Proc. of COLING-90",
"volume": "II",
"issue": "",
"pages": "205--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Church M. D. Kernigan and W. A. Gale. 1990. A spelling correction program based on a noisy channel model. In Proc. of COLING-90, volume II, pages 205- 211.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Conext based spelling correction",
"authors": [
{
"first": "F",
"middle": [],
"last": "Mayes",
"suffix": ""
}
],
"year": 1991,
"venue": "Information Processing and Management",
"volume": "27",
"issue": "",
"pages": "517--522",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Mayes and et al. F. Damerau. 1991. Conext based spelling correction. In Information Processing and Management, volume 27(5), pages 517-522.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Parallel networks that learn to pronounce english text",
"authors": [
{
"first": "T",
"middle": [
"J"
],
"last": "Sejnowski",
"suffix": ""
},
{
"first": "C",
"middle": [
"R"
],
"last": "Rosenberg",
"suffix": ""
}
],
"year": 1987,
"venue": "Complex Systems",
"volume": "",
"issue": "",
"pages": "145--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. J. Sejnowski and C. R. Rosenberg. 1987. Parallel net- works that learn to pronounce english text. In Complex Systems, pages 145-168.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "These are : Combination of the predictions of several applicable rules by linear interpolation Rescoring of AE-best proposed pronunciations for a word using a trigram phone sequence language model Explicit distinction between middle of word versus start or end Rescoring of AE-best proposed pronunciations for a word using a fourgram vowel sequence language model",
"uris": null,
"type_str": "figure"
},
"TABREF3": {
"content": "<table><tr><td>Model</td><td>1-Best</td><td>2-Best</td><td>3-Best</td><td>4-Best</td></tr><tr><td colspan=\"2\">LTR 94.Error</td><td/><td/><td/></tr><tr><td colspan=\"2\">Reduction 23.8%</td><td>39.6%</td><td>40%</td><td>46.8%</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "21% 98.18% 98.90 % 99.06% PH 86.36% 93.65% 95.69 % 96.63% CMB 95.58% 98.90% 99.34% 99.50%"
},
"TABREF4": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Spelling Correction Accuracy Results model returned for each \u00db the \u00bf most probable pronunciations only. Our performance was better when"
},
"TABREF6": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Examples of Corrected Errors a single model that can recognize whether there is a phonetic or typographic error. Another interesting task is exploring the potential of our model in different settings such as the Web, e-mail, or as a specialized model for non-native English speakers of particular origin."
}
}
}
}