{
"paper_id": "P91-1034",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:03:17.343820Z"
},
"title": "WORD-SENSE DISAMBIGUATION USING STATISTICAL METHODS",
"authors": [
{
"first": "Peter",
"middle": [
"F"
],
"last": "Brown",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Thomas J. Watson Research Center",
"location": {
"postBox": "P.O. Box 704",
"postCode": "10598",
"settlement": "Yorktown Heights",
"region": "NY"
}
},
"email": ""
},
{
"first": "Stephen",
"middle": [
"A Della"
],
"last": "Pietra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Thomas J. Watson Research Center",
"location": {
"postBox": "P.O. Box 704",
"postCode": "10598",
"settlement": "Yorktown Heights",
"region": "NY"
}
},
"email": ""
},
{
"first": "Vincent",
"middle": [
"J Della"
],
"last": "Pietra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Thomas J. Watson Research Center",
"location": {
"postBox": "P.O. Box 704",
"postCode": "10598",
"settlement": "Yorktown Heights",
"region": "NY"
}
},
"email": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Thomas J. Watson Research Center",
"location": {
"postBox": "P.O. Box 704",
"postCode": "10598",
"settlement": "Yorktown Heights",
"region": "NY"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe a statistical technique for assigning senses to words. An instance of a word is assigned a sense by asking a question about the context in which the word appears. The question is constructed to have high mutual information with the translation of that instance in another language. When we incorporated this method of assigning senses into our statistical machine translation system, the error rate of the system decreased by thirteen percent.",
"pdf_parse": {
"paper_id": "P91-1034",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe a statistical technique for assigning senses to words. An instance of a word is assigned a sense by asking a question about the context in which the word appears. The question is constructed to have high mutual information with the translation of that instance in another language. When we incorporated this method of assigning senses into our statistical machine translation system, the error rate of the system decreased by thirteen percent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "An alluring aspect of the statistical approach to machine translation rejuvenated by Brown et al. [Brown et al., 1988, Brown et al., 1990] is the systematic framework it provides for attacking the problem of lexical disambiguation. For example, the system they describe translates the French sentence Je vais prendre la décision as I will make the decision, correctly interpreting prendre as make. The statistical translation model, which supplies English translations of French words, prefers the more common translation take, but the trigram language model recognizes that the three-word sequence make the decision is much more probable than take the decision.",
"cite_spans": [
{
"start": 98,
"end": 117,
"text": "[Brown et al., 1988",
"ref_id": "BIBREF2"
},
{
"start": 118,
"end": 138,
"text": ", Brown et al., 1990",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "The system is not always so successful. It incorrectly renders Je vais prendre ma propre décision as I will take my own decision. The language model does not realize that take my own decision is improbable because take and decision no longer fall within a single trigram.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "Errors such as this are common because the statistical models only capture local phenomena; if the context necessary to determine a translation falls outside the scope of the models, the word is likely to be translated incorrectly. However, if the relevant context is encoded locally, the word should be translated correctly. We can achieve this within the traditional paradigm of analysis, transfer, and synthesis by incorporating into the analysis phase a sense-disambiguation component that assigns sense labels to French words. If prendre is labeled with one sense in the context of décision but with a different sense in other contexts, then the translation model will learn from training data that the first sense usually translates to make, whereas the other sense usually translates to take.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "Previous efforts at algorithmic disambiguation of word senses [Lesk, 1986, White, 1988, Ide and Véronis, 1990] have concentrated on information that can be extracted from electronic dictionaries, and focus, therefore, on senses as determined by those dictionaries. Here, in contrast, we present a procedure for constructing a sense-disambiguation component that labels words so as to elucidate their translations in another language. We are concerned about senses as they occur in a dictionary only to the extent that those senses are translated differently. The French noun intérêt, for example, is translated into German as either Zins or Interesse according to its sense, but both of these senses are translated into English as interest, and so we make no attempt to distinguish them.",
"cite_spans": [
{
"start": 63,
"end": 74,
"text": "[Lesk, 1986",
"ref_id": null
},
{
"start": 75,
"end": 88,
"text": ", White, 1988",
"ref_id": null
},
{
"start": 89,
"end": 112,
"text": ", Ide and Véronis, 1990",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "Following Brown et al. [Brown et al., 1990], we choose as the translation of a French sentence F that sentence E for which Pr(E|F) is greatest. By Bayes' rule, Pr(E|F) = Pr(E) Pr(F|E) / Pr(F). (1)",
"cite_spans": [
{
"start": 23,
"end": 43,
"text": "[Brown et al., 1990]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "STATISTICAL TRANSLATION",
"sec_num": null
},
{
"text": "Since the denominator does not depend on E, the sentence for which Pr(E|F) is greatest is also the sentence for which the product Pr(E) Pr(F|E) is greatest. The first factor in this product is a statistical characterization of the English language and the second factor is a statistical characterization of the process by which English sentences are translated into French. We can compute neither factor precisely. Rather, in statistical translation, we employ models from which we can obtain estimates of these values. We call the model from which we compute Pr(E) the language model and that from which we compute Pr(F|E) the translation model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "STATISTICAL TRANSLATION",
"sec_num": null
},
{
"text": "The translation model used by Brown et al. [Brown et al., 1990] incorporates the concept of an alignment in which each word in E acts independently to produce some of the words in F. If we denote a typical alignment by A, then we can write the probability of F given E as a sum over all possible alignments: Pr(F|E) = Σ_A Pr(F, A|E). (2)",
"cite_spans": [
{
"start": 43,
"end": 63,
"text": "[Brown et al., 1990]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "STATISTICAL TRANSLATION",
"sec_num": null
},
{
"text": "Although the number of possible alignments is a very rapidly growing function of the lengths of the French and English sentences, only a tiny fraction of the alignments contributes substantially to the sum, and of these few, one makes the greatest contribution. We call this most probable alignment the Viterbi alignment between E and F.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "STATISTICAL TRANSLATION",
"sec_num": null
},
{
"text": "The identity of the Viterbi alignment for a pair of sentences depends on the details of the translation model, but once the model is known, probable alignments can be discovered algorithmically [Brown et al., 1991]. Brown et al. [Brown et al., 1990] show an example of such an automatically derived alignment in their Figure 3. In a Viterbi alignment, a French word that is connected by a line to an English word is said to be aligned with that English word. Thus, in Figure 1, Les is aligned with The, propositions with proposal, and so on. We call a pair of aligned words obtained in this way a connection.",
"cite_spans": [
{
"start": 197,
"end": 217,
"text": "[Brown et al., 1991]",
"ref_id": "BIBREF2"
},
{
"start": 233,
"end": 253,
"text": "[Brown et al., 1990]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 464,
"end": 472,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "STATISTICAL TRANSLATION",
"sec_num": null
},
{
"text": "From the Viterbi alignments for 1,002,165 pairs of short French and English sentences from the Canadian Hansard data [Brown et al., 1990], we have extracted a set of 12,028,485 connections. Let p(e, f) be the probability that a connection chosen at random from this set will connect the English word e to the French word f. Because each French word gives rise to exactly one connection, the right marginal of this distribution is identical to the distribution of French words in these sentences. The left marginal, however, is not the same as the distribution of English words: English words that tend to produce several French words at a time are overrepresented while those that tend to produce no French words are underrepresented.",
"cite_spans": [
{
"start": 117,
"end": 137,
"text": "[Brown et al., 1990]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "STATISTICAL TRANSLATION",
"sec_num": null
},
{
"text": "Using p(e, f) we can compute the mutual information between a French word and its English mate in a connection. In this section, we discuss a method for labelling a word with a sense that depends on the context in which it appears in such a way as to increase the mutual information between the members of a connection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENSES BASED ON BINARY QUESTIONS",
"sec_num": null
},
{
"text": "In the sentence Je vais prendre ma propre décision, the French verb prendre should be translated as make because the object of prendre is décision. If we replace décision by voiture, then prendre should be translated as take to yield I will take my own car. In these examples, one can imagine assigning a sense to prendre by asking whether the first noun to the right of prendre is décision or voiture. We say that the noun to the right is the informant for prendre.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENSES BASED ON BINARY QUESTIONS",
"sec_num": null
},
{
"text": "In Il doute que les nôtres gagnent, which means He doubts that we will win, the French word il should be translated as he. On the other hand, in Il faut que les nôtres gagnent, which means It is necessary that we win, il should be translated as it. Here, we can determine which sense to assign to il by asking about the identity of the first verb to its right. Even though we cannot hope to determine the translation of il from this informant unambiguously, we can hope to obtain a significant amount of information about the translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENSES BASED ON BINARY QUESTIONS",
"sec_num": null
},
{
"text": "As a final example, consider the English word is. In the sentence I think it is a problem, it is best to translate is as est as in Je pense que c'est un problème. However, this is certainly not true in the sentence I think there is a problem, which translates as Je pense qu'il y a un problème. Here we can reduce the entropy of the distribution of the translation of is by asking if the word to the left is there. If so, then is is less likely to be translated as est than if not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENSES BASED ON BINARY QUESTIONS",
"sec_num": null
},
{
"text": "Motivated by examples like these, we investigated a simple method of assigning two senses to a word w by asking a single binary question about one word of the context in which w appears. One does not know beforehand whether the informant will be the first noun to the right, the first verb to the right, or some other word in the context of w. However, one can construct a question for each of a number of candidate informant sites, and then choose the most informative question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENSES BASED ON BINARY QUESTIONS",
"sec_num": null
},
{
"text": "Given a potential informant such as the first noun to the right, we can construct a question that has high mutual information with the translation of w by using the flip-flop algorithm devised by Nadas, Nahamoo, Picheny, and Powell [Nadas et al., 1991]. To understand their algorithm, first imagine that w is a French word and that English words which are possible translations of w have been divided into two classes. Consider the problem of constructing a binary question about the potential informant that provides maximal information about these two English word classes. If the French vocabulary is of size V, then there are 2^V possible questions. However, using the splitting theorem of Breiman, Friedman, Olshen, and Stone [Breiman et al., 1984], it is possible to find the most informative of these 2^V questions in time which is linear in V.",
"cite_spans": [
{
"start": 232,
"end": 251,
"text": "[Nadas et al., 1991]",
"ref_id": "BIBREF5"
},
{
"start": 739,
"end": 761,
"text": "[Breiman et al., 1984]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SENSES BASED ON BINARY QUESTIONS",
"sec_num": null
},
{
"text": "The flip-flop algorithm begins by making an initial assignment of the English translations into two classes, and then uses the splitting theorem to find the best question about the potential informant. This question divides the French vocabulary into two sets. One can then use the splitting theorem to find a division of the English translations of w into two sets which has maximal mutual information with the French sets. In the flip-flop algorithm, one alternates between splitting the French vocabulary into two sets and the English translations of w into two sets. After each such split, the mutual information between the French and English sets is at least as great as before the split. Since the mutual information is bounded by one bit, the process converges to a partition of the French vocabulary that has high mutual information with the translation of w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENSES BASED ON BINARY QUESTIONS",
"sec_num": null
},
{
"text": "We used the flip-flop algorithm in a pilot experiment in which we assigned two senses to each of the 500 most common English words and two senses to each of the 200 most common French words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A PILOT EXPERIMENT",
"sec_num": null
},
{
"text": "For a French word, we considered questions about seven informants: the word to the left, the word to the right, the first noun to the left, the first noun to the right, the first verb to the left, the first verb to the right, and the tense of either the current word, if it is a verb, or of the first verb to the left of the current word. For an English word, we only considered questions about the word to the left and the word two to the left. We restricted the English questions to the previous two words so that we could easily use them in our translation system, which produces an English sentence from left to right. When a potential informant did not exist because, say, there was no noun to the left of some word in a particular sentence, we used the special word TERM_WORD. To find the nouns and verbs in our French sentences, we used the tagging algorithm described by Merialdo [Merialdo, 1990]. Figure 2 shows the question that was constructed for the verb prendre. The noun to the right yielded the most information, .381 bits, about the English translation of prendre. The box in the top of the figure shows the words which most frequently occupy that site, that is, the nouns which appear to the right of prendre with a probability greater than one part in fifty. An instance of prendre is assigned the first or second sense depending on whether the first noun to the right appears in the left-hand or the right-hand column. So, for example, if the noun to the right of prendre is décision, parole, or connaissance, then prendre is assigned the second sense. The box at the bottom of the figure shows the most probable translations of each of the two senses. Notice that the English verb to_make is three times as likely when prendre has the second sense as when it has the first sense. People make decisions, speeches, and acquaintances; they do not take them. Figure 3 shows our results for the verb vouloir. Here, the best informant is the tense of vouloir. The first sense is three times more likely than the second sense to translate as to_want, but twelve times less likely to translate as to_like. In polite English, one says I would like so and so more commonly than I would want so and so. The question in Figure 4 reduces the entropy of the translation of the French preposition depuis by .738 bits. When depuis is followed by an article, it translates with probability .772 to since, and otherwise only with probability .016.",
"cite_spans": [
{
"start": 975,
"end": 991,
"text": "[Merialdo, 1990]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 758,
"end": 766,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 1568,
"end": 1576,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 1905,
"end": 1913,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "A PILOT EXPERIMENT",
"sec_num": null
},
{
"text": "Finally, consider the English word cent. In our text, it is either a denomination of currency, in which case it is usually preceded by a number and translated as c., or it is the second half of per cent, in which case it is preceded by per and translated along with per as %. The results in Figure 5 show that the algorithm has discovered this, and in so doing has reduced the entropy of the translation of cent by .378 bits. Pleased with these results, we incorporated sense-assignment questions for the 500 most common English words and 200 most common French words into our translation system. This system is an enhanced version of the one described by Brown et al. [Brown et al., 1990] in that it uses a trigram language model, and has a French vocabulary of 57,802 words and an English vocabulary of 40,809 words. We translated 100 randomly selected Hansard sentences, each of which is 10 words or less in length. We judged 45 of the resultant translations as acceptable, as compared with 37 acceptable translations produced by the same system running without sense-disambiguation questions.",
"cite_spans": [
{
"start": 658,
"end": 691,
"text": "Brown et al. [Brown et al., 1990]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 293,
"end": 301,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "A PILOT EXPERIMENT",
"sec_num": null
},
{
"text": "Although our results are promising, this particular method of assigning senses to words is quite limited. It assigns at most two senses to a word, and thus can extract no more than one bit of information about the translation of that word. Since the entropy of the translation of a common word can be as high as five bits, there is reason to hope that using more senses will further improve the performance of our system. Our method asks a single question about a single word of context. We can think of this as the first question in a decision tree which can be extended to additional levels [Lucassen, 1983, Lucassen and Mercer, 1984, Breiman et al., 1984, Bahl et al., 1989]. We are working on these and other improvements and hope to report better results in the future.",
"cite_spans": [
{
"start": 595,
"end": 610,
"text": "[Lucassen, 1983",
"ref_id": "BIBREF3"
},
{
"start": 611,
"end": 638,
"text": ", Lucassen and Mercer, 1984",
"ref_id": "BIBREF4"
},
{
"start": 639,
"end": 661,
"text": ", Breiman et al., 1984",
"ref_id": "BIBREF1"
},
{
"start": 662,
"end": 681,
"text": ", Bahl et al., 1989",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FUTURE WORK",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "A tree-based statistical language model for natural language speech recognition",
"authors": [
{
"first": "L",
"middle": [],
"last": "Breiman",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "R",
"middle": [
"A"
],
"last": "Olshen",
"suffix": ""
},
{
"first": "C",
"middle": [
"J"
],
"last": "Stone",
"suffix": ""
}
],
"year": 1984,
"venue": "IEEE Transactions on Acoustics, Speech and Signal Processing",
"volume": "37",
"issue": "",
"pages": "1001--1008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A tree-based statistical language model for natural language speech recognition. IEEE Transactions on Acoustics, Speech and Signal Processing, 37:1001-1008. [Breiman et al., 1984] Breiman, L., Friedman, J. H., Olshen, R. A., and Stone, C. J. (1984). Classification and Regression Trees. Wadsworth & Brooks/Cole Advanced Books & Software, Monterey, California.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A statistical approach to machine translation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Cocke",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Dellapietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Dellapietra",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "P",
"middle": [
"S"
],
"last": "Roossin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brown",
"suffix": ""
}
],
"year": 1988,
"venue": "A statistical approach to language translation. I!1 Proceedings of the 12th International Conference on Computational Linguistics",
"volume": "16",
"issue": "",
"pages": "79--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Brown et al., 1990] Brown, P. F., Cocke, J., DellaPietra, S. A., DellaPietra, V. J., Jelinek, F., Lafferty, J. D., Mercer, R. L., and Roossin, P. S. (1990). A statistical approach to machine translation. Computational Linguistics, 16(2):79-85. [Brown et al., 1988] Brown, P. F., Cocke, J., DellaPietra, S. A., DellaPietra, V. J., Jelinek, F., Mercer, R. L., and Roossin, P. S. (1988). A statistical approach to language translation. In Proceedings of the 12th International Conference on Computational Linguistics, Budapest, Hungary. [Brown et al., 1991] Brown, P. F., DellaPietra, S. A., DellaPietra, V. J., and Mercer, R. L. (1991). Parameter estimation for machine translation. In preparation.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Mapping dictionaries: A spreading activation approach. In Proceedings of the Sixth Annual Conference of the UW Centre for the New Oxford English Dictionary and Text Research",
"authors": [
{
"first": "N",
"middle": [],
"last": "Ide",
"suffix": ""
},
{
"first": ".",
"middle": [
"I"
],
"last": "V6ronis",
"suffix": ""
},
{
"first": "Canada",
"middle": [],
"last": "Waterloo",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Lucassen",
"suffix": ""
}
],
"year": 1983,
"venue": "Proceedings of the SIGDOC Conference",
"volume": "",
"issue": "",
"pages": "52--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Ide and Véronis, 1990] Ide, N. and Véronis, J. (1990). Mapping dictionaries: A spreading activation approach. In Proceedings of the Sixth Annual Conference of the UW Centre for the New Oxford English Dictionary and Text Research, pages 52-64, Waterloo, Canada. [Lesk, 1986] Lesk, M. E. (1986). Automated sense disambiguation using machine-readable dictionaries: How to tell a pine cone from an ice cream cone. In Proceedings of the SIGDOC Conference. [Lucassen, 1983] Lucassen, J. M. (1983). Discovering phonemic baseforms automatically: an information theoretic approach. Technical Report RC 9833, IBM Research Division.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An information theoretic approach to automatic determination of phonemic baseforms",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Mercer ; Lucassen",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1984,
"venue": "Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Lucassen and Mercer, 1984] Lucassen, J. M. and Mercer, R. L. (1984). An information theoretic approach to automatic determination of phonemic baseforms. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 42.5.1-42.5.4, San Diego, California.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An iterative \"flip-flop\" approximation of the most informative split in the construction of decision trees",
"authors": [
{
"first": "]",
"middle": [],
"last": "Meria",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Merialdo",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nadas",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Nahamoo",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Picheny",
"suffix": ""
},
{
"first": "Powell",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "161--172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Merialdo, 1990] Merialdo, B. (1990). Tagging text with a probabilistic model. In Proceedings of the IBM Natural Language ITL, pages 161-172, Paris, France. [Nadas et al., 1991] Nadas, A., Nahamoo, D., Picheny, M. A., and Powell, J. (1991). An iterative \"flip-flop\" approximation of the most informative split in the construction of decision trees. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Toronto, Canada. [White, 1988] White, J. S. (1988). Determination of lexical-semantic relations for multi-lingual terminology structures. In Relational Models of the Lexicon. Cambridge University Press, Cambridge, UK.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Figure 1: Alignment Example",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Figure 3. (For the reader's convenience, we have reproduced that figure here as Figure 1.)",
"uris": null,
"num": null
},
"FIGREF2": {
"type_str": "figure",
"text": "shows the question that was constructed for the verb prendre. The noun to the right yielded the most information, .381 bits, about the English translation of prendre.",
"uris": null,
"num": null
},
"FIGREF3": {
"type_str": "figure",
"text": "Senses for the French word vouloir ample, if the noun to the right of prendre is décision, parole, or connaissance, then prendre is assigned the second sense. The box at the bottom of the figure shows the most probable translations of each of the two senses.",
"uris": null,
"num": null
},
"FIGREF4": {
"type_str": "figure",
"text": "Senses for the French word depuis The question in",
"uris": null,
"num": null
},
"FIGREF5": {
"type_str": "figure",
"text": "Senses for the English word cent",
"uris": null,
"num": null
}
}
}
}