| { |
| "paper_id": "J04-1001", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T02:57:13.034189Z" |
| }, |
| "title": "Word Translation Disambiguation Using Bilingual Bootstrapping", |
| "authors": [ |
| { |
| "first": "Hang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "hangli@microsoft.com" |
| }, |
| { |
| "first": "Cong", |
| "middle": [], |
| "last": "Li", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This article proposes a new method for word translation disambiguation, one that uses a machine-learning technique called bilingual bootstrapping. In learning to disambiguate words to be translated, bilingual bootstrapping makes use of a small amount of classified data and a large amount of unclassified data in both the source and the target languages. It repeatedly constructs classifiers in the two languages in parallel and boosts the performance of the classifiers by classifying unclassified data in the two languages and by exchanging information regarding classified data between the two languages. Experimental results indicate that word translation disambiguation based on bilingual bootstrapping consistently and significantly outperforms existing methods that are based on monolingual bootstrapping.", |
| "pdf_parse": { |
| "paper_id": "J04-1001", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This article proposes a new method for word translation disambiguation, one that uses a machine-learning technique called bilingual bootstrapping. In learning to disambiguate words to be translated, bilingual bootstrapping makes use of a small amount of classified data and a large amount of unclassified data in both the source and the target languages. It repeatedly constructs classifiers in the two languages in parallel and boosts the performance of the classifiers by classifying unclassified data in the two languages and by exchanging information regarding classified data between the two languages. Experimental results indicate that word translation disambiguation based on bilingual bootstrapping consistently and significantly outperforms existing methods that are based on monolingual bootstrapping.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "We address here the problem of word translation disambiguation. If, for example, we were to attempt to translate the English noun plant, which could refer either to a type of factory or to a form of flora (i.e., in Chinese, either to [gongchang] or to [zhiwu] ), our goal would be to determine the correct Chinese translation. That is, word translation disambiguation is essentially a special case of word sense disambiguation (in the above example, gongchang would correspond to the sense of factory and zhiwu to the sense of flora). 1 We could view word translation disambiguation as a problem of classification. To perform the task, we could employ a supervised learning method, but since to do so would require human labeling of data, which would be expensive, bootstrapping would be a better choice. Yarowsky (1995) has proposed a bootstrapping method for word sense disambiguation. When applied to translation from English to Chinese, his method starts learning with a small number of English sentences that contain ambiguous English words and that are labeled with correct Chinese translations of those words. It then uses these classified sentences as training data to create a classifier (e.g., a decision list), which it uses to classify unclassified sentences containing the same ambiguous words. The output of this process is then used as additional training data. It also adopts the one-sense-per-discourse heuristic (Gale, Church, and Yarowsky 1992b) in classifying unclassified sentences. By repeating the above process, an accurate classifier for word translation disambiguation can be created. Because this method uses data in a single language (i.e., the source language in translation), we refer to it here as monolingual bootstrapping (MB).", |
| "cite_spans": [ |
| { |
| "start": 234, |
| "end": 245, |
| "text": "[gongchang]", |
| "ref_id": null |
| }, |
| { |
| "start": 252, |
| "end": 259, |
| "text": "[zhiwu]", |
| "ref_id": null |
| }, |
| { |
| "start": 535, |
| "end": 536, |
| "text": "1", |
| "ref_id": null |
| }, |
| { |
| "start": 805, |
| "end": 820, |
| "text": "Yarowsky (1995)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 1430, |
| "end": 1464, |
| "text": "(Gale, Church, and Yarowsky 1992b)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In this paper, we propose a new method of bootstrapping, one that we refer to as bilingual bootstrapping (BB). Instead of using data in one language, BB uses data in two languages. In translation from English to Chinese, for example, BB makes use of unclassified data from both languages. It also uses a small amount of classified data in English and, optionally, a small amount of classified data in Chinese. The data in the two languages should be from the same domain but are not required to be exactly in parallel.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "BB constructs classifiers for English-to-Chinese translation disambiguation by repeating the following two steps: (1) Construct a classifier for each of the languages on the basis of classified data in both languages, and (2) use the constructed classifier for each language to classify unclassified data, which are then added to the classified data of the language. We can use classified data in both languages in step (1), because words in one language have translations in the other, and we can transform data from one language into the other.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "We have experimentally evaluated the performance of BB in word translation disambiguation, and all of our results indicate that BB consistently and significantly outperforms MB. The higher performance of BB can be attributed to its effective use of the asymmetric relationship between the ambiguous words in the two languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Our study is organized as follows. In Section 2, we describe related work. Specifically, we formalize the problem of word translation disambiguation as that of classification based on statistical learning and, as examples, describe two such methods: one using decision lists and the other using naive Bayes. We also explain the Yarowsky disambiguation method, which is based on monolingual bootstrapping. In Section 3, we describe bilingual bootstrapping, compare BB with MB, and discuss the relationship between BB and co-training. In Section 4, we present our experimental results, and in Section 5, we give some concluding remarks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Word translation disambiguation (in general, word sense disambiguation) can be viewed as a problem of classification and can be addressed by employing various supervised learning methods. For example, with such a learning method, an English sentence containing an ambiguous English word corresponds to an instance, and the Chinese translation of the word in the context (i.e., the word sense) corresponds to a classification decision (a label).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Translation Disambiguation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Many methods for word sense disambiguation based on supervised learning technique have been proposed. They include those using naive Bayes (Gale, Church, and Yarowsky 1992a) , decision lists (Yarowsky 1994) , nearest neighbor (Ng and Lee 1996) , transformation-based learning (Mangu and Brill 1997) , neural networks (Towell and Voorhees 1998) , Winnow (Golding and Roth 1999) , boosting (Escudero, Marquez, and Rigau 2000) , and naive Bayesian ensemble (Pedersen 2000) . The assumption behind these methods is that it is nearly always possible to determine the sense of an ambiguous word by referring to its context, and thus all of the methods build a classifier (i.e., a classification program) using features representing context information (e.g., surrounding context words). For other related work on translation disambiguation, see Brown et al. (1991) , Bruce and Wiebe (1994) , Dagan and Itai (1994) , Lin (1997) , Pedersen and Bruce (1997) , Schutze (1998) , Kikui (1999) , Mihalcea and Moldovan (1999) , Koehn and Knight (2000) , and Zhou, Ding, and Huang (2001) .", |
| "cite_spans": [ |
| { |
| "start": 139, |
| "end": 173, |
| "text": "(Gale, Church, and Yarowsky 1992a)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 191, |
| "end": 206, |
| "text": "(Yarowsky 1994)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 226, |
| "end": 243, |
| "text": "(Ng and Lee 1996)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 276, |
| "end": 298, |
| "text": "(Mangu and Brill 1997)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 317, |
| "end": 343, |
| "text": "(Towell and Voorhees 1998)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 353, |
| "end": 376, |
| "text": "(Golding and Roth 1999)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 388, |
| "end": 423, |
| "text": "(Escudero, Marquez, and Rigau 2000)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 454, |
| "end": 469, |
| "text": "(Pedersen 2000)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 839, |
| "end": 858, |
| "text": "Brown et al. (1991)", |
| "ref_id": null |
| }, |
| { |
| "start": 861, |
| "end": 883, |
| "text": "Bruce and Wiebe (1994)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 886, |
| "end": 907, |
| "text": "Dagan and Itai (1994)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 910, |
| "end": 920, |
| "text": "Lin (1997)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 936, |
| "end": 948, |
| "text": "Bruce (1997)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 951, |
| "end": 965, |
| "text": "Schutze (1998)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 968, |
| "end": 980, |
| "text": "Kikui (1999)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 983, |
| "end": 1011, |
| "text": "Mihalcea and Moldovan (1999)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 1014, |
| "end": 1037, |
| "text": "Koehn and Knight (2000)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 1040, |
| "end": 1072, |
| "text": "and Zhou, Ding, and Huang (2001)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Translation Disambiguation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Let us formulate the problem of word sense (translation) disambiguation as follows. Let E denote a set of words. Let \u03b5 denote an ambiguous word in E, and let e denote a context word in E. (Throughout this article, we use Greek letters to represent ambiguous words and italic letters to represent context words.) Let T \u03b5 denote the set of senses of \u03b5, and let t \u03b5 denote a sense in T \u03b5 . Let e \u03b5 stand for an instance representing a context of \u03b5, that is, a sequence of context words surrounding \u03b5:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Translation Disambiguation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "e \u03b5 = (e \u03b5,1 , e \u03b5,2 , . . . , (\u03b5), . . . , e \u03b5,m ), e \u03b5,i \u2208 E, (i = 1, . . . , m)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Translation Disambiguation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "For the example presented earlier, we have \u03b5 = plant, T \u03b5 = {1, 2}, where 1 represents the sense factory and 2 the sense flora. From the phrase \". . . computer manufacturing plant and adjacent. . . \" we obtain e \u03b5 = (. . . computer, manufacturing, (plant), and, adjacent, . . . ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Translation Disambiguation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "For a specific \u03b5, we define a binary classifier for resolving each of its ambiguities in T \u03b5 in a general form as 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Translation Disambiguation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "P(t \u03b5 | e \u03b5 ), t \u03b5 \u2208 T \u03b5 and P(t\u0304 \u03b5 | e \u03b5 ), t\u0304 \u03b5 = T \u03b5 \u2212 {t \u03b5 }", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Translation Disambiguation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "where e \u03b5 denotes an instance representing a context of \u03b5. All of the supervised learning methods mentioned previously can automatically create such a classifier. To construct classifiers using supervised methods, we need classified data such as those in Figure 1 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 255, |
| "end": 263, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Word Translation Disambiguation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Let us first consider the use of decision lists, as proposed in Yarowsky (1994) . Let f \u03b5 denote a feature of the context of \u03b5. A feature can be, for example, a word's occurrence immediately to the left of \u03b5. We define many such features. For each feature f \u03b5 , we use the classified data to calculate the posterior probability ratio of each sense t \u03b5 with respect to the feature as", |
| "cite_spans": [ |
| { |
| "start": 64, |
| "end": 79, |
| "text": "Yarowsky (1994)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decision Lists", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u03bb(t \u03b5 | f \u03b5 ) = P(t \u03b5 | f \u03b5 ) / P(t\u0304 \u03b5 | f \u03b5 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decision Lists", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "For each feature f \u03b5 , we create a rule consisting of the feature, the sense", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decision Lists", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "arg max t\u03b5\u2208T\u03b5 \u03bb(t \u03b5 | f \u03b5 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decision Lists", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "and the score max t\u03b5\u2208T\u03b5 \u03bb(t \u03b5 | f \u03b5 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decision Lists", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We sort the rules in descending order with respect to their scores, provided that the scores of the rules are larger than the default", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decision Lists", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "max t\u03b5\u2208T\u03b5 P(t \u03b5 ) / P(t\u0304 \u03b5 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decision Lists", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The sorted rules form an if-then-else type of rule sequence, that is, a decision list. 3 For a new instance e \u03b5 , we use the decision list to determine its sense. The rule in the list whose feature is first satisfied in the context of e \u03b5 is applied in sense disambiguation. Figure 1: Examples of classified data (\u03b5 = plant).", |
| "cite_spans": [ |
| { |
| "start": 87, |
| "end": 88, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decision Lists", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Let us next consider the use of naive Bayesian classifiers. Given an instance e \u03b5 , we can calculate", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Naive Bayesian Ensemble", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u03bb * (e \u03b5 ) = max t\u03b5\u2208T\u03b5 P(t \u03b5 | e \u03b5 ) / P(t\u0304 \u03b5 | e \u03b5 ) = max t\u03b5\u2208T\u03b5 P(t \u03b5 )P(e \u03b5 | t \u03b5 ) / P(t\u0304 \u03b5 )P(e \u03b5 | t\u0304 \u03b5 )", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Naive Bayesian Ensemble", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "according to Bayes' rule and select the sense", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Naive Bayesian Ensemble", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "t * (e \u03b5 ) = arg max t\u03b5\u2208T\u03b5 P(t \u03b5 )P(e \u03b5 | t \u03b5 ) / P(t\u0304 \u03b5 )P(e \u03b5 | t\u0304 \u03b5 )", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Naive Bayesian Ensemble", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "In a naive Bayesian classifier, we assume that the words in e \u03b5 with a fixed t \u03b5 are independently generated from P(e | t \u03b5 ) and calculate", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Naive Bayesian Ensemble", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "P(e \u03b5 | t \u03b5 ) = m i=1 P(e \u03b5,i | t \u03b5 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Naive Bayesian Ensemble", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Here P(e | t \u03b5 ) represents the conditional probability of e in the context of \u03b5 given t \u03b5 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Naive Bayesian Ensemble", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "We calculate P(e \u03b5 |t\u0304 \u03b5 ) similarly. We can then calculate (1) and (2) with the obtained P(e \u03b5 | t \u03b5 ) and P(e \u03b5 |t\u0304 \u03b5 ). The naive Bayesian ensemble method for word sense disambiguation, as proposed in Pedersen (2000) , employs a linear combination of several naive Bayesian classifiers constructed on the basis of a number of nested surrounding contexts 4", |
| "cite_spans": [ |
| { |
| "start": 204, |
| "end": 219, |
| "text": "Pedersen (2000)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Naive Bayesian Ensemble", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "P(t \u03b5 | e \u03b5 ) = (1/h) \u2211 i=1..h P(t \u03b5 | e \u03b5,i ), e \u03b5,1 \u2282 \u2022 \u2022 \u2022 \u2282 e \u03b5,i \u2282 \u2022 \u2022 \u2022 \u2282 e \u03b5,h = e \u03b5 (i = 1, . . . , h)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Naive Bayesian Ensemble", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The naive Bayesian ensemble is reported to perform the best for word sense disambiguation with respect to a benchmark data set (Pedersen 2000) .", |
| "cite_spans": [ |
| { |
| "start": 127, |
| "end": 142, |
| "text": "(Pedersen 2000)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Naive Bayesian Ensemble", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Since data preparation for supervised learning is expensive, it is desirable to develop bootstrapping methods. Yarowsky (1995) proposed such a method for word sense disambiguation, which we refer to as monolingual bootstrapping.", |
| "cite_spans": [ |
| { |
| "start": 111, |
| "end": 126, |
| "text": "Yarowsky (1995)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Monolingual Bootstrapping", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "Let L \u03b5 denote a set of classified instances (labeled data) in English, each representing one context of \u03b5:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Monolingual Bootstrapping", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "L \u03b5 = {(e \u03b5,1 , t \u03b5,1 ), (e \u03b5,2 , t \u03b5,2 ), . . . , (e \u03b5,k , t \u03b5,k )} t \u03b5,i \u2208 T \u03b5 (i = 1, 2, . . . , k)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Monolingual Bootstrapping", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "and U \u03b5 a set of unclassified instances (unlabeled data) in English, each representing one context of \u03b5:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Monolingual Bootstrapping", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "U \u03b5 = {e \u03b5,1 , e \u03b5,2 , . . . , e \u03b5,l }", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Monolingual Bootstrapping", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "The instances in Figure 1 can be considered examples of L \u03b5 . Furthermore, we have", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 17, |
| "end": 25, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Monolingual Bootstrapping", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "L E = \u22c3 \u03b5\u2208E L \u03b5 , U E = \u22c3 \u03b5\u2208E U \u03b5 , T = \u22c3 \u03b5\u2208E T \u03b5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Monolingual Bootstrapping", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "An algorithm for monolingual bootstrapping is presented in Figure 2 . For a better comparison with bilingual bootstrapping, we have extended the method so that it performs disambiguation for all the words in E. Note that we can employ any kind of classifier here.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 59, |
| "end": 67, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Monolingual Bootstrapping", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "Figure 2 (algorithm). Input: E, T, L E , U E ; Parameters: b, \u03b8. Repeat the following processes until unable to continue. 1. 1 for each (\u03b5 \u2208 E) { 2 for each (t \u2208 T \u03b5 ) { 3 use L \u03b5 to create classifier: P(t | e \u03b5 ), t \u2208 T \u03b5 and P(t\u0304 | e \u03b5 ), t\u0304 \u2208 T \u03b5 \u2212 {t}; }} 2. 4 for each (\u03b5 \u2208 E) { 5 NU \u2190 {}; NL \u2190 {}; 6 for each (t \u2208 T \u03b5 ) { 7 S t \u2190 {}; 8 Q t \u2190 {};} 9 for each (e \u03b5 \u2208 U \u03b5 ){ 10 calculate \u03bb * (e \u03b5 ) = max t\u2208T\u03b5 P(t | e \u03b5 ) / P(t\u0304 | e \u03b5 ); 11 let t * (e \u03b5 ) = arg max t\u2208T\u03b5 P(t | e \u03b5 ) / P(t\u0304 | e \u03b5 ); 12 if (\u03bb * (e \u03b5 ) > \u03b8 & t * (e \u03b5 ) = t) 13 put e \u03b5 into S t ;} 14 for each (t \u2208 T \u03b5 ){ 15 sort e \u03b5 \u2208 S t in descending order of \u03bb * (e \u03b5 ) and put the top b elements into Q t ;} 16 for each (e \u03b5 \u2208 \u22c3 t Q t ){ 17 put e \u03b5 into NU and put (e \u03b5 , t * (e \u03b5 )) into NL;} 18 L \u03b5 \u2190 L \u03b5 \u222a NL; 19 U \u03b5 \u2190 U \u03b5 \u2212 NU;}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Monolingual Bootstrapping", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "At step 1, for each ambiguous word \u03b5 we create binary classifiers for resolving its ambiguities (cf. lines 1-3 of Figure 2 ). At step 2, we use the classifiers for each word \u03b5 to select some unclassified instances from U \u03b5 , classify them, and add them to L \u03b5 (cf. lines 4-19). We repeat the process until all the data are classified.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 114, |
| "end": 122, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Monolingual Bootstrapping", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "Lines 9-13 show that for each unclassified instance e \u03b5 , we classify it as having sense t if t's posterior odds are the largest among the possible senses and are larger than a threshold \u03b8. For each class t, we store the classified instances in S t . Lines 14-15 show that for each class t, we only choose the top b classified instances in terms of the posterior odds. For each class t, we store the selected top b classified instances in Q t . Lines 16-17 show that we create the classified instances by combining the instances with their classification labels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Monolingual Bootstrapping", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "After line 17, we can employ the one-sense-per-discourse heuristic to further classify unclassified data, as proposed in Yarowsky (1995) . This heuristic is based on the observation that when an ambiguous word appears in the same text several times, its tokens usually refer to the same sense. In the bootstrapping process, for each newly classified instance, we automatically assign its class label to those unclassified instances that also contain the same ambiguous word and co-occur with it in the same text.", |
| "cite_spans": [ |
| { |
| "start": 121, |
| "end": 136, |
| "text": "Yarowsky (1995)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Monolingual Bootstrapping", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "Hereafter, we will refer to this method as monolingual bootstrapping with one sense per discourse. This method can be viewed as a special case of co-training (Blum and Mitchell 1998) .", |
| "cite_spans": [ |
| { |
| "start": 158, |
| "end": 182, |
| "text": "(Blum and Mitchell 1998)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Monolingual Bootstrapping", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "Monolingual bootstrapping augmented with the one-sense-per-discourse heuristic can be viewed as a special case of co-training, as proposed by Blum and Mitchell (1998) (see also Collins and Singer 1999; and Nigam and Ghani 2000) . Co-training conducts two bootstrapping processes in parallel and makes them collaborate with each other. More specifically, co-training begins with a small amount of classified data and a large amount of unclassified data. It trains two classifiers from the classified data, uses each of the two classifiers to classify some unclassified data, makes the two classifiers exchange their classified data, and repeats the process.", |
| "cite_spans": [ |
| { |
| "start": 142, |
| "end": 166, |
| "text": "Blum and Mitchell (1998)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 177, |
| "end": 201, |
| "text": "Collins and Singer 1999;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 202, |
| "end": 227, |
| "text": "and Nigam and Ghani 2000)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Co-training", |
| "sec_num": "2.5" |
| }, |
| { |
| "text": "Bilingual bootstrapping makes use of a small amount of classified data and a large amount of unclassified data in both the source and the target languages in translation. It repeatedly constructs classifiers in the two languages in parallel and boosts the performance of the classifiers by classifying data in each of the languages and by exchanging information regarding the classified data between the two languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Figures 3 and 4 illustrate the process of bilingual bootstrapping. Figure 5 shows the translation relationship among the ambiguous words plant, zhiwu, and gongchang. There is a classifier for plant in English. There are also two classifiers in Chinese, one for zhiwu and one for gongchang. Sentences containing plant in English and sentences containing zhiwu and gongchang in Chinese are used.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 67, |
| "end": 75, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Basic Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In the beginning, sentences P1 and P4 on the English side are assigned labels 1 and 2, respectively (Figure 3 ). On the Chinese side, sentences G1 and G3 are assigned labels 1 and 3, respectively, and sentences Z1 and Z3 are assigned labels 2 and 4, respectively. The four labels here correspond to the four links in Figure 5 . For example, label 1 represents the sense factory and label 2 represents the sense flora. Other sentences are not labeled.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 100, |
| "end": 109, |
| "text": "(Figure 3", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 317, |
| "end": 325, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Basic Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Example of translation dictionary. Bilingual bootstrapping uses labeled sentences P1, P4, G1, and Z1 to create a classifier for plant disambiguation (between label 1 and label 2). It also uses labeled sentences Z1, Z3, and P4 to create a classifier for zhiwu and uses labeled sentences G1, G3, and P1 to create a classifier for gongchang. Bilingual bootstrapping next uses the classifier for plant to label sentences P2 and P5 ( Figure 4 ). It uses the classifier for zhiwu to label sentences Z2 and Z4, and uses the classifier for gongchang to label sentences G2 and G4. The process is repeated until we cannot continue.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 429, |
| "end": 437, |
| "text": "Figure 4", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "To describe this process formally, let E denote a set of words in English, C a set of words in Chinese, and T a set of senses (links) in a translation dictionary as shown in Figure 5 . (Any two linked words can be translations of each other.) Mathematically, T is defined as a relation between E and C, that is, T \u2286 E \u00d7 C. Let \u03b5 stand for an ambiguous word in E, and \u03b3 an ambiguous word in C. Also let e stand for a context word in E, c a context word in C, and t a sense in T.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 174, |
| "end": 182, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "For an English word \u03b5, T \u03b5 = {t | t = (\u03b5, \u03b3 ), t \u2208 T} represents the set of \u03b5's possible senses (i.e., its links), and C \u03b5 = {\u03b3 | (\u03b5, \u03b3 ) \u2208 T} represents the Chinese words that can be translations of \u03b5 (i.e., Chinese words to which \u03b5 is linked). Similarly, for a Chinese word \u03b3, let", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "T \u03b3 = {t | t = (\u03b5 , \u03b3), t \u2208 T} and E \u03b3 = {\u03b5 | (\u03b5 , \u03b3) \u2208 T}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "For the example in Figure 5 , when \u03b5 = plant, we have T \u03b5 = {1, 2} and C \u03b5 = {gongchang, zhiwu}. When \u03b3 = gongchang, T \u03b3 = {1, 3} and E \u03b3 = {plant, mill}. When \u03b3 = zhiwu, T \u03b3 = {2, 4} and E \u03b3 = {plant, vegetable}. Note that gongchang shares sense 1 with plant, and zhiwu shares sense 2 with plant.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 19, |
| "end": 27, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
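The sets T\u03b5, C\u03b5, T\u03b3, and E\u03b3 are straightforward to compute once the dictionary is stored as a relation. A minimal Python sketch of the plant/gongchang/zhiwu example (the data structure and function names are ours, for illustration only):

```python
# Translation dictionary T as a relation: each link is (english, chinese, sense_id).
# Sense ids follow the paper's example: 1 = factory, 2 = flora.
T = {
    ("plant", "gongchang", 1),
    ("plant", "zhiwu", 2),
    ("mill", "gongchang", 3),
    ("vegetable", "zhiwu", 4),
}

def senses_en(eps):
    """T_epsilon: the senses (links) of an English word."""
    return {t for (e, c, t) in T if e == eps}

def trans_en(eps):
    """C_epsilon: the Chinese words linked to an English word."""
    return {c for (e, c, t) in T if e == eps}

def senses_cn(gamma):
    """T_gamma: the senses (links) of a Chinese word."""
    return {t for (e, c, t) in T if c == gamma}

def trans_cn(gamma):
    """E_gamma: the English words linked to a Chinese word."""
    return {e for (e, c, t) in T if c == gamma}

# Reproduces the worked example above:
assert senses_en("plant") == {1, 2} and trans_en("plant") == {"gongchang", "zhiwu"}
assert senses_cn("gongchang") == {1, 3} and trans_cn("gongchang") == {"plant", "mill"}
assert senses_cn("zhiwu") == {2, 4} and trans_cn("zhiwu") == {"plant", "vegetable"}
```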
| { |
| "text": "Let e \u03b5 denote an instance (a sequence of context words surrounding \u03b5) in English:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "e \u03b5 = (e \u03b5,1 , e \u03b5,2 , . . . , e \u03b5,m ), e \u03b5,i \u2208 E (i = 1, 2, . . . , m)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "Let c \u03b3 denote an instance (a sequence of context words surrounding \u03b3) in Chinese:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "c \u03b3 = (c \u03b3,1 , c \u03b3,2 , . . . , c \u03b3,n ), c \u03b3,i \u2208 C (i = 1, 2, . . . , n)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "For an English word \u03b5, a binary classifier for resolving each of the ambiguities in T \u03b5 is defined as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "P(t \u03b5 | e \u03b5 ), t \u03b5 \u2208 T \u03b5 and P(t\u0304 \u03b5 | e \u03b5 ), t\u0304 \u03b5 = T \u03b5 \u2212 {t \u03b5 }", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "Similarly, for a Chinese word \u03b3, a binary classifier is defined as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "P(t \u03b3 | c \u03b3 ), t \u03b3 \u2208 T \u03b3 and P(t\u0304 \u03b3 | c \u03b3 ), t\u0304 \u03b3 = T \u03b3 \u2212 {t \u03b3 }", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "Let L \u03b5 denote a set of classified instances in English, each representing one context of \u03b5:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "L \u03b5 = {(e \u03b5,1 , t \u03b5,1 ), (e \u03b5,2 , t \u03b5,2 ), . . . , (e \u03b5,k , t \u03b5,k )}, t \u03b5,i \u2208 T \u03b5 (i = 1, 2, . . . , k)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "and U \u03b5 a set of unclassified instances in English, each representing one context of \u03b5:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "U \u03b5 = {e \u03b5,1 , e \u03b5,2 , . . . , e \u03b5,l }", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "Similarly, we denote the sets of classified and unclassified instances with respect to \u03b3 in Chinese as L \u03b3 and U \u03b3 , respectively. Furthermore, we have", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "L E = \u222a \u03b5\u2208E L \u03b5 , L C = \u222a \u03b3\u2208C L \u03b3 , U E = \u222a \u03b5\u2208E U \u03b5 , U C = \u222a \u03b3\u2208C U \u03b3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "We also have", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "T = \u222a \u03b5\u2208E T \u03b5 = \u222a \u03b3\u2208C T \u03b3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "Sentences P1 and P4 in Figure 3 are examples of L \u03b5 . Sentences Z1, Z3 and G1, G3 are examples of L \u03b3 . We perform bilingual bootstrapping as described in Figure 6 . Note that we can, in principle, employ any kind of classifier here.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 23, |
| "end": 31, |
| "text": "Figure 3", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 155, |
| "end": 163, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "The figure explains the process for English (left-hand side); the process for Chinese (right-hand side) behaves similarly. At step 1, for each ambiguous word \u03b5, we create binary classifiers for resolving its ambiguities (cf. lines 1-3). The main point here is that we use classified data from both languages to construct classifiers, as we describe in Section 3.2. For the example in Figure 3 , we use both L \u03b5 (sentences P1 and P4) and L \u03b3 , \u03b3 \u2208 C \u03b5 (sentences Z1 and G1) to construct a classifier resolving ambiguities in T \u03b5 = {1, 2}. Note that not only P1 and P4, but also Z1 and G1, are related to {1, 2}. At step 2, for each word \u03b5, we use its classifiers to select some unclassified instances from U \u03b5 , classify them, and add them to L \u03b5 (cf. lines 4-19). We repeat the process until we cannot continue.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 384, |
| "end": 392, |
| "text": "Figure 3", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "Lines 9-13 show that for each unclassified instance e \u03b5 , we use the classifiers to classify it into the class (sense) t if t's posterior odds are the largest among the possible classes and are larger than a threshold \u03b8. For each class t, we store the classified instances in S t . Lines 14-15 show that for each class t, we choose only the top b classified instances (in terms of the posterior odds), which are then stored in Q t . Lines 16-17 show that we create the classified instances by combining the instances with their classification labels. We note that after line 17 we can also employ the one-senseper-discourse heuristic.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
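The selection in lines 9-17 amounts to a threshold test followed by a per-class top-b cut. A minimal sketch (the instances, odds values, \u03b8, and b below are invented for illustration):

```python
from collections import defaultdict

def select_instances(scored, theta, b):
    """scored: list of (instance, {class: posterior_odds}).
    Classify an instance into the class with the largest odds if those
    odds exceed theta (the S_t sets); then keep only the top-b instances
    per class (the Q_t sets), attaching the class label."""
    S = defaultdict(list)
    for inst, odds in scored:
        t = max(odds, key=odds.get)            # class with largest odds
        if odds[t] > theta:
            S[t].append((odds[t], inst))
    Q = {}
    for t, cands in S.items():
        cands.sort(reverse=True)               # best odds first
        Q[t] = [(inst, t) for _, inst in cands[:b]]
    return Q

scored = [("e1", {1: 3.0, 2: 0.5}), ("e2", {1: 1.2, 2: 2.4}), ("e3", {1: 1.1, 2: 0.9})]
Q = select_instances(scored, theta=1.5, b=1)
assert Q == {1: [("e1", 1)], 2: [("e2", 2)]}   # e3 falls below the threshold
```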
| { |
| "text": "Although we can in principle employ any kind of classifier in BB, here we use naive Bayes (or a naive Bayesian ensemble). We also use the EM algorithm to transform classified data from one language into the other. As will be made clear, this implementation of BB naturally combines the features of naive Bayes (or the naive Bayesian ensemble) with those of EM. Hereafter, when we refer to BB, we mean this implementation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An Implementation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We explain the process for English (left-hand side of Figure 6) ; the process for Chinese (right-hand side of figure) behaves similarly. At step 1 in BB, we construct a naive Bayesian classifier as described in Figure 7 . At step 2, for each instance e \u03b5 , we use the classifier to calculate", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 54, |
| "end": 63, |
| "text": "Figure 6)", |
| "ref_id": null |
| }, |
| { |
| "start": 211, |
| "end": 219, |
| "text": "Figure 7", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "An Implementation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\u03bb * (e \u03b5 ) = max t\u03b5\u2208T\u03b5 P(t \u03b5 | e \u03b5 ) / P(t\u0304 \u03b5 | e \u03b5 ) = max t\u03b5\u2208T\u03b5 P(t \u03b5 )P(e \u03b5 | t \u03b5 ) / ( P(t\u0304 \u03b5 )P(e \u03b5 | t\u0304 \u03b5 ) )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An Implementation", |
| "sec_num": "3.2" |
| }, |
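With the naive Bayes decomposition, the posterior odds follow directly from the class priors and the per-word conditionals. A small sketch with made-up probability tables (the tiny floor value stands in for smoothing and is not the paper's estimator):

```python
def posterior_odds(context, p_t, p_not_t, p_w_t, p_w_not_t):
    """Odds for one candidate sense t:
    P(t) * prod_i P(e_i|t)  /  ( P(t-bar) * prod_i P(e_i|t-bar) )."""
    num, den = p_t, p_not_t
    for w in context:
        num *= p_w_t.get(w, 1e-6)      # illustrative floor, not real smoothing
        den *= p_w_not_t.get(w, 1e-6)
    return num / den

# Invented tables for the 'factory' sense of plant:
odds = posterior_odds(["worker", "production"], 0.5, 0.5,
                      {"worker": 0.3, "production": 0.2},
                      {"worker": 0.05, "production": 0.04})
assert odds > 1.0   # this context favors the factory sense
```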
| { |
| "text": "Figure 6: Bilingual bootstrapping.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An Implementation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We estimate", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An Implementation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "P(e \u03b5 | t \u03b5 ) = \u220f i=1..m P(e \u03b5,i | t \u03b5 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An Implementation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We estimate P(e \u03b5 | t\u0304 \u03b5 ) similarly. We estimate P(e \u03b5 | t \u03b5 ) by linearly combining P (E) (e \u03b5 | t \u03b5 ), estimated from English, and P (C) (e \u03b5 | t \u03b5 ), estimated from Chinese:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An Implementation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P(e \u03b5 | t \u03b5 ) = (1 \u2212 \u03b1 \u2212 \u03b2)P (E) (e \u03b5 | t \u03b5 ) + \u03b1P (C) (e \u03b5 | t \u03b5 ) + \u03b2P (U) (e \u03b5 )", |
| "eq_num": "( 3)" |
| } |
| ], |
| "section": "An Implementation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "where 0 \u2264 \u03b1 \u2264 1, 0 \u2264 \u03b2 \u2264 1, \u03b1 + \u03b2 \u2264 1, and P (U) (e \u03b5 ) is a uniform distribution over E, which is used for avoiding zero probability. In this way, we estimate P(e \u03b5 | t \u03b5 ) using information from not only English, but also Chinese. We estimate P (E) (e \u03b5 | t \u03b5 ) with maximum-likelihood estimation (MLE) using L \u03b5 as data. The estimation of P (C) (e \u03b5 | t \u03b5 ) proceeds as follows.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An Implementation", |
| "sec_num": "3.2" |
| }, |
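Equation (3) is a three-way linear interpolation. A sketch with illustrative distributions (\u03b1 = 0.4 and \u03b2 = 0.2, the values used in the experiments later in the article; the vocabularies and probabilities are invented):

```python
def interpolate(p_en, p_cn, vocab, alpha=0.4, beta=0.2):
    """P(e|t) = (1-alpha-beta)*P_E(e|t) + alpha*P_C(e|t) + beta*P_U(e),
    with P_U uniform over the vocabulary to avoid zero probabilities."""
    p_u = 1.0 / len(vocab)
    return {e: (1.0 - alpha - beta) * p_en.get(e, 0.0)
               + alpha * p_cn.get(e, 0.0)
               + beta * p_u
            for e in vocab}

vocab = ["worker", "production", "flower"]
p = interpolate({"worker": 0.7, "production": 0.3},   # MLE estimate from English
                {"worker": 0.5, "production": 0.5},   # EM estimate via Chinese
                vocab)
assert abs(sum(p.values()) - 1.0) < 1e-9              # still a distribution
assert p["flower"] > 0.0                              # no zero probability
```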
| { |
| "text": "For the sake of readability, we rewrite P (C) (e \u03b5 | t \u03b5 ) as P(e | t). We define a finite-mixture model of the form P(c | t) = \u2211 e\u2208E P(c | e, t)P(e | t), and for a specific \u03b5 we assume that the data in", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An Implementation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "L \u03b3 = {(c \u03b3,1 , t \u03b3,1 ), (c \u03b3,2 , t \u03b3,2 ), . . . , (c \u03b3,h , t \u03b3,h )}, t \u03b3,i \u2208 T \u03b3 (i = 1, . . . , h), \u2200\u03b3 \u2208 C \u03b5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An Implementation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "are generated independently from the model. We can therefore employ the expectation-maximization (EM) algorithm (Dempster, Laird, and Rubin 1977) to estimate the parameters of the model, including P(e | t). Note that e and c represent context words. (Figure 7, constructing the classifier: estimate P (E) (e \u03b5 | t \u03b5 ) with MLE using L \u03b5 as data; estimate P (C) (e \u03b5 | t \u03b5 ) with the EM algorithm using L \u03b3 for each \u03b3 \u2208 C \u03b5 as data; calculate P(e \u03b5 | t \u03b5 ) as a linear combination of P (E) (e \u03b5 | t \u03b5 ) and P (C) (e \u03b5 | t \u03b5 ); estimate P(t \u03b5 ) with MLE using L \u03b5 ; calculate P(e \u03b5 | t\u0304 \u03b5 ) and P(t\u0304 \u03b5 ) similarly.)", |
| "cite_spans": [ |
| { |
| "start": 399, |
| "end": 432, |
| "text": "(Dempster, Laird, and Rubin 1977)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An Implementation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Recall that E is a set of words in English, C is a set of words in Chinese, and T is a set of senses. For a specific English word e, C e = {c | (e, c ) \u2208 T} represents the Chinese words that are its possible translations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An Implementation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Initially, we set", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An Implementation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "P(c | e, t) = 1 / |C e | if c \u2208 C e , and 0 if c \u2209 C e ; P(e | t) = 1 / |E| , e \u2208 E", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An Implementation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We next estimate the parameters by iteratively updating them, as described in Figure 8 , until they converge. Here f (c, t) stands for the frequency of c in the instances which have sense t. The context information in Chinese f (c, t \u03b5 ) is then \"transformed\" into the English version P (C) (e \u03b5 | t \u03b5 ) through the links in T. Figure 9 shows an example of estimating P(e \u03b5 | t \u03b5 ) with respect to the factory sense (i.e., sense 1). We first use sentences such as P1 in Figure 3 to estimate P (E) (e \u03b5 | t \u03b5 ) with MLE as described above. We next use sentences such as G1 to estimate P (C) (e \u03b5 | t \u03b5 ) as described above. Specifically, with the frequency data f (c, t \u03b5 ) and EM we can estimate P (C) (e \u03b5 | t \u03b5 ). Finally, we linearly combine P (E) (e \u03b5 | t \u03b5 ) and P (C) (e \u03b5 | t \u03b5 ) to obtain P(e \u03b5 | t \u03b5 ).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 78, |
| "end": 86, |
| "text": "Figure 8", |
| "ref_id": null |
| }, |
| { |
| "start": 328, |
| "end": 336, |
| "text": "Figure 9", |
| "ref_id": null |
| }, |
| { |
| "start": 470, |
| "end": 478, |
| "text": "Figure 3", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "An Implementation", |
| "sec_num": "3.2" |
| }, |
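The EM iteration for this mixture can be sketched as follows. This is a simplification of our own: it re-estimates only P(e|t) from the counts f(c,t), holding P(c|e,t) fixed at its dictionary-based initialization (the full update is the one given in Figure 8). All data values are invented.

```python
def em_p_e_given_t(f_ct, p_c_given_et, words, n_iter=50):
    """Estimate P(e|t) for one sense t from Chinese counts f(c,t), under
    the mixture P(c|t) = sum_e P(c|e,t) P(e|t), with P(c|e,t) held fixed."""
    p_e = {e: 1.0 / len(words) for e in words}          # uniform start
    for _ in range(n_iter):
        new = {e: 0.0 for e in words}
        for c, f in f_ct.items():
            z = sum(p_c_given_et.get((c, e), 0.0) * p_e[e] for e in words)
            if z == 0.0:
                continue
            for e in words:                              # E-step: P(e|c,t)
                new[e] += f * p_c_given_et.get((c, e), 0.0) * p_e[e] / z
        total = sum(new.values())                        # M-step: normalize
        p_e = {e: v / total for e, v in new.items()}
    return p_e

# Toy data: two Chinese context words observed with sense t, each
# reachable from exactly one English context word via the dictionary.
p_e = em_p_e_given_t({"c1": 8, "c2": 2},
                     {("c1", "e1"): 1.0, ("c2", "e2"): 1.0},
                     ["e1", "e2"])
assert abs(p_e["e1"] - 0.8) < 1e-6
```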
| { |
| "text": "We note that monolingual bootstrapping is a special case of bilingual bootstrapping (consider the situation in which \u03b1 = 0 in formula (3)).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison of BB and MB", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "BB can always perform better than MB. The asymmetric relationship between the ambiguous words in the two languages stands out as the key to the higher performance of BB. (Figure 8: The EM algorithm.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison of BB and MB", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Parameter estimation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": "Example application of BB.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 10", |
| "sec_num": null |
| }, |
| { |
| "text": "By asymmetric relationship we mean the many-to-many mapping relationship between the words in the two languages, as shown in Figure 10 . Suppose that the classifier with respect to plant has two classes (denoted as A and B in Figure 10 ). Further suppose that the classifiers with respect to gongchang and zhiwu in Chinese have two classes each: (C and D) and (E and F), respectively. A and D are equivalent to one another (i.e., they represent the same sense), and so are B and E.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 132, |
| "end": 141, |
| "text": "Figure 10", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 233, |
| "end": 242, |
| "text": "Figure 10", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 10", |
| "sec_num": null |
| }, |
| { |
| "text": "Assume that instances are classified after several iterations of BB as depicted in Figure 10 . Here, circles denote the instances that are correctly classified and crosses denote the instances that are incorrectly classified.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 83, |
| "end": 92, |
| "text": "Figure 10", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 10", |
| "sec_num": null |
| }, |
| { |
| "text": "Since A and D are equivalent to one another, we can transform the instances labeled D and use them to boost the performance of classification into A. The misclassified instances (crosses) in D are those mistakenly drawn from C, and they will not have much negative effect on classification into A, even though the translation from Chinese into English can introduce some noise. Similar explanations can be given for the other classification decisions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 10", |
| "sec_num": null |
| }, |
| { |
| "text": "In contrast, MB uses only the instances in A and B to construct a classifier. When the number of misclassified instances increases (as is inevitable in bootstrapping), its performance will stop improving. This phenomenon has also been observed when MB is applied to other tasks (cf. Banko and Brill 2001; Pierce and Cardie 2001) .", |
| "cite_spans": [ |
| { |
| "start": 283, |
| "end": 304, |
| "text": "Banko and Brill 2001;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 305, |
| "end": 328, |
| "text": "Pierce and Cardie 2001)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 10", |
| "sec_num": null |
| }, |
| { |
| "text": "We note that there are similarities between BB and co-training. Both BB and co-training execute two bootstrapping processes in parallel and make the two processes collaborate with one another in order to improve their performance. The two processes look at different types of information in data and exchange the information in learning. However, there are also significant differences between BB and co-training. In co-training, the two processes use different features, whereas in BB, the two processes use different classes. In BB, although the features used by the two classifiers are transformed from one language into the other, they belong to the same space. In co-training, on the other hand, the features used by the two classifiers belong to two different spaces.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relationship between BB and Co-training", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "We have conducted two experiments on English-Chinese translation disambiguation. In this section, we will first describe the experimental settings and then present the results. We will also discuss the results of several follow-on experiments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "4." |
| }, |
| { |
| "text": "Although it is possible to apply the algorithm of BB described in Section 3 to word translation disambiguation directly, here we use a variant that is better adapted to the task and allows fairer comparison with existing technologies. The variant of BB we use has four modifications:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Disambiguation Using BB", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "It actually employs naive Bayesian ensemble rather than naive Bayes, because naive Bayesian ensemble generally performs better than naive Bayes (Pedersen 2000) .", |
| "cite_spans": [ |
| { |
| "start": 144, |
| "end": 159, |
| "text": "(Pedersen 2000)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1.", |
| "sec_num": null |
| }, |
| { |
| "text": "It employs the one-sense-per-discourse heuristic. It turns out that in BB with one sense per discourse, there are two layers of bootstrapping. On the top level, bilingual bootstrapping is performed between the two languages, and on the second level, co-training is performed within each language. (Recall that MB with one sense per discourse can be viewed as co-training.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "It uses only classified data in English at the beginning. That is to say, it requires exactly the same human labeling efforts as MB does.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "It individually resolves ambiguities on selected English words such as plant and interest. (Note that the basic algorithm of BB performs disambiguation on all the words in English and Chinese.) As a result, in the case of plant, for example, the classifiers with respect to gongchang and zhiwu make classification decisions only on D and E and not C and F (in Figure 10) , because it is not necessary to make classification decisions on C and F. In particular, it calculates \u03bb * (c) as \u03bb * (c) = P(c | t) and sets \u03b8 = 0 in the right-hand side of step 2.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 360, |
| "end": 370, |
| "text": "Figure 10)", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "4.", |
| "sec_num": null |
| }, |
| { |
| "text": "We consider here two implementations of MB for word translation disambiguation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Disambiguation Using MB", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In the first implementation, in addition to the basic algorithm of MB, we also use (1) naive Bayesian ensemble, (2) one sense per discourse, and (3) a small amount of classified data in English at the beginning. (We will denote this implementation as MB-B hereafter.) The second implementation is different from the first one only in (1). That is, it employs a decision list as the classifier. This implementation is exactly the one proposed in Yarowsky (1995) . (We will denote it as MB-D hereafter.) MB-B and MB-D can be viewed as the state-of-the-art methods for word translation disambiguation using bootstrapping.", |
| "cite_spans": [ |
| { |
| "start": 445, |
| "end": 460, |
| "text": "Yarowsky (1995)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Disambiguation Using MB", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We first applied BB, MB-B, and MB-D to translation disambiguation on the English words line and interest using a benchmark data set. 5 The data set consists mainly of articles from the Wall Street Journal and is prepared for conducting word sense disambiguation (WSD) on the two words (e.g., Pedersen 2000) . We collected from the HIT dictionary 6 the Chinese words that can be translations of the two English words; these are listed in Table 1 . One sense of an English word links to one group of Chinese words. (For the word interest, we used only its four major senses, because the remaining two minor senses occur in only 3.3% of the data.)", |
| "cite_spans": [ |
| { |
| "start": 133, |
| "end": 134, |
| "text": "5", |
| "ref_id": null |
| }, |
| { |
| "start": 292, |
| "end": 306, |
| "text": "Pedersen 2000)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 437, |
| "end": 444, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment 1: WSD Benchmark Data", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "For each sense, we selected an English word that is strongly associated with the sense according to our own intuition (cf. Table 1) . We refer to this word as a seed word. For example, for the sense of money paid for the use of money, we selected the word rate. We viewed the seed word as a classified \"sentence,\" following a similar proposal in Yarowsky (1995) . In this way, for each sense we had a classified instance in English. As unclassified data in English, we collected sentences in news articles from a Web site (www.news.com), and as unclassified data in Chinese, we collected sentences in news articles from another Web site (news.cn.tom.com). Note that we need to use only the sentences containing the words in Table 1 . We observed that the distribution of the senses in the unclassified data was balanced. As test data, we used the entire benchmark data set. Table 2 shows the sizes of the data sets. Note that there are in general more unclassified sentences (and texts) in Chinese than in English, because one English word usually can link to several Chinese words (cf. Figure 5) .", |
| "cite_spans": [ |
| { |
| "start": 346, |
| "end": 361, |
| "text": "Yarowsky (1995)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 123, |
| "end": 131, |
| "text": "Table 1)", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 724, |
| "end": 731, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 874, |
| "end": 881, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 1087, |
| "end": 1096, |
| "text": "Figure 5)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment 1: WSD Benchmark Data", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "As the translation dictionary, we used the HIT dictionary, which contains about 76,000 Chinese words, 60,000 English words, and 118,000 senses (links). We then used the data to conduct translation disambiguation with BB, MB-B, and MB-D, as described in Sections 4.1 and Section 4.2. For both BB and MB-B, we used an ensemble of five naive Bayesian classifiers with window sizes of \u00b11, \u00b13, \u00b15, \u00b17, and \u00b19 words, and we set the parameters \u03b2, b, and \u03b8 to 0.2, 15, and 1.5, respectively. The parameters were tuned on the basis of our preliminary experimental results on MB-B; they were not tuned, however, for BB. We set the BB-specific parameter \u03b1 to 0.4, which meant that we weighted information from English and Chinese equally. Table 3 shows the translation disambiguation accuracies of the three methods as well as that of a baseline method in which we always choose the most frequent sense. Figures 11 and 12 show the learning curves of MB-D, MB-B, and BB. Figure 13 shows the accuracies of BB with different \u03b1 values. From the results, we see that BB consistently and significantly outperforms both MB-D and MB-B. The results from the sign test are statistically significant (p-value < 0.001). (For the sign test method, see, for example, Yang and Liu [1999] ). Table 4 shows the results achieved by some existing supervised learning methods with respect to the benchmark data (cf. Pedersen 2000) . Although BB is a method nearly equivalent to one based on unsupervised learning, it still performs favorably when compared with the supervised methods (note that since the experimental settings are different, the results cannot be directly compared).", |
| "cite_spans": [ |
| { |
| "start": 1242, |
| "end": 1261, |
| "text": "Yang and Liu [1999]", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 1385, |
| "end": 1399, |
| "text": "Pedersen 2000)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 728, |
| "end": 735, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 893, |
| "end": 910, |
| "text": "Figures 11 and 12", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 959, |
| "end": 968, |
| "text": "Figure 13", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 1265, |
| "end": 1272, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment 1: WSD Benchmark Data", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We also conducted translation disambiguation on seven of the twelve English words studied in Yarowsky (1995) . Table 5 lists the words we used.", |
| "cite_spans": [ |
| { |
| "start": 78, |
| "end": 93, |
| "text": "Yarowsky (1995)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 96, |
| "end": 103, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment 2: Yarowsky's Words", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Learning curves with interest.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 11", |
| "sec_num": null |
| }, |
| { |
| "text": "Learning curves with line. Learning curves with space.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 12", |
| "sec_num": null |
| }, |
| { |
| "text": "Learning curves with tank. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "We investigated the reason for BB's outperforming MB and found that the explanation in Section 3.3 appears to be valid according to the following observations. 1. In a naive Bayesian classifier, words with large values of the likelihood ratio P(e | t) / P(e | t\u0304) have strong influences on classification. We collected the words having the largest likelihood ratio with respect to each sense t in both BB and MB-B and found that BB clearly has more \"relevant words\" than MB-B. Here words relevant to a particular sense refer to the words that are strongly indicative of that sense according to human judgments. Table 8 shows the top 10 words in terms of likelihood ratio with respect to the interest rate sense in both BB and MB-B. The relevant words are italicized. Figure 21 shows the numbers of relevant words with respect to the four senses of interest in BB and MB-B.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 607, |
| "end": 614, |
| "text": "Table 8", |
| "ref_id": null |
| }, |
| { |
| "start": 763, |
| "end": 772, |
| "text": "Figure 21", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "2. From Figure 13 , we see that the performance of BB remains high or gets higher even when \u03b1 becomes larger than 0.4 (recall that \u03b2 was fixed at 0.2). This result strongly indicates that the information from Chinese has positive effects.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 8, |
| "end": 17, |
| "text": "Figure 13", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "3. One might argue that the higher performance of BB can be attributed to the larger amount of unclassified data it uses, and thus that if we increased the amount of unclassified data for MB, MB could perform as well as BB. We conducted an additional experiment and found that this is not the case. Figure 22 shows the accuracies achieved by MB-B as the amount of unclassified data increases. The plot shows that the accuracy of MB-B does not improve when the amount of unclassified data increases.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 309, |
| "end": 318, |
| "text": "Figure 22", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "When more unclassified data are available. Figure 22 also plots the results of BB, as well as those of a method referred to as MB-C. In MB-C, we linearly combined two MB-B classifiers constructed with two different unclassified data sets. We found that although MB-C improves the accuracies, they are still much lower than those of BB.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 55, |
| "end": 64, |
| "text": "Figure 22", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 22", |
| "sec_num": null |
| }, |
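The MB-C baseline described above, a linear combination of two classifiers, can be sketched as interpolating their posterior distributions over senses. This is an illustrative reconstruction; the equal interpolation weight and the toy posteriors are assumptions, not values from the paper.

```python
def combine(posteriors_a, posteriors_b, w=0.5):
    """Linearly interpolate two classifiers' posterior distributions
    over the same set of senses: w * P_a(t|x) + (1 - w) * P_b(t|x)."""
    return {t: w * posteriors_a[t] + (1 - w) * posteriors_b[t]
            for t in posteriors_a}

# Two MB-B classifiers trained on different unclassified data sets
# (made-up posteriors for one occurrence of "interest").
p1 = {"rate": 0.7, "hobby": 0.3}
p2 = {"rate": 0.4, "hobby": 0.6}
combined = combine(p1, p2)
best = max(combined, key=combined.get)  # the sense MB-C would output
```

The combined distribution still sums to one, and the predicted sense is simply its argmax.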
| { |
| "text": "Accuracy of text classification.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Table 9", |
| "sec_num": null |
| }, |
| { |
| "text": "MB-B (%) BB (%)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classes", |
| "sec_num": null |
| }, |
| { |
| "text": "Finance and industry 93.2 92.9 Finance and trade 78.4 78.6", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classes", |
| "sec_num": null |
| }, |
| { |
| "text": "Accuracy of disambiguation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Table 10", |
| "sec_num": null |
| }, |
| { |
| "text": "With one sense per discourse 54.7 69.3 75.5 Without one sense per discourse 54.6 66.4 71.6", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MB-D (%) MB-B (%) BB (%)", |
| "sec_num": null |
| }, |
| { |
| "text": "We have addressed here the problem of classification across two languages. Specifically we have considered the problem of bootstrapping. We find that when the task is word translation disambiguation between two languages, we can use the asymmetric relationship between the ambiguous words in the two languages to significantly boost the performance of bootstrapping. We refer to this approach as bilingual bootstrapping.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5." |
| }, |
| { |
| "text": "We have developed a method for implementing this bootstrapping approach that naturally combines the use of naive Bayes and the EM algorithm. Future work includes a theoretical analysis of bilingual bootstrapping (generalization error of BB, relationship between BB and co-training, etc.) and extensions of bilingual bootstrapping to more complicated machine translation tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5." |
| }, |
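The combination of naive Bayes and the EM algorithm mentioned in the conclusion (given as update formulas in Figure 8) can be sketched as one EM round over word counts for a fixed sense t. This is an illustrative reconstruction under assumed toy data; the variable names (e for English words, c for Chinese words, f for the counts f(c, t)) follow the paper's notation, but the initialization values are made up.

```python
def em_step(p_c_given_e, p_e, f):
    """One EM iteration of the Figure 8 updates for a fixed sense t.
    p_c_given_e[(c, e)] ~ P(c|e,t); p_e[e] ~ P(e|t); f[c] ~ f(c,t)."""
    E, C = list(p_e), list(f)
    # E-step: responsibilities P(e|c,t) via Bayes' rule
    p_e_given_c = {}
    for c in C:
        z = sum(p_c_given_e[(c, e)] * p_e[e] for e in E)
        for e in E:
            p_e_given_c[(c, e)] = p_c_given_e[(c, e)] * p_e[e] / z
    # M-step: re-estimate P(c|e,t) and P(e|t) from expected counts
    new_p_c_given_e = {}
    for e in E:
        z = sum(f[c] * p_e_given_c[(c, e)] for c in C)
        for c in C:
            new_p_c_given_e[(c, e)] = f[c] * p_e_given_c[(c, e)] / z
    n = sum(f.values())
    new_p_e = {e: sum(f[c] * p_e_given_c[(c, e)] for c in C) / n for e in E}
    return new_p_c_given_e, new_p_e

# Toy initialization: two English words, two Chinese words, counts f(c, t).
p0 = {("c1", "e1"): 0.9, ("c2", "e1"): 0.1, ("c1", "e2"): 0.2, ("c2", "e2"): 0.8}
pe0 = {"e1": 0.5, "e2": 0.5}
f = {"c1": 3, "c2": 1}
p_c_e, p_e = em_step(p0, pe0, f)
```

Iterating `em_step` to convergence yields the maximum-likelihood parameters; the resulting P(e|t) reflects which English words best explain the observed Chinese counts.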
| { |
| "text": "In this article we always employ binary classifiers, even when there are multiple classes. 3 We note that there are two types of decision lists. One is defined as it is here; the other is defined as a conditional distribution over a partition of the feature space (cf. Li and Yamanishi 2002).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Here u \u2282 v denotes that u is a subsequence of v.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.d.umn.edu/~tpederse/data.html. 6 This dictionary was created by the Harbin Institute of Technology.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://encarta.msn.com/default.asp. 8 http://www.whlib.ac.cn/sjk/bkqs.htm.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "4. We have noticed that a key to BB's performance is the asymmetric relationship between the classes in the two languages. We therefore tested the performance of MB and BB when the classes in the two languages are symmetric (i.e., in a one-to-one mapping). We performed two experiments on text classification in which the categories were finance and industry, and finance and trade, respectively. We collected Chinese texts from the People's Daily in 1998 that had already been assigned class labels. We used half of them as unclassified training data in Chinese and the remaining half as test data in Chinese. We also collected English texts from the Wall Street Journal and used them as unclassified training data in English. We used the class names (i.e., finance, industry, and trade) as seed data (classified data). Table 9 shows the accuracies of text classification. From the results we see that when the classes are symmetric, BB cannot outperform MB. 5. We also investigated the effect of the one-sense-per-discourse heuristic. Table 10 shows the performance of MB and BB on the word interest with and without the heuristic. We see that with the heuristic, the performance of both MB and BB improves. Even without the heuristic, BB still performs better than MB with the heuristic.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 809, |
| "end": 816, |
| "text": "Table 9", |
| "ref_id": null |
| }, |
| { |
| "start": 1024, |
| "end": 1032, |
| "text": "Table 10", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "acknowledgement", |
| "sec_num": null |
| }, |
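The one-sense-per-discourse heuristic examined in point 5 can be sketched as a majority-vote post-process over a classifier's per-occurrence predictions. This is an illustrative sketch under assumed data, not the authors' implementation; the discourse IDs and sense labels are made up.

```python
from collections import Counter, defaultdict

def one_sense_per_discourse(labels):
    """Relabel every occurrence of the target word within a discourse
    with that discourse's majority predicted sense.
    labels: list of (discourse_id, predicted_sense) pairs."""
    by_doc = defaultdict(list)
    for doc, sense in labels:
        by_doc[doc].append(sense)
    majority = {doc: Counter(senses).most_common(1)[0][0]
                for doc, senses in by_doc.items()}
    return [(doc, majority[doc]) for doc, _ in labels]

# Three occurrences in discourse d1 (one misclassified), one in d2.
preds = [("d1", "rate"), ("d1", "rate"), ("d1", "hobby"), ("d2", "hobby")]
print(one_sense_per_discourse(preds))
```

The outvoted "hobby" label in d1 is flipped to "rate", which is how the heuristic can repair isolated classifier errors within a document.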
| { |
| "text": "We thank Ming Zhou, Ashley Chang and Yao Meng for their valuable comments and suggestions on an early draft of this article. We acknowledge the four anonymous reviewers of this article for their valuable comments and criticisms. We thank Michael Holmes, Mark Petersen, Kevin Knight, and Bob Moore for their checking of the English of this article. A previous version of this article appeared in Proceedings of the Fortieth Annual Meeting of the Association for Computational Linguistics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| }, |
| { |
| "text": "For each of the English words, we extracted about 200 sentences containing the word from the Encarta 7 English corpus and hand-labeled those sentences with our own Chinese translations. We used the labeled sentences as test data and the unlabeled sentences as unclassified data in English. Table 6 shows the data set sizes. We also used the sentences in the Great Encyclopedia 8 Chinese corpus as unclassified data in Chinese. For each sense, we defined a seed word in English as a classified instance in English (cf. Table 5). We did not, however, conduct translation disambiguation on the words crane, sake, poach, axes, and motion, because the first four words do not occur frequently in the Encarta corpus, and the accuracy of choosing the major translation for the last word already exceeds 98%. We next applied BB, MB-B, and MB-D to word translation disambiguation. The parameter settings were the same as those in Experiment 1. Table 7 shows the disambiguation accuracies, and Figures 14-20 show the learning curves for the seven words. From the results, we see again that BB significantly outperforms MB-D and MB-B. Note that the results of MB-D here cannot be directly compared with those in Yarowsky (1995), because the data used are different. The naive Bayesian ensemble did not perform well on the word duty, causing the accuracies of both MB-B and BB to deteriorate.", |
| "cite_spans": [ |
| { |
| "start": 1202, |
| "end": 1217, |
| "text": "Yarowsky (1995)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 291, |
| "end": 298, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 520, |
| "end": 527, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 937, |
| "end": 944, |
| "text": "Table 7", |
| "ref_id": null |
| }, |
| { |
| "start": 986, |
| "end": 999, |
| "text": "Figures 14-20", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "annex", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Scaling to very very large corpora for natural language disambiguation", |
| "authors": [ |
| { |
| "first": "Michele", |
| "middle": [], |
| "last": "Banko", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Brill", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "26--33", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Banko, Michele, and Eric Brill. 2001. Scaling to very very large corpora for natural language disambiguation. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics, pages 26-33, Toulouse, France.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Combining labeled and unlabeled data with co-training", |
| "authors": [ |
| { |
| "first": "Avrim", |
| "middle": [], |
| "last": "Blum", |
| "suffix": "" |
| }, |
| { |
| "first": "Tom", |
| "middle": [ |
| "M" |
| ], |
| "last": "Mitchell", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the 11th Annual Conference on Computational Learning Theory", |
| "volume": "", |
| "issue": "", |
| "pages": "92--100", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Blum, Avrim, and Tom M. Mitchell. 1998. Combining labeled and unlabeled data with co-training. In Proceedings of the 11th Annual Conference on Computational Learning Theory, pages 92-100, Madison, WI.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Word sense disambiguation using statistical methods", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "264--270", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mercer. 1991. Word sense disambiguation using statistical methods. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, pages 264-270, University of California, Berkeley.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Word-sense disambiguation using decomposable models", |
| "authors": [ |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Bruce", |
| "suffix": "" |
| }, |
| { |
| "first": "Janyce", |
| "middle": [], |
| "last": "Weibe", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "139--146", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bruce, Rebecca, and Janyce Weibe. 1994. Word-sense disambiguation using decomposable models. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pages 139-146, New Mexico State University, Las Cruces.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Unsupervised models for named entity classification", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoram", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Collins, Michael, and Yoram Singer. 1999. Unsupervised models for named entity classification. In Proceedings of the 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, University of Maryland, College Park.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Word sense disambiguation using a second language monolingual corpus", |
| "authors": [ |
| { |
| "first": "Ido", |
| "middle": [], |
| "last": "Dagan", |
| "suffix": "" |
| }, |
| { |
| "first": "Alon", |
| "middle": [], |
| "last": "Itai", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Computational Linguistics", |
| "volume": "20", |
| "issue": "4", |
| "pages": "563--596", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dagan, Ido, and Alon Itai. 1994. Word sense disambiguation using a second language monolingual corpus. Computational Linguistics, 20(4):563-596.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Maximum likelihood from incomplete data via the EM algorithm", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [ |
| "P" |
| ], |
| "last": "Dempster", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [ |
| "M" |
| ], |
| "last": "Laird", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "B" |
| ], |
| "last": "Rubin", |
| "suffix": "" |
| } |
| ], |
| "year": 1977, |
| "venue": "Journal of the Royal Statistical Society B", |
| "volume": "39", |
| "issue": "", |
| "pages": "1--38", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dempster, A. P., N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39:1-38.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Boosting applied to word sense disambiguation", |
| "authors": [ |
| { |
| "first": "Gerard", |
| "middle": [], |
| "last": "Escudero", |
| "suffix": "" |
| }, |
| { |
| "first": "Lluis", |
| "middle": [], |
| "last": "Marquez", |
| "suffix": "" |
| }, |
| { |
| "first": "German", |
| "middle": [], |
| "last": "Rigau", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the 12th European Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "129--141", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Escudero, Gerard, Lluis Marquez, and German Rigau. 2000. Boosting applied to word sense disambiguation. In Proceedings of the 12th European Conference on Machine Learning, pages 129-141, Barcelona.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "A method for disambiguating word senses in a large corpus", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Gale", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenneth", |
| "middle": [], |
| "last": "Church", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Computers and Humanities", |
| "volume": "26", |
| "issue": "", |
| "pages": "415--439", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gale, William, Kenneth Church, and David Yarowsky. 1992a. A method for disambiguating word senses in a large corpus. Computers and Humanities, 26:415-439.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "One sense per discourse", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Gale", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenneth", |
| "middle": [], |
| "last": "Church", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of DARPA Speech and Natural Language Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "233--237", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gale, William, Kenneth Church, and David Yarowsky. 1992b. One sense per discourse. In Proceedings of DARPA Speech and Natural Language Workshop, pages 233-237, Harriman, NY.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "A Winnow-based approach to context-sensitive spelling correction", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [ |
| "R" |
| ], |
| "last": "Golding", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Machine Learning", |
| "volume": "34", |
| "issue": "", |
| "pages": "107--130", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Golding, Andrew R., and Dan Roth. 1999. A Winnow-based approach to context-sensitive spelling correction. Machine Learning, 34:107-130.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Resolving translation ambiguity using non-parallel bilingual corpora", |
| "authors": [ |
| { |
| "first": "Genichiro", |
| "middle": [], |
| "last": "Kikui", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of ACL '99 Workshop on Unsupervised Learning in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kikui, Genichiro. 1999. Resolving translation ambiguity using non-parallel bilingual corpora. In Proceedings of ACL '99 Workshop on Unsupervised Learning in Natural Language Processing, University of Maryland, College Park.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Estimating word translation probabilities from unrelated monolingual corpora using the EM algorithm", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the 17th National Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "711--715", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Koehn, Philipp, and Kevin Knight. 2000. Estimating word translation probabilities from unrelated monolingual corpora using the EM algorithm. In Proceedings of the 17th National Conference on Artificial Intelligence, pages 711-715, Austin, TX.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Text classification using ESC-based stochastic decision lists. Information Processing and Management", |
| "authors": [ |
| { |
| "first": "Hang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Yamanishi", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "38", |
| "issue": "", |
| "pages": "343--361", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Li, Hang, and Kenji Yamanishi. 2002. Text classification using ESC-based stochastic decision lists. Information Processing and Management, 38:343-361.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Using syntactic dependency as local context to resolve word sense ambiguity", |
| "authors": [ |
| { |
| "first": "Dekang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "64--71", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lin, Dekang. 1997. Using syntactic dependency as local context to resolve word sense ambiguity. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, pages 64-71, Universidad Nacional de Educaci\u00f3n a Distancia (UNED), Madrid.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Automatic rule acquisition for spelling correction", |
| "authors": [ |
| { |
| "first": "Lidia", |
| "middle": [], |
| "last": "Mangu", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Brill", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the 14th International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "187--194", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mangu, Lidia, and Eric Brill. 1997. Automatic rule acquisition for spelling correction. In Proceedings of the 14th International Conference on Machine Learning, pages 187-194, Nashville, TN.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "A method for word sense disambiguation of unrestricted text", |
| "authors": [ |
| { |
| "first": "Rada", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [ |
| "I" |
| ], |
| "last": "Moldovan", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "152--158", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mihalcea, Rada, and Dan I. Moldovan. 1999. A method for word sense disambiguation of unrestricted text. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 152-158, University of Maryland, College Park.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Integrating multiple knowledge sources to disambiguate word sense: An exemplar-based approach", |
| "authors": [ |
| { |
| "first": "Hwee", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Hian Beng", |
| "middle": [], |
| "last": "Tou", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "40--47", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ng, Hwee Tou, and Hian Beng Lee. 1996. Integrating multiple knowledge sources to disambiguate word sense: An exemplar-based approach. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 40-47, University of California, Santa Cruz.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Text classification from labeled and unlabeled documents using EM", |
| "authors": [ |
| { |
| "first": "Kamal", |
| "middle": [], |
| "last": "Nigam", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Thrun", |
| "suffix": "" |
| }, |
| { |
| "first": "Tom", |
| "middle": [ |
| "M" |
| ], |
| "last": "Mitchell", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Machine Learning", |
| "volume": "39", |
| "issue": "", |
| "pages": "103--134", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nigam, Kamal, Andrew McCallum, Sebastian Thrun, and Tom M. Mitchell. 2000. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2-3):103-134.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Analyzing the effectiveness and applicability of co-training", |
| "authors": [ |
| { |
| "first": "Kamal", |
| "middle": [], |
| "last": "Nigam", |
| "suffix": "" |
| }, |
| { |
| "first": "Rayid", |
| "middle": [], |
| "last": "Ghani", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the 9th International Conference on Information and Knowledge Management", |
| "volume": "", |
| "issue": "", |
| "pages": "86--93", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nigam, Kamal, and Rayid Ghani. 2000. Analyzing the effectiveness and applicability of co-training. In Proceedings of the 9th International Conference on Information and Knowledge Management, pages 86-93, McLean, VA.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "A simple approach to building ensembles of naive Bayesian classifiers for word sense disambiguation", |
| "authors": [ |
| { |
| "first": "Ted", |
| "middle": [], |
| "last": "Pedersen", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the First Meeting of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pedersen, Ted. 2000. A simple approach to building ensembles of naive Bayesian classifiers for word sense disambiguation. In Proceedings of the First Meeting of the North American Chapter of the Association for Computational Linguistics, Seattle.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Limitations of co-training for natural language learning from large datasets", |
| "authors": [ |
| { |
| "first": "Ted", |
| "middle": [], |
| "last": "Pedersen", |
| "suffix": "" |
| }, |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Bruce", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the Second Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "197--207", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pedersen, Ted, and Rebecca Bruce. 1997. Distinguishing word senses in untagged text. In Proceedings of the Second Conference on Empirical Methods in Natural Language Processing, pages 197-207, Providence, RI. Pierce, David, and Claire Cardie. 2001. Limitations of co-training for natural language learning from large datasets. In Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing, Carnegie Mellon University, Pittsburgh.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Automatic word sense discrimination", |
| "authors": [ |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Schutze", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Computational Linguistics", |
| "volume": "24", |
| "issue": "1", |
| "pages": "97--124", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Schutze, Hinrich. 1998. Automatic word sense discrimination. Computational Linguistics, 24(1):97-124.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Disambiguating highly ambiguous words", |
| "authors": [ |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Towell", |
| "suffix": "" |
| }, |
| { |
| "first": "Ellen", |
| "middle": [ |
| "M" |
| ], |
| "last": "Voorhees", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Computational Linguistics", |
| "volume": "24", |
| "issue": "1", |
| "pages": "125--146", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Towell, Geoffrey, and Ellen M. Voorhees. 1998. Disambiguating highly ambiguous words. Computational Linguistics, 24(1):125-146.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "A re-examination of text categorization methods", |
| "authors": [ |
| { |
| "first": "Yiming", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xin", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "42--49", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yang, Yiming, and Xin Liu. 1999. A re-examination of text categorization methods. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 42-49, Berkeley, CA.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Decision lists for lexical ambiguity resolution: Application to accent restoration in Spanish and French", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "88--95", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yarowsky, David. 1994. Decision lists for lexical ambiguity resolution: Application to accent restoration in Spanish and French. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pages 88-95, New Mexico State University, Las Cruces.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Unsupervised word sense disambiguation rivaling supervised methods", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "189--196", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yarowsky, David. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 189-196.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Improving translation selection with a new translation model trained by independent monolingual corpora", |
| "authors": [ |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuan", |
| "middle": [], |
| "last": "Ding", |
| "suffix": "" |
| }, |
| { |
| "first": "Changning", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "International Journal of Computational Linguistics and Chinese Language Processing", |
| "volume": "6", |
| "issue": "1", |
| "pages": "1--26", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhou, Ming, Yuan Ding, and Changning Huang. 2001. Improving translation selection with a new translation model trained by independent monolingual corpora. International Journal of Computational Linguistics and Chinese Language Processing, 6(1):1-26.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "P1 . . . Nissan car and truck plant. . . (1) P2 . . . computer manufacturing plant and adjacent. . . (1) P3 . . . automated manufacturing plant in Fremont. . . (1) P4 . . . divide life into plant and animal kingdom. . . (2) P5 . . . thousands of plant and animal species. . . (2) P6 . . . zonal distribution of plant life. . .", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF1": { |
| "text": "Figure 1 Examples of classified data (\u03b5 = plant).", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF2": { |
| "text": "Monolingual bootstrapping.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF3": { |
| "text": "Bilingual bootstrapping (1).", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF4": { |
| "text": "Bilingual bootstrapping (2).", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF5": { |
| "text": "Creating a naive Bayesian classifier.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF6": { |
| "text": "E-step: P(e | c, t) \u2190 P(c | e, t)P(e | t) / \u03a3_{e\u2208E} P(c | e, t)P(e | t). M-step: P(c | e, t) \u2190 f(c, t)P(e | c, t) / \u03a3_{c\u2208C} f(c, t)P(e | c, t), P(e | t) \u2190 \u03a3_{c\u2208C} f(c, t)P(e | c, t) / \u03a3_{c\u2208C} f(c, t). Figure 8", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF7": { |
| "text": "Accuracies of BB with different \u03b1 values.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF8": { |
| "text": "Learning curves with bass. Learning curves with drug.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF9": { |
| "text": "Learning curves with duty. Learning curves with palm.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF10": { |
| "text": "Figure 18: Learning curves with plant. Figure 19: Learning curves with space.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF11": { |
| "text": "Number of relevant words.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "TABREF0": { |
| "html": null, |
| "text": "Data descriptions in Experiment 1.", |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>English words</td><td>Chinese words</td><td>Senses</td><td>Seed words</td></tr><tr><td/><td/><td>readiness to give attention</td><td>show</td></tr><tr><td>interest</td><td/><td>money paid for the use of money; a share in company or business</td><td>rate; hold</td></tr><tr><td/><td/><td>advantage, advancement or favor</td><td>conflict</td></tr><tr><td/><td/><td>a thin flexible object</td><td>cut</td></tr><tr><td/><td/><td>written or spoken text</td><td>write</td></tr><tr><td>line</td><td/><td>telephone connection; formation of people or things</td><td>telephone; wait</td></tr><tr><td/><td/><td>an artificial division</td><td>between</td></tr><tr><td/><td/><td>product</td><td>product</td></tr></table>" |
| }, |
| "TABREF1": { |
| "html": null, |
| "text": "Data set sizes in Experiment 1.", |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td/><td colspan=\"2\">Unclassified sentences (texts)</td><td/></tr><tr><td>Words</td><td>English</td><td>Chinese</td><td>Test sentences</td></tr><tr><td colspan=\"2\">interest 1,927 (1,072)</td><td>8,811 (2,704)</td><td>2,291</td></tr><tr><td>line</td><td>3,666 (1,570)</td><td>5,398 (2,894)</td><td>4,148</td></tr></table>" |
| }, |
| "TABREF2": { |
| "html": null, |
| "text": "Accuracies of disambiguation in Experiment 1.", |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Words</td><td colspan=\"4\">Major (%) MB-D (%) MB-B (%) BB (%)</td></tr><tr><td>interest</td><td>54.6</td><td>54.7</td><td>69.3</td><td>75.5</td></tr><tr><td>line</td><td>53.5</td><td>55.6</td><td>54.1</td><td>62.7</td></tr><tr><td>Table 4</td><td/><td/><td/><td/></tr><tr><td colspan=\"3\">Accuracies of supervised methods.</td><td/><td/></tr><tr><td/><td/><td colspan=\"3\">interest (%) line (%)</td></tr><tr><td colspan=\"2\">Naive Bayesian ensemble</td><td>89</td><td>88</td><td/></tr><tr><td colspan=\"2\">Naive Bayes</td><td>74</td><td>72</td><td/></tr><tr><td colspan=\"2\">Decision tree</td><td>78</td><td>-</td><td/></tr><tr><td colspan=\"2\">Neural network</td><td>-</td><td>76</td><td/></tr><tr><td colspan=\"2\">Nearest neighbor</td><td>87</td><td>-</td><td/></tr></table>" |
| }, |
| "TABREF3": { |
| "html": null, |
| "text": "Accuracies of disambiguation in Experiment 2. Table 8: Top words for the interest rate sense of interest.", |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td colspan=\"5\">Words Major (%) MB-D (%) MB-B (%) BB (%)</td></tr><tr><td>bass</td><td>61.0</td><td>57.0</td><td>89.0</td><td>92.0</td></tr><tr><td>drug</td><td>77.7</td><td>78.7</td><td>79.7</td><td>86.8</td></tr><tr><td>duty</td><td>86.3</td><td>86.8</td><td>72.0</td><td>75.1</td></tr><tr><td>palm</td><td>82.2</td><td>80.7</td><td>83.3</td><td>92.4</td></tr><tr><td>plant</td><td>71.6</td><td>89.3</td><td>95.4</td><td>95.9</td></tr><tr><td>space</td><td>64.5</td><td>83.3</td><td>84.3</td><td>87.8</td></tr><tr><td>tank</td><td>60.3</td><td>76.4</td><td>76.9</td><td>84.4</td></tr><tr><td>Total</td><td>71.9</td><td>78.8</td><td>82.9</td><td>87.8</td></tr><tr><td>Table 8</td><td/><td/><td/><td/></tr><tr><td>MB-B</td><td>BB</td><td/><td/><td/></tr><tr><td>payment</td><td>saving</td><td/><td/><td/></tr><tr><td>cut</td><td>payment</td><td/><td/><td/></tr><tr><td>earn</td><td>benchmark</td><td/><td/><td/></tr><tr><td>short</td><td>whose</td><td/><td/><td/></tr><tr><td>short-term</td><td>base</td><td/><td/><td/></tr><tr><td>yield</td><td>prefer</td><td/><td/><td/></tr><tr><td>u.s.</td><td>fixed</td><td/><td/><td/></tr><tr><td>margin</td><td>debt</td><td/><td/><td/></tr><tr><td>benchmark</td><td>annual</td><td/><td/><td/></tr><tr><td>regard</td><td>dividend</td><td/><td/><td/></tr></table>" |
| } |
| } |
| } |
| } |
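The EM updates shown in Figure 8 (ref entry FIGREF6) can be sketched in code. This is a minimal illustration of those two steps only, not the paper's implementation: `E` is the set of candidate English words for a fixed translation context `t`, `C` the observed context words, `f[c]` the observed frequency f(c, t), and `p_e` / `p_c_e` the current estimates of P(e | t) and P(c | e, t). All variable and function names are illustrative assumptions.

```python
def em_step(E, C, f, p_e, p_c_e):
    """One EM iteration over the distributions in Figure 8 (illustrative sketch)."""
    # E-step: posterior P(e | c, t) proportional to P(c | e, t) * P(e | t)
    post = {}
    for c in C:
        z = sum(p_c_e[c][e] * p_e[e] for e in E)
        post[c] = {e: p_c_e[c][e] * p_e[e] / z for e in E}

    # M-step: P(c | e, t) <- f(c, t) P(e | c, t) / sum_c f(c, t) P(e | c, t)
    new_p_c_e = {c: {} for c in C}
    for e in E:
        denom = sum(f[c] * post[c][e] for c in C)
        for c in C:
            new_p_c_e[c][e] = f[c] * post[c][e] / denom

    # M-step: P(e | t) <- sum_c f(c, t) P(e | c, t) / sum_c f(c, t)
    total = sum(f[c] for c in C)
    new_p_e = {e: sum(f[c] * post[c][e] for c in C) / total for e in E}
    return new_p_c_e, new_p_e
```

Iterating `em_step` to convergence yields the translation-probability estimates; each update keeps `new_p_e` and each column of `new_p_c_e` normalized to 1 by construction.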