{ "paper_id": "P06-1012", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:24:42.339352Z" }, "title": "Estimating Class Priors in Domain Adaptation for Word Sense Disambiguation", "authors": [ { "first": "Yee", "middle": [ "Seng" ], "last": "Chan", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": { "addrLine": "3 Science Drive 2", "postCode": "117543", "country": "Singapore" } }, "email": "chanys@comp.nus.edu.sg" }, { "first": "Hwee", "middle": [ "Tou" ], "last": "Ng", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": { "addrLine": "3 Science Drive 2", "postCode": "117543", "country": "Singapore" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Instances of a word drawn from different domains may have different sense priors (the proportions of the different senses of a word). This in turn affects the accuracy of word sense disambiguation (WSD) systems trained and applied on different domains. This paper presents a method to estimate the sense priors of words drawn from a new domain, and highlights the importance of using well calibrated probabilities when performing these estimations. By using well calibrated probabilities, we are able to estimate the sense priors effectively to achieve significant improvements in WSD accuracy.", "pdf_parse": { "paper_id": "P06-1012", "_pdf_hash": "", "abstract": [ { "text": "Instances of a word drawn from different domains may have different sense priors (the proportions of the different senses of a word). This in turn affects the accuracy of word sense disambiguation (WSD) systems trained and applied on different domains. This paper presents a method to estimate the sense priors of words drawn from a new domain, and highlights the importance of using well calibrated probabilities when performing these estimations. 
By using well calibrated probabilities, we are able to estimate the sense priors effectively to achieve significant improvements in WSD accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many words have multiple meanings, and the process of identifying the correct meaning, or sense of a word in context, is known as word sense disambiguation (WSD). Among the various approaches to WSD, corpus-based supervised machine learning methods have been the most successful to date. With this approach, one would need to obtain a corpus in which each ambiguous word has been manually annotated with the correct sense, to serve as training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, supervised WSD systems faced an important issue of domain dependence when using such a corpus-based approach. To investigate this, Escudero et al. (2000) conducted experiments using the DSO corpus, which contains sentences drawn from two different corpora, namely Brown Corpus (BC) and Wall Street Journal (WSJ). They found that training a WSD system on one part (BC or WSJ) of the DSO corpus and applying it to the other part can result in an accuracy drop of 12% to 19%. One reason for this is the difference in sense priors (i.e., the proportions of the different senses of a word) between BC and WSJ. For instance, the noun interest has these 6 senses in the DSO corpus: sense 1, 2, 3, 4, 5, and 8. In the BC part of the DSO corpus, these senses occur with the proportions: 34%, 9%, 16%, 14%, 12%, and 15%. However, in the WSJ part of the DSO corpus, the proportions are different: 13%, 4%, 3%, 56%, 22%, and 2%. When the authors assumed they knew the sense priors of each word in BC and WSJ, and adjusted these two datasets such that the proportions of the different senses of each word were the same between BC and WSJ, accuracy improved by 9%. 
In another work, Agirre and Martinez (2004) trained a WSD system on data which was automatically gathered from the Internet. The authors reported a 14% improvement in accuracy if they had an accurate estimate of the sense priors in the evaluation data and sampled their training data according to these sense priors. The work of these researchers showed that when the domain of the training data differs from the domain of the data on which the system is applied, there will be a decrease in WSD accuracy.", "cite_spans": [ { "start": 140, "end": 162, "text": "Escudero et al. (2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To build WSD systems that are portable across different domains, estimation of the sense priors (i.e., determining the proportions of the different senses of a word) occurring in a text corpus drawn from a domain is important. McCarthy et al. (2004) provided a partial solution by describing a method to predict the predominant sense, or the most frequent sense, of a word in a corpus. Using the noun interest as an example, their method tries to predict that sense 1 is the predominant sense in the BC part of the DSO corpus, while sense 4 is the predominant sense in the WSJ part of the corpus.", "cite_spans": [ { "start": 227, "end": 249, "text": "McCarthy et al. (2004)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In our recent work (Chan and Ng, 2005b) , we directly addressed the problem by applying machine learning methods to automatically estimate the sense priors in the target domain. For instance, given the noun interest and the WSJ part of the DSO corpus, we attempted to estimate the proportion of each sense of interest occurring in WSJ and showed that these estimates help to improve WSD accuracy. 
In our work, we used naive Bayes as the training algorithm to provide posterior probabilities, or class membership estimates, for the instances in the target domain. These probabilities were then used by the machine learning methods to estimate the sense priors of each word in the target domain.", "cite_spans": [ { "start": 19, "end": 39, "text": "(Chan and Ng, 2005b)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, it is known that the posterior probabilities assigned by naive Bayes are not reliable, or not well calibrated (Domingos and Pazzani, 1996) . These probabilities are typically too extreme, often being very near 0 or 1. Since these probabilities are used in estimating the sense priors, it is important that they are well calibrated.", "cite_spans": [ { "start": 119, "end": 147, "text": "(Domingos and Pazzani, 1996)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we explore the estimation of sense priors by first calibrating the probabilities from naive Bayes. We also propose using probabilities from another algorithm (logistic regression, which already gives well calibrated probabilities) to estimate the sense priors. We show that by using well calibrated probabilities, we can estimate the sense priors more effectively. Using these estimates improves WSD accuracy and we achieve results that are significantly better than using our earlier approach described in (Chan and Ng, 2005b) .", "cite_spans": [ { "start": 522, "end": 542, "text": "(Chan and Ng, 2005b)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the following section, we describe the algorithm to estimate the sense priors. Then, we describe the notion of being well calibrated and discuss why using well calibrated probabilities helps in estimating the sense priors. 
Next, we describe an algorithm to calibrate the probability estimates from naive Bayes. Then, we discuss the corpora and the set of words we use for our experiments before presenting our experimental results. Next, we propose using the well calibrated probabilities of logistic regression to estimate the sense priors, and perform significance tests to compare our various results before concluding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To estimate the sense priors, or a priori probabilities of the different senses in a new dataset, we used a confusion matrix algorithm (Vucetic and Obradovic, 2001 ) and an EM based algorithm (Saerens et al., 2002) in (Chan and Ng, 2005b) . Our results in (Chan and Ng, 2005b) indicate that the EM based algorithm is effective in estimating the sense priors and achieves greater improvements in WSD accuracy compared to the confusion matrix algorithm. Hence, to estimate the sense priors in our current work, we use the EM based algorithm, which we describe in this section.", "cite_spans": [ { "start": 135, "end": 163, "text": "(Vucetic and Obradovic, 2001", "ref_id": "BIBREF17" }, { "start": 192, "end": 214, "text": "(Saerens et al., 2002)", "ref_id": "BIBREF16" }, { "start": 218, "end": 238, "text": "(Chan and Ng, 2005b)", "ref_id": "BIBREF3" }, { "start": 256, "end": 276, "text": "(Chan and Ng, 2005b)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Estimation of Priors", "sec_num": "2" }, { "text": "Most of this section is based on (Saerens et al., 2002) . 
Assume we have a set of labeled data $D$ with $n$ classes and a set of $N$ independent instances $x_1, \\ldots, x_N$ from a new data set $D'$. The likelihood of these $N$ instances can be defined as:", "cite_spans": [ { "start": 33, "end": 55, "text": "(Saerens et al., 2002)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "EM Based Algorithm", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(x_1, \\\\ldots, x_N) = \\\\prod_{k=1}^{N} p(x_k) = \\\\prod_{k=1}^{N} \\\\sum_{i=1}^{n} p(x_k|\\\\omega_i) P(\\\\omega_i)", "eq_num": "(1)" } ], "section": "EM Based Algorithm", "sec_num": "2.1" }, { "text": "Assuming the within-class densities $p(x_k|\\\\omega_i)$ do not change from the training set $D$ to the new data set, we can define $p(x_k|\\\\omega_i) = p_t(x_k|\\\\omega_i)$. To determine the a priori probability estimates $\\\\hat{P}(\\\\omega_i)$ of the new data set that will maximize the likelihood of (1) with respect to $P(\\\\omega_i)$, we can apply the iterative procedure of the EM algorithm. In effect, through maximizing the likelihood of (1), we obtain the a priori probability estimates as a by-product.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM Based Algorithm", "sec_num": "2.1" }, { "text": "Let us now define some notations. When we apply a classifier trained on $D$ on an instance $x_k$ drawn from the new data set $D'$, we get $\\\\hat{P}_t(\\\\omega_i|x_k)$, which we define as the probability of instance $x_k$ being in class $\\\\omega_i$. Initializing the estimates $\\\\hat{P}^{(0)}(\\\\omega_i)$ with the class frequencies $\\\\hat{P}_t(\\\\omega_i)$ observed in $D$, the EM algorithm provides the following iterative steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM Based Algorithm", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\\\hat{P}^{(s)}(\\\\omega_i|x_k) = \\\\frac{\\\\hat{P}_t(\\\\omega_i|x_k) \\\\, \\\\hat{P}^{(s)}(\\\\omega_i) / \\\\hat{P}_t(\\\\omega_i)}{\\\\sum_{j=1}^{n} \\\\hat{P}_t(\\\\omega_j|x_k) \\\\, \\\\hat{P}^{(s)}(\\\\omega_j) / \\\\hat{P}_t(\\\\omega_j)}", "eq_num": "(2)" }, { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\\\hat{P}^{(s+1)}(\\\\omega_i) = \\\\frac{1}{N} \\\\sum_{k=1}^{N} \\\\hat{P}^{(s)}(\\\\omega_i|x_k)", "eq_num": "(3)" } ], "section": "EM Based Algorithm", "sec_num": "2.1" }, { "text": "where Equation (2) represents the expectation E-step, Equation (3) represents the maximization M-step, and $N$ represents the number of instances in $D'$. 
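As a concrete illustration, the E-step and M-step above can be implemented in a few lines. This is a sketch of the Saerens et al. (2002) procedure under our own function and variable names, not the authors' implementation; the adjustment of the posteriors by the estimated priors (used later, in Section 2.2) reuses the same re-weighting.

```python
import numpy as np

def em_estimate_priors(posteriors, train_priors, max_iter=1000, tol=1e-8):
    """EM estimation of class priors on a new, unlabeled data set.

    posteriors   -- (N, n) array of P_t(w_i | x_k) from a classifier trained on D
    train_priors -- (n,) array of P_t(w_i), the class priors estimated on D
    Returns the estimated a priori probabilities P(w_i) for the new data set.
    """
    priors = train_priors.astype(float).copy()   # initialize with training priors
    for _ in range(max_iter):
        # E-step: re-weight each posterior by the ratio of the current prior
        # estimate to the training prior, then renormalize per instance.
        w = posteriors * (priors / train_priors)
        w /= w.sum(axis=1, keepdims=True)
        # M-step: the new prior is the mean of the adjusted posteriors.
        new_priors = w.mean(axis=0)
        if np.abs(new_priors - priors).max() < tol:
            return new_priors
        priors = new_priors
    return priors

def adjust_posteriors(posteriors, train_priors, new_priors):
    """Adjust classifier posteriors to the estimated priors of the new data set."""
    w = posteriors * (new_priors / train_priors)
    return w / w.sum(axis=1, keepdims=True)
```

Note that each instance contributes its full posterior vector, so the estimated priors always form a proper distribution.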
Note that the probabilities $\\hat{P}_t(\\omega_i|x_k)$ and $\\hat{P}_t(\\omega_i)$ in Equation (2) will stay the same throughout the iterations for each particular instance $x_k$ and class $\\omega_i$. The denominator in Equation (2) is simply a normalizing factor. The a posteriori probabilities $\\hat{P}^{(s)}(\\omega_i|x_k)$ and a priori probabilities $\\hat{P}^{(s)}(\\omega_i)$ are re-estimated at each iteration $s$ until the estimates converge. This iterative procedure will increase the likelihood of (1) at each step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM Based Algorithm", "sec_num": "2.1" }, { "text": "If a classifier estimates posterior class probabilities $\\\\hat{P}_t(\\\\omega_i|x_k)$ when presented with a new instance $x_k$ from $D'$, they can be directly adjusted according to the estimated a priori probabilities $\\\\hat{P}(\\\\omega_i)$ on $D'$:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using A Priori Estimates", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\\\hat{P}_{adjust}(\\\\omega_i|x_k) = \\\\frac{\\\\hat{P}_t(\\\\omega_i|x_k) \\\\, \\\\hat{P}(\\\\omega_i) / \\\\hat{P}_t(\\\\omega_i)}{\\\\sum_{j=1}^{n} \\\\hat{P}_t(\\\\omega_j|x_k) \\\\, \\\\hat{P}(\\\\omega_j) / \\\\hat{P}_t(\\\\omega_j)}", "eq_num": "(4)" } ], "section": "Using A Priori Estimates", "sec_num": "2.2" }, { "text": "where $\\\\hat{P}_t(\\\\omega_i)$ denotes the a priori probability of class $\\\\omega_i$ from $D$ and $\\\\hat{P}_{adjust}(\\\\omega_i|x_k)$ denotes the adjusted predictions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using A Priori Estimates", "sec_num": "2.2" }, { "text": "In our earlier work (Chan and Ng, 2005b) , the posterior probabilities assigned by a naive Bayes classifier are used by the EM procedure described in the previous section to estimate the sense priors $\\\\hat{P}(\\\\omega_i)$ in a new dataset. However, it is known that the posterior probabilities assigned by naive Bayes are not well calibrated (Domingos and Pazzani, 1996) .", "cite_spans": [ { "start": 21, "end": 41, "text": "(Chan and Ng, 2005b)", "ref_id": "BIBREF3" }, { "start": 340, "end": 368, "text": "(Domingos and Pazzani, 1996)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Calibration of Probabilities", "sec_num": "3" }, { "text": "It is important to use an algorithm which gives well calibrated probabilities, if we are to use the probabilities in estimating the sense priors. In this section, we will first describe the notion of being well calibrated before discussing why having well calibrated probabilities helps in estimating the sense priors. 
Finally, we will introduce a method used to calibrate the probabilities from naive Bayes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calibration of Probabilities", "sec_num": "3" }, { "text": "Assume that for each instance $x$, a classifier outputs a probability $g(x)$ between 0 and 1, of $x$ belonging to class $\\\\omega_i$. The classifier is well calibrated if the empirical class membership probability $P(\\\\omega_i | g(x) = p)$ converges to the probability value $p$ as the number of examples classified goes to infinity (Zadrozny and Elkan, 2002) . Intuitively, if we consider all the instances to which the classifier assigns a probability $g(x)$ of say 0.6, then 60% of these instances should be members of class $\\\\omega_i$.", "cite_spans": [ { "start": 331, "end": 357, "text": "(Zadrozny and Elkan, 2002)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Well Calibrated Probabilities", "sec_num": "3.1" }, { "text": "To see why using an algorithm which gives well calibrated probabilities helps in estimating the sense priors, let us rewrite Equation (3), the M-step of the EM procedure, as the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Being Well Calibrated Helps Estimation", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\\\hat{P}^{(s+1)}(\\\\omega_i) = \\\\frac{1}{N} \\\\sum_{l=1}^{I} \\\\sum_{x_k \\\\in B_l} \\\\hat{P}^{(s)}(\\\\omega_i|x_k)", "eq_num": "(5)" } ], "section": "Being Well Calibrated Helps Estimation", "sec_num": "3.2" }, { "text": "where $B_l$, for $l = 1, \\\\ldots, I$, denotes the set (bin) of instances for which $\\\\hat{P}^{(s)}(\\\\omega_i|x_k) = p_l$, with $p_1, \\\\ldots, p_I$ being the distinct probability values involved. Note that $|B_1| + \\\\cdots + |B_l| + \\\\cdots + |B_I| = N$. Since $\\\\hat{P}^{(s)}(\\\\omega_i|x_k) = p_l$ for all $x_k \\\\in B_l$ by definition, Equation (5) can be rewritten as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Being Well Calibrated Helps Estimation", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\\\hat{P}^{(s+1)}(\\\\omega_i) = \\\\frac{p_1 |B_1| + \\\\cdots + p_I |B_I|}{|B_1| + \\\\cdots + |B_I|}", "eq_num": "(6)" } ], "section": "Being Well Calibrated Helps Estimation", "sec_num": "3.2" }, { "text": "If the probabilities assigned by the classifier are well calibrated, then a proportion $p_l$ of the instances in each bin $B_l$ are members of class $\\\\omega_i$, so the numerator of Equation (6) approaches the number of instances of class $\\\\omega_i$ in the new data set. Hence, using an algorithm which gives well calibrated probabilities helps in the estimation of sense priors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Being Well Calibrated Helps Estimation", "sec_num": "3.2" }, { "text": "[Figure 1: The PAV algorithm. Input: a training set $(g(x_1), y_1), \\\\ldots, (g(x_N), y_N)$ sorted in ascending order of $g(x_k)$. Initialize each level set to a single data value $y_k$. While there exist two adjacent level sets that are out of order, replace them with a merged level set whose value is the mean of their data values.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Being Well Calibrated Helps Estimation", "sec_num": "3.2" }, { "text": "Zadrozny and Elkan (2002) successfully used a method based on isotonic regression (Robertson et al., 1988) to calibrate the probability estimates from naive Bayes. To compute the isotonic regression, they used the pair-adjacent violators (PAV) (Ayer et al., 1955) algorithm, which we show in Figure 1 . Briefly, what PAV does is to initially view each data value as a level set. While there are two adjacent sets that are out of order (i.e., the left level set is above the right one), the sets are combined and the mean of the data values becomes the value of the new level set.", "cite_spans": [ { "start": 82, "end": 106, "text": "(Robertson et al., 1988)", "ref_id": "BIBREF15" }, { "start": 244, "end": 263, "text": "(Ayer et al., 1955)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 292, "end": 300, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Isotonic Regression", "sec_num": "3.3" }, { "text": "PAV works on binary class problems. In a binary class problem, we have a positive class and a negative class. 
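The level-set merging that PAV performs can be sketched as follows. This is a minimal illustration with our own function and variable names, not the authors' implementation; it assumes binary labels in {0, 1}.

```python
def pav_calibrate(scores_and_labels):
    """Pair-adjacent violators: fit an isotonic (non-decreasing) step function.

    scores_and_labels -- iterable of (score, label) pairs, label in {0, 1}.
    Returns a list of (score, calibrated_value) pairs in ascending score order.
    """
    data = sorted(scores_and_labels)  # ascending by classifier score
    # Each level set starts as one data value: [start_score, label_sum, count].
    levels = [[score, float(label), 1] for score, label in data]
    merged = True
    while merged:
        merged = False
        i = 0
        while i < len(levels) - 1:
            left, right = levels[i], levels[i + 1]
            # Out of order: the left level set's mean is above the right one's.
            if left[1] / left[2] > right[1] / right[2]:
                # Combine the two sets; the mean becomes the new level value.
                levels[i] = [left[0], left[1] + right[1], left[2] + right[2]]
                del levels[i + 1]
                merged = True
            else:
                i += 1
    # Expand the level sets back to one calibrated value per instance.
    values = []
    for _start, label_sum, count in levels:
        values.extend([label_sum / count] * count)
    return [(score, v) for (score, _), v in zip(data, values)]
```

The resulting calibrated values are non-decreasing in the score, which is exactly the isotonic constraint.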
Now, let $g(x_k)$ denote the probability estimate of instance $x_k$ belonging to the positive class, and let $y_k$ be 1 if $x_k$ is a positive example and 0 otherwise, where $x_1, \\ldots, x_N$ represent the $N$ training instances. Each level set value $m$ obtained by the PAV algorithm in Figure 1 is associated with a lowest boundary value and a highest boundary value of $g(x_k)$.", "cite_spans": [], "ref_spans": [ { "start": 119, "end": 127, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Isotonic Regression", "sec_num": "3.3" }, { "text": "We performed 10-fold cross-validation on the training data to assign values to $g(x_k)$. We then applied the PAV algorithm to obtain the level set values $m$. To obtain the calibrated probability estimate for a test instance $x$, we find the interval whose boundary values contain $g(x)$ and use its associated value $m$ as the calibrated probability estimate. To apply PAV on a multiclass problem, we first reduce the problem into a number of binary class problems. For reducing a multiclass problem into a set of binary class problems, experiments in (Zadrozny and Elkan, 2002) suggest that the one-against-all approach works well. In one-against-all, a separate classifier is trained for each class $\\\\omega_i$, treating instances of class $\\\\omega_i$ as positive examples and all other instances as negative examples. A model is then learnt for each binary class problem and the probability estimates from each classifier are calibrated. 
Finally, the calibrated binary-class probability estimates are combined to obtain multiclass probabilities, computed by a simple normalization of the calibrated estimates from each binary classifier, as suggested by Zadrozny and Elkan (2002) .", "cite_spans": [ { "start": 300, "end": 326, "text": "(Zadrozny and Elkan, 2002)", "ref_id": "BIBREF18" }, { "start": 775, "end": 800, "text": "Zadrozny and Elkan (2002)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "S", "sec_num": null }, { "text": "In this section, we discuss the motivations in choosing the particular corpora and the set of words used in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selection of Dataset", "sec_num": "4" }, { "text": "The DSO corpus (Ng and Lee, 1996) contains 192,800 annotated examples for 121 nouns and 70 verbs, drawn from BC and WSJ. BC was built as a balanced corpus and contains texts in various categories such as religion, fiction, etc. In contrast, the focus of the WSJ corpus is on financial and business news. Escudero et al. (2000) exploited the difference in coverage between these two corpora to separate the DSO corpus into its BC and WSJ parts for investigating the domain dependence of several WSD algorithms. Following their setup, we also use the DSO corpus in our experiments.", "cite_spans": [ { "start": 23, "end": 33, "text": "Lee, 1996)", "ref_id": "BIBREF12" }, { "start": 304, "end": 326, "text": "Escudero et al. (2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "DSO Corpus", "sec_num": "4.1" }, { "text": "The widely used SEMCOR (SC) corpus (Miller et al., 1994) is one of the few currently available manually sense-annotated corpora for WSD. SEMCOR is a subset of BC. 
Since BC is a balanced corpus, and training a classifier on a general corpus before applying it to a more specific corpus is a natural scenario, we will use examples from BC as training data, and examples from WSJ as evaluation data, or the target dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DSO Corpus", "sec_num": "4.1" }, { "text": "Scalability is a problem faced by current supervised WSD systems, as they usually rely on manually annotated data for training. To tackle this problem, in one of our recent works (Ng et al., 2003) , we gathered training data from parallel texts and obtained encouraging results in our evaluation on the nouns of SENSEVAL-2 English lexical sample task (Kilgarriff, 2001) . In another recent evaluation on the nouns of SENSEVAL-2 English all-words task (Chan and Ng, 2005a), promising results were also achieved using examples gathered from parallel texts. Due to the potential of parallel texts in addressing the issue of scalability, we also drew training data for our earlier sense priors estimation experiments (Chan and Ng, 2005b) from parallel texts. 
In addition, our parallel texts training data represents a natural domain difference with the test data of SENSEVAL-2 English lexical sample task, of which 91% is drawn from the British National Corpus (BNC).", "cite_spans": [ { "start": 178, "end": 195, "text": "(Ng et al., 2003)", "ref_id": "BIBREF13" }, { "start": 354, "end": 372, "text": "(Kilgarriff, 2001)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Parallel Texts", "sec_num": "4.2" }, { "text": "As part of our experiments, we followed the experimental setup of our earlier work (Chan and Ng, 2005b) , using the same 6 English-Chinese parallel corpora (Hong Kong Hansards, Hong Kong News, Hong Kong Laws, Sinorama, Xinhua News, and English translation of Chinese Treebank) , available from Linguistic Data Consortium. To gather training examples from these parallel texts, we used the approach we described in (Ng et al., 2003) and (Chan and Ng, 2005b) . We then evaluated our estimation of sense priors on the nouns of SENSEVAL-2 English lexical sample task, similar to the evaluation we conducted in (Chan and Ng, 2005b) . 
Since the test data for the nouns of SENSEVAL-3 English lexical sample task (Mihalcea et al., 2004) were also drawn from BNC and represented a difference in domain from the parallel texts we used, we also expanded our evaluation to these SENSEVAL-3 nouns.", "cite_spans": [ { "start": 83, "end": 103, "text": "(Chan and Ng, 2005b)", "ref_id": "BIBREF3" }, { "start": 156, "end": 276, "text": "(Hong Kong Hansards, Hong Kong News, Hong Kong Laws, Sinorama, Xinhua News, and English translation of Chinese Treebank)", "ref_id": null }, { "start": 414, "end": 431, "text": "(Ng et al., 2003)", "ref_id": "BIBREF13" }, { "start": 436, "end": 456, "text": "(Chan and Ng, 2005b)", "ref_id": "BIBREF3" }, { "start": 606, "end": 626, "text": "(Chan and Ng, 2005b)", "ref_id": "BIBREF3" }, { "start": 705, "end": 728, "text": "(Mihalcea et al., 2004)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Parallel Texts", "sec_num": "4.2" }, { "text": "Research by (McCarthy et al., 2004) highlighted that the sense priors of a word in a corpus depend on the domain from which the corpus is drawn. A change of predominant sense is often indicative of a change in domain, as different corpora drawn from different domains usually give different predominant senses. For example, the predominant sense of the noun interest in the BC part of the DSO corpus has the meaning \"a sense of concern with and curiosity about someone or something\". 
In the WSJ part of the DSO corpus, the noun interest has a different predominant sense with the meaning \"a fixed charge for borrowing money\", reflecting the business and finance focus of the WSJ corpus.", "cite_spans": [ { "start": 12, "end": 35, "text": "(McCarthy et al., 2004)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Choice of Words", "sec_num": "4.3" }, { "text": "Estimation of sense priors is important when there is a significant change in sense priors between the training and target dataset, such as when there is a change in domain between the datasets. Hence, in our experiments involving the DSO corpus, we focused on the set of nouns and verbs which had different predominant senses between the BC and WSJ parts of the corpus. This gave us a set of 37 nouns and 28 verbs. For experiments involving the nouns of SENSEVAL-2 and SENSEVAL-3 English lexical sample task, we used the approach we described in (Chan and Ng, 2005b) of sampling training examples from the parallel texts using the natural (empirical) distribution of examples in the parallel texts. Then, we focused on the set of nouns having different predominant senses between the examples gathered from parallel texts and the evaluation data for the two SENSEVAL tasks. This gave a set of 6 nouns for SENSEVAL-2 and 9 nouns for SENSEVAL-3. For each noun, we gathered a maximum of 500 parallel text examples as training data, similar to what we had done in (Chan and Ng, 2005b) .", "cite_spans": [ { "start": 547, "end": 567, "text": "(Chan and Ng, 2005b)", "ref_id": "BIBREF3" }, { "start": 1061, "end": 1081, "text": "(Chan and Ng, 2005b)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Choice of Words", "sec_num": "4.3" }, { "text": "Similar to our previous work (Chan and Ng, 2005b), we used the supervised WSD approach described in (Lee and Ng, 2002) for our experiments, using the naive Bayes algorithm as our classifier. 
Knowledge sources used include parts-of-speech, surrounding words, and local collocations. This approach achieves state-of-the-art accuracy. All accuracies reported in our experiments are micro-averages over all test examples.", "cite_spans": [ { "start": 100, "end": 118, "text": "(Lee and Ng, 2002)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "5" }, { "text": "In (Chan and Ng, 2005b), we used a multiclass naive Bayes classifier (denoted by NB) for each word. Following this approach, we noted the WSD accuracies achieved without any adjustment, in the column L under NB in Table 1 . The predictions $\\\\hat{P}_t(\\\\omega_i|x_k)$ of these naive Bayes classifiers are then used in Equations (2) and (3) to estimate the sense priors $\\\\hat{P}(\\\\omega_i)$, before being adjusted by these estimated sense priors based on Equation (4). The resulting WSD accuracies after adjustment are listed in the column EM under NB in Table 1 , representing the WSD accuracies achievable by following the approach we described in (Chan and Ng, 2005b) .", "cite_spans": [ { "start": 360, "end": 380, "text": "(Chan and Ng, 2005b)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 214, "end": 221, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "5" }, { "text": "Next, we used the one-against-all approach to reduce each multiclass problem into a set of binary class problems. We trained a naive Bayes classifier for each binary problem and calibrated the probabilities from these binary classifiers. 
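After calibrating each one-against-all classifier, the per-class estimates are combined into a multiclass distribution by simple normalization, as Zadrozny and Elkan (2002) suggest. A minimal sketch, with our own function name and a fallback of our own choosing for the degenerate all-zero case:

```python
def combine_one_vs_all(calibrated, eps=1e-12):
    """Combine calibrated one-against-all estimates into multiclass probabilities.

    calibrated -- per-class calibrated probabilities for one instance,
                  calibrated[i] ~ P(class i vs. rest).
    Returns a normalized multiclass distribution.
    """
    total = sum(calibrated)
    if total < eps:                      # degenerate case: fall back to uniform
        n = len(calibrated)
        return [1.0 / n] * n
    return [c / total for c in calibrated]
```

For example, calibrated one-vs-all estimates of 0.2, 0.2, and 0.1 normalize to a distribution of 0.4, 0.4, and 0.2.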
The WSD ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "5" }, { "text": "Classifier NB NBcal Method L EM \u00a2 \u00a1 EM \u00a3 A \u00a4 L EM \u00a2 \u00a1 \u00a5 \u00a7 \u00a6 \" EM \u00a3 A \u00a4 DSO", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "5" }, { "text": "\u00a2 \u00a1 \u00a5 \u00a7 \u00a6 \" L EM \u00a3 A \u00a4 L DSO nouns", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "5" }, { "text": "11.6 1.2 (10.3%) 5.3 (45.7%) DSO verbs 10.3 2.6 (25.2%) 3.9 (37.9%) SE2 nouns 3.0 0.9 (30.0%) 1.2 (40.0%) SE3 nouns 3.7 3.4 (91.9%) 3.0 (81.1%) The results show that calibrating the probabilities improves WSD accuracy. In particular, EM \u00a1 \u00a9 S achieves the highest accuracy among the methods described so far. To provide a basis for comparison, we also adjusted the calibrated probabilities by the true sense priors Table 2 . Note that this represents the maximum possible increase in accuracy achievable provided we know these true sense priors ", "cite_spans": [], "ref_spans": [ { "start": 415, "end": 423, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "5" }, { "text": "The experimental results show that the sense priors estimated using the calibrated probabilities of naive Bayes are effective in increasing the WSD accuracy. However, using a learning algorithm which already gives well calibrated posterior probabilities may be more effective in estimating the sense priors. One possible algorithm is logistic regression, which directly optimizes for getting approximations of the posterior probabilities. 
Hence, its probability estimates are already well calibrated (Zhang and Yang, 2004; Niculescu-Mizil and Caruana, 2005) .", "cite_spans": [ { "start": 500, "end": 522, "text": "(Zhang and Yang, 2004;", "ref_id": "BIBREF19" }, { "start": 523, "end": 557, "text": "Niculescu-Mizil and Caruana, 2005)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "In the rest of this section, we first conduct experiments to estimate sense priors using the predictions of logistic regression. Then, we perform significance tests to compare the various methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "We trained logistic regression classifiers and evaluated them on the 4 datasets. However, the WSD accuracies of these unadjusted logistic regression classifiers are on average about 4% lower than those of the unadjusted naive Bayes classifiers. One possible reason is that being a discriminative learner, logistic regression requires more training examples for its performance to catch up to, and possibly overtake the generative naive Bayes learner (Ng and Jordan, 2001) .", "cite_spans": [ { "start": 450, "end": 471, "text": "(Ng and Jordan, 2001)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Using Logistic Regression", "sec_num": "6.1" }, { "text": "Although the accuracy of logistic regression as a basic classifier is lower than that of naive Bayes, its predictions may still be suitable for estimating sense priors. To gauge how well the sense priors are estimated, we measure the KL divergence between the true sense priors and the sense priors estimated by using the predictions of (uncalibrated) multiclass naive Bayes, calibrated naive Bayes, and logistic regression. 
These results are shown in Table 3 and the column EM", "cite_spans": [], "ref_spans": [ { "start": 452, "end": 459, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Using Logistic Regression", "sec_num": "6.1" }, { "text": "\u00a3 A \u00a4 vs. NB-EM \u00a2 \u00a1 NBcal-EM \u00a1 \u00a5 \u00a7 \u00a6 \" vs. NB-EM \u00a2 \u00a1 \u00a1 NBcal-EM \u00a1 \u00a5 \u00a7 \u00a6 \" vs. NB-EM \u00a3 A \u00a4 \u00a1 \u00a1 NBcal-EM \u00a3 A \u00a4 vs. NB-EM \u00a2 \u00a1 NBcal-EM \u00a3 A \u00a4 vs. NB-EM \u00a3 A \u00a4 \u00a1 NBcal-EM \u00a3 A \u00a4 vs. NBcal-EM \u00a2 \u00a1 \u00a5 \u00a7 \u00a6 \" \u00a1 \u00a1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Logistic Regression", "sec_num": "6.1" }, { "text": "\u00a2 \u00a4 \u00a3 \u00a5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Logistic Regression", "sec_num": "6.1" }, { "text": "shows that using the predictions of logistic regression to estimate sense priors consistently gives the lowest KL divergence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Logistic Regression", "sec_num": "6.1" }, { "text": "Results of the KL divergence test motivate us to use sense priors estimated by logistic regression on the predictions of the naive Bayes classifiers. To elaborate, we first use the probability estimates under NB. The relative improvements against using the true sense priors, based on the calibrated probabilities, are given in the column EM Table 2 . The results show that the sense priors provided by logistic regression are in general effective in further improving the results. 
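For reference, the KL divergence used in this comparison is D(p || q) = sum_i p(c_i) log(p(c_i)/q(c_i)), computed between the true sense priors p and each set of estimated priors q. A minimal sketch (the sense-prior values below are invented for illustration and are not the figures reported in Table 3):

```python
import math

def kl_divergence(p, q):
    """KL divergence D(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical sense priors for a 3-sense word (illustrative only).
true_priors = [0.6, 0.3, 0.1]
estimate_a  = [0.5, 0.35, 0.15]   # e.g. a closer estimate
estimate_b  = [0.3, 0.4, 0.3]     # a poorer estimate

# The closer estimate yields the smaller divergence.
assert kl_divergence(true_priors, estimate_a) < kl_divergence(true_priors, estimate_b)
```

A smaller divergence thus indicates sense priors that are closer to the true distribution, which is the criterion used to compare the estimators here.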
In the case of DSO nouns, this improvement is especially significant.", "cite_spans": [], "ref_spans": [ { "start": 342, "end": 349, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Using Logistic Regression", "sec_num": "6.1" }, { "text": "\u00a2 \u00a6 \u00a3 \u00a5 L in", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Logistic Regression", "sec_num": "6.1" }, { "text": "Paired t-tests were conducted to see if one method is significantly better than another. The t statistic of the difference between each test instance pair is computed, giving rise to a p value. The results of significance tests for the various methods on the 4 datasets are given in Table 4 , where the symbols \" \u00a7 \", \"\u00a8\", and \"\u00a9 \" correspond to p-value\u00a80.05, (0.01, 0.05], and F 0.01 respectively. The methods in Table 4 are represented in the form a1-a2, where a1 denotes adjusting the pre-dictions of which classifier, and a2 denotes how the sense priors are estimated. As an example, NBcal-EM \u00a2 \u00a4 \u00a3 \u00a5 specifies that the sense priors estimated by logistic regression is used to adjust the predictions of the calibrated naive Bayes classifier, and corresponds to accuracies in column EM", "cite_spans": [], "ref_spans": [ { "start": 283, "end": 290, "text": "Table 4", "ref_id": "TABREF6" }, { "start": 414, "end": 421, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Significance Test", "sec_num": "6.2" }, { "text": "\u00a2 \u00a4 \u00a3 \u00a5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Significance Test", "sec_num": "6.2" }, { "text": "under NBcal in Table 1 . 
Based on the significance tests, the adjusted accuracies of EM \u00a1 and EM \u00a1 \u00a9 S in Table 1 are significantly better than their respective unadjusted L accuracies, indicating that estimating the sense priors of a new domain via the EM approach presented in this paper significantly improves WSD accuracy compared to just using the sense priors from the old domain.", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 22, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 106, "end": 113, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Significance Test", "sec_num": "6.2" }, { "text": "represents our earlier approach in (Chan and Ng, 2005b) . The significance tests show that our current approach of using calibrated naive Bayes probabilities to estimate sense priors, and then adjusting the calibrated probabilities by these estimates (NBcal-EM \u00a1 \u00a9 S ) performs significantly better than NB-EM \u00a1 (refer to row 2 of Table 4 ). For DSO nouns, though the results are similar, the p value is a relatively low 0.06.", "cite_spans": [ { "start": 35, "end": 55, "text": "(Chan and Ng, 2005b)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 331, "end": 338, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "\u00a1", "sec_num": null }, { "text": "Using sense priors estimated by logistic regression further improves performance. For example, row 1 of Table 4 shows that adjusting the predictions of multiclass naive Bayes classifiers by sense priors estimated by logistic regression (NB-EM \u00a2 \u00a4 \u00a3 \u00a5 ) performs significantly better than using sense priors estimated by multiclass naive Bayes (NB-EM \u00a1 ). 
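The paired t-test described above reduces to computing the t statistic of the per-instance score differences; the p value is then obtained from the t distribution with n - 1 degrees of freedom. A minimal sketch (the function name and the per-instance scores are our own illustration, not the paper's data):

```python
import math
from statistics import mean, stdev

def paired_t_statistic(scores_a, scores_b):
    """t statistic for a paired t-test over per-instance score differences.

    The p value then comes from the t distribution with
    len(scores_a) - 1 degrees of freedom.
    """
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Hypothetical per-instance scores (1 = correct, 0 = wrong) for two methods.
method_a = [1, 1, 1, 0, 1, 1, 0, 1]
method_b = [1, 0, 1, 0, 1, 0, 0, 1]
t = paired_t_statistic(method_a, method_b)   # positive: method_a wins more pairs
```

By construction the statistic is antisymmetric: swapping the two methods flips its sign.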
Finally, using sense priors estimated by logistic regression to adjust the predictions of calibrated naive Bayes (NBcal-EM \u00a2 \u00a4 \u00a3 \u00a5 ) in general performs significantly better than most other methods, achieving the best overall performance.", "cite_spans": [], "ref_spans": [ { "start": 104, "end": 111, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "\u00a1", "sec_num": null }, { "text": "In addition, we implemented the unsupervised method of (McCarthy et al., 2004) , which calculates a prevalence score for each sense of a word to predict the predominant sense. As in our earlier work (Chan and Ng, 2005b), we normalized the prevalence score of each sense to obtain estimated sense priors for each word, which we then used to adjust the predictions of our naive Bayes classifiers. We found that the WSD accuracies obtained with the method of (McCarthy et al., 2004) are on average 1.9% lower than our NBcal-EM \u00a2 \u00a4 \u00a3 \u00a5 method, and the difference is statistically significant.", "cite_spans": [ { "start": 55, "end": 78, "text": "(McCarthy et al., 2004)", "ref_id": "BIBREF8" }, { "start": 456, "end": 479, "text": "(McCarthy et al., 2004)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "\u00a1", "sec_num": null }, { "text": "Differences in sense priors between training and target domain datasets will result in a loss of WSD accuracy. In this paper, we show that using well calibrated probabilities to estimate sense priors is important. 
By calibrating the probabilities of the naive Bayes algorithm, and using the probabilities given by logistic regression (which is already well calibrated), we achieved significant improvements in WSD accuracy over previous approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Though not shown, we also calculated the accuracies of these binary classifiers without calibration, and found them to be similar to the accuracies of the multiclass naive Bayes shown in the column L under NB inTable 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Unsupervised WSD based on automatically retrieved examples: The importance of bias", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "David", "middle": [], "last": "Martinez", "suffix": "" } ], "year": 2004, "venue": "Proc. of EMNLP04", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre and David Martinez. 2004. Unsuper- vised WSD based on automatically retrieved exam- ples: The importance of bias. In Proc. of EMNLP04.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "An empirical distribution function for sampling with incomplete information", "authors": [ { "first": "Miriam", "middle": [], "last": "Ayer", "suffix": "" }, { "first": "H", "middle": [ "D" ], "last": "Brunk", "suffix": "" }, { "first": "G", "middle": [ "M" ], "last": "Ewing", "suffix": "" }, { "first": "W", "middle": [ "T" ], "last": "Reid", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Silverman", "suffix": "" } ], "year": 1955, "venue": "Annals of Mathematical Statistics", "volume": "26", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miriam Ayer, H. D. Brunk, G. M. Ewing, W. T. Reid, and Edward Silverman. 1955. 
An empirical distri- bution function for sampling with incomplete infor- mation. Annals of Mathematical Statistics, 26(4).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Scaling up word sense disambiguation via parallel texts", "authors": [ { "first": "Yee", "middle": [], "last": "Seng Chan", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2005, "venue": "Proc. of AAAI05", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yee Seng Chan and Hwee Tou Ng. 2005a. Scaling up word sense disambiguation via parallel texts. In Proc. of AAAI05.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Word sense disambiguation with distribution estimation", "authors": [ { "first": "Yee", "middle": [], "last": "Seng Chan", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2005, "venue": "Proc. of IJCAI05", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yee Seng Chan and Hwee Tou Ng. 2005b. Word sense disambiguation with distribution estimation. In Proc. of IJCAI05.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Beyond independence: Conditions for the optimality of the simple Bayesian classifier", "authors": [ { "first": "Pedro", "middle": [], "last": "Domingos", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Pazzani", "suffix": "" } ], "year": 1996, "venue": "Proc. of ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pedro Domingos and Michael Pazzani. 1996. Beyond independence: Conditions for the optimality of the simple Bayesian classifier. In Proc. 
of ICML-1996.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "An empirical study of the domain dependence of supervised word sense disambiguation systems", "authors": [ { "first": "Gerard", "middle": [], "last": "Escudero", "suffix": "" }, { "first": "Lluis", "middle": [], "last": "Marquez", "suffix": "" }, { "first": "German", "middle": [], "last": "Rigau", "suffix": "" } ], "year": 2000, "venue": "Proc. of EMNLP/VLC00", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gerard Escudero, Lluis Marquez, and German Rigau. 2000. An empirical study of the domain dependence of supervised word sense disambiguation systems. In Proc. of EMNLP/VLC00.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "English lexical sample task description", "authors": [ { "first": "Adam", "middle": [], "last": "Kilgarriff", "suffix": "" } ], "year": 2001, "venue": "Proc. of SENSEVAL-2", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Kilgarriff. 2001. English lexical sample task description. In Proc. of SENSEVAL-2.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An empirical evaluation of knowledge sources and learning algorithms for word sense disambiguation", "authors": [ { "first": "Yoong Keok", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2002, "venue": "Proc. of EMNLP02", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoong Keok Lee and Hwee Tou Ng. 2002. An empir- ical evaluation of knowledge sources and learning algorithms for word sense disambiguation. In Proc. 
of EMNLP02.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Finding predominant word senses in untagged text", "authors": [ { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Koeling", "suffix": "" }, { "first": "Julie", "middle": [], "last": "Weeds", "suffix": "" }, { "first": "John", "middle": [], "last": "Carroll", "suffix": "" } ], "year": 2004, "venue": "Proc. of ACL04", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diana McCarthy, Rob Koeling, Julie Weeds, and John Carroll. 2004. Finding predominant word senses in untagged text. In Proc. of ACL04.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The senseval-3 english lexical sample task", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Chklovski", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Kilgarriff", "suffix": "" } ], "year": 2004, "venue": "Proc. of SENSEVAL-3", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea, Timothy Chklovski, and Adam Kilgar- riff. 2004. The senseval-3 english lexical sample task. In Proc. of SENSEVAL-3.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Using a semantic concordance for sense identification", "authors": [ { "first": "George", "middle": [ "A" ], "last": "Miller", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Chodorow", "suffix": "" }, { "first": "Shari", "middle": [], "last": "Landes", "suffix": "" }, { "first": "Claudia", "middle": [], "last": "Leacock", "suffix": "" }, { "first": "Robert", "middle": [ "G" ], "last": "Thomas", "suffix": "" } ], "year": 1994, "venue": "Proc. of ARPA Human Language Technology Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A. 
Miller, Martin Chodorow, Shari Landes, Claudia Leacock, and Robert G. Thomas. 1994. Using a semantic concordance for sense identifica- tion. In Proc. of ARPA Human Language Technol- ogy Workshop.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes", "authors": [ { "first": "Y", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Jordan", "suffix": "" } ], "year": 2001, "venue": "Proc. of NIPS14", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Y. Ng and Michael I. Jordan. 2001. On dis- criminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. In Proc. of NIPS14.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Integrating multiple knowledge sources to disambiguate word sense: An exemplar-based approach", "authors": [ { "first": "Tou", "middle": [], "last": "Hwee", "suffix": "" }, { "first": "Hian Beng", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Lee", "suffix": "" } ], "year": 1996, "venue": "Proc. of ACL96", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hwee Tou Ng and Hian Beng Lee. 1996. Integrating multiple knowledge sources to disambiguate word sense: An exemplar-based approach. In Proc. of ACL96.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Exploiting parallel texts for word sense disambiguation: An empirical study", "authors": [ { "first": "Bin", "middle": [], "last": "Hwee Tou Ng", "suffix": "" }, { "first": "Yee Seng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "", "middle": [], "last": "Chan", "suffix": "" } ], "year": 2003, "venue": "Proc. 
of ACL03", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hwee Tou Ng, Bin Wang, and Yee Seng Chan. 2003. Exploiting parallel texts for word sense disambigua- tion: An empirical study. In Proc. of ACL03.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Predicting good probabilities with supervised learning", "authors": [ { "first": "Alexandru", "middle": [], "last": "Niculescu", "suffix": "" }, { "first": "-", "middle": [], "last": "Mizil", "suffix": "" }, { "first": "Rich", "middle": [], "last": "Caruana", "suffix": "" } ], "year": 2005, "venue": "Proc. of ICML05", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexandru Niculescu-Mizil and Rich Caruana. 2005. Predicting good probabilities with supervised learn- ing. In Proc. of ICML05.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Chapter 1. Isotonic Regression", "authors": [ { "first": "Tim", "middle": [], "last": "Robertson", "suffix": "" }, { "first": "F", "middle": [ "T" ], "last": "Wright", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Dykstra", "suffix": "" } ], "year": 1988, "venue": "Order Restricted Statistical Inference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tim Robertson, F. T. Wright, and R. L. Dykstra. 1988. Chapter 1. Isotonic Regression. In Order Restricted Statistical Inference. 
John Wiley & Sons.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Adjusting the outputs of a classifier to new a priori probabilities: A simple procedure", "authors": [ { "first": "Marco", "middle": [], "last": "Saerens", "suffix": "" }, { "first": "Patrice", "middle": [], "last": "Latinne", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Decaestecker", "suffix": "" } ], "year": 2002, "venue": "Neural Computation", "volume": "", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Saerens, Patrice Latinne, and Christine De- caestecker. 2002. Adjusting the outputs of a clas- sifier to new a priori probabilities: A simple proce- dure. Neural Computation, 14(1).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Classification on data with biased class distribution", "authors": [ { "first": "Slobodan", "middle": [], "last": "Vucetic", "suffix": "" }, { "first": "Zoran", "middle": [], "last": "Obradovic", "suffix": "" } ], "year": 2001, "venue": "Proc. of ECML01", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Slobodan Vucetic and Zoran Obradovic. 2001. Clas- sification on data with biased class distribution. In Proc. of ECML01.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Transforming classifier scores into accurate multiclass probability estimates", "authors": [ { "first": "Bianca", "middle": [], "last": "Zadrozny", "suffix": "" }, { "first": "Charles", "middle": [], "last": "Elkan", "suffix": "" } ], "year": 2002, "venue": "Proc. of KDD02", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bianca Zadrozny and Charles Elkan. 2002. Trans- forming classifier scores into accurate multiclass probability estimates. In Proc. 
of KDD02.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Probabilistic score estimation with piecewise logistic regression", "authors": [ { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2004, "venue": "Proc. of ICML04", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian Zhang and Yiming Yang. 2004. Probabilistic score estimation with piecewise logistic regression. In Proc. of ICML04.", "links": null } }, "ref_entries": { "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "timates of the new a priori and a posteriori probabilities at step s of the iterative EM procedure. Assuming we initialize" }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "step s in Equation(2)are simply the a posteriori probabilities in the conditions of the labeled data," }, "FIGREF4": { "type_str": "figure", "uris": null, "num": null, "text": "the set of posterior probability values for class imagine that we have Q bins, where each bin is associated with a specific 0 value. Now, distribute all the instances in the new dataset D@ into the" }, "FIGREF6": { "type_str": "figure", "uris": null, "num": null, "text": "the proportion of instances in D@ with true class label ' &" }, "FIGREF7": { "type_str": "figure", "uris": null, "num": null, "text": "examples and is the probability of \u00a2 belonging to the positive class, as predicted by a classifier. Further, let Q represent the true label of \u00a2 . For a binary class problem, we let Q" }, "FIGREF9": { "type_str": "figure", "uris": null, "num": null, "text": "and all other examples are treated as negative examples. A separate classifier" }, "FIGREF10": { "type_str": "figure", "uris": null, "num": null, "text": "data. 
The increase in WSD accuracy thus obtained is given in the column True L in" }, "FIGREF12": { "type_str": "figure", "uris": null, "num": null, "text": "Bayes classifier are then used in Equation (4) to obtain the adjusted predictions. The resulting WSD accuracy is shown in the column EM" }, "TABREF1": { "html": null, "content": "
Dataset True L EM
", "type_str": "table", "num": null, "text": "Micro-averaged WSD accuracies using the various methods. The different naive Bayes classifiers are: multiclass naive Bayes (NB) and naive Bayes with calibrated probabilities (NBcal)." }, "TABREF2": { "html": null, "content": "
: Relative accuracy improvement based on calibrated probabilities.
accuracies of these calibrated naive Bayes classifiers (denoted by NBcal) are given in the column L under NBcal. 1 The predictions of these classifiers are then used to estimate the sense priors, before being adjusted by these estimated sense priors based on Equation (4). The resulting WSD accuracies after adjustment are listed in the corresponding EM column under NBcal in Table 1.
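The estimate-then-adjust procedure just described can be sketched as follows, following the EM re-estimation of Saerens et al. (2002) used in this paper; the function name and the toy posteriors are our own illustration, not the paper's code:

```python
def em_estimate_priors(posteriors, train_priors, iters=100):
    """EM re-estimation of class priors on a new domain
    (the procedure of Saerens et al., 2002).

    posteriors:   per-instance class-posterior lists from the classifier
                  trained on the old domain.
    train_priors: class priors of the old (training) domain.
    Returns (estimated_priors, adjusted_posteriors).
    """
    n_cls = len(train_priors)
    priors = list(train_priors)        # initialise with old-domain priors
    adjusted = posteriors
    for _ in range(iters):
        # E-step: rescale each posterior by the ratio of current to
        # old-domain priors and renormalise (Equations (2) and (4)).
        adjusted = []
        for post in posteriors:
            scaled = [post[i] * priors[i] / train_priors[i] for i in range(n_cls)]
            z = sum(scaled)
            adjusted.append([s / z for s in scaled])
        # M-step: new priors are the average adjusted posteriors (Equation (3)).
        priors = [sum(a[i] for a in adjusted) / len(adjusted) for i in range(n_cls)]
    return priors, adjusted
```

The final E-step output doubles as the prior-adjusted predictions of Equation (4): each posterior is rescaled by the ratio of the estimated new-domain prior to the old-domain prior and renormalised.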
", "type_str": "table", "num": null, "text": "" }, "TABREF3": { "html": null, "content": "
, we list the increase in WSD accuracy when the predictions are adjusted by the sense priors which were automatically estimated using the EM procedure. The relative improvements obtained with the estimated sense priors (compared against using the true sense priors) are given as percentages in brackets. As an example, according to Table 1 for the DSO verbs, the EM method based on calibrated probabilities gives an improvement of 49.5% - 46.9% = 2.6% in WSD accuracy, and the relative improvement compared to using the true sense priors is 2.6/10.3 = 25.2%, as shown in Table 2.
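This arithmetic can be checked directly (a minimal sketch; the helper name is ours):

```python
def relative_improvement(adjusted_acc, unadjusted_acc, true_prior_acc):
    """Accuracy gain from prior adjustment, and that gain relative to the
    maximum gain achievable with the true sense priors."""
    gain = adjusted_acc - unadjusted_acc
    max_gain = true_prior_acc - unadjusted_acc
    return gain, gain / max_gain

# Figures from the DSO-verbs example above: L = 46.9, EM = 49.5,
# and True - L = 10.3, so the true-prior accuracy is 46.9 + 10.3.
gain, rel = relative_improvement(49.5, 46.9, 46.9 + 10.3)
assert round(gain, 1) == 2.6
assert round(100 * rel, 1) == 25.2
```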
", "type_str": "table", "num": null, "text": "" }, "TABREF4": { "html": null, "content": "", "type_str": "table", "num": null, "text": "KL divergence between the true and estimated sense distributions." }, "TABREF6": { "html": null, "content": "
", "type_str": "table", "num": null, "text": "Paired t-tests between the various methods for the 4 datasets." } } } }