{
"paper_id": "H92-1045",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:28:11.004500Z"
},
"title": "One Sense Per Discourse",
"authors": [
{
"first": "William",
"middle": [
"A"
],
"last": "Gale",
"suffix": "",
"affiliation": {
"laboratory": "AT&T Bell Laboratories",
"institution": "",
"location": {
"addrLine": "600 Mountain Avenue Murray Hill",
"postCode": "07974-0636",
"region": "NJ"
}
},
"email": ""
},
{
"first": "Kenneth",
"middle": [
"W"
],
"last": "Church",
"suffix": "",
"affiliation": {
"laboratory": "AT&T Bell Laboratories",
"institution": "",
"location": {
"addrLine": "600 Mountain Avenue Murray Hill",
"postCode": "07974-0636",
"region": "NJ"
}
},
"email": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": "",
"affiliation": {
"laboratory": "AT&T Bell Laboratories",
"institution": "",
"location": {
"addrLine": "600 Mountain Avenue Murray Hill",
"postCode": "07974-0636",
"region": "NJ"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "It is well-known that there are polysemous words like sentence whose \"meaning\" or \"sense\" depends on the context of use. We have recently reported on two new word-sense disambiguation systems, one trained on bilingual material (the Canadian Hansards) and the other trained on monolingual material (Roget's Thesaurus and Grolier's Encyclopedia). As this work was nearing completion, we observed a very strong discourse effect. That is, if a polysemous word such as sentence appears two or more times in a well-written discourse, it is extremely likely that they will all share the same sense. This paper describes an experiment which confirmed this hypothesis and found that the tendency to share sense in the same discourse is extremely strong (98%). This result can be used as an additional source of constraint for improving the performance of the word-sense disambiguation algorithm. In addition, it could also be used to help evaluate disambiguation algorithms that did not make use of the discourse constraint.",
"pdf_parse": {
"paper_id": "H92-1045",
"_pdf_hash": "",
"abstract": [
{
"text": "It is well-known that there are polysemous words like sentence whose \"meaning\" or \"sense\" depends on the context of use. We have recently reported on two new word-sense disambiguation systems, one trained on bilingual material (the Canadian Hansards) and the other trained on monolingual material (Roget's Thesaurus and Grolier's Encyclopedia). As this work was nearing completion, we observed a very strong discourse effect. That is, if a polysemous word such as sentence appears two or more times in a well-written discourse, it is extremely likely that they will all share the same sense. This paper describes an experiment which confirmed this hypothesis and found that the tendency to share sense in the same discourse is extremely strong (98%). This result can be used as an additional source of constraint for improving the performance of the word-sense disambiguation algorithm. In addition, it could also be used to help evaluate disambiguation algorithms that did not make use of the discourse constraint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Although there has been a long history of work on word-sense disambiguation, much of the work has been stymied by difficulties in acquiring appropriate testing and training materials. AI approaches have tended to focus on \"toy\" domains because of the difficulty in acquiring large lexicons. So too, statistical approaches, e.g., Kelly and Stone (1975) , Black (1988) , have tended to focus on a relatively small set of polysemous words because they have depended on extremely scarce hand-tagged materials for use in testing and training.",
"cite_spans": [
{
"start": 329,
"end": 351,
"text": "Kelly and Stone (1975)",
"ref_id": "BIBREF7"
},
{
"start": 354,
"end": 366,
"text": "Black (1988)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Deprivation",
"sec_num": "2.1."
},
{
"text": "We have achieved considerable progress recently by using a new source of testing and training materials and the application of Bayesian discrimination methods. Rather than depending on small amounts of hand-tagged text, we have been making use of relatively large amounts of parallel text, text such as the Canadian Hansards, which are available in multiple languages. The translation can often be used in lieu of hand-labeling. For example, consider the polysemous word sentence, which has two major senses: (1) a judicial sentence, and (2), a syntactic sentence. We can collect a number of sense (1) examples by extracting instances that are translated as peine, and we can collect a number of sense (2) examples by extracting instances that are translated as phrase. In this way,",
"cite_spans": [
{
"start": 702,
"end": 705,
"text": "(2)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Deprivation",
"sec_num": "2.1."
},
{
"text": "we have been able to acquire a considerable amount of testing and training material for developing and testing our disambiguation algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Deprivation",
"sec_num": "2.1."
},
{
"text": "The use of bilingual materials for discrimination decisions in machine translation has been discussed by Brown and others (1991) , and by Dagan, Itai, and Schwall (1991) . The use of bilingual materials for an essentially monolingual purpose, sense disambiguation, is similar in method, but differs in purpose.",
"cite_spans": [
{
"start": 104,
"end": 127,
"text": "Brown and others (1991)",
"ref_id": null
},
{
"start": 137,
"end": 168,
"text": "Dagan, Itai, and Schwall (1991)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Deprivation",
"sec_num": "2.1."
},
{
"text": "Surprisingly good results can be achieved using Bayesian discrimination methods which have been used very successfully in many other applications, especially author identification (Mosteller and Wallace, 1964 ) and information retrieval (IR) (Salton, 1989, section 10.3). Our word-sense disambiguation algorithm uses the words in a 100-word context 1 surrounding the polysemous word very much like the other two applications use the words in a test document.",
"cite_spans": [
{
"start": 180,
"end": 208,
"text": "(Mosteller and Wallace, 1964",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian Discrimination",
"sec_num": "2.2."
},
{
"text": "Information Retrieval (IR):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian Discrimination",
"sec_num": "2.2."
},
{
"text": "∏_{w in doc} Pr(w|rel) / Pr(w|irrel)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian Discrimination",
"sec_num": "2.2."
},
{
"text": "It is common to use very small contexts (e.g., 5 words) based on the observation that people do not need very much context in order to perform the disambiguation task. In contrast, we use much larger contexts (e.g., 100 words). Although people may be able to make do with much less context, we believe the machine needs all the help it can get, and we have found that the larger context makes the task much easier. In fact, we have been able to measure information at extremely large distances (10,000 words away from the polysemous word in question), though obviously most of the useful information appears relatively near the polysemous word (e.g., within the first 100 words or so). Needless to say, our 100-word contexts are considerably larger than the 5-word windows that one normally finds in the literature. This model treats the context as a bag of words and ignores a number of important linguistic factors such as word order and collocations (correlations among words in the context). Nevertheless, even with these oversimplifications, the model still contains an extremely large number of parameters: 2V ≈ 200,000. It is a non-trivial task to estimate such a large number of parameters, especially given the sparseness of the training data. The training material typically consists of approximately 12,000 words of text (100 words of context for 60 instances of each of two senses). Thus, there are more than 15 parameters to be estimated from each data point. Clearly, we need to be fairly careful given that we have so many parameters and so little evidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian Discrimination",
"sec_num": "2.2."
},
{
"text": "In principle, the conditional probabilities Pr(tok|sense) can be estimated by selecting those parts of the entire corpus which satisfy the required conditions (e.g., 100-word contexts surrounding instances of one sense of sentence), counting the frequency of each word, and dividing the counts by the total number of words satisfying the conditions. However, this estimate, which is known as the maximum likelihood estimate (MLE), has a number of well-known problems. In particular, it will assign zero probability to words that do not happen to appear in the sample, which is obviously unacceptable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "2.3."
},
{
"text": "We will estimate Pr(tok|sense) by interpolating between local probabilities, probabilities computed over the 100-word contexts, and global probabilities, probabilities computed over the entire corpus. There is a trade-off between measurement error and bias error. The local probabilities tend to be more prone to measurement error, whereas the global probabilities tend to be more prone to bias error. We seek to determine the relevance of the larger corpus to the conditional sample in order to find the optimal trade-off between bias and measurement error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "2.3."
},
{
"text": "The interpolation procedure makes use of a prior expectation of how much we expect the local probabilities to differ from the global probabilities. In their author identification work, Mosteller and Wallace \"expect[ed] both authors to have nearly identical rates for almost any word\" (p. 61). In fact, just as they had anticipated, we have found that only 2% of the vocabulary in the Federalist corpus has significantly different probabilities depending on the author. In contrast, we expect fairly large differences in the sense disambiguation application. Approximately 20% of the vocabulary in the Hansards has a local probability that is significantly different from its global probability. Since the prior expectation depends so much on the application, we set the prior for a particular application by estimating the fraction of the vocabulary whose local probabilities differ significantly from the global probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "2.3."
},
{
"text": "We have looked at six polysemous nouns in some detail: duty, drug, land, language, position and sentence, as shown in Table 1 . The final column shows that performance is quite encouraging. We were somewhat surprised to discover that these two conditions are actually fairly stringent, and that there are a remarkably small number of polysemous words which (1) can be disambiguated by looking at the French translation, and (2) appear 150 or more times in two or more senses.",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 125,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "A Small Study",
"sec_num": "2.4."
},
{
"text": "At first, we thought that the method was completely dependent on the availability of parallel corpora for training. This has been a problem since parallel text remains somewhat difficult to obtain in large quantity, and what little is available is often fairly unbalanced and unrepresentative of general language. Moreover, the assumption that differences in translation correspond to differences in word-sense has always been somewhat suspect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training on Monolingual Material",
"sec_num": "2.5."
},
{
"text": "Recently, Yarowsky (1991) has found a way to train on the Roget's Thesaurus (Chapman, 1977) and Grolier's Encyclopedia (1991) instead of the Hansards, thus circumventing many of the objections to our use of the Hansards. Yarowsky's method inputs a 100-word context surrounding a polysemous word and scores each of the 1042 Roget Categories by:",
"cite_spans": [
{
"start": 10,
"end": 25,
"text": "Yarowsky (1991)",
"ref_id": "BIBREF10"
},
{
"start": 76,
"end": 91,
"text": "(Chapman, 1977)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training on Monolingual Material",
"sec_num": "2.5."
},
{
"text": "∏_{w in context} Pr(w|Roget Category_i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training on Monolingual Material",
"sec_num": "2.5."
},
{
"text": "Table 2 shows some results for the polysemous noun crane.",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 20,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Training on Monolingual Material",
"sec_num": "2.5."
},
{
"text": "Each of the 1042 models, Pr(w|Roget Category_i), is trained by interpolating between local probabilities and global probabilities just as before. However, the local probabilities are somewhat more difficult to obtain in this case since we do not have a corpus tagged with Roget Categories, and therefore, it may not be obvious how to extract subsections of the corpus meeting the local conditions. Consider the Roget Category:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training on Monolingual Material",
"sec_num": "2.5."
},
{
"text": "TOOLS/MACHINERY (348). Ideally, we would extract 100-word contexts in the 10 million word Grolier Encyclopedia surrounding words in category 348 and use these to compute the local probabilities. Since we don't have a tagged corpus, Yarowsky suggested extracting contexts around all words in category 348 and weighting appropriately in order to compensate for the fact that some of these contexts should not have been included in the training set. Table 3 below shows a sample of the 30,924 concordances for the words in category 348. ",
"cite_spans": [],
"ref_spans": [
{
"start": 447,
"end": 454,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Training on Monolingual Material",
"sec_num": "2.5."
},
{
"text": "As this work was nearing completion, we observed that senses tend to appear in clumps. In particular, it appeared to be extremely unusual to find two or more senses of a polysemous word in the same discourse. 2 A simple (but non-blind) preliminary experiment provided some suggestive evidence confirming the hypothesis. A random sample of 108 nouns was extracted for further study. A panel of three judges (the authors of this paper) were given 100 sets of concordance lines. Each set showed all of the instances of one of the test words in a particular Grolier's article. The judges were asked to indicate whether the set of concordance lines used the same sense or not. Only 6 of the 300 article-judgements were judged to contain multiple senses of one of the test words. All three judges were convinced after grading 100 articles that there was considerable validity to the hypothesis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PER DISCOURSE",
"sec_num": null
},
{
"text": "With this promising preliminary verification, the following blind test was devised. Five subjects (the three authors and two of their colleagues) were given a questionnaire starting with a set of definitions. The questionnaire contained a total of 82 pairs of concordance lines for 9 polysemous words: antenna, campaign, deposit, drum, hull, interior, knife, landscape, and marine. 54 of the 82 pairs were selected from the same discourse. The remaining 28 pairs were introduced as a control to force the judges to say that some pairs were different; they were selected from different discourses, and were checked by hand in an attempt to ensure that they did not happen to use the same sense. The judges found it quite easy to decide whether the pair used the same sense or not. Table 4 shows that there was very high agreement among the judges. With the exception of Judge 2, all of the judges agreed with the majority opinion in all but one or two of the 82 cases. The agreement rate was 96.8%, averaged over all judges, or 99.1%, averaged over the four best judges.",
"cite_spans": [],
"ref_spans": [
{
"start": 793,
"end": 800,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "PER DISCOURSE",
"sec_num": null
},
{
"text": "As we had hoped, the experiment did, in fact, confirm the one-sense-per-discourse hypothesis. Of 54 pairs selected from the same article, the majority opinion found that 51 shared the same sense, and 3 did not. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PER DISCOURSE",
"sec_num": null
},
{
"text": "We conclude that with probability about 94% (51/54), two instances of a polysemous noun drawn from the same article will have the same sense. In fact, the experiment tested a particularly difficult case, since it did not include any unambiguous words. If we assume a mixture of 60% unambiguous words and 40% polysemous words, then the probability moves from 94% to 100% × .60 + 94% × .40 ≈ 98%. In other words, there is a very strong tendency (98%) for multiple uses of a word to share the same sense in well-written coherent discourse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PER DISCOURSE",
"sec_num": null
},
{
"text": "One might ask if this result is specific to Grolier's or to good writing or some other factor. The first author looked at the usage of these same nine words in the Brown Corpus, which is believed to be a more balanced sample of general language and which is also more widely available than Grolier's and is therefore more amenable to replication. The Brown Corpus consists of 500 discourse fragments of 2,000 words each. We were able to find 259 concordance lines like the ones above, showing two instances of one of the nine test words selected from the same discourse fragment. However, four of the nine test words (antenna, drum, hull, and knife) are not very interesting in the Brown Corpus, since only one sense is observed. There were 106 pairs for the remaining five words: campaign, deposit, interior, landscape, and marine. The first author found that 102 of the 106 pairs were used in the same sense. Thus, it appears that the one-sense-per-discourse tendency is also fairly strong in the Brown Corpus (102/106 ≈ 96%), as well as in the Grolier's Encyclopedia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PER DISCOURSE",
"sec_num": null
},
{
"text": "There seem to be two applications for the one-sense-per-discourse observation: first, it can be used as an additional source of constraint for improving the performance of the word-sense disambiguation algorithm, and secondly, it could be used to help evaluate disambiguation algorithms that did not make use of the discourse constraint. Thus far, we have been more interested in the second use: establishing a group of examples for which we had an approximate ground truth. Rather than tagging each instance of a polysemous word one-by-one, we can select discourses with large numbers of the polysemous word of interest and tag all of the instances in one fell swoop. Admittedly, this procedure will introduce a small error rate since the one-sense-per-discourse tendency is not quite perfect, but thus far, the error rate has not been much of a problem. This procedure has enabled us to tag a much larger test set than we would have been able to do otherwise. 3 In contrast, of the 28 control pairs, the majority opinion found that only 1 shared the same sense, and 27 did not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IMPLICATIONS",
"sec_num": "4."
},
{
"text": "Having tagged as many words as we have (all instances of 97 words in the Grolier's Encyclopedia), we are now in a position to question some widely held assumptions about the distribution of polysemy. In particular, it is commonly believed that most words are highly polysemous, but in fact, most words (both by token and by type) have only one sense, as indicated in Table 5 below. Even for those words that do have more than one possible sense, it rarely takes anywhere near log2(senses) bits to select the appropriate sense, since the distribution of senses is generally quite skewed. Perhaps the word-sense disambiguation problem is not as difficult as we might have thought.",
"cite_spans": [],
"ref_spans": [
{
"start": 367,
"end": 374,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "IMPLICATIONS",
"sec_num": "4."
},
{
"text": "In conclusion, it appears that our hypothesis is correct; well-written discourses tend to avoid multiple senses of a polysemous word. This result can be used in two basic ways: (1) as an additional source of constraint for improving the performance of a word-sense disambiguation algorithm, and (2) as an aid in collecting annotated test materials for evaluating disambiguation algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION",
"sec_num": "5."
},
{
"text": "2 This hypothesis might help to explain some of the long-range effects mentioned in the previous footnote.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An Experiment in Computational Discrimination of English Word Senses",
"authors": [
{
"first": "Ezra",
"middle": [],
"last": "Black",
"suffix": ""
}
],
"year": 1988,
"venue": "IBM Journal of Research and Development",
"volume": "32",
"issue": "",
"pages": "185--194",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Black, Ezra (1988), \"An Experiment in Computa- tional Discrimination of English Word Senses,\" IBM Journal of Research and Development, v 32, pp 185- 194.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Word Sense Disambiguation using Statistical Methods",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "264--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, Peter, Stephen Della Pietra, Vincent Della Pietra, and Robert Mercer (1991), \"Word Sense Disambiguation using Statistical Methods,\" Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, pp 264-270.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Roget's International Thesaurus",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Chapman",
"suffix": ""
}
],
"year": 1977,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chapman, Robert (1977). Roget's International Thesaurus (Fourth Edition), Harper and Row, New York.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Two Languages are more Informative than One",
"authors": [
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Itai",
"suffix": ""
},
{
"first": "Ulrike",
"middle": [],
"last": "Schwall",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "130--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dagan, Ido, Alon Itai, and Ulrike Schwall (1991), \"Two Languages are more Informative than One,\" Proceedings of the 29th Annual Meeting of the Asso- ciation for Computational Linguistics, pp 130-137.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Discrimination Decisions for 100,000-Dimensional Spaces",
"authors": [
{
"first": "Church",
"middle": [],
"last": "Gale",
"suffix": ""
},
{
"first": "Yarowsky",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1992,
"venue": "AT&T Statistical Research Report",
"volume": "",
"issue": "103",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gale, Church, and Yarowsky, 1992, \"Discrimination Decisions for 100,000-Dimensional Spaces\" AT&T Statistical Research Report No. 103.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "New Grolier's Electronic Encyclopedia",
"authors": [
{
"first": "",
"middle": [],
"last": "Grolier's Inc",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grolier's Inc. (1991) New Grolier's Electronic En- cyclopedia.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semantic Interpretation and the Resolution of Ambiguity",
"authors": [
{
"first": "G",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hirst, G. (1987), Semantic Interpretation and the Resolution of Ambiguity, Cambridge University Press, Cambridge.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Computer Recognition of English Word Senses",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Kelly",
"suffix": ""
},
{
"first": "Phillip",
"middle": [],
"last": "Stone",
"suffix": ""
}
],
"year": 1975,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelly, Edward, and Phillip Stone (1975), Com- puter Recognition of English Word Senses, North- Holland, Amsterdam.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Inference and Disputed Authorship: The Federalist",
"authors": [
{
"first": "Frederick",
"middle": [],
"last": "Mosteller",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Wallace",
"suffix": ""
}
],
"year": 1964,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mosteller, Frederick, and David Wallace (1964) Inference and Disputed Authorship: The Federalist, Addison-Wesley, Reading, Massachusetts.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatic Text Processing",
"authors": [
{
"first": "G",
"middle": [],
"last": "Salton",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Salton, G. (1989) Automatic Text Processing, Addison-Wesley Publishing Co.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Word-Sense Disambiguation Using Statistical Models of Roget's Categories Trained on Large Corpora",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarowsky, David (1991) \"Word-Sense Disambiguation Using Statistical Models of Roget's Categories Trained on Large Corpora\", submitted to COLING-92.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"content": "<table><tr><td/><td colspan=\"2\">Six Polysemous Words</td><td/><td/></tr><tr><td>English</td><td>French</td><td>sense</td><td>N</td><td>%</td></tr><tr><td>duty</td><td>droit</td><td>tax</td><td>1114</td><td>97</td></tr><tr><td/><td>devoir</td><td>obligation</td><td>691</td><td>84</td></tr><tr><td>drug</td><td colspan=\"2\">m~dicament medical</td><td>2992</td><td>84</td></tr><tr><td/><td>drogue</td><td>illicit</td><td>855</td><td>97</td></tr><tr><td>land</td><td>terre</td><td>property</td><td>1022</td><td>86</td></tr><tr><td/><td>pays</td><td>country</td><td>386</td><td>89</td></tr><tr><td colspan=\"2\">language langue</td><td>medium</td><td>3710</td><td>90</td></tr><tr><td/><td>langage</td><td>style</td><td>170</td><td>91</td></tr><tr><td>position</td><td>position</td><td>place</td><td>5177</td><td>82</td></tr><tr><td/><td>poste</td><td>job</td><td>577</td><td>86</td></tr><tr><td>sentence</td><td>peine</td><td>judicial</td><td>296</td><td>97</td></tr><tr><td/><td>phrase</td><td>grammatical</td><td colspan=\"2\">148 100</td></tr></table>",
"type_str": "table",
"html": null,
"text": "These nouns were selected because they could be disambiguated by looking at their French translation in the Canadian Hansards (unlike a polysemous word such as interest, whose French translation intérêt is just as ambiguous as the English source). In addition, for testing methodology, it is helpful that the corpus contain plenty of instances of each sense. The second condition, for example, would exclude bank, perhaps the canonical example of a polysemous noun, since there are very few instances of the \"river\" sense of bank in our corpus of Canadian Hansards.",
"num": null
},
"TABREF2": {
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "Automatic Sense Tagging Using Roget's Categories Input Output Treadmills attached to cranes were used to lift heavy objects TOOLS/MACHINERY (348) and for supplying power for cranes, hoists, and lifts. The centrifug TOOLS/MACHINERY (348) Above this height, a tower crane is often used. This comprises TOOLS/MACHINERY (348) elaborate courtship rituals cranes build a nest of vegetation on ANIMALS/INSECTS (414) are more closely related to cranes and rails. They range in length ANIMALS/INSECTS (414) low trees..PP At least five crane species are in danger of extincti ANIMALS/INSECTS (414)",
"num": null
},
"TABREF3": {
"content": "<table><tr><td>CARVING .SB The gutter</td><td>adz has a concave blade for form</td></tr><tr><td>equipment such as a hydraulic</td><td>shovel capable of lifting 26 cubic</td></tr><tr><td>Resembling a power</td><td>shovel mounted on a floating hull,</td></tr><tr><td>equipment, valves for nuclear</td><td>generators, oil-refinery turbines,</td></tr><tr><td>8000 BC, flint-edged wooden</td><td/></tr><tr><td>el-penetrating carbide-tipped</td><td/></tr><tr><td>heightens the colors .SB</td><td/></tr><tr><td>traditional ABC method and</td><td/></tr><tr><td>center of rotation .PP A tower</td><td/></tr></table>",
"type_str": "table",
"html": null,
"text": "Some Concordances for TOOLS/MACHINERY. Many of the words in category 348 are themselves polysemous, and consequently, not all of their contexts should be included in the training set for category 348. In particular, lines 7, 8 and 10 in Table 3 illustrate the problem. If one of these spurious senses was frequent and dominated the set of examples, the situation could be disastrous. An attempt is made to weight the concordance data to minimize this effect and to make the sample representative of all tools and machinery, not just the more common ones. If a word such as drill occurs k times in the corpus, all words in the context of drill contribute weight 1/k to frequency sums. Although the training materials still contain a substantial level of noise, we have found that the resulting models work remarkably well, nonetheless. Yarowsky (1991) reports 93% correct disambiguation, averaged over the following words selected from the word-sense disambiguation literature.",
"num": null
},
"TABREF5": {
"content": "<table><tr><td>Judge</td><td>n</td><td>%</td></tr><tr><td>1</td><td>82</td><td>100.0%</td></tr><tr><td>2</td><td>72</td><td>87.8%</td></tr><tr><td>3</td><td>81</td><td>98.7%</td></tr><tr><td>4</td><td>82</td><td>100.0%</td></tr><tr><td>5</td><td>80</td><td>97.6%</td></tr><tr><td>Average</td><td/><td>96.8%</td></tr><tr><td>Average (without Judge 2)</td><td/><td>99.1%</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Strong Agreement",
"num": null
},
"TABREF6": {
"content": "<table><tr><td>senses</td><td>types</td><td>tokens</td><td>avg. entropy</td></tr><tr><td>1</td><td>67</td><td>7569</td><td>0</td></tr><tr><td>2</td><td>16</td><td>2552</td><td>0.58</td></tr><tr><td>3</td><td>7</td><td>1313</td><td>0.56</td></tr><tr><td>4</td><td>5</td><td>1252</td><td>1.2</td></tr><tr><td>5</td><td>1</td><td>1014</td><td>0.43</td></tr><tr><td>6</td><td>1</td><td>594</td><td>1.3</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Distribution of Polysemy",
"num": null
}
}
}
}