{
"paper_id": "P01-1032",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:30:14.754754Z"
},
"title": "Mapping Lexical Entries in a Verbs Database to WordNet Senses",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Green",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland College Park",
"location": {
"postCode": "20742",
"region": "MD",
"country": "USA"
}
},
"email": "rgreen@umiacs.umd.edu"
},
{
"first": "Lisa",
"middle": [],
"last": "Pearl",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland College Park",
"location": {
"postCode": "20742",
"region": "MD",
"country": "USA"
}
},
"email": ""
},
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Dorr",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland College Park",
"location": {
"postCode": "20742",
"region": "MD",
"country": "USA"
}
},
"email": "bonnie@umiacs.umd.edu"
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland College Park",
"location": {
"postCode": "20742",
"region": "MD",
"country": "USA"
}
},
"email": "resnik@umiacs.umd.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes automatic techniques for mapping 9611 entries in a database of English verbs to WordNet senses. The verbs were initially grouped into 491 classes based on syntactic features. Mapping these verbs into WordNet senses provides a resource that supports disambiguation in multilingual applications such as machine translation and cross-language information retrieval. Our techniques make use of (1) a training set of 1791 disambiguated entries, representing 1442 verb entries from 167 classes; (2) word sense probabilities, from frequency counts in a tagged corpus; (3) semantic similarity of WordNet senses for verbs within the same class; (4) probabilistic correlations between WordNet data and attributes of the verb classes. The best results achieved 72% precision and 58% recall, versus a lower bound of 62% precision and 38% recall for assigning the most frequently occurring WordNet sense, and an upper bound of 87% precision and 75% recall for human judgment.",
"pdf_parse": {
"paper_id": "P01-1032",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes automatic techniques for mapping 9611 entries in a database of English verbs to WordNet senses. The verbs were initially grouped into 491 classes based on syntactic features. Mapping these verbs into WordNet senses provides a resource that supports disambiguation in multilingual applications such as machine translation and cross-language information retrieval. Our techniques make use of (1) a training set of 1791 disambiguated entries, representing 1442 verb entries from 167 classes; (2) word sense probabilities, from frequency counts in a tagged corpus; (3) semantic similarity of WordNet senses for verbs within the same class; (4) probabilistic correlations between WordNet data and attributes of the verb classes. The best results achieved 72% precision and 58% recall, versus a lower bound of 62% precision and 38% recall for assigning the most frequently occurring WordNet sense, and an upper bound of 87% precision and 75% recall for human judgment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Our goal is to map entries in a lexical database of 4076 English verbs automatically to WordNet senses (Miller and Fellbaum, 1991) , (Fellbaum, 1998) to support such applications as machine translation and cross-language information retrieval. For example, the verb drop is multiply ambiguous, with many potential translations in Spanish: bajar, caerse, dejar caer, derribar, disminuir, echar, hundir, soltar, etc. The database specifies a set of interpretations for drop, depending on its context in the source-language (SL). Inclusion of WordNet senses in the database enables the selection of an appropriate verb in the target language (TL). Final selection is based on a frequency count of WordNet senses across all classes to which the verb belongs, e.g., disminuir is selected when the WordNet sense corresponds to the meaning of drop in Prices dropped.",
"cite_spans": [
{
"start": 103,
"end": 130,
"text": "(Miller and Fellbaum, 1991)",
"ref_id": "BIBREF12"
},
{
"start": 133,
"end": 149,
"text": "(Fellbaum, 1998)",
"ref_id": "BIBREF7"
},
{
"start": 330,
"end": 413,
"text": "Spanish: bajar, caerse, dejar caer, derribar, disminuir, echar, hundir, soltar, etc",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our task differs from standard word sense disambiguation (WSD) in several ways. First, the words to be disambiguated are entries in a lexical database, not tokens in a text corpus. Second, we take an \"all-words\" rather than a \"lexical-sample\" approach (Kilgarriff and Rosenzweig, 2000) : All words in the lexical database \"text\" are disambiguated, not just a small number for which detailed knowledge is available. Third, we replace the contextual data typically used for WSD with information about verb senses encoded in terms of thematic grids and lexical-semantic representations from . Fourth, whereas a single word sense for each token in a text corpus is often assumed, the absence of sentential context leads to a situation where several WordNet senses may be equally appropriate for a database entry. Indeed, as distinctions between WordNet senses can be fine-grained (Palmer, 2000) , it may be unclear, even in context, which sense is meant.",
"cite_spans": [
{
"start": 252,
"end": 285,
"text": "(Kilgarriff and Rosenzweig, 2000)",
"ref_id": "BIBREF9"
},
{
"start": 876,
"end": 890,
"text": "(Palmer, 2000)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The verb database contains mostly syntactic information about its entries, much of which applies at the class level within the database. WordNet, on the other hand, is a significant source for information about semantic relationships, much of which applies at the \"synset\" level (\"synsets\" are WordNet's groupings of synonymous word senses). Mapping entries in the database to their corresponding WordNet senses greatly extends the semantic potential of the database.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We use an existing classification of 4076 English verbs, based initially on English Verb Classes and Alternations (Levin, 1993) and extended through the splitting of some classes into subclasses and the addition of new classes. The resulting 491 classes (e.g., \"Roll Verbs, Group I\", which includes drift, drop, glide, roll, swing) are referred to here as Levin+ classes. As verbs may be assigned to multiple Levin+ classes, the actual number of entries in the database is larger, 9611.",
"cite_spans": [
{
"start": 114,
"end": 127,
"text": "(Levin, 1993)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Resources",
"sec_num": "2"
},
{
"text": "Following the model of , each Levin+ class is associated with a thematic grid (henceforth abbreviated \u03b8-grid), which summarizes a verb's syntactic behavior by specifying its predicate argument structure. For example, the Levin+ class \"Roll Verbs, Group I\" is associated with the \u03b8-grid [th goal], in which a theme and a goal are used (e.g., The ball dropped to the ground). 1 Each \u03b8-grid specification corresponds to a Grid class. There are 48 Grid classes, with a one-to-many relationship between Grid and Levin+ classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Resources",
"sec_num": "2"
},
{
"text": "WordNet, the lexical resource to which we are mapping entries from the lexical database, groups synonymous word senses into \"synsets\" and structures the synsets into part-of-speech hierarchies. Our mapping operation uses several other data elements pertaining to WordNet: semantic relationships between synsets, frequency data, and syntactic information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Resources",
"sec_num": "2"
},
{
"text": "Seven semantic relationship types exist between synsets, including, for example, antonymy, hyperonymy, and entailment. Synsets are often related to a half dozen or more other synsets; they may be related to multiple synsets through a single relationship or may be related to a single synset through multiple relationship types. 1 There is also a Levin+ class \"Roll Verbs, Group II\", which is associated with the \u03b8-grid [th particle(down)], in which a theme and a particle 'down' are used (e.g., The ball dropped down).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Resources",
"sec_num": "2"
},
{
"text": "Our frequency data for WordNet senses is derived from SEMCOR, a semantic concordance incorporating tagging of the Brown corpus with WordNet senses. 2 Syntactic patterns (\"frames\") are associated with each synset, e.g., Somebody ----s something; Something ----s; Somebody ----s somebody into V-ing something. There are 35 such verb frames in WordNet and a synset may have only one or as many as a half dozen or so frames assigned to it.",
"cite_spans": [
{
"start": 148,
"end": 149,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Resources",
"sec_num": "2"
},
{
"text": "Our mapping of verbs in Levin+ classes to WordNet senses relies in part on the relation between thematic roles in Levin+ and verb frames in WordNet. Both reflect how many and what kinds of arguments a verb may take. However, constructing a direct mapping between \u03b8-grids and WordNet frames is not possible, as the underlying classifications differ in significant ways. The correlations between the two sets of data are better viewed probabilistically. Table 1 illustrates the relation between Levin+ classes and WordNet for the verb drop. In our multilingual applications (e.g., lexical selection in machine translation), the Grid information provides a context-based means of associating a verb with a Levin+ class according to its usage in the SL sentence. The WordNet sense possibilities are thus pared down during SL analysis, but not sufficiently for the final selection of a TL verb. For example, Levin+ class 9.4 has three possible WordNet senses for drop. However, the WordNet sense 8 is not associated with any of the other classes; thus, it is considered to have a higher \"information content\" than the others. The upshot is that the lexical-selection routine prefers dejar caer over other translations such as derribar and bajar.",
"cite_spans": [],
"ref_spans": [
{
"start": 452,
"end": 459,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Lexical Resources",
"sec_num": "2"
},
{
"text": "We began with the lexical database of (Dorr and Jones, 1996) , which contains a significant number of WordNet-tagged verb entries. Some of the assignments were in doubt, since class splitting had occurred subsequent to those assignments, with all old WordNet senses carried over to new subclasses. New classes had also been added since the manual tagging. It was determined that the tagging for only 1791 entries (including 1442 verbs in 167 classes) could be considered stable; for these entries, 2756 assignments of WordNet senses had been made. Data for these entries, taken from both WordNet and the verb lexicon, constitute the training data for this study.",
"cite_spans": [
{
"start": 38,
"end": 60,
"text": "(Dorr and Jones, 1996)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "3"
},
{
"text": "The following probabilities were generated from the training data. The first is the probability that if one synset is related to another through a particular relationship type, then a verb mapped to the first synset will belong to the same Grid class as a verb mapped to the second synset. Computed values generally range between .3 and .35.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "3"
},
{
"text": "P(GX = GY | sX R sY) = count(sX R sY and GX = GY) / count(sX R sY), where sX and sY are synsets related through relationship type R, sX is mapped to by a verb in Grid class GX, and sY is mapped to by a verb in Grid class GY.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "3"
},
{
"text": "P(L+X = L+Y | sX R sY) = count(sX R sY and L+X = L+Y) / count(sX R sY), where the count",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "3"
},
{
"text": "is as above, except that sX is mapped to by a verb in Levin+ class L+X and sY is mapped to by a verb in Levin+ class L+Y. This is the probability that if one synset is related to another through a particular relationship type, then a verb mapped to the first synset will belong to the same Levin+ class as a verb mapped to the second synset. Computed values generally range between .25 and .3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "3"
},
{
"text": "P(cf_g = h | \u03b8_g = f) = count(\u03b8_g = f and cf_g = h) / count(\u03b8_g = f), where \u03b8_g",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "3"
},
{
"text": "is the occurrence of the entire \u03b8-grid f for verb entry g and cf_g is the occurrence of the entire frame sequence h for a WordNet sense to which verb entry g is mapped. This is the probability that a verb in a Levin+ class is mapped to a WordNet verb sense with some specific combination of frames. Values average only .11, but in some cases the probability is 1.0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "3"
},
{
"text": "P(cf_g = h | \u03b8_g = f) = count(\u03b8_g = f and cf_g = h) / count(\u03b8_g = f), where \u03b8_g",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "3"
},
{
"text": "is the occurrence of the single \u03b8-grid component f for verb entry g and cf_g is the occurrence of the single frame h for a WordNet sense to which verb entry g is mapped. This is the probability that a verb in a Levin+ class with a particular \u03b8-grid component (possibly among others) is mapped to a WordNet verb sense assigned a specific frame (possibly among others). Values average .20, but in some cases the probability is 1.0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "3"
},
{
"text": "P(W | g) = count(t_W) / count(t_g), where t_W",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "3"
},
{
"text": "is an occurrence of tag W (for a particular synset) in SEMCOR and t_g is an occurrence of any of a set of tags for verb g in SEMCOR, with W being one of the senses possible for verb g. This probability is the prior probability of specific WordNet verb senses. Values average .11, but in some cases the probability is 1.0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "3"
},
{
"text": "In addition to the foregoing data elements, based on the training set, we also made use of a semantic similarity measure, which reflects the confidence with which a verb, given the total set of verbs assigned to its Levin+ class, is mapped to a specific WordNet sense. This represents an implementation of a class disambiguation algorithm (Resnik, 1999a) , modified to run against the WordNet verb hierarchy. 5 We also made a powerful \"same-synset assumption\": If (1) two verbs are assigned to the same Levin+ class, (2) one of the verbs g_X has been mapped to a specific WordNet sense W_X, and (3) the other verb g_Y has a WordNet sense W_Y synonymous with W_X, then g_Y should be mapped to W_Y. Since WordNet groups synonymous word senses into \"synsets,\" W_X and W_Y would correspond to the same synset. Since Levin+ verbs are mapped to WordNet senses via their corresponding synset identifiers, when the set of conditions enumerated above is met, the two verb entries would be mapped to the same WordNet synset.",
"cite_spans": [
{
"start": 339,
"end": 354,
"text": "(Resnik, 1999a)",
"ref_id": "BIBREF17"
},
{
"start": 409,
"end": 410,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "3"
},
{
"text": "As an example, the two verbs tag and mark have been assigned to the same Levin+ class. In WordNet, each occurs in five synsets, in only one of which they both occur. If tag has a WordNet synset assigned to it for the Levin+ class it shares with mark, and it is the synset that covers senses 5 The assumption underlying this measure is that the appropriate word senses for a group of semantically related words should themselves be semantically related. Given WordNet's hierarchical structure, the semantic similarity between two WordNet senses corresponds to the degree of informativeness of the most specific concept that subsumes them both.",
"cite_spans": [
{
"start": 291,
"end": 292,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "3"
},
{
"text": "of both tag and mark, we can safely assume that that synset is also appropriate for mark, since in that context, the two verb senses are synonymous.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "3"
},
{
"text": "Subsequent to the culling of the training set, several processes were undertaken that resulted in full mapping of entries in the lexical database to WordNet senses. Much, but not all, of this mapping was accomplished manually.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "Each entry whose WordNet senses were assigned manually was considered by at least two coders, one coder who was involved in the entire manual assignment process and the other drawn from a handful of coders working independently on different subsets of the verb lexicon. In the manual tagging, if a WordNet sense was considered appropriate for a lexical entry by any one of the coders, it was assigned. Overall, 13452 WordNet sense assignments were made. Of these, 51% were agreed upon by multiple coders. The kappa coefficient (\u03ba) of intercoder agreement was .47 for a first round of manual tagging and (only) .24 for a second round of more problematic cases. 6 While the full tagging of the lexical database may make the automatic tagging task appear superfluous, the low rate of agreement between coders and the automatic nature of some of the tagging suggest there is still room for adjustment of WordNet sense assignments in the verb database. On the one hand, even the higher of the kappa coefficients mentioned above is significantly lower than the standard suggested for good reliability (\u03ba > .8) or even the level where tentative conclusions may be drawn (.67 < \u03ba < .8) (Carletta, 1996) , (Krippendorff, 1980) . On the other hand, if the automatic assignments agree with human coding at levels comparable to the degree of agreement among humans, it may be used to identify current assignments that need review 6 The kappa statistic measures the degree to which pairwise agreement of coders on a classification task surpasses what would be expected by chance; the standard definition of this coefficient is:",
"cite_spans": [
{
"start": 660,
"end": 661,
"text": "6",
"ref_id": null
},
{
"start": 1178,
"end": 1194,
"text": "(Carletta, 1996)",
"ref_id": "BIBREF3"
},
{
"start": 1197,
"end": 1217,
"text": "(Krippendorff, 1980)",
"ref_id": "BIBREF10"
},
{
"start": 1418,
"end": 1419,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "\u03ba = (P(A) - P(E)) / (1 - P(E))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": ", where P(A) is the actual percentage of agreement and P(E) is the expected percentage of agreement, and to suggest new assignments for consideration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "In addition, consistency checking is done more easily by machine than by hand. For example, the same-synset assumption is more easily enforced automatically than manually. When this assumption is implemented for the 2756 senses in the training set, another 967 sense assignments are generated, only 131 of which were actually assigned manually. Similarly, when this premise is enforced on the entirety of the lexical database of 13452 assignments, another 5059 sense assignments are generated. If the same-synset assumption is valid and if the senses assigned in the database are accurate, then the human tagging has a recall of no more than 73%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "Because a word sense was assigned even if only one coder judged it to apply, human coding has been treated as having a precision of 100%. However, some of the solo judgments are likely to have been in error. To determine what proportion of such judgments were in reality precision failures, a random sample of 50 WordNet senses selected by only one of the two original coders was investigated further by a team of three judges. In this round, judges rated WordNet senses assigned to verb entries as falling into one of three categories: definitely correct, definitely incorrect, and arguable whether correct. As it turned out, if any one of the judges rated a sense definitely correct, another judge independently judged it definitely correct; this accounts for 31 instances. In 13 instances the assignments were judged definitely incorrect by at least two of the judges. No consensus was reached on the remaining 6 instances. Extrapolating from this sample to the full set of solo judgments in the database leads to an estimate that approximately 1725 (26% of 6636 solo judgments) of those senses are incorrect. This suggests that the precision of the human coding is approximately 87%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "The upper bound for this task, as set by human performance, is thus 73% recall and 87% precision. The lower bound, based on assigning the WordNet sense with the greatest prior probability, is 38% recall and 62% precision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "Recent work (Van Halteren et al., 1998) has demonstrated improvement in part-of-speech tagging when the outputs of multiple taggers are combined. When the errors of multiple classifiers are not significantly correlated, the result of combining votes from a set of individual classifiers often outperforms the best result from any single classifier. Using a voting strategy seems especially appropriate here: The measures outlined in Section 3 average only 41% recall on the training set, but the senses picked out by their highest values vary significantly.",
"cite_spans": [
{
"start": 12,
"end": 39,
"text": "(Van Halteren et al., 1998)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping Strategies",
"sec_num": "5"
},
{
"text": "The investigations undertaken used both simple and aggregate voters, combined using various voting strategies. The simple voters were the 7 measures previously introduced. 7 In addition, three aggregate voters were generated: (1) the product of the simple measures (smoothed so that zero values wouldn't offset all other measures); (2) the weighted sum of the simple measures, with weights representing the percentage of the training set assignments correctly identified by the highest score of the simple probabilities; and (3) the maximum score of the simple measures.",
"cite_spans": [
{
"start": 172,
"end": 173,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping Strategies",
"sec_num": "5"
},
{
"text": "Using these data, two different types of voting schemes were investigated. The schemes differ most significantly on the circumstances under which a voter casts its vote for a WordNet sense, the size of the vote cast by each voter, and the circumstances under which a WordNet sense was selected. We will refer to these two schemes as Majority Voting Scheme and Threshold Voting Scheme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping Strategies",
"sec_num": "5"
},
{
"text": "Although we do not know in advance how many WordNet senses should be assigned to an entry in the lexical database, we assume that, in general, there is at least one. In line with this intuition, one strategy we investigated was to have both simple and aggregate measures cast a vote for whichever sense(s) of a verb in a Levin+ class received the highest (non-zero) value for that measure. Ten variations are given here: Table 2 gives recall and precision measures for all variations of this voting scheme, both with and without enforcement of the same-synset assumption. If we use the harmonic mean of recall and precision as a criterion for comparing results, the best voting scheme is MajAggr, with 58% recall and 72% precision without enforcement of the same-synset assumption. Note that if the samesynset assumption is correct, the drop in precision that accompanies its enforcement mostly reflects inconsistencies in human judgments in the training set; the true precision value for MajAggr after enforcing the same-synset assumption is probably close to 67%.",
"cite_spans": [],
"ref_spans": [
{
"start": 421,
"end": 428,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Majority Voting Scheme",
"sec_num": "5.1"
},
{
"text": "Of the simple voters, only PriorProb and SemSim are individually strong enough to warrant discussion. Although PriorProb was used to establish our lower bound, SemSim proves to be the stronger voter, bested only by MajAggr (the majority vote of SimpleProd and SimpleWtdSum) in voting that enforces the same-synset assumption. Both PriorProb and SemSim provide better results than the majority vote of all 7 simple voters (MajSimpleSgl) and the majority vote of all 21 pairs of simple voters (MajSimplePair). Moreover, the inclusion of MajSimpleSgl and MajSimplePair in a majority vote with MajAggr (in MajSgl+Aggr and MajPair+Aggr, respectively) turns in poorer results than MajAggr alone. [Table 2: Recall (R) and Precision (P) for Majority Voting Scheme, without (W/O SS) and with (W/ SS) enforcement of the same-synset assumption: PriorProb 38%/62%, 45%/46%; SemSim 56%/71%, 60%/55%; SimpleProd 51%/74%, 57%/55%; SimpleWtdSum 53%/77%, 58%/56%; MajSimpleSgl 23%/71%, 30%/48%; MajSimplePair 38%/60%, 45%/43%; MajAggr 58%/72%, 63%/53%; Maj3Best 52%/78%, 57%/57%; MajSgl+Aggr 44%/74%, 50%/54%; MajPair+Aggr 49%/77%, 55%/57%.] [Table 3: Recall (R) and Precision (P) for Threshold Voting Scheme.]",
"cite_spans": [],
"ref_spans": [
{
"start": 1127,
"end": 1134,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Majority Voting Scheme",
"sec_num": "5.1"
},
{
"text": "The poor performance of MajSimpleSgl and MajSimplePair does not point, however, to a general failure of the principle that multiple voters are better than individual voters. SimpleProd, the product of all simple measures, and SimpleWtdSum, the weighted sum of all simple measures, provide reasonably strong results, and a majority vote of both of them (MajAggr) gives the best results of all. When they are joined by SemSim in Maj3Best, they continue to provide good results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Majority Voting Scheme",
"sec_num": "5.1"
},
{
"text": "The bottom line is that SemSim makes the most significant contribution of any single simple voter, while the product and weighted sums of all simple voters, in concert with each other, provide the best results of all with this voting scheme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Majority Voting Scheme",
"sec_num": "5.1"
},
{
"text": "For each simple and aggregate measure, the second voting strategy first identified the threshold value that maximizes the product of recall and precision on the training set when that threshold is used to select WordNet senses. During the voting, if a WordNet sense has a higher score for a measure than its threshold, the measure votes for the sense; otherwise, it votes against it. The weight of the measure's vote is the precision-recall product at the threshold. This voting strategy has the advantage of taking into account each individual attribute's strength of prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Threshold Voting Scheme",
"sec_num": "5.2"
},
{
"text": "Five variations on this basic voting scheme were investigated. In each, senses were selected if their vote total exceeded a variation-specific threshold. Table 3 summarizes recall and precision for these variations at their optimal vote thresholds.",
"cite_spans": [],
"ref_spans": [
{
"start": 154,
"end": 161,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Threshold Voting Scheme",
"sec_num": "5.2"
},
{
"text": "In the AutoMap+ variation, Grid and Levin+ probabilities abstain from voting when their values are zero (a common occurrence, because of data sparsity in the training set); the same-synset assumption is automatically implemented. AutoMap- differs in that it disregards the Grid and Levin+ probabilities completely. The Triples variation places the simple and composite measures into three groups: the three with the highest weights, the three with the lowest weights, and the middle or remaining three. Voting first occurs within the group, and the group's vote is brought forward with a weight equaling the sum of the group members' weights. This variation also adds to the vote total if the sense was assigned in the training data. The Combo variation is like Triples, but rather than using the weights and thresholds calculated for the single measures from the training data, this variation calculates weights and thresholds for combinations of two, three, four, five, six, and seven measures. Finally, the Combo&Auto variation adds the same-synset assumption to the previous variation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Threshold Voting Scheme",
"sec_num": "5.2"
},
{
"text": "Although not evident in Table 3 because of rounding, AutoMap- has slightly higher values for both recall and precision than does AutoMap+, giving it the highest recall-precision product of the threshold voting schemes. This suggests that the Grid and Levin+ probabilities could profitably be dropped from further use.",
"cite_spans": [],
"ref_spans": [
{
"start": 24,
"end": 31,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Threshold Voting Scheme",
"sec_num": "5.2"
},
{
"text": "Of the more exotic voting variations, Triples voting achieved results nearly as good as the AutoMap voting schemes, but the Combo schemes fell short, indicating that weights and thresholds are better based on single measures than combinations of measures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Threshold Voting Scheme",
"sec_num": "5.2"
},
{
"text": "The voting schemes still leave room for improvement, as the best results (58% recall and 72% precision, or, optimistically, 63% recall and 67% precision) fall shy of the upper bound of 73% recall and 87% precision for human coding. 9 At the same time, these results are far better than the lower bound of 38% recall and 62% precision for the most frequent WordNet sense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "As has been true in many other evaluation studies, the best results come from combining classifiers (MajAggr): not only does this variation use a majority voting scheme, but more importantly, the two voters take into account all of the simple voters, in different ways. The next-best results come from Maj3Best, in which the three best single measures vote. We should note, however, that the single best measure, the semantic similarity measure from SemSim, lags only slightly behind the two best voting schemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "This research demonstrates that credible word sense disambiguation results can be achieved without recourse to contextual data. Lexical resources enriched with, for example, syntactic information, in which some portion of the resource is hand-mapped to another lexical resource may be rich enough to support such a task. The degree of success achieved here also owes much to the confluence of WordNet's hierarchical structure and SEMCOR tagging, as used in the computation of the semantic similarity measure, on the one hand, and the classified structure of the verb lexicon, which provided the underlying groupings used in that measure, on the other hand. Even where one measure yields good results, several data sources needed to be combined to enable its success.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "For further information see the WordNet manuals, section 7, SEMCOR at http://www.cogsci.princeton.edu.3 This lexical-selection approach is an adaptation of the notion of reduction in entropy, measured by information gain(Mitchell, 1997). Using information content to quantify the \"value\" of a node in the WordNet hierarchy has also been used for measuring semantic similarity in a taxonomy(Resnik, 1999b). More recently, context-based models of disambiguation have been shown to represent significant improvements over the baseline(Bangalore and Rambow, 2000),(Ratnaparkhi, 2000).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The full set of Spanish translations is selected from WordNet associations developed in the EuroWordNet effort.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "is the expected percentage of agreement, averaged over all pairs of assignments. Several adjustments in the computation of the kappa coefficient were made necessary by the possible assignment of multiple senses for each verb in a Levin+ class, since without prior knowledge of how many senses are to be assigned, there is no basis on which to compute 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Only 6 measures (including the semantic similarity measure) were set out in the earlier section; the measures total 7 because Indv frame probability is used in two different ways.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A pair cast a vote for a sense if, among all the senses of a verb, a specific sense had the highest value for both measures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The criteria for the majority voting schemes preclude their assigning more than 2 senses to any single database entry. Controlled relaxation of these criteria may achieve somewhat better results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors are supported, in part, by PFF/PECASE Award IRI-9629108, DOD Contract MDA904-96-C-1250, DARPA/ITO Contracts N66001-97-C-8540 and N66001-00-28910, and a National Science Foundation Graduate Research Fellowship.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Corpus-Based Lexical Choice in Natural Language Generation",
"authors": [],
"year": null,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corpus-Based Lexical Choice in Natural Language Generation. In Proceedings of the ACL, Hong Kong.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Relationships among Knowledge Structures: Vocabulary Integration within a Subject Domain",
"authors": [
{
"first": "Olivier",
"middle": [],
"last": "Bodenreider",
"suffix": ""
},
{
"first": "Carol",
"middle": [
"A"
],
"last": "Bean",
"suffix": ""
}
],
"year": 2001,
"venue": "Relationships in the Organization of Knowledge",
"volume": "",
"issue": "",
"pages": "81--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olivier Bodenreider and Carol A. Bean. 2001. Re- lationships among Knowledge Structures: Vocabu- lary Integration within a Subject Domain. In C.A. Bean and R. Green, editors, Relationships in the Organization of Knowledge, pages 81-98. Kluwer, Dordrecht.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Assessing Agreement on Classification Tasks: The Kappa Statistic",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Carletta",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Lingustics",
"volume": "22",
"issue": "2",
"pages": "249--254",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean Carletta. 1996. Assessing Agreement on Classi- fication Tasks: The Kappa Statistic. Computational Lingustics, 22(2):249-254, June.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Robust Lexical Acquisition: Word Sense Disambiguation to Increase Recall and Precision",
"authors": [
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Dorr",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bonnie J. Dorr and Douglas Jones. 1996. Robust Lex- ical Acquisition: Word Sense Disambiguation to In- crease Recall and Precision. Technical report, Uni- versity of Maryland, College Park, MD.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Deriving Verbal and Compositional Lexical Aspect for NLP Applications",
"authors": [
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Dorr",
"suffix": ""
},
{
"first": "Mari",
"middle": [
"Broman"
],
"last": "Olsen",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics (ACL-97)",
"volume": "",
"issue": "",
"pages": "151--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bonnie J. Dorr and Mari Broman Olsen. 1997. De- riving Verbal and Compositional Lexical Aspect for NLP Applications. In Proceedings of the 35th Annual Meeting of the Association for Com- putational Linguistics (ACL-97), pages 151-158, Madrid, Spain, July 7-12.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Spanish EuroWordNet and LCS-Based Interlingual MT",
"authors": [
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Dorr",
"suffix": ""
},
{
"first": "M",
"middle": [
"Antonia"
],
"last": "Mart\u00ed",
"suffix": ""
},
{
"first": "Irene",
"middle": [],
"last": "Castell\u00f3n",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Workshop on Interlinguas in",
"volume": "",
"issue": "",
"pages": "19--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bonnie J. Dorr, M. Antonia Mart\u00ed, and Irene Castell\u00f3n. 1997. Spanish EuroWordNet and LCS-Based In- terlingual MT. In Proceedings of the Workshop on Interlinguas in MT, MT Summit, New Mexico State University Technical Report MCCS-97-314, pages 19-32, San Diego, CA, October.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "WordNet: An Electronic Lexical Database",
"authors": [
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Comparing Sets of Semantic Relations in Ontologies",
"authors": [
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": null,
"venue": "The Semantics of Relationships: An Interdisciplinary Perspective. Book manuscript submitted for review",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard Hovy. In press. Comparing Sets of Semantic Relations in Ontologies. In R. Green, C.A. Bean, and S. Myaeng, editors, The Semantics of Rela- tionships: An Interdisciplinary Perspective. Book manuscript submitted for review.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Framework and Results for English SENSEVAL",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Rosenzweig",
"suffix": ""
}
],
"year": 2000,
"venue": "Computers and the Humanities",
"volume": "34",
"issue": "",
"pages": "15--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Kilgarriff and J. Rosenzweig. 2000. Framework and Results for English SENSEVAL. Computers and the Humanities, 34:15-48.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Content Analysis: An Introduction to Its Methodology",
"authors": [
{
"first": "Klaus",
"middle": [],
"last": "Krippendorff",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klaus Krippendorff. 1980. Content Analysis: An In- troduction to Its Methodology. Sage, Beverly Hills.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "English Verb Classes and Alternations: A Preliminary Investigation",
"authors": [
{
"first": "Beth",
"middle": [],
"last": "Levin",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beth Levin. 1993. English Verb Classes and Alter- nations: A Preliminary Investigation. University of Chicago Press, Chicago, IL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Semantic Networks of English",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1991,
"venue": "Lexical and Conceptual Semantics",
"volume": "",
"issue": "",
"pages": "197--229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller and Christiane Fellbaum. 1991. Se- mantic Networks of English. In Beth Levin and Steven Pinker, editors, Lexical and Conceptual Se- mantics, pages 197-229. Elsevier Science Publish- ers, B.V., Amsterdam, The Netherlands.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Machine Learning",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Mitchell. 1997. Machine Learning. McGraw Hill.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Using WordNet to Posit Hierarchical Structure in Levin's Verb Classes",
"authors": [
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Mari Broman Olsen",
"suffix": ""
},
{
"first": "David",
"middle": [
"J"
],
"last": "Dorr",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Workshop on Interlinguas in MT, MT Summit",
"volume": "",
"issue": "",
"pages": "99--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mari Broman Olsen, Bonnie J. Dorr, and David J. Clark. 1997. Using WordNet to Posit Hierarchical Structure in Levin's Verb Classes. In Proceedings of the Workshop on Interlinguas in MT, MT Sum- mit, New Mexico State University Technical Report MCCS-97-314, pages 99-110, San Diego, CA, Oc- tober.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Consistent Criteria for Sense Distinctions",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2000,
"venue": "Computers and the Humanities",
"volume": "34",
"issue": "",
"pages": "217--222",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha Palmer. 2000. Consistent Criteria for Sense Distinctions. Computers and the Humanities, 34:217-222.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Trainable methods for surface natural language generation",
"authors": [
{
"first": "Adwait",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the ANLP-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adwait Ratnaparkhi. 2000. Trainable methods for sur- face natural language generation. In Proceedings of the ANLP-NAACL, Seattle, WA.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Disambiguating noun groupings with respect to wordnet senses",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1999,
"venue": "Natural Language Processing Using Very Large Corpora",
"volume": "",
"issue": "",
"pages": "77--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Resnik. 1999a. Disambiguating noun group- ings with respect to wordnet senses. In S. Arm- strong, K. Church, P. Isabelle, E. Tzoukermann S. Manzi, and D. Yarowsky, editors, Natural Lan- guage Processing Using Very Large Corpora, pages 77-98. Kluwer Academic, Dordrecht.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1999,
"venue": "In Journal of Artificial Intelligence Research",
"volume": "",
"issue": "",
"pages": "95--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Resnik. 1999b. Semantic similarity in a taxon- omy: An information-based measure and its appli- cation to problems of ambiguity in natural language. In Journal of Artificial Intelligence Research, num- ber 11, pages 95-130.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving data-driven wordclass tagging by system combination",
"authors": [
{
"first": "Jakub",
"middle": [],
"last": "Hans Van Halteren",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Zavrel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "491--497",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hans Van Halteren, Jakub Zavrel, and Walter Daele- mans. 1998. Improving data-driven wordclass tag- ging by system combination. In Proceedings of the 36th Annual Meeting of the Association for Compu- tational Linguistics and the 17th International Con- ference on Computational Linguistics, pages 491- 497.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Product of all simple measures SimpleWtdSum: Weighted sum of all simple measures MajSimpleSgl: Majority vote of all (7) simple voters MajSimplePair: Majority vote of all (21) pairs of simple voters 8 MajAggr: Majority vote of SimpleProd and SimpleWtdSum Maj3Best: Majority vote of SemSim, Sim-pleProd, and SimpleWtdSum MajSgl+Aggr: Majority vote of MajSim-pleSgl and MajAggr MajPair+Aggr: Majority vote of MajSim-plePair and MajAggr",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF1": {
"text": "",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF3": {
"text": "",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">: Recall (R) and Precision (P) for Majority</td></tr><tr><td colspan=\"3\">Voting Scheme, Before (W/O) and After (W/) En-</td></tr><tr><td colspan=\"3\">forcement of the Same-Synset (SS) Assumption</td></tr><tr><td>Variation</td><td>R</td><td>P</td></tr><tr><td>AutoMap+</td><td colspan=\"2\">61% 54%</td></tr><tr><td>AutoMap-</td><td colspan=\"2\">61% 54%</td></tr><tr><td>Triples</td><td colspan=\"2\">63% 52%</td></tr><tr><td>Combo</td><td colspan=\"2\">53% 44%</td></tr><tr><td colspan=\"3\">Combo&amp;Auto 59% 45%</td></tr></table>"
}
}
}
}