{
"paper_id": "S01-1035",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:35:26.737658Z"
},
"title": "Anaphora Resolution with Word Sense Disambiguation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Udita Preiss",
"suffix": "",
"affiliation": {
"laboratory": "Computer Laboratory J J Thomson A venue Cambridge CB3 OFD United Kingdom",
"institution": "",
"location": {}
},
"email": "judita.preiss@cl.cam.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe a simple word sense disambiguation system equipped with the Kennedy and Boguraev (1996) anaphora resolution algorithm, evaluated on the SENSEVAL-2 English all-words task. The system relies on the structure of the WordNet hierarchy to pick optimal senses for nouns in the text. Since anaphoric references are known to indicate the topic of the text (Boguraev et al., 1998), they may aid disambiguation.",
"pdf_parse": {
"paper_id": "S01-1035",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe a simple word sense disambiguation system equipped with the Kennedy and Boguraev (1996) anaphora resolution algorithm, evaluated on the SENSEVAL-2 English all-words task. The system relies on the structure of the WordNet hierarchy to pick optimal senses for nouns in the text. Since anaphoric references are known to indicate the topic of the text (Boguraev et al., 1998), they may aid disambiguation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We investigate the effect of repeating pronominalized nouns in the input to our Word Sense Disambiguation (WSD) algorithm (Preiss, 2001) . The WSD algorithm is based on the WordNet 1.7 hierarchy (Miller et al., 1990) , and assigns (WordNet) senses to all nouns. The enriched version we evaluate in this paper makes use of our re-implementation of an anaphora resolution algorithm of Kennedy and Boguraev (1996) .",
"cite_spans": [
{
"start": 122,
"end": 136,
"text": "(Preiss, 2001)",
"ref_id": "BIBREF9"
},
{
"start": 195,
"end": 216,
"text": "(Miller et al., 1990)",
"ref_id": "BIBREF8"
},
{
"start": 383,
"end": 410,
"text": "Kennedy and Boguraev (1996)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "If, as claimed by Boguraev et al. (1998) , the topic of the discourse is thus repeated, then the main topic words will be more likely to be disambiguated correctly. The subsequent WSD algorithm makes use of this extra topic information, and this will in turn affect the disambiguation of all other nouns in the discourse.",
"cite_spans": [
{
"start": 18,
"end": 40,
"text": "Boguraev et al. (1998)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The system is evaluated on the English allwords task in SENSEVAL-2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2.1 Overview of the Algorithm Our WSD algorithm has three components, as depicted in Figure 1 . Taking as input the test data parsed using the Briscoe and Carroll (1993) parser (which uses the grammar described in Carroll and Briscoe (1996) ), the first step is to identify and discard the pleonastic pronouns. Our pleonastic component is described in section 2.2.",
"cite_spans": [
{
"start": 143,
"end": 169,
"text": "Briscoe and Carroll (1993)",
"ref_id": "BIBREF3"
},
{
"start": 214,
"end": 240,
"text": "Carroll and Briscoe (1996)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 85,
"end": 93,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Algorithms",
"sec_num": "2"
},
{
"text": "In the next phase (section 2.3), third person pronouns are resolved to a noun antecedent and replaced in the text by the noun antecedent. The purpose of this is to increase the number of topic words in the text, to aid the disambiguation of other nouns. This approach assumes firstly that pronouns refer mainly to topic words, and secondly that repeating topic words in the text helps overall disambiguation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithms",
"sec_num": "2"
},
{
"text": "The final phase of the algorithm is the WSD component, described in section 2.4. Using simulated annealing, it attempts to find a sense assignment for every noun that minimizes an overall 'distance' function using the WordNet hierarchy. In addition, for the repeated nouns added in the previous phase, the senses are tied together. This means that if the sense of one word in a tie is changed during simulated annealing, the sense of all words in the tie are simultaneously changed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithms",
"sec_num": "2"
},
{
"text": "The advantage of this approach can be shown on the following discourse: The parrot, like the chicken, is kept by people as a domesticated bird. It can speak. Suppose firstly that there is no anaphora resolution phase. The words parrot, chicken, person, bird are given to the word sense disambiguation algorithm, and the system chooses senses which are related to people (parrot in the sense of mimicking people, chicken a wimp and so on). This is clearly incorrect. Now suppose we resolve the pronoun it to parrot, and repeat the word parrot in the text. Now the words parrot, chicken, person, bird, parrot are passed to the WSD system (where the two parrots are sense-tied together), and the system now chooses the correct bird-related senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithms",
"sec_num": "2"
},
{
"text": "It can be a pleonastic pronoun (pronoun with no antecedent), for example in the sentence: It is raining. We label the pronoun it as pleonastic if it is a subject of a raising verb (these were extracted from the ANLT lexicon (Boguraev and Briscoe, 1987) ) or if it was used in conjunctions with the verb to be and one of a particular set of adjectives (for example It is possible to go to town.).",
"cite_spans": [
{
"start": 224,
"end": 252,
"text": "(Boguraev and Briscoe, 1987)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pleonastic Pronouns Component",
"sec_num": "2.2"
},
{
"text": "The component was evaluated on a manually anaphorically resolved portion of the BNC (the initial 2000 sentences of w01). It has a precision (proportion of pronouns deemed pleonastic which really are pleonastic) of 94% and recall (proportion of pleonastic pronouns recognized as pleonastic) of 61%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pleonastic Pronouns Component",
"sec_num": "2.2"
},
{
"text": "The pronominal anaphora resolution is carried out by our re-implementation of the Kennedy and Boguraev (Kennedy and Boguraev, 1996) anaphora resolution algorithm. This algorithm is based on that of Lappin and Leass (Lappin and Leass, 1994 ), but does not require a full parse. It treats the cases of third person pronouns and lexical anaphors. 1 Its cited accuracy is 75% on general corpora (Kennedy and Boguraev, 1996) , but note that their published algorithm uses the LINGSOFT morphosyntactic tagger.",
"cite_spans": [
{
"start": 103,
"end": 131,
"text": "(Kennedy and Boguraev, 1996)",
"ref_id": "BIBREF6"
},
{
"start": 215,
"end": 238,
"text": "(Lappin and Leass, 1994",
"ref_id": "BIBREF7"
},
{
"start": 391,
"end": 419,
"text": "(Kennedy and Boguraev, 1996)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Anaphora Resolution Component",
"sec_num": "2.3"
},
{
"text": "The algorithm creates coreference classes which join together words which are believed by the algorithm to be referring to the same object. These classes are assigned a salience value based on the presence of the features in Table 1 . The salience value of a class is the sum of the feature weights of its members, scaled down by the number of sentences ago that the feature last occurred. The correct antecedent is chosen to be the closest word from the coreference\u2022 class with the highest salience.",
"cite_spans": [],
"ref_spans": [
{
"start": 225,
"end": 233,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Anaphora Resolution Component",
"sec_num": "2.3"
},
{
"text": "We define a notion of distance between any two WordNet noun senses which is based on the Resnik (1999) , it is naive to assume that the distance between any two nodes in the hierarchy is equal. We therefore assign a weight w to every noun sense x:",
"cite_spans": [
{
"start": 89,
"end": 102,
"text": "Resnik (1999)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WSD Component",
"sec_num": "2.4"
},
{
"text": "weight(x) =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WSD Component",
"sec_num": "2.4"
},
{
"text": "number of children below x in hierarchy total nodes in hierarchy This is used to define the distance between two distinct noun senses x andy:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WSD Component",
"sec_num": "2.4"
},
{
"text": "dist(x, y) = min weight(z)-~weight(x)-~weight(y) zEh(x)nh(y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WSD Component",
"sec_num": "2.4"
},
{
"text": "where h ( s) denotes the hypernym chain of noun sense s. 3 If the hypernym chains of x and y do not intersect, the distance is set to the maximum value of 1. In Preiss (2001), we investigated scaling the distance function such that for noun senses x and y at positions in the corpus n and m respectively:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WSD Component",
"sec_num": "2.4"
},
{
"text": "d . *( ) _ dist(x, y) 1St x, y -I I n-m 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WSD Component",
"sec_num": "2.4"
},
{
"text": "Note that we do not explicitly use a window of surrounding nouns, but the In -ml denominator means that contributions from far away nouns are usually negligible. We showed that it was not possible to guess the optimal value of a Pleonastic Anaphora WSD component resolution Figure 1 : Integration of components in advance for any set of texts covered in SEM-COR. However, averaged over all words there is a slight peak around a = 1, so this is the value we take. The distance between two adjacent nodes in the hierarchy may now not be equal. To illustrate this, consider the following example adapted from a paper of Resnik (1999) . In WordNet 1.7 (prerelease), VALVE is the parent node of SAFETY VALVE, and MACHINE is the parent of INFORMATION PROCESSING SYSTEM. However, the intuitive distance between the first pair of nodes seems to be less than the distance between the second pair. Using our distance function outlined above, the distance between SAFETY VALVE and VALVE is 0.000121, while the distance between INFORMATION PROCESS-ING SYSTEM and MACHINE is 0.00229. This is depicted in Figure 2 .",
"cite_spans": [
{
"start": 617,
"end": 630,
"text": "Resnik (1999)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 274,
"end": 282,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1091,
"end": 1099,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "WSD Component",
"sec_num": "2.4"
},
{
"text": "We want to assign precisely one sense to each noun in the text; we call this a path. We find the 'optimal' path by simulated annealing (Bertsimas and Tsitsiklis, 1992) . Simulated annealing is a probabilistic method for finding the global optimum of a function which may have a number of local optima. We define the function to be minimized, the energy function, to be the sum of all the pairwise scaled distances.",
"cite_spans": [
{
"start": 135,
"end": 167,
"text": "(Bertsimas and Tsitsiklis, 1992)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WSD Component",
"sec_num": "2.4"
},
{
"text": "Our version of simulated annealing starts with a randomly chosen path which it attempts to improve. It performs a number of iterations in which it randomly chooses a word and then chooses a new sense for this word. 4 If this change is an improvement in terms of the energy function, it is kept. Otherwise, it may or may not be accepted depending on the current value of the temperature. Over time the temperature decreases, making it less likely to keep changes that increase the energy. The algorithm terminates when no changes were made in the last 1000 iterations.",
"cite_spans": [
{
"start": 215,
"end": 216,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WSD Component",
"sec_num": "2.4"
},
{
"text": "When simulated annealing terminates, it outputs what it deems the optimal sense assignment for all the nouns in the text. For a more detailed description of the WSD algorithm, please refer to Preiss (2001) .",
"cite_spans": [
{
"start": 192,
"end": 205,
"text": "Preiss (2001)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WSD Component",
"sec_num": "2.4"
},
{
"text": "This algorithm was implemented inC and executed on a Pentium III 500MHz. Each text took 1 hour to initialize, and 2 hours to perform 20 runs of simulated annealing. A majority vote then decided the sense assignment. Ties 1 363 1698 38 2 575 2098 46 3 340 1495 60 Table 2 : Test data for the English all words task",
"cite_spans": [],
"ref_spans": [
{
"start": 216,
"end": 284,
"text": "Ties 1 363 1698 38 2 575 2098 46 3 340 1495 60 Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "WSD Component",
"sec_num": "2.4"
},
{
"text": "The WSD component enhanced with the anaphora resolution algorithm was submitted for the English all-words task in SENSEVAL-2. The test data for this task consisted of three articles, and information gathered from each article is displayed in Table 2 . The words column shows the number of words marked as nouns by the part of speech tagger in the parser. The senses column contains the total number of senses for all of these words. The ties column shows the number ofties inthe text, where each tie contains a noun and some pronouns that refer to it. The system achieved 44% precision and 20% recall fine-grained, and 45.2% precision and 20.5% recall coarse-grained. ",
"cite_spans": [],
"ref_spans": [
{
"start": 242,
"end": 249,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "We would like to investigate the performance of the WSD system with and without anaphora resolution, with a view to also extending links in text to other entities. Although the precision of the pleonastic component is currently quite high, we intend to boost recall possibly by including some of the rules devised by Lappin and Leass (1994) .",
"cite_spans": [
{
"start": 317,
"end": 340,
"text": "Lappin and Leass (1994)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "4"
},
{
"text": "\u2022 This work was supported by the EPSRC while the author was at the University of Sheffield.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In the SENSEVAL-2 task we identify nouns by using an enhanced version of the GATE tagger and lemmatizer(Cunningham et al., 1995)._ 3 The hypernym chain of s consists of the word s, the parent' word of s, the grandparent of s, etc, all the way to a root word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We slightly skewed the probability distribution of the senses towards the more frequent sense. The probability of the nth sense is proportional to ~.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The system assigns senses to all nouns but to no other part of speech. It also has no mechanism for marking a word undecidable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "I would like to thank John Carroll for parsing the SENSEVAL-2 corpus for me.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Simulated annealing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bertsimas",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tsitsiklis",
"suffix": ""
}
],
"year": 1992,
"venue": "Probability and Algorithms",
"volume": "",
"issue": "",
"pages": "17--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Bertsimas and J. Tsitsiklis. 1992. Simulated annealing. In Probability and Algorithms, pages 17-29. National Academy Press, Wash- ington, D. C ..",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Large lexicons for natural language processing: utilising the grammar coding system of the longman dictionary of contemporary english",
"authors": [
{
"first": "B",
"middle": [
"K"
],
"last": "Boguraev",
"suffix": ""
},
{
"first": "E",
"middle": [
"J"
],
"last": "Briscoe",
"suffix": ""
}
],
"year": 1987,
"venue": "Computational Linguistics",
"volume": "13",
"issue": "4",
"pages": "219--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B.K. Boguraev and E.J. Briscoe. 1987. Large lexicons for natural language processing: util- ising the grammar coding system of the longman dictionary of contemporary english. Computational Linguistics, 13(4):219-240.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Dynamic presentation of document content for rapid on-line skimming",
"authors": [
{
"first": "B",
"middle": [],
"last": "Boguraev",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Kennedy",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bellamy",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Brawer",
"suffix": ""
},
{
"first": "Y",
"middle": [
"Y"
],
"last": "Wong",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Swartz",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of AAAI Spring Symposium on Intelligent Text Summarisation",
"volume": "",
"issue": "",
"pages": "118--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Boguraev, C. Kennedy, R. Bellamy, S. Brawer, Y. Y. Wong, and J. Swartz. 1998. Dynamic presentation of document content for rapid on-line skimming. In Proceedings of AAAI Spring Symposium on Intelligent Text Summarisation, pages 118-128.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Generalised probabilistic LR parsing of natural language (corpora) with unification-based grammars",
"authors": [
{
"first": "E",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carroll",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "1",
"pages": "25--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Briscoe and J. Carroll. 1993. Generalised probabilistic LR parsing of natural language (corpora) with unification-based grammars. Computational Linguistics, 19(1):25-60.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Apportioning development effort in a probabilistic LR parsing system through evaluation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the ACL SIGDAT Conference on Empirical M ethodsin Natural Language Processing",
"volume": "",
"issue": "",
"pages": "92--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Carroll and T. Briscoe. 1996. Apportion- ing development effort in a probabilistic LR parsing system through evaluation. In Pro- ceedings of the ACL SIGDAT Conference on Empirical M ethodsin Natural Language Pro- cessing, pages 92-100.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A general architecture for text engineering (GATE) -a new approach to language R&D",
"authors": [
{
"first": "H",
"middle": [],
"last": "Cunningham",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wilks",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Cunningham, R. Gaizauskas, and Y. Wilks. 1995. A general architecture for text engi- neering (GATE) -a new approach to lan- guage R&D. Technical Report CS-95-21, University of Sheffield.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Anaphora for everyone: Pronominal anaphora resolution without a parser",
"authors": [
{
"first": "C",
"middle": [],
"last": "Kennedy",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Boguraev",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 16th International Conference on Computational Linguistics (COLING'96}",
"volume": "",
"issue": "",
"pages": "113--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Kennedy and B. Boguraev. 1996. Anaphora for everyone: Pronominal anaphora resolu- tion without a parser. In Proceedings of the 16th International Conference on Computa- tional Linguistics (COLING'96}, pages 113- 118.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An algorithm for pronominal anaphora resolution",
"authors": [
{
"first": "S",
"middle": [],
"last": "Lappin",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Leass",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "4",
"pages": "535--561",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Lappin and H. Leass. 1994. An algorithm for pronominal anaphora resolution. Compu- tational Linguistics, 20(4):535-561.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Introduction to Word-Net: An on-line lexical database",
"authors": [
{
"first": "G",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Beckwith",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Felbaum",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of Lexicography",
"volume": "3",
"issue": "4",
"pages": "235--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Miller, R. Beckwith, C. Felbaum, D. Gross, and K. Miller. 1990. Introduction to Word- Net: An on-line lexical database. Journal of Lexicography, 3(4):235-244.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Local versus global context for WSD of nouns",
"authors": [
{
"first": "J",
"middle": [],
"last": "Preiss",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of CL UK 4",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Preiss. 2001. Local versus global context for WSD of nouns. In Proceedings of CL UK 4, pages 1-8.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Semantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language",
"authors": [
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1999,
"venue": "Journal of Artificial Intelligence Research",
"volume": "11",
"issue": "",
"pages": "95--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Resnik. 1999. Semantic similarity in a tax- onomy: An information-based measure and its application to problems of ambiguity in natural language. Journal of Artificial Intel- ligence Research, 11:95-130.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Distance between adjacent nodes",
"num": null
},
"TABREF0": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>Condition</td><td>Weight</td></tr><tr><td>Current sentence</td><td>100</td></tr><tr><td>Current context</td><td>50</td></tr><tr><td>Subject</td><td>80</td></tr><tr><td>Existential construct</td><td>70</td></tr><tr><td>Possessive</td><td>65</td></tr><tr><td>Direct object</td><td>50</td></tr><tr><td>Indirect object</td><td>40</td></tr><tr><td>Oblique</td><td>30</td></tr><tr><td>Non embedded</td><td>80</td></tr><tr><td>Non adjunct</td><td>50</td></tr><tr><td colspan=\"2\">Table 1: Salience values</td></tr><tr><td>WordNet hierarchy.</td><td/></tr></table>",
"text": "Lexical anaphors are reflexives and reciprocals.",
"num": null
}
}
}
}