| { |
| "paper_id": "S07-1009", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:23:16.851392Z" |
| }, |
| "title": "SemEval-2007 Task 10: English Lexical Substitution Task", |
| "authors": [ |
| { |
| "first": "Diana", |
| "middle": [], |
| "last": "Mccarthy", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Sussex Falmer", |
| "location": { |
| "postCode": "BN1 9QH", |
| "settlement": "East Sussex", |
| "country": "UK" |
| } |
| }, |
| "email": "dianam@sussex.ac.uk" |
| }, |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Rome \"", |
| "location": { |
| "addrLine": "La Sapienza\" Via Salaria, 113", |
| "postCode": "00198", |
| "settlement": "Roma", |
| "country": "Italy" |
| } |
| }, |
| "email": "navigli@di.uniroma1.it" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In this paper we describe the English Lexical Substitution task for SemEval. In the task, annotators and systems find an alternative substitute word or phrase for a target word in context. The task involves both finding the synonyms and disambiguating the context. Participating systems are free to use any lexical resource. There is a subtask which requires identifying cases where the word is functioning as part of a multiword in the sentence and detecting what that multiword is.", |
| "pdf_parse": { |
| "paper_id": "S07-1009", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In this paper we describe the English Lexical Substitution task for SemEval. In the task, annotators and systems find an alternative substitute word or phrase for a target word in context. The task involves both finding the synonyms and disambiguating the context. Participating systems are free to use any lexical resource. There is a subtask which requires identifying cases where the word is functioning as part of a multiword in the sentence and detecting what that multiword is.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Word sense disambiguation (WSD) has been described as a task in need of an application. Whilst researchers believe that it will ultimately prove useful for applications which need some degree of semantic interpretation, the jury is still out on this point. One problem is that WSD systems have been tested on fine-grained inventories, rendering the task harder than it need be for many applications (Ide and Wilks, 2006) . Another significant problem is that there is no clear choice of inventory for any given task (other than the use of a parallel corpus for a specific language pair for a machine translation application).", |
| "cite_spans": [ |
| { |
| "start": 399, |
| "end": 420, |
| "text": "(Ide and Wilks, 2006)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The lexical substitution task follows on from some previous ideas (McCarthy, 2002) to examine the capabilities of WSD systems built by researchers on a task which has potential for NLP applications. Finding alternative words that can occur in given contexts would potentially be use-ful to many applications such as question answering, summarisation, paraphrase acquisition (Dagan et al., 2006 ), text simplification and lexical acquisition (McCarthy, 2002) . Crucially this task does not specify the inventory for use beforehand to avoid bias to one predefined inventory and makes it easier for those using automatically acquired resources to enter the arena. Indeed, since the systems in SemEval did not know the candidate substitutes for a word before hand, the lexical resource is evaluated as much as the context based disambiguation component.", |
| "cite_spans": [ |
| { |
| "start": 66, |
| "end": 82, |
| "text": "(McCarthy, 2002)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 374, |
| "end": 393, |
| "text": "(Dagan et al., 2006", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 441, |
| "end": 457, |
| "text": "(McCarthy, 2002)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The task involves a lexical sample of nouns, verbs, adjectives and adverbs. Both annotators and systems select one or more substitutes for the target word in the context of a sentence. The data was selected from the English Internet Corpus of English produced by Sharoff (2006) from the Internet (http://corpus.leeds.ac.uk/internet.html). This is a balanced corpus similar in flavour to the BNC, though with less bias to British English, obtained by sampling data from the web. Annotators are not provided with the PoS (noun, verb, adjective or adverb) but the systems are. Annotators can provide up to three substitutes but all should be equally as good. They are instructed that they can provide a phrase if they can't think of a good single word substitute. They can also use a slightly more general word if that is close in meaning. There is a \"NAME\" response if the target is part of a proper name and \"NIL\" response if annotators cannot think of a good substitute. The subjects are also asked to identify if they feel the target word is an integral part of a phrase, and what that phrase was. This option was envisaged for evaluation of multiword detection. Annotators did sometimes use it for paraphrasing a phrase with another phrase. However, for an item to be considered a constituent of a multiword, a majority of at least 2 annotators had to identify the same multiword. 1 The annotators were 5 native English speakers from the UK. They each annotated the entire dataset. All annotations were semi-automatically lemmatised (substitutes and identified multiwords) unless the lemmatised version would change the meaning of the substitute or if it was not obvious what the canonical version of the multiword should be.", |
| "cite_spans": [ |
| { |
| "start": 263, |
| "end": 277, |
| "text": "Sharoff (2006)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 1383, |
| "end": 1384, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task set up", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The data set comprises 2010 sentences, 201 target words each with 10 sentences. We released 300 for the trial data and kept the remaining 1710 for the test release. 298 of the trial, and 1696 of the test release remained after filtering items with less than 2 non NIL and non NAME responses and a few with erroneous PoS tags. The words included were selected either manually (70 words) from examination of a variety of lexical resources and corpora or automatically (131) using information in these lexical resources. Words were selected from those having a number of different meanings, each with at least one synonym. Since typically the distribution of meanings of a word is strongly skewed (Kilgarriff, 2004) , for the test set we randomly selected 20 words in each PoS for which we manually selected the sentences 2 (we refer to these words as MAN) whilst for the remaining words (RAND) the sentences were selected randomly.", |
| "cite_spans": [ |
| { |
| "start": 694, |
| "end": 712, |
| "text": "(Kilgarriff, 2004)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Selection", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Since we have sets of substitutes for each item and annotator, pairwise agreement was calculated between each pair of sets (p1, p2 \u2208 P ) from each pos-", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inter Annotator Agreement", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "sible pairing (P ) as p 1 ,p 2 \u2208P p 1 \u2229p 2 p 1 \u222ap 2 |P |", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inter Annotator Agreement", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "1 Full instructions given to the annotators are posted at http://www.informatics.susx.ac.uk/research/nlp/mccarthy/files/ instructions.pdf.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inter Annotator Agreement", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "2 There were only 19 verbs due to an error in automatic selection of one of the verbs picked for manual selection of sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inter Annotator Agreement", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Pairwise inter-annotator agreement was 27.75%. 73.93% had modes, and pairwise agreement with the mode was 50.67%. Agreement is increased if we remove one annotator who typically gave 2 or 3 substitutes for each item, which increased coverage but reduced agreement. Without this annotator, interannotator agreement was 31.13% and 64.7% with mode.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inter Annotator Agreement", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Multiword detection pairwise agreement was 92.30% and agreement on the identification of the exact form of the actual multiword was 44.13%.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inter Annotator Agreement", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We have 3 separate subtasks 1) best 2) oot and 3) mw which we describe below. 3 In the equations and results tables that follow we use P for precision, R for recall, and M ode P and M ode R where we calculate precision and recall against the substitute chosen by the majority of annotators, provided that there is a majority.", |
| "cite_spans": [ |
| { |
| "start": 78, |
| "end": 79, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Let H be the set of annotators, T be the set of test items with 2 or more responses (non NIL or NAME) and h i be the set of responses for an item i \u2208 T for annotator h \u2208 H.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring", |
| "sec_num": "3" |
| }, |
| { |
| "text": "For each i \u2208 T we calculate the mode (m i ) i.e. the most frequent response provided that there is a response more frequent than the others. The set of items where there is such a mode is referred to as T M . Let A (and AM ) be the set of items from T (or T M ) where the system provides at least one substitute. Let a i : i \u2208 A (or a i : i \u2208 AM ) be the set of guesses from the system for item i. For each i we calculate the multiset union (H i ) for all h i for all h \u2208 H and for each unique type (res) in H i will have an associated frequency (f req res ) for the number of times it appears in H i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring", |
| "sec_num": "3" |
| }, |
| { |
| "text": "For example: Given an item (id 9999) for happy;a supposing the annotators had supplied answers as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring", |
| "sec_num": "3" |
| }, |
| { |
| "text": "annotator responses 1 glad merry 2 glad 3 cheerful glad 4 merry 5 jovial then H i would be glad glad glad merry merry cheerful jovial. The res with associated frequencies would be glad 3 merry 2 cheerful 1 and jovial 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring", |
| "sec_num": "3" |
| }, |
| { |
| "text": "best measures This requires the best file produced by the system which gives as many guesses as the system believes are fitting, but where the credit for each correct guess is divided by the number of guesses. The first guess in the list is taken as the best guess (bg).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P = a i :i\u2208A res\u2208a i f reqres |a i | |H i | |A| (1) R = a i :i\u2208T res\u2208a i f reqres |a i | |H i | |T | (2) M ode P = bg i \u2208AM 1 if bg = m i |AM | (3) M ode R = bg i \u2208T M 1 if bg = m i |T M |", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Scoring", |
| "sec_num": "3" |
| }, |
| { |
| "text": "A system is permitted to provide more than one response, just as the annotators were. They can do this if they are not sure which response is better, however systems will maximise the score if they guess the most frequent response from the annotators. For P and R the credit is divided by the number of guesses that a system makes to prevent a system simply hedging its bets by providing many responses. The credit is also divided by the number of responses from annotators. This gives higher scores to items with less variation. We want to emphasise test items with better agreement.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Using the example for happy;a id 9999 above, if the system's responses for this item was glad; cheerful the credit for a 9999 in the numerator of P and R would be 3+1 2 7 = .286 For M ode P and M ode R we use the system's first guess and compare this to the mode of the annotators responses on items where there was a response more frequent than the others.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring", |
| "sec_num": "3" |
| }, |
| { |
| "text": "oot measures This allows a system to make up to 10 guesses. The credit for each correct guess is not divided by the number of guesses. This allows for the fact that there is a lot of variation for the task and we only have 5 annotators. With 10 guesses there is a better chance that the systems find the responses of these 5 annotators. There is no ordering of the guesses and the M ode scores give credit where the mode was found in one of the system's 10 guesses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P = a i :i\u2208A res\u2208a i f reqres |H i | |A| (5) R = a i :i\u2208T res\u2208a i f reqres |H i | |T | (6) M ode P = a i :i\u2208AM 1 if any guess \u2208 a i = m i |AM | (7) M ode R = a i :i\u2208T M 1 if any guess \u2208 a i = m i |T M |", |
| "eq_num": "(8)" |
| } |
| ], |
| "section": "Scoring", |
| "sec_num": "3" |
| }, |
| { |
| "text": "mw measures For this measure, a system must identify items where the target is part of a multiword and what the multiword is. The annotators do not all have linguistics background, they are simply asked if the target is an integral part of a phrase, and if so what the phrase is. Sometimes this option is used by the subjects for paraphrasing a phrase of the sentence, but typically it is used when there is a multiword. For scoring, a multiword item is one with a majority vote for the same multiword with more than 1 annotator identifying the multiword. Let M W be the subset of T for which there is such a multiword response from a majority of at least 2 annotators. Let mw i \u2208 M W be the multiword identified by majority vote for item i. Let M W sys be the subset of T for which there is a multiword response from the system and mwsys i be a multiword specified by the system for item i.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "detection P = mwsys i \u2208M W sys 1 if mw i exists at i |M W sys| (9) detection R = mwsys i \u2208M W 1 if mw i exists at i |M W |", |
| "eq_num": "(10)" |
| } |
| ], |
| "section": "Scoring", |
| "sec_num": "3" |
| }, |
| { |
| "text": "identif ication P =", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "mwsys i \u2208M W sys 1 if mwsys i = mw i |M W sys|", |
| "eq_num": "(11)" |
| } |
| ], |
| "section": "Scoring", |
| "sec_num": "3" |
| }, |
| { |
| "text": "identif ication R =", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring", |
| "sec_num": "3" |
| }, |
| { |
| "text": "mwsys i \u2208M W 1 if mwsys i = mw i |M W | (12)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We produced baselines using WordNet 2.1 (Miller et al., 1993a ) and a number of distributional similarity measures. For the WordNet best baseline we found the best ranked synonym using the criteria 1 to 4 below in order. For WordNet oot we found up to 10 synonyms using criteria 1 to 4 in order until 10 were found:", |
| "cite_spans": [ |
| { |
| "start": 40, |
| "end": 61, |
| "text": "(Miller et al., 1993a", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "1. Synonyms from the first synset of the target word, and ranked with frequency data obtained from the BNC (Leech, 1992) .", |
| "cite_spans": [ |
| { |
| "start": 107, |
| "end": 120, |
| "text": "(Leech, 1992)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "2. synonyms from the hypernyms (verbs and nouns) or closely related classes (adjectives) of that first synset, ranked with the frequency data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "3. Synonyms from all synsets of the target word, and ranked using the BNC frequency data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "4. synonyms from the hypernyms (verbs and nouns) or closely related classes (adjectives) of all synsets of the target, ranked with the BNC frequency data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We also produced best and oot baselines using the distributional similarity measures l1, jaccard, cosine, lin (Lin, 1998) and \u03b1SD (Lee, 1999) 4 . We took the word with the largest similarity (or smallest distance for \u03b1SD and l1) for best and the top 10 for oot.", |
| "cite_spans": [ |
| { |
| "start": 110, |
| "end": 121, |
| "text": "(Lin, 1998)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "For mw detection and identification we used WordNet to detect if a multiword in WordNet which includes the target word occurs within a window of 2 words before and 2 words after the target word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "9 teams registered and 8 participated, and two of these teams (SWAG and IRST) each entered two systems, we distinguish the first and second systems with a 1 and 2 suffix respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Systems", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The systems all used 1 or more predefined inventories. Most used web queries (HIT, MELB, UNT) or web data (Brants and Franz, 2006 ) (IRST2, KU, SWAG1, SWAG2, USYD, UNT) to obtain counts for disambiguation, with some using algorithms to derive domain (IRST1) or co-occurrence (TOR) information from the BNC. Most systems did not use sense tagged data for disambiguation though MELB did use SemCor (Miller et al., 1993b) for filtering infrequent synonyms and UNT used a semi-supervised word sense disambiguation combined with a host of other techniques, including machine translation engines.", |
| "cite_spans": [ |
| { |
| "start": 83, |
| "end": 93, |
| "text": "MELB, UNT)", |
| "ref_id": null |
| }, |
| { |
| "start": 106, |
| "end": 129, |
| "text": "(Brants and Franz, 2006", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 396, |
| "end": 418, |
| "text": "(Miller et al., 1993b)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Systems", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In tables 1 to 3 we have ordered systems according to R on the best task, and in tables 4 to 6 according to R on oot. We show all scores as percentages i.e. we multiply the scores in section 3 by 100. In tables 3 and 6 we show results using the subset of items which were i) NOT identified as multiwords (NMWT) ii) scored only on non multiword substitutes from both annotators and systems (i.e. no spaces) (NMWS). Unfortunately we do not have space to show the analysis for the MAN and RAND subsets here. Please refer to the task website for these results. 5 We retain the same ordering for the further analysis tables when we look at subsets of the data. Although there are further differences in the systems which would warrant reranking on an individual analysis, since we combined the subanalyses in one table we keep the order as for 1 and 4 respectively for ease of comparison.", |
| "cite_spans": [ |
| { |
| "start": 557, |
| "end": 558, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5" |
| }, |
| { |
| "text": "There is some variation in rank order of the systems depending on which measures are used. 6 KU is highest ranking on R for best. UNT is best at finding the mode, particularly on oot, though it is the most complicated system exploiting a great many knowledge sources and components. IRST2 does well at finding the mode in best. The IRST2 best R score is lower because it supplied many answers for each item however it achieves the best R score on the oot task. The baselines are outperformed by most systems. The WordNet baseline outperforms those derived from distributional methods. The distributional methods, especially lin, show promising results given that these methods are automatic and Table 2 : best baseline results don't require hand-crafted inventories. As yet we haven't combined the baselines with disambiguation methods.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 695, |
| "end": 702, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Only HIT attempted the mw task. It outperforms all baselines from WordNet.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Choosing a lexical substitute for a given word is not clear cut and there is inherently variation in the task. Since it is quite likely that there will be synonyms that the five annotators do not think of we conducted a post hoc analysis to see if the synonyms selected by the original annotators were better, on the whole, than those in the systems responses. We randomly selected 100 sentences from the subset of items which had more than 2 single word substitutes, no NAME responses, and where the target word was Table 8 : post hoc results not one of those identified as a multiword (i.e. a majority vote by 2 or more annotators for the same multiword as described in section 2). We then mixed the substitutes from the human annotators with those of the systems. Three fresh annotators 7 were given the test sentence and asked to categorise the randomly ordered substitutes as good, reasonable or bad. We take the majority verdict for each substitute, but if there is one reasonable and one good verdict, then we categorise the substitute as reasonable. The percentage of substitutes for systems (sys) and original annotators (origA) categorised as good, reasonable and bad by the post hoc annotators are shown in table 8. We see the substitutes from the humans have a higher proportion of good or reasonable responses by the post hoc annotators compared to the substitutes from the systems.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 517, |
| "end": 524, |
| "text": "Table 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Post Hoc Analysis", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We think this task is an interesting one in which to evaluate automatic approaches of capturing lexical meaning. There is an inherent variation in the task because several substitutes may be possible for a given context. This makes the task hard and scoring is less straightforward than a task which has fixed choices. On the other hand, we believe the task taps into human understanding of word meaning and we hope that computers that perform well on this task will have potential in NLP applications. Since a pre-defined inventory is not used, the task allows us to compare lexical resources as well as disambiguation techniques without a bias to any predefined inventory. It is possible for those interested in disambiguation to focus on this, rather than the choice of substitutes, by using the union of responses from the annotators in future experiments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Directions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The scoring measures are as described in the document at http://nlp.cs.swarthmore.edu/semeval/tasks/task10/ task10documentation.pdf released with our trial data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We used 0.99 as the parameter for \u03b1 for this measure.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The task website is at http://www.informatics.sussex.ac.uk/ research/nlp/mccarthy/task10index.html.6 There is not a big difference between P and R because systems typically supplied answers for most items.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We acknowledge support from the Royal Society UK for funding the annotation for the project, and for a Dorothy Hodgkin 7 Again, these were native English speakers from the UK.Fellowship to the first author. We also acknowledge support to the second author from INTEROP NoE (508011, 6 th EU FP). We thank the annotators for their hard work. We thank Serge Sharoff for the use of his Internet corpus, Julie Weeds for the software we used for producing the distributional similarity baselines and Suzanne Stevenson for suggesting the oot task .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": "7" |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Web 1T 5-gram corpus version 1.1", |
| "authors": [ |
| { |
| "first": "Thorsten", |
| "middle": [], |
| "last": "Brants", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Franz", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thorsten Brants and Alex Franz. 2006. Web 1T 5-gram corpus version 1.1. Technical Report.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Direct word sense matching for lexical substitution", |
| "authors": [ |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Ido Dagan", |
| "suffix": "" |
| }, |
| { |
| "first": "Alfio", |
| "middle": [], |
| "last": "Glickman", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Gliozzo", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, Sydney, Australia", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ido Dagan, Oren Glickman, Alfio Gliozzo, Efrat Mar- morshtein, and Carlo Strapparava. 2006. Direct word sense matching for lexical substitution. In Proceed- ings of the 21st International Conference on Computa- tional Linguistics and 44th Annual Meeting of the As- sociation for Computational Linguistics, Sydney, Aus- tralia, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Making sense about sense", |
| "authors": [ |
| { |
| "first": "Nancy", |
| "middle": [], |
| "last": "Ide", |
| "suffix": "" |
| }, |
| { |
| "first": "Yorick", |
| "middle": [], |
| "last": "Wilks", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Word Sense Disambiguation, Algorithms and Applications", |
| "volume": "", |
| "issue": "", |
| "pages": "47--73", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nancy Ide and Yorick Wilks. 2006. Making sense about sense. In Eneko Agirre and Phil Edmonds, editors, Word Sense Disambiguation, Algorithms and Applica- tions, pages 47-73. Springer.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "How dominant is the commonest sense of a word?", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Kilgarriff", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of Text, Speech", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam Kilgarriff. 2004. How dominant is the common- est sense of a word? In Proceedings of Text, Speech, Dialogue, Brno, Czech Republic.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Measures of distributional similarity", |
| "authors": [ |
| { |
| "first": "Lillian", |
| "middle": [ |
| "Lee" |
| ], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "25--32", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lillian Lee. 1999. Measures of distributional similarity. In Proceedings of the 37th Annual Meeting of the As- sociation for Computational Linguistics, pages 25-32.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "100 million words of English: the British National Corpus", |
| "authors": [ |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Leech", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Language Research", |
| "volume": "28", |
| "issue": "1", |
| "pages": "1--13", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Geoffrey Leech. 1992. 100 million words of English: the British National Corpus. Language Research, 28(1):1-13.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "An information-theoretic definition of similarity", |
| "authors": [ |
| { |
| "first": "Dekang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the 15th International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dekang Lin. 1998. An information-theoretic definition of similarity. In Proceedings of the 15th International Conference on Machine Learning, Madison, WI.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Lexical substitution as a task for WSD evaluation", |
| "authors": [ |
| { |
| "first": "Diana", |
| "middle": [], |
| "last": "Mccarthy", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the ACL Workshop on Word Sense Disambiguation: Recent Successes and Future Directions", |
| "volume": "", |
| "issue": "", |
| "pages": "109--115", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diana McCarthy. 2002. Lexical substitution as a task for WSD evaluation. In Proceedings of the ACL Workshop on Word Sense Disambiguation: Recent Successes and Future Directions, pages 109-115, Philadelphia, USA.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Introduction to WordNet: an On-Line Lexical Database", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Beckwith", |
| "suffix": "" |
| }, |
| { |
| "first": "Christine", |
| "middle": [], |
| "last": "Fellbaum", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Gross", |
| "suffix": "" |
| }, |
| { |
| "first": "Katherine", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George Miller, Richard Beckwith, Christine Fellbaum, David Gross, and Katherine Miller, 1993a. Introduction to WordNet: an On-Line Lexical Database. ftp://clarity.princeton.edu/pub/WordNet/5papers.ps.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "A semantic concordance", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [ |
| "A" |
| ], |
| "last": "Miller", |
| "suffix": "" |
| }, |
| { |
| "first": "Claudia", |
| "middle": [], |
| "last": "Leacock", |
| "suffix": "" |
| }, |
| { |
| "first": "Randee", |
| "middle": [], |
| "last": "Tengi", |
| "suffix": "" |
| }, |
| { |
| "first": "Ross", |
| "middle": [ |
| "T" |
| ], |
| "last": "Bunker", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of the ARPA Workshop on Human Language Technology", |
| "volume": "", |
| "issue": "", |
| "pages": "303--308", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George A. Miller, Claudia Leacock, Randee Tengi, and Ross T. Bunker. 1993b. A semantic concordance. In Proceedings of the ARPA Workshop on Human Language Technology, pages 303-308. Morgan Kaufmann.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Open-source corpora: Using the net to fish for linguistic data", |
| "authors": [ |
| { |
| "first": "Serge", |
| "middle": [], |
| "last": "Sharoff", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "International Journal of Corpus Linguistics", |
| "volume": "11", |
| "issue": "4", |
| "pages": "435--462", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Serge Sharoff. 2006. Open-source corpora: Using the net to fish for linguistic data. International Journal of Corpus Linguistics, 11(4):435-462.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF1": { |
| "type_str": "table", |
| "text": "best baseline results", |
| "content": "<table><tr><td>Systems</td><td>P</td><td>R</td><td>Mode P</td><td>Mode R</td></tr><tr><td>WordNet</td><td>9.95</td><td>9.95</td><td>15.28</td><td>15.28</td></tr><tr><td>lin</td><td>8.84</td><td>8.53</td><td>14.69</td><td>14.23</td></tr><tr><td>l1</td><td>8.11</td><td>7.82</td><td>13.35</td><td>12.93</td></tr><tr><td>lee</td><td>6.99</td><td>6.74</td><td>11.34</td><td>10.98</td></tr><tr><td>jaccard</td><td>6.84</td><td>6.60</td><td>11.17</td><td>10.81</td></tr><tr><td>cos</td><td>5.07</td><td>4.89</td><td>7.64</td><td>7.40</td></tr></table>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF3": { |
| "type_str": "table", |
| "text": "oot results", |
| "content": "<table><tr><td>Systems</td><td>P</td><td>R</td><td>Mode P</td><td>Mode R</td></tr><tr><td>IRST2</td><td>69.03</td><td>68.90</td><td>58.54</td><td>58.54</td></tr><tr><td>UNT</td><td>49.19</td><td>49.19</td><td>66.26</td><td>66.26</td></tr><tr><td>KU</td><td>46.15</td><td>46.15</td><td>61.30</td><td>61.30</td></tr><tr><td>IRST1</td><td>41.23</td><td>41.20</td><td>55.28</td><td>55.28</td></tr><tr><td>USYD</td><td>36.07</td><td>34.96</td><td>43.66</td><td>42.28</td></tr><tr><td>SWAG2</td><td>37.80</td><td>34.66</td><td>50.18</td><td>46.02</td></tr><tr><td>HIT</td><td>33.88</td><td>33.88</td><td>46.91</td><td>46.91</td></tr><tr><td>SWAG1</td><td>35.53</td><td>32.83</td><td>47.41</td><td>43.82</td></tr><tr><td>TOR</td><td>11.19</td><td>11.19</td><td>14.63</td><td>14.63</td></tr></table>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF4": { |
| "type_str": "table", |
| "text": "oot baseline results", |
| "content": "<table><tr><td>Systems</td><td>P</td><td>R</td><td>Mode P</td><td>Mode R</td></tr><tr><td>WordNet</td><td>29.70</td><td>29.35</td><td>40.57</td><td>40.57</td></tr><tr><td>lin</td><td>27.70</td><td>26.72</td><td>40.47</td><td>39.19</td></tr><tr><td>l1</td><td>24.09</td><td>23.23</td><td>36.10</td><td>34.96</td></tr><tr><td>lee</td><td>20.09</td><td>19.38</td><td>29.81</td><td>28.86</td></tr><tr><td>jaccard</td><td>18.23</td><td>17.58</td><td>26.87</td><td>26.02</td></tr><tr><td>cos</td><td>14.07</td><td>13.58</td><td>20.82</td><td>20.16</td></tr></table>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF5": { |
| "type_str": "table", |
| "text": "Further analysis for oot", |
| "content": "<table><tr><td></td><td colspan=\"2\">NMWT</td><td colspan=\"2\">NMWS</td></tr><tr><td>Systems</td><td>P</td><td>R</td><td>P</td><td>R</td></tr><tr><td>IRST2</td><td>72.04</td><td>71.90</td><td>76.19</td><td>76.06</td></tr><tr><td>UNT</td><td>51.13</td><td>51.13</td><td>54.01</td><td>54.01</td></tr><tr><td>KU</td><td>48.43</td><td>48.43</td><td>49.72</td><td>49.72</td></tr><tr><td>IRST1</td><td>43.11</td><td>43.08</td><td>45.13</td><td>45.11</td></tr><tr><td>USYD</td><td>37.26</td><td>36.17</td><td>40.13</td><td>38.89</td></tr><tr><td>SWAG2</td><td>39.95</td><td>36.51</td><td>40.97</td><td>37.75</td></tr><tr><td>HIT</td><td>35.60</td><td>35.60</td><td>36.63</td><td>36.63</td></tr><tr><td>SWAG1</td><td>37.49</td><td>34.64</td><td>38.36</td><td>35.67</td></tr><tr><td>TOR</td><td>11.77</td><td>11.77</td><td>12.22</td><td>12.22</td></tr></table>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF6": { |
| "type_str": "table", |
| "text": "MW results", |
| "content": "<table><tr><td></td><td colspan=\"2\">HIT</td><td colspan=\"2\">WordNet BL</td></tr><tr><td></td><td>P</td><td>R</td><td>P</td><td>R</td></tr><tr><td>detection</td><td>45.34</td><td>56.15</td><td>43.64</td><td>36.92</td></tr><tr><td>identification</td><td>41.61</td><td>51.54</td><td>40.00</td><td>33.85</td></tr></table>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF7": { |
| "type_str": "table", |
| "text": "post hoc results", |
| "content": "<table><tr><td></td><td>good</td><td>reasonable</td><td>bad</td></tr><tr><td>sys</td><td>9.07</td><td>19.08</td><td>71.85</td></tr><tr><td>origA</td><td>37.36</td><td>41.01</td><td>21.63</td></tr></table>", |
| "html": null, |
| "num": null |
| } |
| } |
| } |
| } |