{
"paper_id": "S07-1047",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:22:52.651213Z"
},
"title": "LCC-WSD: System Description for English Coarse Grained All Words Task at SemEval 2007",
"authors": [
{
"first": "Adrian",
"middle": [],
"last": "Novischi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Computer Corp. Richardson",
"location": {
"region": "TX"
}
},
"email": ""
},
{
"first": "Munirathnam",
"middle": [],
"last": "Srikanth",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Computer Corp. Richardson",
"location": {
"region": "TX"
}
},
"email": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Bennett",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Computer Corp. Richardson",
"location": {
"region": "TX"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This document describes the Word Sense Disambiguation system used by Language Computer Corporation at English Coarse Grained All Word Task at SemEval 2007. The system is based on two supervised machine learning algorithms: Maximum Entropy and Support Vector Machines. These algorithms were trained on a corpus created from Sem-Cor, Senseval 2 and 3 all words and lexical sample corpora and Open Mind Word Expert 1.0 corpus. We used topical, syntactic and semantic features. Some semantic features were created using WordNet glosses with semantic relations tagged manually and automatically as part of eXtended WordNet project. We also tried to create more training instances from the disambiguated WordNet glosses found in XWN project (XWN, 2003). For words for which we could not build a sense classifier, we used First Sense in WordNet as a back-off strategy in order to have coverage of 100%. The precision and recall of the overall system is 81.446% placing it in the top 5 systems.",
"pdf_parse": {
"paper_id": "S07-1047",
"_pdf_hash": "",
"abstract": [
{
"text": "This document describes the Word Sense Disambiguation system used by Language Computer Corporation at English Coarse Grained All Word Task at SemEval 2007. The system is based on two supervised machine learning algorithms: Maximum Entropy and Support Vector Machines. These algorithms were trained on a corpus created from Sem-Cor, Senseval 2 and 3 all words and lexical sample corpora and Open Mind Word Expert 1.0 corpus. We used topical, syntactic and semantic features. Some semantic features were created using WordNet glosses with semantic relations tagged manually and automatically as part of eXtended WordNet project. We also tried to create more training instances from the disambiguated WordNet glosses found in XWN project (XWN, 2003). For words for which we could not build a sense classifier, we used First Sense in WordNet as a back-off strategy in order to have coverage of 100%. The precision and recall of the overall system is 81.446% placing it in the top 5 systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The performance of a Word Sense Disambiguation (WSD) system using a finite set of senses depends greatly on the definition of the word senses. Fine grained senses are hard to distinguish while coarse grained senses tend to be more clear. Word Sense Disambiguation is not a final goal, but it is an intermediary step used in other Natural Processing applications like detection of Semantic Relations, Information Retrieval or Machine Translation. Word Sense Disambiguation is not useful if it is not performed with high accuracy (Sanderson, 1994) . A coarse grained set of sense gives the opportunity to make more precise sense distinction and to make a Word Sense Disambiguation system more useful to other tasks.",
"cite_spans": [
{
"start": 528,
"end": 545,
"text": "(Sanderson, 1994)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our goal at SemEval 2007 was to measure the performance of known supervised machine learning algorithm using coarse grained senses. The idea of using supervised machine learning for WSD is not new and was used for example in (Ng and Lee, 1996) . We made experiments with two supervised methods: Maximum Entropy (ME) and Support Vector Machines (SVM). These supervised algorithms were used with topical, syntactic and semantic features. We trained a classifier for each word using both supervised algorithms. New features were added in 3 incremental steps. After an initial set of experiments the algorithm performance was enhanced using a greedy feature selection algorithm similar to one in (Mihalcea, 2002) . In order to increase the number of training instances, we tried to use the disambiguated WordNet glosses from XWN project (XWN, 2003) . Combining other corpora with disambiguated glosses from XWN did not provide any improvement so we used XWN as a fall back strategy for 70 words that did not have any training examples in other corpora but XWN.",
"cite_spans": [
{
"start": 225,
"end": 243,
"text": "(Ng and Lee, 1996)",
"ref_id": "BIBREF4"
},
{
"start": 692,
"end": 708,
"text": "(Mihalcea, 2002)",
"ref_id": "BIBREF3"
},
{
"start": 833,
"end": 844,
"text": "(XWN, 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Section 2 describes the supervised methods used by our WSD system, the pre-processing module and the set of features. Section 3 presents the experiments we performed and their results. Section 4 draws the conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The system contains a preprocessing module used before computing the values of the features needed by the machine learning classifiers. The preprocessing module perform the following steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "2"
},
{
"text": "Tokenization: using an in house text tokenizer Named Entity Recognition: using an in house system Part of Speech Tagging: normally we use the Brill tagger, but we took advantage of the part of speech tags given in the test file WordNet look-up to check if the word exists in WordNet and to get its lemma, possible part of speech for that lemma and if the word has a single sense or not. For SemEval English Coarse All Words task we took advantage by the lemma provided in the test file. Compound concept detection: using a classifier based on WordNet Syntactic Parsing: using an in-house implementation of Collin's parser (Glaysher and Moldovan, 2006) The Maximum Entropy classifier is a C++ implementation found on web (Le, 2006) . The classifier was adapted to accept symbolic features for classification tasks in Natural Language Processing.",
"cite_spans": [
{
"start": 622,
"end": 651,
"text": "(Glaysher and Moldovan, 2006)",
"ref_id": "BIBREF1"
},
{
"start": 720,
"end": 730,
"text": "(Le, 2006)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "2"
},
{
"text": "For training SVM classifiers we used LIBSVM package (Chang and Lin, 2001 ). Each symbolic feature can have a single value from a finite set of values or can be assigned a subset of values from the set of all possible values. For each value we created a mapping between the feature value and a dimension in the N-dimensional classification space and we assigned the number 1.0 to that dimension if the feature had the corresponding value or 0.0 otherwise.",
"cite_spans": [
{
"start": 52,
"end": 72,
"text": "(Chang and Lin, 2001",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "2"
},
{
"text": "We first performed experiments with our existing set of features used at Senseval 3 All Words task. We call this set \u00a1 \u00a3 \u00a2 \u00a5 \u00a4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "2"
},
{
"text": ". Then we made three incremental changes to improve the performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "2"
},
{
"text": "The initial set contains the following features: current word form (CRT WORD) and part of speech (CRT POS), contextual features (CTX WORD) in a window (-3,3) words, collocations in a window of (-3,3) words (COL WORD), keywords (KEY-WORDS) and bigrams (BIGRAMS) in a window of (-3,3) sentences, verb mode (VERB MODE) which can take 4 values: ACTIVE, INFINITIVE, PAST, GERUND, verb voice (VERB VOICE) which can take 2 values ACTIVE, PASSIVE, the parent of the current verb in the parse tree (CRT PARENT) (ex: VP, NP), the first ancestor that is not VP in the parse tree (RAND PARENT) (like S, NP, PP, SBAR) and a boolean flag indicating if the current verb belongs to the main clause or not (MAIN CLAUSE).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "2"
},
{
"text": "We added new features to the initial set. We call this set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "2"
},
{
"text": "\u00a1 \u00a3 \u00a2 \u00a7 \u00a6 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "2"
},
{
"text": "The lemmas of the contextual words in the window of (-3, 3) words around the target word (CTX LEMMA).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "2"
},
{
"text": "Collocations formed with the lemma of surrounding words in a window of (-3, 3) (COL LEMMA) The parent of the contextual words in the parse tree in the window of (-3, 3) words around target word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "2"
},
{
"text": "Collocations formed with the parents of the surrounding words in the window (-3, 3) words around the target word (COL PARENT). Occurrences in the current sentence of the words that are linked to the current word with a semantic relation of AGENT or THEME in WordNet 2.0 glosses (XWN LEMMA). We used files from XWN project (XWN, 2003) containing WordNet 2.0 glosses that were sense disambiguated and tagged with semantic relations both manually and automatically. For each word to be disambiguated we created a signature consisting of the set of words that are linked with a semantic relation of THEME or AGENT in all WordNet glosses. For every word in this set we created a feature showing if that word appears in the current sentence containing the target word.",
"cite_spans": [
{
"start": 322,
"end": 333,
"text": "(XWN, 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "2"
},
{
"text": "Then we added a new feature consisting of all the named entities in a window of (-5,5) sentences around the target word. We called this feature NAMED ENTITIES. We created the feature set ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "2"
},
{
"text": "For SemEval 2007 we performed several experiments: we tested ME and SVM classifiers on the 4 feature sets described in the previous section and then we tried to improve the performance using disambiguated glosses from XWN project. Each set of experiments together with the final submission is described in detail below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and results",
"sec_num": "3"
},
{
"text": "Initially we made experiments with the set of features used at Senseval 3 All Words task. For training the ME and SVM classifiers, we used a combined corpus made from SemCor, Senseval 3 All Words corpus, Senseval 3 Lexical Sample testing and training corpora and Senseval 2 Lexical sample training corpus. For testing we used Senseval 2 Lexical Sample corpus. We made 3 experiments for the first three feature sets",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with different feature sets",
"sec_num": "3.1"
},
{
"text": "\u00a1 \u00a3 \u00a2 \u00a4 , \u00a1 \u00a3 \u00a2 \u00a6 , \u00a1 \u00a3 \u00a2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with different feature sets",
"sec_num": "3.1"
},
{
"text": ". Both algorithms attempted to disambiguate all the words (cov-erage=100%) so the precision is equal with recall. Table 3 : The precision using SemCor and disambiguated glosses from XWN project expected an increase in performance with the additional features. This led us to the idea that not all the features are useful for all words. So we created a greedy feature selection algorithm based on the performance of the SVM classifier (Mihalcea, 2002) . The feature selection algorithm starts with an empty set of features \u00a2",
"cite_spans": [
{
"start": 434,
"end": 450,
"text": "(Mihalcea, 2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 114,
"end": 121,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments with different feature sets",
"sec_num": "3.1"
},
{
"text": ", and iteratively adds one feature from the set of unused features . Initially the set contains all the features. The algorithm iterates as long as the overall performance increase. At each step the algorithm adds tentatively one feature from the set to the existing feature list \u00a2 and measures the performance of the classifier on a 10 fold cross validation on the training corpus. The feature providing the greatest increase in performance is finally added to \u00a2 and removed from .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with different feature sets",
"sec_num": "3.1"
},
{
"text": "The feature selection algorithm turned out to be very slow, so we could not use it to train all the words. Therefore we used it to train only the words from Senseval 2 Lexical Sample task and then we computed a global set of features by selecting the first 20 features that were selected the most (at least 5 times). ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with different feature sets",
"sec_num": "3.1"
},
{
"text": "while ME surprisingly did get 1.53% increase in performance. Given the higher precision of ME classifier, it was selected for creating the submission file.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a1 \u00a2 \u00a4",
"sec_num": null
},
{
"text": "The ME classifier works well for words with enough training examples. However we found many words for which the number of training examples was too small. We tried to increase the number of training examples using the disambiguated WordNet glosses from XWN project. Not all the senses in the disambiguated glosses were assigned manually and the text of the glosses is different than normal running text. However we were curious if we could improve the overall performance by adding more training examples. We made 3 experiments showed in table 3. For all three experiments we used Senseval 2 English All Words corpus for testing. On the first experiment we used SemCor for training, on the second we used disambiguated glosses from XWN project and on the third we used both. XWN did not bring an improvement to the overall precision, so we decided to use XWN as a fall back strategy only for 70 words that did not have training examples is other corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments using disambiguated glosses from XWN project",
"sec_num": "3.2"
},
{
"text": "For final submission we used trained ME models using feature set ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Submission",
"sec_num": "3.3"
},
{
"text": "LCC-WSD team used two supervised approaches for performing experiments using coarse grained senses: Maximum Entropy and Support Vector Ma-chines. We used 4 feature sets: the first one was the feature set used in Senseval 3 and next two representing incremental additions. The fourth feature set represents a global set of features obtained from the individual feature sets for each word resulted from the greedy feature selection algorithm used to improve the performance of SVM classifiers. In addition we used disambiguated WordNet glosses from XWN to measure the improvement made by adding additional training examples. The submitted answer has a coverage of 100% and a precision of 81.446%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "LIBSVM: a library for support vector machines",
"authors": [
{
"first": "Chih-Chung",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Chih-Jen",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chih-Chung Chang and Chih-Jen Lin, 2001. LIBSVM: a library for support vector machines. Software avail- able at http://www.csie.ntu.edu.tw/ cjlin/libsvm.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Speeding up full syntactic parsing by leveraging partial parsing decisions",
"authors": [
{
"first": "Elliot",
"middle": [],
"last": "Glaysher",
"suffix": ""
},
{
"first": "Dan",
"middle": [
"I"
],
"last": "Moldovan",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "295--300",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elliot Glaysher and Dan I. Moldovan. 2006. Speeding up full syntactic parsing by leveraging partial parsing de- cisions. In Proceedings of the 21st International Con- ference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguis- tics, pages 295-300, Sydney, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Maximum Entropy Modeling Toolkit for Python and C++",
"authors": [
{
"first": "Zhang",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang Le, 2006. Maximum Entropy Modeling Toolkit for Python and C++. Software avail- able at http://homepages.inf.ed.ac.uk/s0450736/ maxent toolkit.html.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Instance based learning with automatic feature selection applied to word sense disambiguation",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 19th International Conference on Computational Linguistics COL-ING 2002",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea. 2002. Instance based learning with au- tomatic feature selection applied to word sense dis- ambiguation. In Proceedings of the 19th Interna- tional Conference on Computational Linguistics COL- ING 2002, Taiwan.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Integrating multiple knowledge sources to disambiguate word sense: an exemplar-based approach",
"authors": [
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Hian Beng",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 34th annual meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "40--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hwee Tou Ng and Hian Beng Lee. 1996. Integrat- ing multiple knowledge sources to disambiguate word sense: an exemplar-based approach. In Proceedings of the 34th annual meeting on Association for Com- putational Linguistics, pages 40-47, Morristown, NJ, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Word sense disambiguation and information retrieval",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Sanderson",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of SIGIR-94, 17th ACM International Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "49--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Sanderson. 1994. Word sense disambiguation and information retrieval. In Proceedings of SIGIR-94, 17th ACM International Conference on Research and Development in Information Retrieval, pages 49-57, Dublin, IE.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"content": "<table><tr><td>\u00a1 \u00a3 \u00a2 \u00a1</td><td>obtained from the features most selected by the greedy selection algorithm</td></tr><tr><td colspan=\"2\">applied to all the words in Senseval 2</td></tr><tr><td colspan=\"2\">only for words in Senseval 2 English lexical sample</td></tr><tr><td colspan=\"2\">task and the top 20 features appearing the most often</td></tr><tr><td colspan=\"2\">(at least 5 times) in the selected feature set for each word were used to create feature set \u00a1 \u00a3 \u00a2 \u00a2 presented</td></tr><tr><td>in table 1.</td><td/></tr></table>",
"html": null,
"text": "The feature set",
"num": null,
"type_str": "table"
},
"TABREF2": {
"content": "<table><tr><td>Algorithm ME</td><td>\u00a3 \u00a5 \u00a4 \u00a7 \u00a6 \u00a3 \u00a5 \u00a4 \u00a9 76.03% 75.86% 76.03% 77.56% \u00a3 \u00a5 \u00a4 \u00a3 \u00a5 \u00a4</td></tr><tr><td>SVM</td><td>73.30% 71.36% 71.46% 71.90%</td></tr></table>",
"html": null,
"text": "The precision of each algorithm on each feature set is presented in table 2.",
"num": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table><tr><td>After the first 3 experiments we noticed that both</td></tr><tr><td>ME and SVM classifiers had good results using the first set of features \u00a1 \u00a3 \u00a2 \u00a5 \u00a4 . This seemed odd since we</td></tr></table>",
"html": null,
"text": "The precision of ME and SVM classifiers using 4 sets of features.",
"num": null,
"type_str": "table"
},
"TABREF5": {
"content": "<table><tr><td>\u00a1 \u00a3 \u00a2 \u00a1</td><td colspan=\"2\">for 852 words, representing</td></tr><tr><td colspan=\"3\">1715 instances using SemCor, Senseval 2 and 3</td></tr><tr><td colspan=\"3\">English All Words and Lexical Sample testing and</td></tr><tr><td colspan=\"3\">training and OMWE 1.0. For 50 words represent-</td></tr><tr><td colspan=\"3\">ing 70 instances, we used disambiguated WordNet</td></tr><tr><td colspan=\"3\">glosses from XWN project to train ME classifiers using feature set \u00a1 \u00a3 \u00a2 \u00a1</td></tr><tr><td colspan=\"2\">LCC-WSD</td><td>81.446%</td></tr><tr><td colspan=\"3\">Best submission 83.208%</td></tr></table>",
"html": null,
"text": ". For the rest of 484 words for which we could not find training examples we used the First Sense in WordNet strategy. The submitted answer had a 100% coverage and a 81.446% precision presented in table 4.",
"num": null,
"type_str": "table"
},
"TABREF6": {
"content": "<table/>",
"html": null,
"text": "The LCC-WSD and the best submission at SemEval 2007 Coarse All Words Task",
"num": null,
"type_str": "table"
}
}
}
}