{
"paper_id": "K15-1037",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:08:51.954667Z"
},
"title": "One Million Sense-Tagged Instances for Word Sense Disambiguation and Induction",
"authors": [
{
"first": "Kaveh",
"middle": [],
"last": "Taghipour",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Singapore",
"location": {
"addrLine": "13 Computing Drive",
"postCode": "117417",
"country": "Singapore"
}
},
"email": ""
},
{
"first": "Hwee",
"middle": [
"Tou"
],
"last": "Ng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Singapore",
"location": {
"addrLine": "13 Computing Drive",
"postCode": "117417",
"country": "Singapore"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Supervised word sense disambiguation (WSD) systems are usually the best performing systems when evaluated on standard benchmarks. However, these systems need annotated training data to function properly. While there are some publicly available open source WSD systems, very few large annotated datasets are available to the research community. The two main goals of this paper are to extract and annotate a large number of samples and release them for public use, and also to evaluate this dataset against some word sense disambiguation and induction tasks. We show that the open source IMS WSD system trained on our dataset achieves stateof-the-art results in standard disambiguation tasks and a recent word sense induction task, outperforming several task submissions and strong baselines.",
"pdf_parse": {
"paper_id": "K15-1037",
"_pdf_hash": "",
"abstract": [
{
"text": "Supervised word sense disambiguation (WSD) systems are usually the best performing systems when evaluated on standard benchmarks. However, these systems need annotated training data to function properly. While there are some publicly available open source WSD systems, very few large annotated datasets are available to the research community. The two main goals of this paper are to extract and annotate a large number of samples and release them for public use, and also to evaluate this dataset against some word sense disambiguation and induction tasks. We show that the open source IMS WSD system trained on our dataset achieves stateof-the-art results in standard disambiguation tasks and a recent word sense induction task, outperforming several task submissions and strong baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Identifying the meaning of a word automatically has been an interesting research topic for a few decades. The approaches used to solve this problem can be roughly categorized into two main classes: Word Sense Disambiguation (WSD) and Word Sense Induction (WSI) (Navigli, 2009) . For word sense disambiguation, some systems are based on supervised machine learning algorithms (Lee et al., 2004; Zhong and Ng, 2010) , while others use ontologies and other structured knowledge sources (Ponzetto and Navigli, 2010; Agirre et al., 2014; Moro et al., 2014) .",
"cite_spans": [
{
"start": 261,
"end": 276,
"text": "(Navigli, 2009)",
"ref_id": "BIBREF17"
},
{
"start": 375,
"end": 393,
"text": "(Lee et al., 2004;",
"ref_id": "BIBREF11"
},
{
"start": 394,
"end": 413,
"text": "Zhong and Ng, 2010)",
"ref_id": "BIBREF28"
},
{
"start": 483,
"end": 511,
"text": "(Ponzetto and Navigli, 2010;",
"ref_id": "BIBREF22"
},
{
"start": 512,
"end": 532,
"text": "Agirre et al., 2014;",
"ref_id": "BIBREF0"
},
{
"start": 533,
"end": 551,
"text": "Moro et al., 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are several sense-annotated datasets for WSD (Miller et al., 1993; Ng and Lee, 1996; Passonneau et al., 2012) . However, these datasets either include few samples per word sense or only cover a small set of polysemous words.",
"cite_spans": [
{
"start": 51,
"end": 72,
"text": "(Miller et al., 1993;",
"ref_id": "BIBREF13"
},
{
"start": 73,
"end": 90,
"text": "Ng and Lee, 1996;",
"ref_id": "BIBREF18"
},
{
"start": 91,
"end": 115,
"text": "Passonneau et al., 2012)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To overcome these limitations, automatic methods have been developed for annotating training samples. For example, Ng et al. (2003) , Chan and Ng (2005) , and Zhong and Ng (2009) used Chinese-English parallel corpora to extract samples for training their supervised WSD system. Diab (2004) proposed an unsupervised bootstrapping method to automatically generate a senseannotated dataset. Another example of automatically created datasets is the semi-supervised method used in (K\u00fcbler and Zhekova, 2009) , which employed a supervised classifier to label instances.",
"cite_spans": [
{
"start": 102,
"end": 131,
"text": "For example, Ng et al. (2003)",
"ref_id": null
},
{
"start": 134,
"end": 152,
"text": "Chan and Ng (2005)",
"ref_id": "BIBREF3"
},
{
"start": 159,
"end": 178,
"text": "Zhong and Ng (2009)",
"ref_id": "BIBREF27"
},
{
"start": 278,
"end": 289,
"text": "Diab (2004)",
"ref_id": "BIBREF5"
},
{
"start": 476,
"end": 502,
"text": "(K\u00fcbler and Zhekova, 2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The two main contributions of this paper are as follows. First, we employ the same method used in Chan and Ng, 2005) to semi-automatically annotate one million training samples based on the WordNet sense inventory (Miller, 1995) and release the annotated corpus for public use. To our knowledge, this annotated set of sense-tagged samples is the largest publicly available dataset for word sense disambiguation. Second, we train an open source supervised WSD system, IMS (Zhong and Ng, 2010) , using our data and evaluate it against standard WSD and WSI benchmarks. We show that our system outperforms other state-of-the-art systems in most cases.",
"cite_spans": [
{
"start": 98,
"end": 116,
"text": "Chan and Ng, 2005)",
"ref_id": "BIBREF3"
},
{
"start": 214,
"end": 228,
"text": "(Miller, 1995)",
"ref_id": "BIBREF14"
},
{
"start": 471,
"end": 491,
"text": "(Zhong and Ng, 2010)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As any WSD system is also a WSI system when we treat the pre-defined sense inventory of the WSD system as the induced word senses, a WSD system can also be evaluated and used for WSI. Some researchers believe that, in some cases, WSI methods may perform better than WSD systems (Jurgens and Klapaftis, 2013; Wang et al., 2015) . However, we argue that WSI systems have few advantages compared to WSD methods and according to our results, disambiguation systems consistently outperform induction systems. Although there are some cases where WSI systems can be useful (e.g., for resource-poor languages), in most cases WSD systems are preferable because of higher accuracy and better interpretability of output.",
"cite_spans": [
{
"start": 278,
"end": 307,
"text": "(Jurgens and Klapaftis, 2013;",
"ref_id": "BIBREF8"
},
{
"start": 308,
"end": 326,
"text": "Wang et al., 2015)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is composed of the following sections. Section 2 explains our methodology for creating the training data. We evaluate the extracted data in Section 3 and finally, we conclude the paper in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to train a supervised word sense disambiguation system, we extract and sense-tag data from a freely available parallel corpus, in a semiautomatic manner. To increase the coverage and therefore the ultimate performance of our WSD system, we also make use of existing sense-tagged datasets. This section explains each step in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "2"
},
{
"text": "Since the main purpose of this paper is to build and release a publicly available training set for word sense disambiguation systems, we selected the MultiUN corpus (MUN) (Eisele and Chen, 2010) produced in the EuroMatrixPlus project 1 . This corpus is freely available via the project website and includes seven languages. An automatically sentence-aligned version of this dataset can be downloaded from the OPUS website 2 and therefore we decided to extract samples from this sentence-aligned version.",
"cite_spans": [
{
"start": 171,
"end": 194,
"text": "(Eisele and Chen, 2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "2"
},
{
"text": "To extract training data from the MultiUN parallel corpus, we follow the approach described in (Chan and Ng, 2005) and select the Chinese-English part of the MultiUN corpus. The extraction method has the following steps:",
"cite_spans": [
{
"start": 95,
"end": 114,
"text": "(Chan and Ng, 2005)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "2"
},
{
"text": "English side of the corpus is tokenized using the Penn TreeBank tokenizer 3 , while the Chinese side of the corpus is segmented using the Chinese word segmenter of (Low et al., 2005) .",
"cite_spans": [
{
"start": 164,
"end": 182,
"text": "(Low et al., 2005)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenization and word segmentation: The",
"sec_num": "1."
},
{
"text": "2. Word alignment: After tokenizing the texts, GIZA++ (Och and Ney, 2000) is used to align English and Chinese words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenization and word segmentation: The",
"sec_num": "1."
},
{
"text": "3. Part-of-speech (POS) tagging and lemmatization: After running GIZA++, we use the OpenNLP POS tagger 4 and then the Word-Net lemmatizer to obtain POS tags and word lemmas of the English sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenization and word segmentation: The",
"sec_num": "1."
},
{
"text": "4. Annotation: In order to assign a WordNet sense tag to an English word w e in a sentence, we make use of the aligned Chinese translation w c of w e , based on the automatic word alignment formed by GIZA++. For each sense i of w e in the WordNet sense inventory (WordNet 1.7.1), a list of Chinese translations of sense i of w e has been manually created. If w c matches one of these Chinese translations of sense i, then w e is tagged with sense i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenization and word segmentation: The",
"sec_num": "1."
},
{
"text": "The average time needed to manually assign Chinese translations to the word senses of one word type for noun, adjective, and verb is 20, 25, and 40 minutes respectively (Chan, 2008). The above procedure annotates the top 60% most frequent word types (nouns, verbs, and adjectives) in English, selected based on their frequency in the Brown corpus. This set of selected word types includes 649 nouns, 190 verbs, and 319 adjectives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenization and word segmentation: The",
"sec_num": "1."
},
{
"text": "Since automatic sentence and word alignment can be noisy, and a Chinese word w c can occasionally be a valid translation of more than one sense of an English word w e , the senses tagged using the above procedure may be erroneous. To get an idea of the accuracy of the senses tagged with this procedure, we manually evaluated a subset of 1,000 randomly selected sense-tagged instances. Although the sense inventory is finegrained (WordNet 1.7.1), the sense-tag accuracy achieved is 83.7%. We also performed an error analysis to identify the sources of errors. We found that only 4% of errors are caused by wrong sentence or word alignment. However, 69% of erroneous sense-tagged instances are the result of a Chinese word associated with multiple senses of a target English word. In such cases, the Chinese word is linked to multiple sense tags and therefore, errors in sense-tagged data are introduced. Our results are similar to those reported in (Chan, 2008) .",
"cite_spans": [
{
"start": 949,
"end": 961,
"text": "(Chan, 2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenization and word segmentation: The",
"sec_num": "1."
},
{
"text": "To speed up the training process, we perform random sampling on the sense tags with more than 500 samples and limit the number of samples per sense to 500. However, all samples of senses with fewer than 500 samples are included in the training data. This sampling method ensures that rare sense tags also have training samples during the selection process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenization and word segmentation: The",
"sec_num": "1."
},
{
"text": "In order to improve the coverage of the training set, we augment it by adding samples from SEM-COR (SC) (Miller et al., 1993) pus (Ng and Lee, 1996) . We only add the 28 most frequent adverbs from SEMCOR because we observe almost no improvement when adding all adverbs. We notice that the DSO corpus generally improves the performance of our system. However, since the annotated DSO corpus is copyrighted, we are unable to release a dataset including the DSO corpus. Therefore, we experiment with two different configurations, one with the DSO corpus and one without, although the released dataset will not include the DSO corpus.",
"cite_spans": [
{
"start": 104,
"end": 125,
"text": "(Miller et al., 1993)",
"ref_id": "BIBREF13"
},
{
"start": 130,
"end": 148,
"text": "(Ng and Lee, 1996)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenization and word segmentation: The",
"sec_num": "1."
},
{
"text": "Since some shared tasks use newer WordNet versions, we convert the training set sense labels using the sense mapping files provided by Word-Net 5 . As replicating our results requires WordNet versions 1.7.1, 2.1, and 3.0, we release our sensetagged dataset in all three versions. Some statistics about the sense-tagged training set can be found in Table 1 to Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 348,
"end": 366,
"text": "Table 1 to Table 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Tokenization and word segmentation: The",
"sec_num": "1."
},
{
"text": "For the WSD system, we use IMS (Zhong and Ng, 2010) in our experiments. IMS is a supervised WSD system based on support vector machines (SVM). This WSD system comes with outof-the-box pre-trained models. However, since the original training set is not released, we use our own training set (see Section 2) to train IMS and then evaluate it on standard WSD and WSI benchmarks. This section presents the results obtained on four WSD and one WSI shared tasks. The four all-words WSD shared tasks are SensEval-2 (Edmonds and Cotton, 2001), SensEval-3 task 1 (Snyder and Palmer, 2004), and both the fine-grained task 17 and coarse-grained task 7 of SemEval-2007 (Pradhan et al., 2007 Navigli et al., 2007) . These all-words WSD shared tasks provide no training data to the participants. The selected word sense induction task in our experiments is 5 http://wordnet.princeton.edu/wordnet/download/currentversion/ SemEval-2013 task 13 (Jurgens and Klapaftis, 2013) .",
"cite_spans": [
{
"start": 31,
"end": 51,
"text": "(Zhong and Ng, 2010)",
"ref_id": "BIBREF28"
},
{
"start": 644,
"end": 656,
"text": "SemEval-2007",
"ref_id": "BIBREF23"
},
{
"start": 657,
"end": 678,
"text": "(Pradhan et al., 2007",
"ref_id": "BIBREF23"
},
{
"start": 679,
"end": 700,
"text": "Navigli et al., 2007)",
"ref_id": "BIBREF16"
},
{
"start": 928,
"end": 957,
"text": "(Jurgens and Klapaftis, 2013)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
{
"text": "The results of our experiments on WSD tasks are presented in Table 4 . For the SensEval-2 and SensEval-3 test sets, we use the training set with the WordNet 1.7.1 sense inventory and for the SemEval-2007 test sets, we use training data with the WordNet 2.1 sense inventory.",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 68,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "WSD All-Words Tasks",
"sec_num": "3.1"
},
{
"text": "In Table 4 , IMS (original) refers to the IMS system trained with the original training instances as reported in (Zhong and Ng, 2010) . We also compare our systems with two other configurations obtained from training IMS on SEMCOR, and SEM-COR plus DSO datasets. In Table 4 , these two settings are shown by IMS (SC) and IMS (SC+DSO), respectively. Finally, Rank 1 and Rank 2 are the top two participating systems in the respective allwords tasks.",
"cite_spans": [
{
"start": 113,
"end": 133,
"text": "(Zhong and Ng, 2010)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": null
},
{
"start": 266,
"end": 273,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "WSD All-Words Tasks",
"sec_num": "3.1"
},
{
"text": "As shown in Table 4 , our systems (both with and without the DSO corpus as training instances) perform competitively with and in some cases even better than the original IMS and also the best shared task submissions. This shows that our training set is of high quality and training a supervised WSD system using our training data achieves state-of-the-art results on the all-words tasks. Since the MUN dataset does not cover all target word types in the all-words shared tasks, the accuracy achieved with MUN alone is lower than the SC and SC+DSO settings. However, the evaluation results show that IMS trained on MUN alone often performs better than or is competitive with the WordNet Sense 1 baseline. Finally, it can be seen that adding the training instances from MUN (that is, IMS (MUN+SC) and IMS (MUN+SC+DSO)) often achieves higher accuracy than without MUN instances (IMS (SC) and IMS (SC+DSO)).",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "WSD All-Words Tasks",
"sec_num": "3.1"
},
{
"text": "In order to evaluate our system on a word sense induction task, we selected SemEval-2013 task 13, the latest WSI shared task. Unlike most other tasks that assume a single sense is sufficient for representing word senses, this task allows each instance to be associated with multiple sense labels with their applicability weights. This WSI task considers 50 lemmas, including 20 nouns, 20 verbs, and 10 adjectives, annotated with the WordNet 3.1 noun verb adjective adverb total MUN (before sampling) 649 190 319 0 1,158 MUN 649 190 319 0 1,158 MUN+SC 11,446 4,705 5,129 28 21,308 MUN+SC+DSO 11,446 4,705 5,129 28 21,308 sense inventory. We use WordNet 3.0 in our experiments on this task.",
"cite_spans": [],
"ref_spans": [
{
"start": 482,
"end": 632,
"text": "(before sampling) 649 190 319 0 1,158 MUN 649 190 319 0 1,158 MUN+SC 11,446 4,705 5,129 28 21,308 MUN+SC+DSO 11,446 4,705 5,129 28",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "SemEval-2013 Word Sense Induction Task",
"sec_num": "3.2"
},
{
"text": "We evaluated our system using all measures used in the shared task. The results are presented in Table 5 . The columns in this table denote the scores of the various systems according to the different evaluation metrics used in the WSI shared task, which are Jaccard Index, K sim \u03b4 , WNDCG, Fuzzy NMI, and Fuzzy B-Cubed. See (Jurgens and Klapaftis, 2013) for details of the evaluation metrics.",
"cite_spans": [
{
"start": 325,
"end": 354,
"text": "(Jurgens and Klapaftis, 2013)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 97,
"end": 104,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "SemEval-2013 Word Sense Induction Task",
"sec_num": "3.2"
},
{
"text": "This table also includes the top two systems in the shared task, AI-KU (Baskaya et al., 2013) and Unimelb (Lau et al., 2013) , as well as Wang-15 (Wang et al., 2015) . AI-KU uses a language model to find the most likely substitutes for a target word to represent the context. The clustering method used in AI-KU is K-means and the system gives good performance in the shared task. Unimelb relies on Hierarchical Dirichlet Process (Teh et al., 2006) to identify the sense of a target word using positional word features. Finally, Wang-15 uses Latent Dirichlet Allocation (LDA) (Blei et al., 2003) to model the word sense and topic jointly. This system obtains high scores, according to Fuzzy B-Cubed and Fuzzy NMI measures. The last three rows are some baseline systems: grouping all instances into one cluster, grouping each instance into a cluster of its own, and assigning the most frequent sense in SEM-COR to all instances. As shown in Table 5 , training IMS on our training data outperforms all other systems on three out of five evaluation metrics, and performs competitively on the remaining two metrics.",
"cite_spans": [
{
"start": 71,
"end": 93,
"text": "(Baskaya et al., 2013)",
"ref_id": "BIBREF1"
},
{
"start": 98,
"end": 124,
"text": "Unimelb (Lau et al., 2013)",
"ref_id": null
},
{
"start": 138,
"end": 165,
"text": "Wang-15 (Wang et al., 2015)",
"ref_id": null
},
{
"start": 430,
"end": 448,
"text": "(Teh et al., 2006)",
"ref_id": "BIBREF25"
},
{
"start": 576,
"end": 595,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 940,
"end": 947,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "SemEval-2013 Word Sense Induction Task",
"sec_num": "3.2"
},
{
"text": "IMS trained on MUN alone (IMS (MUN)) outperforms IMS (SC) and IMS (SC+DSO) in terms of the first three evaluation measures, and achieves comparable Fuzzy NMI and Fuzzy B-Cubed scores. Moreover, the evaluation results show that IMS (MUN) often performs better than the SEMCOR most frequent sense baseline. Finally, it can be observed that in most cases, adding training instances from MUN significantly improves IMS (SC) and IMS (SC+DSO).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SemEval-2013 Word Sense Induction Task",
"sec_num": "3.2"
},
{
"text": "One of the major problems in building supervised word sense disambiguation systems is the training data acquisition bottleneck. In this paper, we semi-automatically extracted and sense-tagged an English corpus containing one million sensetagged instances. This large sense-tagged corpus can be used for training any supervised WSD systems. We then evaluated the performance of IMS trained on our sense-tagged corpus in several WSD and WSI shared tasks. Our sense-tagged dataset has been released publicly 6 . We believe our dataset is the largest publicly available annotated dataset for WSD at present.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "After training a supervised WSD system using our training set, we evaluated the system using standard benchmarks. The evaluation results show that our sense-tagged corpus can be used to build a WSD system that performs competitively with the Table 5 : Supervised and unsupervised evaluation results (in %) on SemEval-2013 word sense induction task top performing WSD systems in the SensEval-2, SensEval-3, and SemEval-2007 fine-grained and coarse-grained all-words tasks, as well as the top systems in the SemEval-2013 WSI task.",
"cite_spans": [],
"ref_spans": [
{
"start": 242,
"end": 249,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "http://www.euromatrixplus.eu/multi-un 2 http://opus.lingfil.uu.se/MultiUN.php 3 http://www.cis.upenn.edu/\u223ctreebank/tokenization.html 4 http://opennlp.apache.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.comp.nus.edu.sg/\u223cnlp/corpora.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is supported by the Singapore National Research Foundation under its International Research Centre @ Singapore Funding Initiative and administered by the IDM Programme Office. We are also grateful to Christian Hadiwinoto and Benjamin Yap for assistance with performing the error analysis, and to the anonymous reviewers for their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Random walks for knowledge-based word sense disambiguation",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2014,
"venue": "Computational Linguistics",
"volume": "40",
"issue": "1",
"pages": "57--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Oier L\u00f3pez de Lacalle, and Aitor Soroa. 2014. Random walks for knowledge-based word sense disambiguation. Computational Linguistics, 40(1):57-84.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "AI-KU: using substitute vectors and co-occurrence modeling for word sense induction and disambiguation",
"authors": [
{
"first": "Osman",
"middle": [],
"last": "Baskaya",
"suffix": ""
},
{
"first": "Enis",
"middle": [],
"last": "Sert",
"suffix": ""
},
{
"first": "Volkan",
"middle": [],
"last": "Cirik",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "300--306",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Osman Baskaya, Enis Sert, Volkan Cirik, and Deniz Yuret. 2013. AI-KU: using substitute vectors and co-occurrence modeling for word sense induction and disambiguation. In Proceedings of the Sev- enth International Workshop on Semantic Evalua- tion (SemEval 2013), pages 300-306.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Latent Dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Ma- chine Learning Research, 3:993-1022.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Scaling up word sense disambiguation via parallel texts",
"authors": [
{
"first": "Yee",
"middle": [],
"last": "Seng Chan",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 20th National Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1037--1042",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yee Seng Chan and Hwee Tou Ng. 2005. Scaling up word sense disambiguation via parallel texts. In Proceedings of the 20th National Conference on Ar- tificial Intelligence, pages 1037-1042.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Word Sense Disambiguation: Scaling up, Domain Adaptation, and Application to Machine Translation",
"authors": [
{
"first": "Yee",
"middle": [],
"last": "Seng",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yee Seng Chan. 2008. Word Sense Disambiguation: Scaling up, Domain Adaptation, and Application to Machine Translation. Ph.D. thesis, National Univer- sity of Singapore.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Relieving the data acquisition bottleneck in word sense disambiguation",
"authors": [
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "303--310",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mona Diab. 2004. Relieving the data acquisition bot- tleneck in word sense disambiguation. In Proceed- ings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 303-310.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "SENSEVAL-2: Overview",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Edmonds",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Cotton",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Second International Workshop on Evaluating Word Sense Disambiguation Systems",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Edmonds and Scott Cotton. 2001. SENSEVAL- 2: Overview. In Proceedings of the Second Interna- tional Workshop on Evaluating Word Sense Disam- biguation Systems, pages 1-5.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "MultiUN: A multilingual corpus from United Nation documents",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Eisele",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "2868--2872",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Eisele and Yu Chen. 2010. MultiUN: A multilingual corpus from United Nation documents. In Proceedings of the Seventh International Confer- ence on Language Resources and Evaluation, pages 2868-2872.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Semeval-2013 task 13: Word sense induction for graded and non-graded senses",
"authors": [
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Klapaftis",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "290--299",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Jurgens and Ioannis Klapaftis. 2013. Semeval- 2013 task 13: Word sense induction for graded and non-graded senses. In Proceedings of the Sev- enth International Workshop on Semantic Evalua- tion (SemEval 2013), pages 290-299.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Semisupervised learning for word sense disambiguation: Quality vs. quantity",
"authors": [
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "Desislava",
"middle": [],
"last": "Zhekova",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "197--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandra K\u00fcbler and Desislava Zhekova. 2009. Semi- supervised learning for word sense disambiguation: Quality vs. quantity. In Proceedings of the Inter- national Conference on Recent Advances in Natural Language Processing, pages 197-202.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "unimelb: topic modelling-based word sense induction",
"authors": [
{
"first": "Jey Han",
"middle": [],
"last": "Lau",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "307--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jey Han Lau, Paul Cook, and Timothy Baldwin. 2013. unimelb: topic modelling-based word sense induc- tion. In Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 307-311.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Supervised word sense disambiguation with support vector machines and multiple knowledge sources",
"authors": [
{
"first": "Yoong Keok",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Tee Kiah",
"middle": [],
"last": "Chia",
"suffix": ""
}
],
"year": 2004,
"venue": "Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text",
"volume": "",
"issue": "",
"pages": "137--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoong Keok Lee, Hwee Tou Ng, and Tee Kiah Chia. 2004. Supervised word sense disambiguation with support vector machines and multiple knowledge sources. In Senseval-3: Third International Work- shop on the Evaluation of Systems for the Semantic Analysis of Text, pages 137-140.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A maximum entropy approach to Chinese word segmentation",
"authors": [
{
"first": "Jin",
"middle": [
"Kiat"
],
"last": "Low",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Wenyuan",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "161--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin Kiat Low, Hwee Tou Ng, and Wenyuan Guo. 2005. A maximum entropy approach to Chinese word seg- mentation. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, pages 161-164.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A semantic concordance",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "Randee",
"middle": [],
"last": "Tengi",
"suffix": ""
},
{
"first": "Ross",
"middle": [
"T"
],
"last": "Bunker",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the Workshop on Human Language Technology",
"volume": "",
"issue": "",
"pages": "303--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller, Claudia Leacock, Randee Tengi, and Ross T. Bunker. 1993. A semantic concordance. In Proceedings of the Workshop on Human Language Technology, pages 303-308.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "WordNet: a lexical database for English",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller. 1995. WordNet: a lexical database for English. Communications of the ACM, 38(11):39-41.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Entity linking meets word sense disambiguation: a unified approach",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Moro",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Raganato",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "231--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Moro, Alessandro Raganato, and Roberto Nav- igli. 2014. Entity linking meets word sense disam- biguation: a unified approach. Transactions of the Association for Computational Linguistics, 2:231- 244.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Semeval-2007 task 07: Coarsegrained English all-words task",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [
"C"
],
"last": "Litkowski",
"suffix": ""
},
{
"first": "Orin",
"middle": [],
"last": "Hargraves",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)",
"volume": "",
"issue": "",
"pages": "30--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli, Kenneth C. Litkowski, and Orin Har- graves. 2007. Semeval-2007 task 07: Coarse- grained English all-words task. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 30-35.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Word sense disambiguation: A survey",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM Computing Surveys",
"volume": "41",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Computing Surveys, 41(2):10:1- 10:69.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Integrating multiple knowledge sources to disambiguate word sense: An exemplar-based approach",
"authors": [
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Hian Beng",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "40--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hwee Tou Ng and Hian Beng Lee. 1996. Integrating multiple knowledge sources to disambiguate word sense: An exemplar-based approach. In Proceed- ings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 40-47.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Exploiting parallel texts for word sense disambiguation: An empirical study",
"authors": [
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yee Seng",
"middle": [],
"last": "Chan",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "455--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hwee Tou Ng, Bin Wang, and Yee Seng Chan. 2003. Exploiting parallel texts for word sense disambigua- tion: An empirical study. In Proceedings of the 41st Annual Meeting of the Association for Compu- tational Linguistics, pages 455-462.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Improved statistical alignment models",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "440--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. In Proceedings of the 38th Annual Meeting of the Association for Compu- tational Linguistics, pages 440-447.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The MASC word sense sentence corpus",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Passonneau",
"suffix": ""
},
{
"first": "Collin",
"middle": [],
"last": "Baker",
"suffix": ""
},
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
},
{
"first": "Nancy",
"middle": [],
"last": "Ide",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "3025--3030",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecca Passonneau, Collin Baker, Christiane Fell- baum, and Nancy Ide. 2012. The MASC word sense sentence corpus. In Proceedings of the Eighth In- ternational Conference on Language Resources and Evaluation, pages 3025-3030.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Knowledge-rich word sense disambiguation rivaling supervised systems",
"authors": [
{
"first": "Simone Paolo",
"middle": [],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1522--1531",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simone Paolo Ponzetto and Roberto Navigli. 2010. Knowledge-rich word sense disambiguation rivaling supervised systems. In Proceedings of the 48th An- nual Meeting of the Association for Computational Linguistics, pages 1522-1531.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Semeval-2007 task-17: English lexical sample, SRL and all words",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
},
{
"first": "Dmitriy",
"middle": [],
"last": "Dligach",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations",
"volume": "",
"issue": "",
"pages": "87--92",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Edward Loper, Dmitriy Dligach, and Martha Palmer. 2007. Semeval-2007 task-17: En- glish lexical sample, SRL and all words. In Proceed- ings of the Fourth International Workshop on Se- mantic Evaluations (SemEval-2007), pages 87-92.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The English all-words task",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Snyder",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text",
"volume": "",
"issue": "",
"pages": "41--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Snyder and Martha Palmer. 2004. The En- glish all-words task. In Proceedings of the Third In- ternational Workshop on the Evaluation of Systems for the Semantic Analysis of Text, pages 41-43.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Hierarchical Dirichlet processes",
"authors": [
{
"first": "Yee Whye",
"middle": [],
"last": "Teh",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"J"
],
"last": "Beal",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of the American Statistical Association",
"volume": "101",
"issue": "476",
"pages": "1566--1581",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yee Whye Teh, Michael I Jordan, Matthew J Beal, and David M Blei. 2006. Hierarchical Dirichlet pro- cesses. Journal of the American Statistical Associa- tion, 101(476):1566-1581.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A sense-topic model for word sense induction with unsupervised data enrichment",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Brian",
"middle": [
"D"
],
"last": "Ziebart",
"suffix": ""
},
{
"first": "Clement",
"middle": [
"T"
],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "59--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing Wang, Mohit Bansal, Kevin Gimpel, Brian D. Ziebart, and Clement T. Yu. 2015. A sense-topic model for word sense induction with unsupervised data enrichment. Transactions of the Association for Computational Linguistics, 3:59-71.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Word sense disambiguation for all words without hard labor",
"authors": [
{
"first": "Zhi",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1616--1621",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhi Zhong and Hwee Tou Ng. 2009. Word sense dis- ambiguation for all words without hard labor. In Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence, pages 1616- 1621.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "It Makes Sense: A wide-coverage word sense disambiguation system for free text",
"authors": [
{
"first": "Zhi",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics System Demonstrations",
"volume": "",
"issue": "",
"pages": "78--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhi Zhong and Hwee Tou Ng. 2010. It Makes Sense: A wide-coverage word sense disambiguation system for free text. In Proceedings of the 48th Annual Meeting of the Association for Computational Lin- guistics System Demonstrations, pages 78-83.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td/><td>Avg. # samples</td></tr><tr><td/><td>per word type</td></tr><tr><td>MUN (before sampling)</td><td>19,837.6</td></tr><tr><td>MUN</td><td>852.5</td></tr><tr><td>MUN+SC</td><td>55.4</td></tr><tr><td>MUN+SC+DSO</td><td>63.7</td></tr><tr><td colspan=\"2\">Table 3: Average number of samples per word</td></tr><tr><td>type (WordNet 1.7.1)</td><td/></tr></table>",
"text": "and the DSO cor-",
"num": null,
"type_str": "table",
"html": null
},
"TABREF1": {
"content": "<table><tr><td/><td colspan=\"5\">: Number of word types in each part-of-speech (WordNet 1.7.1)</td><td/></tr><tr><td/><td/><td colspan=\"3\">number of training samples</td><td/><td/></tr><tr><td/><td>noun</td><td>verb</td><td colspan=\"2\">adjective adverb</td><td>total</td><td>size</td></tr><tr><td colspan=\"4\">MUN (before sampling) 14,492,639 4,400,813 4,078,543</td><td>0</td><td colspan=\"2\">22,971,995 17.7 GB</td></tr><tr><td>MUN</td><td>503,408</td><td>265,785</td><td>218,046</td><td>0</td><td>987,239</td><td>745 MB</td></tr><tr><td>MUN+SC</td><td>582,028</td><td>341,141</td><td>251,362</td><td>6,207</td><td colspan=\"2\">1,180,738 872 MB</td></tr><tr><td>MUN+SC+DSO</td><td>687,871</td><td>412,482</td><td>251,362</td><td>6,207</td><td colspan=\"2\">1,357,922 939 MB</td></tr></table>",
"text": "",
"num": null,
"type_str": "table",
"html": null
},
"TABREF2": {
"content": "<table/>",
"text": "Number of training samples in each part-of-speech (WordNet 1.7.1). The size column shows the total size of each dataset in megabytes or gigabytes.",
"num": null,
"type_str": "table",
"html": null
}
}
}
}