{
"paper_id": "D14-1046",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:55:45.881169Z"
},
"title": "Automatic Domain Assignment for Word Sense Alignment",
"authors": [
{
"first": "Tommaso",
"middle": [],
"last": "Caselli",
"suffix": "",
"affiliation": {},
"email": "t.caselli@gmail.com"
},
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper reports on the development of a simple hybrid method based on a machine learning classifier (Naive Bayes), Word Sense Disambiguation and rules for the automatic assignment of WordNet Domains to nominal entries of a lexicographic dictionary, the Senso Comune De Mauro Lexicon. The system obtained an F1 score of 0.58, with a Precision of 0.70. We further used the automatically assigned domains to filter word sense alignments between MultiWordNet and Senso Comune. This improved the quality of the sense alignments, showing the validity of the approach for domain assignment and the importance of domain information for achieving good sense alignments.",
"pdf_parse": {
"paper_id": "D14-1046",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper reports on the development of a simple hybrid method based on a machine learning classifier (Naive Bayes), Word Sense Disambiguation and rules for the automatic assignment of WordNet Domains to nominal entries of a lexicographic dictionary, the Senso Comune De Mauro Lexicon. The system obtained an F1 score of 0.58, with a Precision of 0.70. We further used the automatically assigned domains to filter word sense alignments between MultiWordNet and Senso Comune. This improved the quality of the sense alignments, showing the validity of the approach for domain assignment and the importance of domain information for achieving good sense alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Lexical knowledge, i.e. how words are used and express meaning, plays a key role in Natural Language Processing. Lexical knowledge is available in many different forms, ranging from unstructured terminologies (i.e. word lists) to full-fledged computational lexica and ontologies (e.g. WordNet (Fellbaum, 1998)). Creating lexical resources is costly in terms of both money and time. To overcome these limits, semi-automatic approaches have been developed (e.g. MultiWordNet (Pianta et al., 2002)) with different levels of success. Furthermore, important information is scattered across different resources and is difficult to use. Semantic interoperability between resources could represent a viable solution, allowing reusability and the development of more robust and powerful resources. Word sense alignment (WSA) qualifies as the preliminary requirement for achieving this goal (Matuschek and Gurevych, 2013). WSA aims at creating lists of pairs of senses from two or more (lexical-semantic) resources which denote the same meaning. Different approaches to WSA have been proposed, and they all share some common elements, namely: i.) the extensive use of sense descriptions of the words (e.g. WordNet glosses); and ii.) the extension of the basic sense descriptions with additional information such as hypernyms, synonyms and domain or category labels.",
"cite_spans": [
{
"start": 293,
"end": 309,
"text": "(Fellbaum, 1998)",
"ref_id": "BIBREF4"
},
{
"start": 492,
"end": 513,
"text": "(Pianta et al., 2002)",
"ref_id": "BIBREF14"
},
{
"start": 882,
"end": 912,
"text": "(Matuschek and Gurevych, 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Problem Statement",
"sec_num": "1"
},
{
"text": "The purpose of this work is twofold: first, we experiment with the automatic assignment of domain labels to sense descriptions; then, we evaluate the impact of this information for improving an existing sense-aligned dataset for nouns. Previous work has demonstrated that domain labels are a good feature for obtaining high-quality alignments of entries (Navigli, 2006; Toral et al., 2009; Navigli and Ponzetto, 2012). The WordNet (WN) Domains (Magnini and Cavaglia, 2000; Bentivogli et al., 2004) have been selected as reference domain labels. As candidate lexico-semantic resources to be aligned, we use two Italian lexica, namely MultiWordNet (MWN) and the Senso Comune De Mauro Lexicon (SCDM) (Vetere et al., 2011). The two resources differ in terms of modeling: the former, MWN, is an Italian version of WN obtained through the \"expand model\" (Vossen, 1996) and perfectly aligned to Princeton WN 1.6, while the latter, SCDM, is a machine-readable dictionary obtained from a paper-based reference lexicographic dictionary, the De Mauro GRADIT. Major issues for WSA of the lexica concern the following aspects:",
"cite_spans": [
{
"start": 357,
"end": 372,
"text": "(Navigli, 2006;",
"ref_id": "BIBREF12"
},
{
"start": 373,
"end": 392,
"text": "Toral et al., 2009;",
"ref_id": "BIBREF18"
},
{
"start": 393,
"end": 420,
"text": "Navigli and Ponzetto, 2012)",
"ref_id": "BIBREF11"
},
{
"start": 449,
"end": 477,
"text": "(Magnini and Cavaglia, 2000;",
"ref_id": "BIBREF5"
},
{
"start": 478,
"end": 502,
"text": "Bentivogli et al., 2004)",
"ref_id": "BIBREF2"
},
{
"start": 708,
"end": 729,
"text": "(Vetere et al., 2011)",
"ref_id": "BIBREF19"
},
{
"start": 864,
"end": 878,
"text": "(Vossen, 1996)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Problem Statement",
"sec_num": "1"
},
{
"text": "\u2022 SCDM has no structure of word senses (i.e. no taxonomy, no synonymy relations, no distinction between core senses and subsenses for polysemous entries), unlike MWN;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Problem Statement",
"sec_num": "1"
},
{
"text": "\u2022 SCDM has no domain or category labels associated with senses (with the exception of specific terminological entries), unlike MWN;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Problem Statement",
"sec_num": "1"
},
{
"text": "\u2022 the Italian section of MWN has Italian glosses for only 2,481 of its 28,517 noun synsets (i.e. 8.7%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Problem Statement",
"sec_num": "1"
},
{
"text": "The remainder of this paper is organized as follows: Section 2 reports on the methodology and experiments for the automatic assignment of WN Domains to the SCDM entries. Section 3 describes the dataset used for the evaluation of the WSA experiments and the use of WN Domains for filtering the sense alignments. Finally, Section 4 presents conclusions and future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Problem Statement",
"sec_num": "1"
},
{
"text": "The WN Domains consist of a set of 166 hierarchically organized labels which have been associated with each synset 1 and express a subject field label (e.g. SPORT, MEDICINE). A special label, FACTOTUM, has been used for those synsets which can appear in almost all subject fields. The assignment of a domain label to the nominal entries in the SCDM Lexicon is based on the \"One Domain per Discourse\" (ODD) hypothesis applied to the sense descriptions. We have used a reduced set of domain labels (45 normalized domains) following (Magnini et al., 2001).",
"cite_spans": [
{
"start": 530,
"end": 551,
"text": "(Magnini et al., 2001",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Experiments",
"sec_num": "2"
},
{
"text": "To assign a WN Domain label to the SCDM entries, we have developed a hybrid method: first, a binary classifier is applied to the SCDM sense descriptions to discriminate between two domain values, FACTOTUM and OTHER, where the OTHER value includes all remaining 44 normalized domains. After this, all entries classified with the OTHER value are analyzed by a rule-based system and associated with a specific domain label (i.e. SPORT, MEDICINE, FOOD . . . ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Experiments",
"sec_num": "2"
},
{
"text": "We have developed a training set by manually aligning noun senses between the two lexica. The sense alignment allows us to associate all the information of a synset to a corresponding entry in the SCDM lexicon, including the WN Domain label. Concerning the test set, we have used an existing dataset of aligned noun pairs as in (Caselli et al., 2014). We summarize the datasets in Table 1. In order for the classifier to predict the binary domain labels (FACTOTUM and OTHER), each sense description of the SCDM Lexicon has been represented by means of a two-dimensional feature vector (e.g. for training data: BINARY DOMAIN LABEL GENERIC:val SPECIFIC:val). Feature values have been obtained through two strategies:",
"cite_spans": [
{
"start": 328,
"end": 350,
"text": "(Caselli et al., 2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 366,
"end": 373,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Classifier and feature selection",
"sec_num": "2.1"
},
{
"text": "\u2022 lemma label: we extract from MWN all normalized domain labels associated with each sense of each lemma in the sense description. The value of the feature GENERIC corresponds to the sum of the FACTOTUM labels. The value of the feature SPECIFIC corresponds to the sum of all other specific domain labels (e.g. MEDICINE, SPORT etc.) after they have been collapsed into a single value (i.e. NOT-FACTOTUM).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier and feature selection",
"sec_num": "2.1"
},
{
"text": "\u2022 word sense label: for each sense description, we have first performed Word Sense Disambiguation by means of a version of the UKB package 2 adapted to Italian (Agirre et al., 2010; Agirre et al., 2014) 3 . Only the highest-ranked synset, and the associated WN Domain(s), was retained as good.",
"cite_spans": [
{
"start": 161,
"end": 182,
"text": "(Agirre et al., 2010;",
"ref_id": "BIBREF0"
},
{
"start": 183,
"end": 203,
"text": "Agirre et al., 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier and feature selection",
"sec_num": "2.1"
},
{
"text": "Similarly to the lemma label strategy, the sum of the domain label FACTOTUM is assigned to the feature GENERIC, while the sum of all other domain labels collapsed into the single value NOT-FACTOTUM is assigned to the feature SPECIFIC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier and feature selection",
"sec_num": "2.1"
},
{
"text": "We experimented with two classifiers: Naive Bayes and Maximum Entropy, as implemented in the MALLET package (McCallum, 2002) . We illustrate the results in Table 2 . The classifiers have been evaluated with respect to standard Precision (P), Recall (R) and F1 against the test set. Ten-fold cross-validation has been performed on the training set as well. Classifiers trained with the first strategy are associated with the label lemma, while those trained with the second strategy with the label wsd. Both classifiers obtain good results on the test data in terms of Precision and Recall. The Naive Bayes classifier outperforms the Maximum Entropy one in both training approaches, suggesting better generalization capabilities even in the presence of a small training set and basic features. WSD has a positive impact, notably for the Maximum Entropy classifier (Precision +4 points, Recall +5 points with respect to the lemma label). Although such a positive effect of WSD does not emerge for the Naive Bayes classifier on the test set, we can still observe an improvement in the ten-fold cross-validation (F1=0.69 vs. F1=0.66). We finally selected the predictions of the Naive Bayes wsd classifier as input to the rule-based system, as it provides the highest scores.",
"cite_spans": [
{
"start": 108,
"end": 124,
"text": "(McCallum, 2002)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 156,
"end": 163,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Classifier and feature selection",
"sec_num": "2.1"
},
{
"text": "The rule based classifier for final WN Domain assignment works as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rules for WN Domain assignment",
"sec_num": "2.2"
},
{
"text": "\u2022 lemmatized and word sense disambiguated lemmas in the sense descriptions are associated with the corresponding WN Domains from MWN;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rules for WN Domain assignment",
"sec_num": "2.2"
},
{
"text": "\u2022 frequency counts over the WN Domain labels are computed; the most frequent WN Domain is assigned as the correct WN Domain of the nominal entry;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rules for WN Domain assignment",
"sec_num": "2.2"
},
{
"text": "\u2022 in case two or more WN Domains have the same frequency, the following assignment strategy is applied: if the frequency scores of the WN Domains are equal to 1, the value FACTOTUM is selected; on the contrary, if the frequency score is higher than 1, all WN Domain labels are retained as good.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rules for WN Domain assignment",
"sec_num": "2.2"
},
{
"text": "We report the results on final domain assignment in Table 3 . The final system, NaiveBayes+Rules, has been compared to two baselines. Both baselines apply frequency counts over the WN Domain labels of the lemmas in the sense descriptions for the entire set of 45 normalized domain values, including the FACTOTUM label, as explained in Section 2. The Baseline lemma assigns the domain by taking into account every WN Domain associated with each lemma. On the other hand, the Baseline wsd selects only the WN Domain of sense-disambiguated lemmas. WSD for the second baseline has been performed by applying the same method described in Section 2.1. The results of both baselines have high values for Precision (0.58 for Baseline lemma , 0.70 for Baseline wsd ). We consider this further support for the validity of the ODD hypothesis, which seems to hold even for text descriptions like dictionary glosses, which normally use generic lexical items to illustrate word senses. It is also interesting to note that WSD on its own has a positive impact in the Baseline wsd system for the assignment of specific domain labels (F1=0.53). The hybrid system performs better than both baselines in terms of F1 scores (F1=0.58 vs. F1=0.45 for Baseline lemma and F1=0.53 for Baseline wsd ). However, both the hybrid system and the Baseline wsd obtain the same Precision. To better evaluate the performance of our hybrid approach, we computed the paired t-test. The results of the hybrid system are statistically significant with respect to the Baseline lemma (p < 0.05), and for Recall only when compared to the Baseline wsd . To further analyze the difference between the hybrid system and the Baseline wsd , we performed an error analysis on their outputs. We found that the hybrid system is more accurate than the baseline in the prediction of the FACTOTUM class (Table 3 : Results of WN Domain Assignment over the SCDM entries. Statistical significance of the NaiveBayes+Rules system has been marked with a \u2020 for the Baseline lemma and with a * for the Baseline wsd ). In particular, the accuracy of the hybrid system on this class is 79%, while that of the baseline is only 65%. In addition to this, the hybrid system provides better results in terms of Recall (R=0.50 vs. R=0.43). Although comparable, the hybrid system provides more accurate results with respect to the baseline.",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 59,
"text": "Table 3",
"ref_id": null
},
{
"start": 1829,
"end": 1836,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Rules for WN Domain assignment",
"sec_num": "2.2"
},
{
"text": "This section reports on the experiments for improving an existing WSA for nouns between SCDM and MWN. In this work we have used the same dataset and alignment methods as in (Caselli et al., 2014) , briefly described here:",
"cite_spans": [
{
"start": 170,
"end": 192,
"text": "(Caselli et al., 2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Filtering for WSA",
"sec_num": "3"
},
{
"text": "\u2022 Lexical Match: for each word w and for each sense s in the given resources R \u2208 {MWN, SCDM}, we constructed a sense description d R (s) as a bag of words in Italian. The alignment is based on counting the number of overlapping tokens between the two strings, normalized by the length of the strings;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Filtering for WSA",
"sec_num": "3"
},
{
"text": "\u2022 Cosine Similarity: we used the Personalized Page Rank (PPR) algorithm (Agirre et al., 2010) with WN 3.0 as knowledge base extended with the \"Princeton Annotated Gloss Corpus\". Once the PPR vector pairs are obtained, the alignment is obtained on the basis of the cosine score for each pair 4 .",
"cite_spans": [
{
"start": 72,
"end": 93,
"text": "(Agirre et al., 2010)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Filtering for WSA",
"sec_num": "3"
},
{
"text": "The dataset consists of 166 pairs of aligned senses from MWN and SCDM for 46 nominal lemmas (see also column \"Test set\" in Table 1 ). Overall, SCDM covers 53.71%. The main difference with respect to (Caselli et al., 2014) is that the proposed alignments have been additionally filtered on the basis of the output of the WN Domain system (NaiveBayes wsd +Rules). In particular, for each aligned pair which was considered as good in (Caselli et al., 2014) , we have applied a further filtering based on the WN Domain system results as follows: if two senses are aligned but do not have the same domain, they are excluded from the WSA results; otherwise, they are retained. We report the results of the WSA approaches with domain filters, with the results from (Caselli et al., 2014) in brackets. The filtering based on WN Domains has a strong impact on Precision and contributes to increasing the quality of the aligned senses. Although Recall generally decreases, the increase in Precision will reduce the manual post-processing effort needed to fully align the two resources 5 . Furthermore, it is interesting to note that, when merging the results of the pre-filtered alignments from the two alignment approaches (LexicalMatch+Cosine > 0.1 and LexicalMatch+Cosine > 0.2), we still obtain a very high Precision (> 0.70) and an increase in Recall (> 0.40) with respect to the results of each approach. Finally, we want to point out that what was reported as the best alignment result in (Caselli et al., 2014) , namely LexicalMatch+Cosine > 0.2, can be obtained, at least for Precision, with a lower filtering cut-off threshold on the Cosine Similarity approach (i.e. a cut-off threshold at or higher than 0.1).",
"cite_spans": [
{
"start": 196,
"end": 218,
"text": "(Caselli et al., 2014)",
"ref_id": "BIBREF3"
},
{
"start": 428,
"end": 450,
"text": "(Caselli et al., 2014)",
"ref_id": "BIBREF3"
},
{
"start": 761,
"end": 783,
"text": "(Caselli et al., 2014)",
"ref_id": "BIBREF3"
},
{
"start": 1529,
"end": 1551,
"text": "(Caselli et al., 2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 123,
"end": 130,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Domain Filtering for WSA",
"sec_num": "3"
},
{
"text": "This work describes a hybrid approach based on a Naive Bayes classifier, Word Sense Disambiguation and rules for assigning WN Domains to nominal sense descriptions of a lexicographic dictionary, the Senso Comune De Mauro Lexicon. The assignment of domain labels has been used to improve WSA results on nouns between the Senso Comune Lexicon and MultiWordNet. The results support some observations, namely: i.) domain filtering plays an important role in WSA, namely as a strategy to exclude wrong alignments (false positives) and improve the quality of the aligned pairs; ii.) the method we have proposed is a viable approach for automatically enriching existing lexical resources in a reliable way; and iii.) the ODD hypothesis also applies to sense descriptions. An advantage of our approach is its simplicity. We have used features based on frequency counts and obtained good results, with a Precision of 0.70 for automatic WN Domain assignment. Nevertheless, an important role is played by Word Sense Disambiguation. The use of domain labels obtained from sense-disambiguated lemmas improves both the results of the classifier and those of the rules 5 (the F1 of 0.64 in (Caselli et al., 2014) is obtained with a Precision of 0.67, suggesting that some alignments are false positives). The absence of statistical significance with respect to the Baseline wsd is not to be considered a negative result. As the error analysis has shown, the classifier mostly contributes to the identification of the FACTOTUM value, which tends to be overestimated even with sense-disambiguated lemmas, and to Recall. We are planning to extend this work to include domain clusters to improve the domain assignment results, namely in terms of Recall.",
"cite_spans": [
{
"start": 1140,
"end": 1141,
"text": "5",
"ref_id": null
},
{
"start": 1160,
"end": 1182,
"text": "(Caselli et al., 2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "4"
},
{
"text": "The full set of labels and hierarchy is available at http://wndomains.fbk.eu/hierarchy.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available at http://ixa2.si.ehu.es/ukb/ 3 We used the WN Multilingual Central Repository as knowledge base and the MWN entries as dictionary",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The vectors for the SCDM entries were obtained by first applying the Google Translate API to get the English translations, and then PPR over WN 3.0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "One of the authors wants to thank the Vrije Universiteit Amsterdam for sponsoring the attendance of the EMNLP conference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Exploring knowledge bases for similarity",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Montse",
"middle": [],
"last": "Cuadros",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Soroa",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Montse Cuadros, German Rigau, and Aitor Soroa. 2010. Exploring knowledge bases for similarity. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis, Mike Ros- ner, and Daniel Tapias, editors, Proceedings of the Seventh International Conference on Language Re- sources and Evaluation (LREC'10), Valletta, Malta, may. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Random walks for knowledge-based word sense disambiguation",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Oier",
"middle": [],
"last": "L\u00f3pez de Lacalle",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Soroa",
"suffix": ""
}
],
"year": 2014,
"venue": "Computational Linguistics",
"volume": "40",
"issue": "1",
"pages": "57--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Oier L\u00f3pez de Lacalle, and Aitor Soroa. 2014. Random walks for knowledge-based word sense disambiguation. Computational Linguistics, 40(1):57-84.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Revising the wordnet domains hierarchy: semantics, coverage and balancing",
"authors": [
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Pamela",
"middle": [],
"last": "Forner",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
},
{
"first": "Emanuele",
"middle": [],
"last": "Pianta",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Workshop on Multilingual Linguistic Resources",
"volume": "",
"issue": "",
"pages": "101--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luisa Bentivogli, Pamela Forner, Bernardo Magnini, and Emanuele Pianta. 2004. Revising the wordnet domains hierarchy: semantics, coverage and balanc- ing. In Proceedings of the Workshop on Multilin- gual Linguistic Resources, pages 101-108. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Aligning an italianwordnet with a lexicographic dictionary: Coping with limited data",
"authors": [
{
"first": "Tommaso",
"middle": [],
"last": "Caselli",
"suffix": ""
},
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": ""
},
{
"first": "Laure",
"middle": [],
"last": "Vieu",
"suffix": ""
},
{
"first": "Guido",
"middle": [],
"last": "Vetere",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Seventh Global WordNet Conference",
"volume": "",
"issue": "",
"pages": "290--298",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tommaso Caselli, Carlo Strapparava, Laure Vieu, and Guido Vetere. 2014. Aligning an italianwordnet with a lexicographic dictionary: Coping with limited data. In Proceedings of the Seventh Global WordNet Conference, pages 290-298.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "WordNet: An Electronic Lexical Database (Language, Speech, and Communication)",
"authors": [
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database (Language, Speech, and Commu- nication). MIT Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Integrating subject field codes into wordnet",
"authors": [
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
},
{
"first": "Gabriela",
"middle": [],
"last": "Cavaglia",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the conference on International Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernardo Magnini and Gabriela Cavaglia. 2000. Inte- grating subject field codes into wordnet. In Proceed- ings of the conference on International Language Resources and Evaluation (LREC 2000).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Using domain information for word sense disambiguation",
"authors": [
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
},
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Pezzulo",
"suffix": ""
},
{
"first": "Alfio",
"middle": [],
"last": "Gliozzo",
"suffix": ""
}
],
"year": 2001,
"venue": "The Proceedings of the Second International Workshop on Evaluating Word Sense Disambiguation Systems",
"volume": "",
"issue": "",
"pages": "111--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernardo Magnini, Carlo Strapparava, Giovanni Pez- zulo, and Alfio Gliozzo. 2001. Using domain in- formation for word sense disambiguation. In The Proceedings of the Second International Workshop on Evaluating Word Sense Disambiguation Systems, pages 111-114. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A graph-based approach to word sense alignment. Transactions of the Association for Computational Linguistics (TACL)",
"authors": [
{
"first": "",
"middle": [],
"last": "Dijkstra-Wsa",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dijkstra-wsa: A graph-based approach to word sense alignment. Transactions of the Association for Com- putational Linguistics (TACL), 2:to appear.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Mallet: A machine learning for language toolkit",
"authors": [
{
"first": "Andrew Kachites",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Kachites McCallum. 2002. Mallet: A ma- chine learning for language toolkit.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Using Wikipedia for automatic word sense disambiguation",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2007,
"venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea. 2007. Using Wikipedia for automatic word sense disambiguation. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computa- tional Linguistics; Proceedings of the Main Confer- ence, Rochester, New York.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
}
],
"year": 2012,
"venue": "Artificial Intelligence",
"volume": "193",
"issue": "",
"pages": "217--250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual se- mantic network. Artificial Intelligence, 193:217- 250.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Meaningful clustering of senses helps boost word sense disambiguation performance",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 44 th Annual Meeting of the Association for Computational Linguistics joint with the 21 st International Conference on Computational Linguistics (COLING-ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli. 2006. Meaningful clustering of senses helps boost word sense disambiguation per- formance. In Proceedings of the 44 th Annual Meet- ing of the Association for Computational Linguis- tics joint with the 21 st International Conference on Computational Linguistics (COLING-ACL), Sydney, Australia.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The peoples web meets linguistic knowledge: Automatic sense alignment of Wikipedia and WordNet",
"authors": [
{
"first": "Elisabeth",
"middle": [],
"last": "Niemann",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 9th International Conference on Computational Semantics",
"volume": "",
"issue": "",
"pages": "205--214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elisabeth Niemann and Iryna Gurevych. 2011. The people's web meets linguistic knowledge: Automatic sense alignment of Wikipedia and WordNet. In Proceedings of the 9th International Conference on Computational Semantics, pages 205-214, Singapore, January.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "MultiWordNet: developing an aligned multilingual database",
"authors": [
{
"first": "Emanuele",
"middle": [],
"last": "Pianta",
"suffix": ""
},
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Cristian",
"middle": [],
"last": "Girardi",
"suffix": ""
}
],
"year": 2002,
"venue": "First International Conference on Global WordNet",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emanuele Pianta, Luisa Bentivogli, and Cristian Girardi. 2002. MultiWordNet: developing an aligned multilingual database. In First International Conference on Global WordNet, Mysore, India.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Disambiguating bilingual nominal entries against WordNet",
"authors": [
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of workshop The Computational Lexicon, 7th European Summer School in Logic, Language and Information",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "German Rigau and Eneko Agirre. 1995. Disambiguating bilingual nominal entries against WordNet. In Proceedings of workshop The Computational Lexicon, 7th European Summer School in Logic, Language and Information, Barcelona, Spain.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Mapping concrete entities from PAROLE-SIMPLE-CLIPS to ItalWordNet: Methodology and results",
"authors": [
{
"first": "Adriana",
"middle": [],
"last": "Roventini",
"suffix": ""
},
{
"first": "Nilda",
"middle": [],
"last": "Ruimy",
"suffix": ""
},
{
"first": "Rita",
"middle": [],
"last": "Marinelli",
"suffix": ""
},
{
"first": "Marisa",
"middle": [],
"last": "Ulivieri",
"suffix": ""
},
{
"first": "Michele",
"middle": [],
"last": "Mammini",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adriana Roventini, Nilda Ruimy, Rita Marinelli, Marisa Ulivieri, and Michele Mammini. 2007. Mapping concrete entities from PAROLE-SIMPLE-CLIPS to ItalWordNet: Methodology and results. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, Prague, Czech Republic, June.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic assignment of Wikipedia encyclopedic entries to WordNet synsets",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Ruiz-Casado",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Alfonseca",
"suffix": ""
},
{
"first": "Pablo",
"middle": [],
"last": "Castells",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Third international conference on Advances in Web Intelligence, AWIC'05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Ruiz-Casado, Enrique Alfonseca, and Pablo Castells. 2005. Automatic assignment of Wikipedia encyclopedic entries to WordNet synsets. In Proceedings of the Third International Conference on Advances in Web Intelligence, AWIC'05, Berlin, Heidelberg. Springer-Verlag.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A study on linking and disambiguating Wikipedia categories to WordNet using text similarity",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Toral",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "Ferr\u00e1ndez",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Rafael",
"middle": [],
"last": "Munoz",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonio Toral, Oscar Ferr\u00e1ndez, Eneko Agirre, and Rafael Munoz. 2009. A study on linking and disambiguating Wikipedia categories to WordNet using text similarity. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2009).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Senso Comune, an open knowledge base for Italian",
"authors": [
{
"first": "Guido",
"middle": [],
"last": "Vetere",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Oltramari",
"suffix": ""
},
{
"first": "Isabella",
"middle": [],
"last": "Chiari",
"suffix": ""
},
{
"first": "Elisabetta",
"middle": [],
"last": "Jezek",
"suffix": ""
},
{
"first": "Laure",
"middle": [],
"last": "Vieu",
"suffix": ""
},
{
"first": "Fabio",
"middle": [
"Massimo"
],
"last": "Zanzotto",
"suffix": ""
}
],
"year": 2011,
"venue": "Traitement Automatique des Langues",
"volume": "53",
"issue": "3",
"pages": "217--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guido Vetere, Alessandro Oltramari, Isabella Chiari, Elisabetta Jezek, Laure Vieu, and Fabio Massimo Zanzotto. 2011. Senso Comune, an open knowledge base for Italian. Traitement Automatique des Langues, 53(3):217-243.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Right or wrong: Combining lexical resources in the EuroWordNet project",
"authors": [
{
"first": "Piek",
"middle": [],
"last": "Vossen",
"suffix": ""
}
],
"year": 1996,
"venue": "Euralex",
"volume": "96",
"issue": "",
"pages": "715--728",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piek Vossen. 1996. Right or wrong: Combining lexical resources in the EuroWordNet project. In Euralex, volume 96, pages 715-728.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"content": "<table/>",
"text": "Results for the Naive Bayes and Maximum Entropy binary classifiers.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF3": {
"content": "<table/>",
"text": "Training and test sets for the classifier.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF5": {
"content": "<table><tr><td>illustrates</td></tr></table>",
"text": "LexicalMatch+Cosine > 0.2 0.77 (0.67) 0.37 (0.61) 0.50 (0.64)",
"type_str": "table",
"num": null,
"html": null
},
"TABREF6": {
"content": "<table/>",
"text": "Results for WSA of nouns with domain filtering.",
"type_str": "table",
"num": null,
"html": null
}
}
}
}