{
"paper_id": "L18-1007",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:32:23.299416Z"
},
"title": "Cross-Lingual Generation and Evaluation of a Wide-Coverage Lexical Semantic Resource",
"authors": [
{
"first": "Attila",
"middle": [],
"last": "Nov\u00e1k",
"suffix": "",
"affiliation": {
"laboratory": "P\u00e1zm\u00e1ny P\u00e9ter Catholic University Faculty of Information Technology and Bionics MTA-PPKE Hungarian Language Technology Research Group",
"institution": "",
"location": {}
},
"email": "novak.attila@itk.ppke.hu"
},
{
"first": "Borb\u00e1la",
"middle": [],
"last": "Nov\u00e1k",
"suffix": "",
"affiliation": {
"laboratory": "P\u00e1zm\u00e1ny P\u00e9ter Catholic University Faculty of Information Technology and Bionics MTA-PPKE Hungarian Language Technology Research Group",
"institution": "",
"location": {}
},
"email": "novak.borbala@itk.ppke.hu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Neural word embedding models trained on sizable corpora have proved to be a very efficient means of representing meaning. However, the abstract vectors representing words and phrases in these models are not interpretable for humans by themselves. In this paper we present the Thing Recognizer, a method that assigns explicit symbolic semantic features from a finite list of terms to words present in an embedding model, making the model interpretable for humans and covering the semantic space with a controlled vocabulary of semantic features. We do this in a cross-lingual manner, applying semantic tags taken from lexical resources in one language (English) to the embedding space of another (Hungarian).",
"pdf_parse": {
"paper_id": "L18-1007",
"_pdf_hash": "",
"abstract": [
{
"text": "Neural word embedding models trained on sizable corpora have proved to be a very efficient means of representing meaning. However, the abstract vectors representing words and phrases in these models are not interpretable for humans by themselves. In this paper we present the Thing Recognizer, a method that assigns explicit symbolic semantic features from a finite list of terms to words present in an embedding model, making the model interpretable for humans and covering the semantic space with a controlled vocabulary of semantic features. We do this in a cross-lingual manner, applying semantic tags taken from lexical resources in one language (English) to the embedding space of another (Hungarian).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A recently very popular and efficient method for the distributional representation of words is using word embedding (WE) models (Mikolov et al., 2013c) . In this paper we present a method that creates the WE model of a large text corpus and inserts the corresponding embedding vectors of a limited set of abstract semantic features into the same space. The embedding vectors for semantic features are built from automatically reorganized lexical resources (that may be in a language different from our target language) and are transformed to the target WE space. Then, a nearest neighbor approach is applied to find the most relevant features for a query word. The assigned features can also be used as a searchable semantic annotation of the original corpus the WE model was created from, because our model assigns semantic features to any (even non-standard/slang or misspelled) word in a text in a language-independent manner, regardless of whether these are present in a lexical resource or not, and whether any such resource is available for the target language. The organization of categories and the way they are actually assigned to words by the algorithm is in accordance with the actual usage of these words as manifested by their distribution in a large corpus. The method is demonstrated for English and Hungarian, but it can easily be applied to other languages as well.",
"cite_spans": [
{
"start": 128,
"end": 151,
"text": "(Mikolov et al., 2013c)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "WE models have frequently been used to represent word meaning efficiently (Mikolov et al., 2013b; Pennington et al., 2014) . There are also approaches that replace WE with sense embedding (Bordes et al., 2012; Neelakantan et al., 2014; Tian et al., 2014; Li and Jurafsky, 2015; Bartunov et al., 2015) . Huang et al. (2012) applied clustering algorithms to create single prototype embedding. Some have tried to match WE's to entities in existing lexical resources, for example to BabelNet entries (Panchenko, 2016) or WordNet synsets (Chen et al., 2014; Agirre et al., 2006). Rothe and Sch\u00fctze (2015) combine WE vectors to obtain WordNet synset representations in the original WE space. Labutov and Lipson (2013) also try to take existing WE's and use labeled data to produce WE's in the same space in order to tune or adapt the original representation. Other approaches try to exploit knowledge bases to improve WE's. Yu and Dredze (2014) aim at predicting related words in a knowledge base to WE's. Others compute vector representations of word senses directly from knowledge bases (Bordes et al., 2011; Camacho-Collados et al., 2015) .",
"cite_spans": [
{
"start": 74,
"end": 97,
"text": "(Mikolov et al., 2013b;",
"ref_id": "BIBREF15"
},
{
"start": 98,
"end": 122,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF23"
},
{
"start": 188,
"end": 209,
"text": "(Bordes et al., 2012;",
"ref_id": "BIBREF3"
},
{
"start": 210,
"end": 235,
"text": "Neelakantan et al., 2014;",
"ref_id": "BIBREF18"
},
{
"start": 236,
"end": 254,
"text": "Tian et al., 2014;",
"ref_id": "BIBREF27"
},
{
"start": 255,
"end": 277,
"text": "Li and Jurafsky, 2015;",
"ref_id": "BIBREF12"
},
{
"start": 278,
"end": 300,
"text": "Bartunov et al., 2015)",
"ref_id": "BIBREF1"
},
{
"start": 303,
"end": 322,
"text": "Huang et al. (2012)",
"ref_id": "BIBREF9"
},
{
"start": 496,
"end": 513,
"text": "(Panchenko, 2016)",
"ref_id": "BIBREF22"
},
{
"start": 533,
"end": 552,
"text": "(Chen et al., 2014;",
"ref_id": "BIBREF7"
},
{
"start": 553,
"end": 572,
"text": "Agirre et al., 2006",
"ref_id": "BIBREF0"
},
{
"start": 686,
"end": 711,
"text": "Labutov and Lipson (2013)",
"ref_id": "BIBREF11"
},
{
"start": 918,
"end": 938,
"text": "Yu and Dredze (2014)",
"ref_id": "BIBREF29"
},
{
"start": 1083,
"end": 1104,
"text": "(Bordes et al., 2011;",
"ref_id": "BIBREF2"
},
{
"start": 1105,
"end": 1135,
"text": "Camacho-Collados et al., 2015)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "We built WE models for Hungarian, an agglutinative language with complex morphology. In order to incorporate the information encoded in the morphological structure of word forms, full morphological disambiguation was applied to the input words, and the tag sequence following the main PoS tag of each word was detached and included as a separate token following the token consisting of the lemma and the PoS tag in the text. The following example shows the representation of the sentence Szeretlek, kedvesem. 'I love you, dear.':",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embedding Models for Morphologically Rich Languages",
"sec_num": "3."
},
{
"text": "szeret#V #1Sg.>2Sg ,#, kedves#N #Poss1Sg love [I, you], dear [my]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embedding Models for Morphologically Rich Languages",
"sec_num": "3."
},
{
"text": "Thus, while no information was lost, we managed to improve the quality of the WE model compared to that created from surface word forms in two ways: by assigning a separate representation to lexical items of different part of speech; and by effectively reducing data sparseness problems following from the great variety of rare inflected word forms (Sikl\u00f3si, 2016) .",
"cite_spans": [
{
"start": 349,
"end": 364,
"text": "(Sikl\u00f3si, 2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embedding Models for Morphologically Rich Languages",
"sec_num": "3."
},
{
"text": "Although morphological annotation has a less pronounced impact on the quality of the model in English (the language of the lexical resources we used to extract semantic features -see Section 4.), we applied the same method to the English text as well to make the two models compatible by introducing PoS-based sense distinctions and thus improving the quality of mapping between the models (see Section 5.2.). For building the WE models, we used the word2vec 1 tool. The Hungarian model was trained on a web-crawled corpus of 3.18 billion tokens (27.49 M token types) that was annotated using the PurePos (Orosz and Nov\u00e1k, 2013) tagger, augmented with the Humor Hungarian morphological analyzer (Nov\u00e1k, 2014; Nov\u00e1k et al., 2016) . 2 We trained the English WE model on the English Wikipedia dump of 2.25 billion tokens (8.24 M token types) that was analyzed using Stanford tagger (Toutanova et al., 2003). We created CBOW models for both languages with a context window radius of 5, 300 dimensions, and a minimum token frequency of 5.",
"cite_spans": [
{
"start": 605,
"end": 628,
"text": "(Orosz and Nov\u00e1k, 2013)",
"ref_id": "BIBREF21"
},
{
"start": 695,
"end": 708,
"text": "(Nov\u00e1k, 2014;",
"ref_id": "BIBREF20"
},
{
"start": 709,
"end": 728,
"text": "Nov\u00e1k et al., 2016)",
"ref_id": "BIBREF19"
},
{
"start": 731,
"end": 732,
"text": "2",
"ref_id": null
},
{
"start": 879,
"end": 903,
"text": "(Toutanova et al., 2003)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embedding Models for Morphologically Rich Languages",
"sec_num": "3."
},
{
"text": "In order to assign semantic labels to the words in the embedding models, we needed some lexical resource to induce the tags from. A widely used, although quite dated, system of concepts is Roget's Thesaurus (Chapman, 1977). The annotation generated by this combination of tools contains inflectional features and participles only. The internal structure of compounds and derived words is not explicit in the annotation.",
"cite_spans": [
{
"start": 207,
"end": 221,
"text": "(Chapman, 1977",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Resources",
"sec_num": "4."
},
{
"text": "The third resource we used, 4lang, is also based on LDOCE. The definitions of LDOCE's defining vocabulary were transformed into a formal description (Kornai et al., 2015) illustrated by the following examples:",
"cite_spans": [
{
"start": 149,
"end": 170,
"text": "(Kornai et al., 2015)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Resources",
"sec_num": "4."
},
{
"text": "bread: food, FROM/2742 flour, bake MAKE show: =AGT CAUSE[=DAT LOOK =PAT], communicate",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Resources",
"sec_num": "4."
},
{
"text": "We further transformed this format so that we have a similar one to the previous ones. This was achieved by segmenting the formal descriptions into single tokens (by splitting at spaces and brackets) and treating each token as a category label. Then, all words that had the particular token in their definition were listed for that label. This resulted in 1489 category labels and 12,507 words listed for them. 4lang includes some affixes and inflected forms, which are not present in the Wikipedia model, so the intersection resulted in 11,039 words. We also created another model from 4lang, in which we did not segment predicates with more than one argument into further parts, so e.g. HAS[four.(legs)] remained an atomic feature. Further processing of this model, to which we refer as 4lang2 in the paper, was identical to that of the 4lang model. The first four columns of Table 1 summarize the main characteristics of the resources, while Table 2 shows some examples from each resource. Table 1 : Characteristics of the three lexical resources (number of different category labels, number of words and the average number of words per category; before and after intersection with the English embedding model and clustering).",
"cite_spans": [],
"ref_spans": [
{
"start": 878,
"end": 885,
"text": "Table 1",
"ref_id": null
},
{
"start": 945,
"end": 952,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 993,
"end": 1000,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Lexical Resources",
"sec_num": "4."
},
{
"text": "One of the most popular semantic resources for English is WordNet (Fellbaum, 1998; Miller, 1995). However, WordNet has been criticized for its excessive granularity at the bottom level and its generality at the top level (Brown, 2008). Selecting an appropriate set of concepts from WordNet that could be used as semantic features is far from trivial. There is a high-level categorization into which WordNet synsets are organized (\"supersenses\"), and these could be used as features similarly to the ones derived from the resources mentioned before. However, there are only 45 supersenses, which is too coarse-grained a categorization to be useful for practical purposes. Due to these problems, although we consider using WordNet in the future both as a resource and as a possible benchmark, we did not use it in the experiments presented in this paper.",
"cite_spans": [
{
"start": 66,
"end": 82,
"text": "(Fellbaum, 1998;",
"ref_id": null
},
{
"start": 83,
"end": 96,
"text": "Miller, 1995)",
"ref_id": "BIBREF17"
},
{
"start": 221,
"end": 234,
"text": "(Brown, 2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Resources",
"sec_num": "4."
},
{
"text": "The goal of this research was to create a tool that is able to assign semantic features to words, even if the target word is not included in any semantic lexicon or if such a lexicon does not even exist in the given language. Thus, two problems had to be handled: assigning features and, if needed, bridging the language gap. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "5."
},
{
"text": "As described in Section 4., we used three lexical resources in this experiment using the category labels in these lexicons as semantic features. However, some categories were too broad and the set of words listed for them was too heterogeneous. To handle this problem, a hierarchical agglomerative clustering algorithm was applied to the set of words in those categories that contained at least five words (for details of the clustering algorithm, see (Sikl\u00f3si, 2016) ). Each cluster was then labeled with the original category label and a numeric index. Since the clustering algorithm used the distance between the embedding vectors of words trained from the English Wikipedia corpus, only words present in the Wikipedia model could be used from the original resources. How this intersection and the clustering of words affected the representations in each lexical resource is shown in Table 1 . We used a simple but effective method for representing each semantic feature in the same semantic space as that of the English PoS-tagged WE model: we assigned the average of the embedding vectors of clustered example words to each indexed semantic label. To find the relevant features for a query word tagged with its appropriate part-of-speech, its representational vector is retrieved from the WE model and its nearest neighbors are taken from each feature model. Figure 1 shows how four words (pianist, teacher, turner, maid) and the 3 nearest features assigned to them from the LDOCE and Roget's models are organized in semantic space.",
"cite_spans": [
{
"start": 452,
"end": 467,
"text": "(Sikl\u00f3si, 2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 887,
"end": 894,
"text": "Table 1",
"ref_id": null
},
{
"start": 1364,
"end": 1372,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Semantic Feature Space",
"sec_num": "5.1."
},
{
"text": "It has been shown that WE spaces can effectively be mapped across languages. One mapping method is to use a word-aligned bilingual parallel corpus to build an embedding model that contains vector representations of words in both languages (Luong et al., 2015) . We applied another approach instead, where the projection is achieved by learning a piecewise linear transformation based on a seed dictionary, through which a monolingual WE space can be mapped to another monolingual space (Mikolov et al., 2013a). The transformation maps each word vector in the source language space to a point in the vicinity of the vector of its translation in the target language space. We used a subset of the 4lang dictionary (built from the defining vocabulary of LDOCE) containing 3477 English-Hungarian word pairs as the seed dictionary to calculate the transformation matrix. We used pairs where both the English and the Hungarian word had a frequency over 10,000 in the two corpora. Manual evaluation of the transformation on an additional 100 words resulted in a precision of 0.38 for the first-ranked translation and 0.69/0.81 for the top 5/10 ranked translations (indicating whether a correct translation of the target word was found among the five/ten most similar words in the transformed space). We used this transformation matrix to map the English semantic label vectors to the Hungarian WE space. Then, the same nearest neighbor algorithm could be applied to the query word as in the case of searching the English semantic space. This made it possible to input a Hungarian word as a query to our system and receive semantic features based on originally English resources without the expensive and labor-intensive task of translating them. Moreover, since nearest neighbors are searched for instead of exact matches, out-of-vocabulary words (with respect to the original lexical resources) can also be assigned semantic labels.",
"cite_spans": [
{
"start": 239,
"end": 259,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF13"
},
{
"start": 486,
"end": 509,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Mapping of the Models",
"sec_num": "5.2."
},
{
"text": "When looking at the output of the models, we found that even though the LDOCE features seemed to be the most meaningful, the Roget's, 4lang and 4lang2 models also turned out to be useful. E.g. adjectives have a much richer categorization in Roget's than what we obtain from the LDOCE model. Since LDOCE and Roget's seemed to perform well in complementary regions, we decided to unify these two models (ROLD). We carried out two kinds of quantitative analysis of the performance of our model. First, we checked the robustness of the model by performing a sanity check on the original English resources. In the other scenario, we selected 280 words randomly from a predefined list of Hungarian words in which each word was assigned to one of 28 semantic domains (e.g. food, vehicles, locations, occupations, etc.) and manually checked the accuracy of the semantic features assigned to these words by each model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "6."
},
{
"text": "For each word present in the original 4lang dictionary, we calculated how many of the semantic features present in the original definition were retrieved among the top N features returned by the model (feature recall, R_f) and the percentage of words for which all features were retrieved (word recall, R_w). The results are shown in Table 3 as a function of N. Recall was also calculated ignoring words having more than N features (R_w(poss)) and features over the N limit (R_f(poss)). As no definition contained more than 10 terms, R_w(poss) is identical to R_w and R_f(poss) is identical to R_f for N \u2265 10. The last column of the ",
"cite_spans": [],
"ref_spans": [
{
"start": 334,
"end": 341,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Sanity Check",
"sec_num": "6.1."
},
{
"text": "After the sanity check, we tested our system on standard Hungarian. In order to do this, we collected groups of words belonging to different semantic categories. These categories were defined manually and the test words were collected by a semi-automatic algorithm as described in (Sikl\u00f3si, 2016) . Finally, each group was manually checked resulting in 28 groups containing 39,050 words altogether. We randomly selected 10 words from each group, and the top 10 semantic features were generated using the models 4lang, 4lang2 and ROLD. The list of randomly selected words also included misspelled and very rare words. Features that were partitioned and indexed when building the models (see Section 5.1.) were joined after lookup. Two annotators checked the generated semantic feature sets, and marked each feature that was inappropriate for the given lexical item (e.g. HAS.horn for v\u00edzimad\u00e1r 'water fowl').",
"cite_spans": [
{
"start": 281,
"end": 296,
"text": "(Sikl\u00f3si, 2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Standard Language Use",
"sec_num": "6.2."
},
{
"text": "Cases where the given lexical group is in the domain of the given feature (e.g. the domain of HAS.horn is animals) and completely inappropriate features (e.g. the feature dig for cs\u0171r 'barn' in the buildings group) were not differentiated: they were all simply marked wrong. Inter-annotator agreement was found to be substantial (Cohen's kappa=0.734).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standard Language Use",
"sec_num": "6.2."
},
{
"text": "Results of the evaluation are shown in Table 4 (due to length limits, we included only selected categories in the table). The table shows semantic feature accuracy (acc: the ratio of correctly assigned features) in each category for each model. We also automatically computed feature \"domain accuracy\" (d-acc): here we ignored feature assignment errors where the same feature was marked adequate for another test word in the same domain. The table also shows the number of different features (#F) each model assigned to the test words in each domain, and the number of features that were marked wrong for any of the test words in the given domain (#B). The overall feature accuracy of the 4lang-derived models was nearly 75%, while the combined ROLD model achieved over 80% feature accuracy. The feature space of the ROLD model is less fine-grained in some domains (e.g. food or clothing) than that derived from 4lang definitions (this is indicated by the lower number of different features assigned by the ROLD model) and this results in higher accuracy. Note that the domain accuracy of 4lang features is much higher than feature accuracy; it is about 90%. The worst average accuracy was obtained on colors: lists of things having specific colors or patterns and the high number of color terms themselves generated too much noise. Figure 2 shows the distribution of the precision of features per word. The ROLD model assigned only appropriate features to 42% of the 280 test words, and precision was over 70% for over 75% of the words. 4lang and 4lang2 had 100% precision for 20% and 13.4% of the test words, respectively. All models had over 50% precision for about 90% of the words. The precision of 4lang2 was over 20% for all test words.",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 46,
"text": "Table 4",
"ref_id": "TABREF8"
},
{
"start": 1333,
"end": 1341,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Standard Language Use",
"sec_num": "6.2."
},
{
"text": "The WE models our method is based on also reflect world knowledge as represented in the corpus from which they are generated. This enables our model to assign features to proper names of various types, such as names of people, institutions, fictional creatures, or even abbreviations as shown in Table 5. In the names section of the table, it can be seen that each person is assigned features that provide information about them. Thus, the model can be queried even for names one is not familiar with, and relevant features will be provided. This also holds for names with lower frequency in the corpus, as long as the name itself is unique. Table 5 also contains the abbreviated names of some organizations. ELTE is for E\u00f6tv\u00f6s Lor\u00e1nd University, while PPKE is for P\u00e1zm\u00e1ny P\u00e9ter Catholic University. While both of them are educational institutions, ELTE is a state university, but PPKE is Catholic, and this difference is reflected by the set of features assigned to them in addition to their relation to science and education. The same applies to slang terms, including many short diminutive forms. These are abundant in the web-crawled corpus, mainly coming from often heated discussions in user comments and fora, and many of them have strong emotional connotations. These are neatly reflected by the semantic tags assigned to them in addition to the ones reflecting the basic meaning of the term, e.g. 'Deceiver', 'Obstinacy', 'Ignorance', 'Thief', 'Crime', 'Politics', 'Race relations', 'Psychology, Psychiatry', 'stupid', 'criminal' in addition to 'person' for derogatory terms like nyugger 'pensioner', proli 'proletarian', bolsi 'bolshevik' or cig\u00f3 'Gypsy'.",
"cite_spans": [],
"ref_spans": [
{
"start": 296,
"end": 303,
"text": "Table 5",
"ref_id": null
},
{
"start": 642,
"end": 649,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Proper Names and Non-standard Language",
"sec_num": "6.3."
},
{
"text": "We have shown that the meaning implicitly represented in word embedding models can be transformed into a set of symbolic features that can be used as semantic annotation. This can also be done across languages; thus, relevant semantic tags can be assigned to words in a language that lacks appropriate semantic resources. Despite its simplicity, our system, the Thing Recognizer, performs this task surprisingly well, even for names and words that cannot be expected to be included in manually created lexical semantic resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7."
}
],
"back_matter": [
{
"text": "This research has been implemented with support provided by grants FK125217 and PD125216 of the National Research, Development and Innovation Office of Hungary financed under the FK17 and PD17 funding schemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Evaluating and optimizing the parameters of an unsupervised graph-based wsd algorithm",
"authors": [
{
"first": "E",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mart\u00ednez",
"suffix": ""
},
{
"first": "O",
"middle": [
"L"
],
"last": "De Lacalle",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Soroa",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the First Workshop on Graph Based Methods for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "89--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agirre, E., Mart\u00ednez, D., de Lacalle, O. L., and Soroa, A. (2006). Evaluating and optimizing the parameters of an unsupervised graph-based wsd algorithm. In Proceed- ings of the First Workshop on Graph Based Methods for Natural Language Processing, TextGraphs-1, pages 89-96, Stroudsburg, PA, USA. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Breaking sticks and ambiguities with adaptive skip-gram",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bartunov",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kondrashkin",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Osokin",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Vetrov",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1502.07257"
]
},
"num": null,
"urls": [],
"raw_text": "Bartunov, S., Kondrashkin, D., Osokin, A., and Vetrov, D. (2015). Breaking sticks and ambiguities with adaptive skip-gram. arXiv preprint arXiv:1502.07257.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning structured embeddings of knowledge bases",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2011,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bordes, A., Weston, J., Collobert, R., and Bengio, Y. (2011). Learning structured embeddings of knowledge bases. In AAAI.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Joint learning of words and meaning representations for open-text semantic parsing",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Glorot",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of 15th International Conference on Artificial Intelligence and Statistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bordes, A., Glorot, X., Weston, J., and Bengio, Y. (2012). Joint learning of words and meaning representations for open-text semantic parsing. In In Proceedings of 15th International Conference on Artificial Intelligence and Statistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Choosing sense distinctions for wsd: Psycholinguistic evidence",
"authors": [
{
"first": "S",
"middle": [
"W"
],
"last": "Brown",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers, HLT-Short '08",
"volume": "",
"issue": "",
"pages": "249--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, S. W. (2008). Choosing sense distinctions for wsd: Psycholinguistic evidence. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers, HLT-Short '08, pages 249-252, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A unified multilingual semantic representation of concepts",
"authors": [
{
"first": "J",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "M",
"middle": [
"T"
],
"last": "Pilehvar",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "741--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Camacho-Collados, J., Pilehvar, M. T., and Navigli, R. (2015). A unified multilingual semantic representation of concepts. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 741- 751, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Roget's International Thesaurus",
"authors": [
{
"first": "R",
"middle": [],
"last": "Chapman",
"suffix": ""
}
],
"year": 1977,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chapman, R. (1977). Roget's International Thesaurus. Harper Colophon Books. Crowell.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A unified model for word sense representation and disambiguation",
"authors": [
{
"first": "X",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1025--1035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, X., Liu, Z., and Sun, M. (2014). A unified model for word sense representation and disambiguation. In Pro- ceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1025- 1035, Doha, Qatar, October. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "WordNet: an electronic lexical database",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum, editor. (1998). WordNet: an elec- tronic lexical database. MIT Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Improving word representations via global context and multiple word prototypes",
"authors": [
{
"first": "E",
"middle": [
"H"
],
"last": "Huang",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "A",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "873--882",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, E. H., Socher, R., Manning, C. D., and Ng, A. Y. (2012). Improving word representations via global con- text and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computa- tional Linguistics: Long Papers -Volume 1, ACL '12, pages 873-882, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Competence in lexical semantics",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kornai",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "\u00c1cs",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Makrai",
"suffix": ""
},
{
"first": "D",
"middle": [
"M"
],
"last": "Nemeskey",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Pajkossy",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Recski",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "165--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kornai, A., \u00c1cs, J., Makrai, M., Nemeskey, D. M., Pa- jkossy, K., and Recski, G. (2015). Competence in lex- ical semantics. In Proceedings of the Fourth Joint Con- ference on Lexical and Computational Semantics, pages 165-175, Denver, Colorado, June. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Re-embedding words",
"authors": [
{
"first": "I",
"middle": [],
"last": "Labutov",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Lipson",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "489--493",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Labutov, I. and Lipson, H. (2013). Re-embedding words. In Proceedings of the 51st Annual Meeting of the Asso- ciation for Computational Linguistics (Volume 2: Short Papers), pages 489-493, Sofia, Bulgaria, August. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Do multi-sense embeddings improve natural language understanding?",
"authors": [
{
"first": "J",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1722--1732",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, J. and Jurafsky, D. (2015). Do multi-sense embed- dings improve natural language understanding? In Pro- ceedings of the 2015 Conference on Empirical Meth- ods in Natural Language Processing, pages 1722-1732, Lisbon, Portugal, September. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bilingual word representations with monolingual quality in mind",
"authors": [
{
"first": "M.-T",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "NAACL Workshop on Vector Space Modeling for NLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luong, M.-T., Pham, H., and Manning, C. D. (2015). Bilingual word representations with monolingual quality in mind. In NAACL Workshop on Vector Space Modeling for NLP, Denver, United States.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Exploiting similarities among languages for machine translation",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Q",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., Le, Q. V., and Sutskever, I. (2013a). Exploit- ing similarities among languages for machine transla- tion. CoRR, abs/1309.4168.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013b). Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26: 27th An- nual Conference on Neural Information Processing Sys- tems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States., pages 3111- 3119.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Linguistic regularities in continuous space word representations",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings",
"volume": "",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., Yih, W., and Zweig, G. (2013c). Linguis- tic regularities in continuous space word representations. In Human Language Technologies: Conference of the North American Chapter of the Association of Computa- tional Linguistics, Proceedings, June 9-14, 2013, Westin Peachtree Plaza Hotel, Atlanta, Georgia, USA, pages 746-751.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Wordnet: A lexical database for English",
"authors": [
{
"first": "G",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "COMMUNICATIONS OF THE ACM",
"volume": "38",
"issue": "",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miller, G. A. (1995). Wordnet: A lexical database for En- glish. COMMUNICATIONS OF THE ACM, 38:39-41.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Efficient non-parametric estimation of multiple embeddings per word in vector space",
"authors": [
{
"first": "A",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Shankar",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1059--1069",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neelakantan, A., Shankar, J., Passos, A., and McCallum, A. (2014). Efficient non-parametric estimation of mul- tiple embeddings per word in vector space. In Proceed- ings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1059- 1069, Doha, Qatar, October. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A new integrated open-source morphological analyzer for Hungarian",
"authors": [
{
"first": "A",
"middle": [],
"last": "Nov\u00e1k",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Sikl\u00f3si",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Oravecz",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nov\u00e1k, A., Sikl\u00f3si, B., and Oravecz, C. (2016). A new integrated open-source morphological analyzer for Hun- garian. In Nicoletta Calzolari (Conference Chair), et al., editors, Proceedings of the Tenth International Confer- ence on Language Resources and Evaluation (LREC 2016), Paris, France, may. European Language Re- sources Association (ELRA).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A new form of Humor -mapping constraint-based computational morphologies to a finitestate representation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Nov\u00e1k",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nov\u00e1k, A. (2014). A new form of Humor -mapping constraint-based computational morphologies to a finite- state representation. In Nicoletta Calzolari (Conference Chair), et al., editors, Proceedings of the Ninth Interna- tional Conference on Language Resources and Evalua- tion (LREC'14), Reykjavik, Iceland, may. European Lan- guage Resources Association (ELRA).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "PurePos 2.0: a hybrid tool for morphological disambiguation",
"authors": [
{
"first": "Gy",
"middle": [],
"last": "Orosz",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nov\u00e1k",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "539--545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Orosz, Gy. and Nov\u00e1k, A. (2013). PurePos 2.0: a hy- brid tool for morphological disambiguation. In Proceed- ings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2013), pages 539-545, Hissar, Bulgaria. INCOMA Ltd. Shoumen, BULGARIA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Best of both worlds: Making word sense embeddings interpretable",
"authors": [
{
"first": "A",
"middle": [
";"
],
"last": "Panchenko",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Panchenko, A. (2016). Best of both worlds: Making word sense embeddings interpretable. In Nicoletta Calzo- lari (Conference Chair), et al., editors, Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France, may. Euro- pean Language Resources Association (ELRA).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pennington, J., Socher, R., and Manning, C. D. (2014). Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Autoextend: Extending word embeddings to embeddings for synsets and lexemes",
"authors": [
{
"first": "S",
"middle": [],
"last": "Rothe",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1793--1803",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rothe, S. and Sch\u00fctze, H. (2015). Autoextend: Extend- ing word embeddings to embeddings for synsets and lex- emes. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1793-1803, Beijing, China, July. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Using embedding models for lexical categorization in morphologically rich languages",
"authors": [
{
"first": "B",
"middle": [],
"last": "Sikl\u00f3si",
"suffix": ""
}
],
"year": 2016,
"venue": "Computational Linguistics and Intelligent Text Processing: 17th International Conference, CICLing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sikl\u00f3si, B. (2016). Using embedding models for lexi- cal categorization in morphologically rich languages. In Alexander Gelbukh, editor, Computational Linguistics and Intelligent Text Processing: 17th International Con- ference, CICLing 2016, Konya, Turkey, April. Springer International Publishing, Cham.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Longman Dictionary of Contemporary English. Longman Dictionary of Contemporary English Series",
"authors": [
{
"first": "D",
"middle": [],
"last": "Summers",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Summers, D. (2005). Longman Dictionary of Contempo- rary English. Longman Dictionary of Contemporary En- glish Series. Longman.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A probabilistic model for learning multi-prototype word embeddings",
"authors": [
{
"first": "F",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bian",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "T.-Y",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "151--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tian, F., Dai, H., Bian, J., Gao, B., Zhang, R., Chen, E., and Liu, T.-Y. (2014). A probabilistic model for learning multi-prototype word embeddings. In Proceed- ings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 151-160, Dublin, Ireland, August. Dublin City Univer- sity and Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Feature-rich part-of-speech tagging with a cyclic dependency network",
"authors": [
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "1",
"issue": "",
"pages": "173--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toutanova, K., Klein, D., Manning, C. D., and Singer, Y. (2003). Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Con- ference of the North American Chapter of the Associa- tion for Computational Linguistics on Human Language Technology -Volume 1, NAACL '03, pages 173-180, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Improving lexical embeddings with semantic knowledge",
"authors": [
{
"first": "M",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "545--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu, M. and Dredze, M. (2014). Improving lexical embed- dings with semantic knowledge. In Proceedings of the 52nd Annual Meeting of the Association for Computa- tional Linguistics (Volume 2: Short Papers), pages 545- 550, Baltimore, Maryland, June. Association for Com- putational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "The 3 nearest features assigned to the words pianist, teacher, turner, maid from the LDOCE and Roget's models arranged in semantic space"
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "The distribution of feature precision for the three models ROLD, 4lang and 4lang2."
},
"TABREF3": {
"content": "<table/>",
"html": null,
"text": "ResourceCategoryExamplewords in the original resource ROGET Mean_N medium#NN generality#NN neutrality#NN middle_state#NN median#NN golden_mean#NN middle#NN etc. ROGET Rotundity_ADJ spherical#JJ cylindric#JJ round_as_an_apple#JJ bell_shaped#JJ spheroidal#JJ conical#JJ globated#JJ etc. LDOCE Cooking allspice#NN bake#VB barbecue#VB baste#VB blanch#VB boil#VB bottle#VB bouillon_cube#NN etc. LDOCE Mythology centaur#NN chimera#NN Cyclops#NN deity#NN demigod#NN faun#NN god#NN griffin#NN gryphon#NN etc. 4LANG food sandwich#NN, fat#NN, bread#NN, pepper#NN, meal#NN, fork#NN, egg#NN, bowl#NN, salt#NN etc. 4LANG =DAT say#VB, show#VB, allow#VB, swear#VB, grateful#ADV, let#VB, teach#VB, give#VB, help#VB etc. 4LANG2 PART_OF.body body#NN, tongue#NN, back#NN, neck#NN, shoulder#NN, bone#NN, skin#NN, wrist#NN, buttock#NN etc. 4LANG2 =AGT.HAS.mouth swallow#VB, suck#VB, eat#VB, drink#VB",
"num": null,
"type_str": "table"
},
"TABREF4": {
"content": "<table/>",
"html": null,
"text": "Examples from each resource after transforming them to the same format",
"num": null,
"type_str": "table"
},
"TABREF5": {
"content": "<table><tr><td/><td/><td/><td/><td/><td/><td>table</td></tr><tr><td colspan=\"7\">shows the mean reciprocal rank of features (terms) present</td></tr><tr><td colspan=\"7\">in the original definitions. Reciprocal rank is calculated as</td></tr><tr><td colspan=\"2\">i/N</td><td>Rw</td><td>Rw(poss)</td><td>R f</td><td>R f (poss)</td><td>MRR</td></tr><tr><td/><td>1</td><td>0.1508</td><td>0.8504</td><td>0.2694</td><td>0.9455</td><td>0.9455</td></tr><tr><td>4lang</td><td>5 10 20</td><td>0.5472 0.7049 0.8187</td><td>0.6574 0.7080 0.8187</td><td>0.7614 0.8756 0.9316</td><td>0.8445 0.8785 0.9316</td><td>0.9586 0.9237 0.8922</td></tr><tr><td/><td>1</td><td>0.4411</td><td>0.8818</td><td>0.5079</td><td>0.9266</td><td>0.9266</td></tr><tr><td>4lang2</td><td>5 10 20</td><td>0.8688 0.9339 0.9648</td><td>0.8775 0.9339 0.9648</td><td>0.9138 0.9597 0.9793</td><td>0.9226 0.9597 0.9793</td><td>0.9456 0.9276 0.9163</td></tr><tr><td/><td>1</td><td>0.3354</td><td>0.3590</td><td>0.7421</td><td>0.8426</td><td>0.8426</td></tr><tr><td>ROLD</td><td>5 10 20</td><td>0.6557 0.7433 0.8117</td><td>0.7482 0.8349 0.8896</td><td>0.7017 0.7481 0.8118</td><td>0.8079 0.8419 0.8897</td><td>0.9080 0.8877 0.8645</td></tr></table>",
"html": null,
"text": "Rank for the i th feature returned by the model that is also present in the original definition, it is zero if no valid feature was retrieved. MRR is calculated as the average of the reciprocal rank of all expected features retrieved for all words.",
"num": null,
"type_str": "table"
},
"TABREF6": {
"content": "<table/>",
"html": null,
"text": "Performance (recall) of the three models for English tested on the original resources.",
"num": null,
"type_str": "table"
},
"TABREF8": {
"content": "<table><tr><td/><td>4L: music art *poem *poet *poetry WRITE sound *text musician</td></tr><tr><td>Bart\u00f3k</td><td>4L2: art *poem *poet music HAS.rhythm entertainment sound sequence</td></tr><tr><td/><td>*text MAKE.beautiful</td></tr><tr><td/><td>RL: Music Music_N Performing</td></tr><tr><td/><td>4L: country government politician @United_States state LEAD *place</td></tr><tr><td>Obama</td><td>president republic</td></tr><tr><td/><td>4L2: country politician @United_States country.HAS place MAKE.law</td></tr><tr><td/><td>state *@Soviet_Union politics</td></tr><tr><td/><td>RL: Officials Government_N Government Politics_N Authority_N Di-</td></tr><tr><td/><td>rector_N Council_N</td></tr><tr><td/><td>4L: institution group society *president *republic educate science pur-</td></tr><tr><td>MTA</td><td>pose *person people</td></tr><tr><td/><td>4L2: institution society group educate science HAS.purpose study struc-</td></tr><tr><td/><td>ture people</td></tr><tr><td/><td>RL: Occupations Education *Receptacle_N College *Geology Skill_N</td></tr><tr><td/><td>Organizations</td></tr><tr><td/><td>4L: educate institution study student degree science numbers atom</td></tr><tr><td>ELTE</td><td>*GIVE</td></tr><tr><td/><td>4L2: educate institution study science *name *part knowledge public</td></tr><tr><td/><td>*system</td></tr><tr><td/><td>RL: College Education Knowledge_N School_Adj Language_N</td></tr><tr><td/><td>4L</td></tr><tr><td>PPKE</td><td/></tr></table>",
"html": null,
"text": "Performance of the models 4lang, 4lang2 and ROLD on test words from different semantic groups. acc: feature accuracy, d-acc: domain accuracy of features, #F: different features, #B: features marked wrong at least once.",
"num": null,
"type_str": "table"
}
}
}
}