{
"paper_id": "W17-0213",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:31:52.313196Z"
},
"title": "Using Pseudowords for Algorithm Comparison: An Evaluation Framework for Graph-based Word Sense Induction",
"authors": [
{
"first": "Flavio",
"middle": [
"Massimiliano"
],
"last": "Cecchini",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Martin",
"middle": [],
"last": "Riedl",
"suffix": "",
"affiliation": {
"laboratory": "Language Technology Group Universit\u00e4t Hamburg",
"institution": "",
"location": {}
},
"email": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": "",
"affiliation": {
"laboratory": "Language Technology Group Universit\u00e4t Hamburg",
"institution": "",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we define two parallel data sets based on pseudowords, extracted from the same corpus. They both consist of word-centered graphs for each of 1225 different pseudowords, and use respectively first-order co-occurrences and second-order semantic similarities. We propose an evaluation framework on these data sets for graph-based Word Sense Induction (WSI) focused on the case of coarse-grained homonymy: We compare different WSI clustering algorithms by measuring how well their outputs agree with the a priori known ground-truth decomposition of a pseudoword. We perform this evaluation for four different clustering algorithms: the Markov cluster algorithm, Chinese Whispers, MaxMax and a gangplank-based clustering algorithm. To further improve the comparison between these algorithms and the analysis of their behaviours, we also define a new specific evaluation measure. As far as we know, this is the first large-scale systematic pseudoword evaluation dedicated to the induction of coarse-grained homonymous word senses.",
"pdf_parse": {
"paper_id": "W17-0213",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we define two parallel data sets based on pseudowords, extracted from the same corpus. They both consist of word-centered graphs for each of 1225 different pseudowords, and use respectively first-order co-occurrences and second-order semantic similarities. We propose an evaluation framework on these data sets for graph-based Word Sense Induction (WSI) focused on the case of coarse-grained homonymy: We compare different WSI clustering algorithms by measuring how well their outputs agree with the a priori known ground-truth decomposition of a pseudoword. We perform this evaluation for four different clustering algorithms: the Markov cluster algorithm, Chinese Whispers, MaxMax and a gangplank-based clustering algorithm. To further improve the comparison between these algorithms and the analysis of their behaviours, we also define a new specific evaluation measure. As far as we know, this is the first large-scale systematic pseudoword evaluation dedicated to the induction of coarse-grained homonymous word senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word Sense Induction (WSI) is the branch of Natural Language Processing (NLP) concerned with the unsupervised detection of all the possible senses that a term can assume in a text document. It could also be described as \"unsupervised Word Sense Disambiguation\" (Navigli, 2009) . Since ambiguity and arbitrariness are constantly present in natural languages, WSI can help improve the analysis and understanding of text or speech (Martin and Jurafsky, 2000) . At its core we find the notion of distributional semantics, exemplified by the statement by Harris (1954) : \"Difference of meaning correlates with difference of distribution.\"",
"cite_spans": [
{
"start": 261,
"end": 276,
"text": "(Navigli, 2009)",
"ref_id": "BIBREF25"
},
{
"start": 428,
"end": 455,
"text": "(Martin and Jurafsky, 2000)",
"ref_id": "BIBREF21"
},
{
"start": 551,
"end": 564,
"text": "Harris (1954)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Related Work",
"sec_num": "1"
},
{
"text": "In this paper, we focus on graph-based methods. Graphs provide an intuitive mathematical representation of relations between words. A graph can be defined and built in a straightforward way, but allows for a very deep analysis of its structural properties. This and their discrete nature (contrary to the continuous generalizations represented by vector spaces of semantics, cf. Turney and Pantel (2010)) favour the identification of significant patterns and subregions, among other things allowing the final number of clusters to be left undetermined beforehand, an ideal condition for WSI.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Related Work",
"sec_num": "1"
},
{
"text": "The main contribution of this paper is threefold: We present two parallel word graph data sets based on the concept of pseudowords, both for the case of semantic similarities and co-occurrences; on them, we compare the performances of four WSI clustering algorithms; and we define a new ad hoc evaluation measure for this task, called TOP2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Related Work",
"sec_num": "1"
},
{
"text": "Pseudowords were first proposed by Gale et al. (1992) and Sch\u00fctze (1992) as a way to create artificial ambiguous words by merging two (or more) random words. A pseudoword simulates homonymy, i.e. a word which possesses two (or more) semantically and etymologically unrelated senses, such as count as \"nobleman\" as opposed to \"the action of enumerating\". The study of Nakov and Hearst (2003) shows that the performances of WSI algorithms on random pseudowords might represent an optimistic upper bound with respect to true polysemous words, as generic polysemy implies some kind of correlation between the categories and the distributions of the different senses of a word, which is absent from randomly generated ones. We are aware of the approaches proposed in (Otrusina and Smr\u017e, 2010) and (Pilehvar and Navigli, 2013) , used e.g. in (Ba\u015fkaya and Jurgens, 2016) , for a pseudoword generation that better models polysemous words with an arbitrary degree of polysemy. Both works imply the emulation of existing polysemous words, following the semantic structure of WordNet (Miller, 1995) : pseudosenses (the components of a pseudoword) corresponding to the synsets of a word are represented by the closest monosemous terms on the WordNet graph, according to Personalized PageRank (Haveliwala, 2002) applied to the WordNet graph. However, we want to remark on the different nature of our paper. Here we compare the behaviours of different clustering algorithms on two data sets of pseudowords built to emulate homonymy, and relate these behaviours to the structure of the word graphs relative to these pseudowords. As homonymy is more clear-cut than generic polysemy, we deem that the efficacy of a WSI algorithm should first be measured in this case before being tested in a more fine-grained and ambiguous situation. Also, the task we defined does not depend on the arbitrary granularity of an external lexical resource 1 , which might be too fine-grained for our purpose. Further, the sense distinctions e.g. in WordNet might not be mirrored in the corpus, and conversely, some unforeseen senses might be observed. Instead, our work can be seen as an expansion of the pseudoword evaluation presented in (Bordag, 2006) , albeit more focused in its goal and implementation.",
"cite_spans": [
{
"start": 35,
"end": 53,
"text": "Gale et al. (1992)",
"ref_id": "BIBREF12"
},
{
"start": 58,
"end": 72,
"text": "Sch\u00fctze (1992)",
"ref_id": "BIBREF32"
},
{
"start": 367,
"end": 390,
"text": "Nakov and Hearst (2003)",
"ref_id": "BIBREF23"
},
{
"start": 762,
"end": 787,
"text": "(Otrusina and Smr\u017e, 2010)",
"ref_id": "BIBREF26"
},
{
"start": 792,
"end": 820,
"text": "(Pilehvar and Navigli, 2013)",
"ref_id": "BIBREF28"
},
{
"start": 836,
"end": 863,
"text": "(Ba\u015fkaya and Jurgens, 2016)",
"ref_id": "BIBREF2"
},
{
"start": 1073,
"end": 1087,
"text": "(Miller, 1995)",
"ref_id": "BIBREF22"
},
{
"start": 1280,
"end": 1298,
"text": "(Haveliwala, 2002)",
"ref_id": "BIBREF15"
},
{
"start": 2201,
"end": 2215,
"text": "(Bordag, 2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TOP2.",
"sec_num": null
},
{
"text": "In our opinion, current WSI tasks present some shortcomings. A fundamental problem is the vagueness regarding the granularity (fine or coarse) of the senses that have to be determined. As a consequence, the definition of an adequate evaluation measure becomes difficult, as many of them have been shown to be biased towards few or many clusters 2 . Further, small data sets often do not allow obtaining significant results. Pseudoword evaluation, on the contrary, presents an objective and self-contained framework where the classification task is well characterized and gives the opportunity to define an ad hoc evaluation measure, at the same time automating the data set creation. Therefore, we tackle the following research questions: What are the limitations of a pseudoword evaluation for homonymy detection? How does the structure of a pseudoword's word graph depend on its components? How do different clustering strategies compare on the same data set, and what are the most suited measures to evaluate their performances? [Footnote 1: As was also the case for task 13 of SemEval 2013, cf. (Jurgens and Klapaftis, 2013).] [Footnote 2: See for example the results at task 14 of SemEval 2010 (Manandhar et al., 2010), where adjusted mutual information was introduced to correct the bias: https://www.cs.york.ac.uk/semeval2010_WSI/task_14_ranking.html.]",
"cite_spans": [
{
"start": 826,
"end": 855,
"text": "(Jurgens and Klapaftis, 2013)",
"ref_id": "BIBREF17"
},
{
"start": 913,
"end": 937,
"text": "(Manandhar et al., 2010)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TOP2.",
"sec_num": null
},
{
"text": "The paper is structured as follows. In Section 2 we give a definition of the ego word graph of a word and present our starting corpus. Section 3 details our evaluation setting and describes our proposed measure TOP2. Section 4 introduces the four graph clustering algorithms chosen for evaluation. Lastly, Section 5 comments on the results of the comparisons, and Section 6 concludes the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TOP2.",
"sec_num": null
},
{
"text": "For our evaluation we will use word graphs based both on semantic similarities (SSIM) and on co-occurrences. We define both as undirected, weighted graphs G = (V, E) whose nodes correspond to a given subset V of the vocabulary of the considered corpus, and where two nodes v, w are connected by an edge if and only if v and w co-occur in the same sentence (co-occurrences) or share some kind of context (semantic similarities). In either case, we express the strength of the connection between two words through a weight mapping p : E \u2192 R+, for which we can take indicators such as raw frequency or pointwise mutual information. The higher the value on an edge, the more significant we deem the connection. We will consider word-centered graphs, called ego word graphs. Both kinds of ego word graphs will be induced by the distributional thesauri computed on a corpus consisting of 105 million English newspaper sentences 3 , using the JoBimText (Biemann and Riedl, 2013) implementation. In the case of co-occurrences, for a given word v we use a frequency-weighted version of pointwise mutual information called lexicographer's mutual information (LMI) (Kilgarriff et al., 2004; Evert, 2004) to rank all the terms co-occurring with v in a sentence and to select those that will appear in its ego word graph. Edge weights are defined by LMI and the possible edge between two nodes u and w will be determined by the presence of u in the distributional thesaurus of w, or vice versa.",
"cite_spans": [
{
"start": 951,
"end": 976,
"text": "(Biemann and Riedl, 2013)",
"ref_id": "BIBREF4"
},
{
"start": 1159,
"end": 1184,
"text": "(Kilgarriff et al., 2004;",
"ref_id": "BIBREF18"
},
{
"start": 1185,
"end": 1197,
"text": "Evert, 2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Graphs and Data Set",
"sec_num": "2"
},
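The LMI-based ranking and the ego word graph construction described above can be sketched in a few lines. This is a toy illustration, not the JoBimText implementation: the function names `lmi_ranking` and `ego_graph`, the sentence-count approximation of PMI, and the rule of linking two neighbours whenever they co-occur in some sentence are all simplifying assumptions.

```python
import math
from collections import Counter

def lmi_ranking(sentences, target, top_k=500):
    """Rank terms co-occurring with `target` at sentence level by
    lexicographer's mutual information, LMI(a, b) = f(a, b) * PMI(a, b)."""
    n = len(sentences)
    cooc = Counter()   # f(target, w): sentences containing both
    freq = Counter()   # f(w): sentences containing w
    for sent in sentences:
        words = set(sent)
        freq.update(words)
        if target in words:
            for w in words - {target}:
                cooc[w] += 1
    scores = {}
    for w, f_tw in cooc.items():
        pmi = math.log2(f_tw * n / (freq[target] * freq[w]))
        scores[w] = f_tw * pmi
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

def ego_graph(sentences, target, top_k=500):
    """Undirected weighted ego word graph: nodes are the top-k LMI
    neighbours of `target` (the target itself is removed); two nodes are
    linked if they co-occur in some sentence, weighted by raw count."""
    nodes = set(lmi_ranking(sentences, target, top_k))
    edges = Counter()
    for sent in sentences:
        present = sorted(nodes & set(sent))
        for i, u in enumerate(present):
            for v in present[i + 1:]:
                edges[(u, v)] += 1
    return nodes, dict(edges)
```

In the paper's setting the edge criterion is membership in the other node's distributional thesaurus; the sentence-level rule above merely stands in for that.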
{
"text": "The process is similar in the case of SSIMs, but here LMI is computed on term-context co-occurrences based on syntactic dependencies extracted from the corpus by means of the Stanford Parser (De Marneffe et al., 2006) .",
"cite_spans": [
{
"start": 194,
"end": 216,
"text": "Marneffe et al., 2006)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Graphs and Data Set",
"sec_num": "2"
},
{
"text": "In both cases, the word v itself is removed from G, since we are interested just in the relations between the words most similar to it, following (Widdows and Dorow, 2002) . The clusters in which the node set of G will be subdivided will represent the possible senses of v. We remark that co-occurrences are first-order relations (i.e. inferred directly from data), whereas SSIMs are of second order, as they are computed on the basis of co-occurrences 4 . For this reason, two different kinds of distributional thesauri might have quite different entries even if they pertain to the same word. Further, the ensuing word graphs will show a complementary correlation: co-occurrences represent syntagmatic relations with the central word, while SSIMs paradigmatic ones 5 , and this also determines different structures, as e.g. co-occurrence graphs are denser than SSIM-based ones.",
"cite_spans": [
{
"start": 146,
"end": 171,
"text": "(Widdows and Dorow, 2002)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Graphs and Data Set",
"sec_num": "2"
},
{
"text": "The method of pseudoword evaluation was first independently proposed in (Gale et al., 1992) and (Sch\u00fctze, 1992) . Given two words appearing in a corpus, e.g. cat and window, we replace all their occurrences therein with an artificial term formed by their combination (represented in our example as cat window), a so-called pseudoword that merges the contexts of its components (also called pseudosenses). The original application of this evaluation assumes that all the components of a pseudoword are monosemous words, i.e. possess only one sense. Ideally, an algorithm trying to induce the senses of a monosemous word from the corresponding word graph should return only one cluster, and we would expect it to find exactly two clusters in the case of a pseudoword with two components. This makes evaluation more transparent, and we are restricting ourselves to monosemous words for this reason.",
"cite_spans": [
{
"start": 72,
"end": 91,
"text": "(Gale et al., 1992)",
"ref_id": "BIBREF12"
},
{
"start": 96,
"end": 111,
"text": "(Sch\u00fctze, 1992)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudoword Evaluation Framework",
"sec_num": "3"
},
{
"text": "For the purpose of our evaluation, we extract monosemous nouns from the 105 million sentences of the corpus described in Section 2, over which we compute all SSIM- and co-occurrence-based distributional thesauri. We divide all the nouns into 5 logarithmic frequency classes identified with respect to the frequency of the most common noun in the corpus. For each class, we extract random candidates: We retain only those that possess one single meaning, i.e. for which Chinese Whispers (see Section 4.2) 6 yields one single cluster, additionally checking that they have only one synset in WordNet (which is commonly accepted to be fine-grained). We repeat this process until we obtain 10 suitable candidates per frequency class. In the end, we obtain a total of 50 words whose combinations give rise to 1225 different pseudowords. We then proceed to create two kinds of pseudoword ego word graph data sets, as described in Section 2: one for co-occurrences and one for semantic similarities. In both cases we limit the graphs to the topmost 500 terms, ranked by LMI. The evaluation consists in running the clustering algorithms on the ego word graphs: since we know the underlying (pseudo)senses of each pseudoword A B, we also know for each node in its ego word graph whether it belongs to the distributional thesaurus (and thus to the subgraph) relative to A, to B or to both, and hence we already know our ground truth clustering T = (T_A, T_B). Clearly, the proportion between T_A and T_B might be very skewed, especially if A and B belong to very different frequency classes. Despite the criticism of the pseudoword evaluation for being too artificial and its senses not obeying the true sense distribution of a proper polysemous word, we note that this is a very realistic situation for homonymy, since sense distributions tend to be skewed and dominated by a most frequent sense (MFS). In coarse-grained Word Sense Disambiguation evaluations, the MFS baseline is often in the range of 70%-80% (Navigli et al., 2007) . Our starting assumption for very skewed cases is that a clustering algorithm will be biased towards the more frequent term of the two, that is, it will tend to erroneously find only one cluster. It could also be possible that all nodes relative to A at the same time also appear in the distributional thesaurus of B, so that the word A is overshadowed by B. We call this a collapsed pseudoword. We decided not to take collapsed pseudowords into account for evaluation, since in this case the initial purpose of simulating a polysemous word does not hold: we are left with an actually monosemous pseudoword.",
"cite_spans": [
{
"start": 1987,
"end": 2009,
"text": "(Navigli et al., 2007)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudoword Evaluation Framework",
"sec_num": "3"
},
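The corpus rewriting step that creates a pseudoword can be sketched as follows; the function name `make_pseudoword` and the merged token format `a_b` are our illustrative choices, not the paper's notation.

```python
def make_pseudoword(sentences, a, b):
    """Merge two monosemous words into one artificial ambiguous token:
    every occurrence of `a` or `b` is replaced by the pseudoword, so the
    merged token inherits the contexts of both components."""
    pw = a + "_" + b
    return [[pw if w in (a, b) else w for w in sent] for sent in sentences]
```

Running the same ego-graph pipeline on the rewritten corpus then yields the pseudoword's word graph, whose ground-truth decomposition into the two pseudosenses is known in advance.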
{
"text": "We measure the quality of the clustering of a pseudoword ego graph in terms of the F-score of the BCubed metric (Bagga and Baldwin, 1998; Amig\u00f3 et al., 2009) , alongside normalized mutual information 7 (NMI) (Strehl, 2002) and a measure developed by us, TOP2, loosely inspired by NMI. We define TOP2 as the average of the harmonic means of homogeneity and completeness of the two clusters that best represent the two components of the pseudoword. More formally, suppose that the pseudoword A B is the combination of the words A and B. We denote the topmost 500 entries in the distributional thesauri of A and B respectively as D_A and D_B; with a slight abuse of notation, we replace them by their restrictions D_A \u2229 V and D_B \u2229 V, where V is the node set of G_AB, the pseudoword's ego word graph. We can express",
"cite_spans": [
{
"start": 112,
"end": 137,
"text": "(Bagga and Baldwin, 1998;",
"ref_id": "BIBREF1"
},
{
"start": 138,
"end": 157,
"text": "Amig\u00f3 et al., 2009)",
"ref_id": "BIBREF0"
},
{
"start": 213,
"end": 227,
"text": "(Strehl, 2002)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudoword Evaluation Framework",
"sec_num": "3"
},
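For hard clusterings, the BCubed F-score used here can be computed item by item; a minimal reference sketch (not an optimized implementation), with `bcubed_f` taking two dicts mapping each item to its cluster label.

```python
def bcubed_f(gold, pred):
    """BCubed F-score for two hard clusterings over the same items,
    given as dicts item -> cluster label."""
    items = list(gold)

    def avg(a, b):
        # mean over items of |cluster_a(i) & cluster_b(i)| / |cluster_a(i)|
        total = 0.0
        for i in items:
            same_a = [j for j in items if a[j] == a[i]]
            overlap = sum(1 for j in same_a if b[j] == b[i])
            total += overlap / len(same_a)
        return total / len(items)

    p = avg(pred, gold)   # BCubed precision
    r = avg(gold, pred)   # BCubed recall
    return 2 * p * r / (p + r) if p + r else 0.0
```

Because the score averages over single clustered elements, a one-cluster output on a skewed ground truth can still score high, which is the bias discussed in Section 5.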
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "V = \u03b1 \u222a \u03b2 \u222a \u03b3 \u222a \u03b4,",
"eq_num": "(1)"
}
],
"section": "Pseudoword Evaluation Framework",
"sec_num": "3"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudoword Evaluation Framework",
"sec_num": "3"
},
{
"text": "\u03b1 = D_A \\ D_B, \u03b2 = D_B \\ D_A, \u03b3 = D_A \u2229 D_B, \u03b4 = V \\ (D_A \u222a D_B)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudoword Evaluation Framework",
"sec_num": "3"
},
{
"text": ". So, elements in \u03b1 and \u03b2 are nodes in V that relate respectively only to A or B, elements of \u03b3 are nodes of V that appear in both distributional thesauri and elements in \u03b4 are not among the topmost 500 entries in the distributional thesauri of either A or B, but happened to have a significant enough relation with the pseudoword to appear in V. We note that we will consider neither nodes in \u03b4 nor nodes in \u03b3, since the latter act as neutral terms. Consequently, we take T_A = \u03b1, T_B = \u03b2 as the ground truth clusters of V \\(\u03b3 \u222a \u03b4), which we will compare",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudoword Evaluation Framework",
"sec_num": "3"
},
{
"text": "with C \\(\u03b3 \u222a \u03b4) = {C \\(\u03b3 \u222a \u03b4) | C \u2208 C}, where C = {C_1, ..., C_n} is any clustering of V. It is possible that either \u03b1 = \u2205 or \u03b2 = \u2205, which means that in G_AB the relation D_A \u2282 D_B or D_B \u2282 D_A holds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudoword Evaluation Framework",
"sec_num": "3"
},
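The decomposition of the node set and the collapse test just described translate directly into set operations; a small sketch, where the names `partition` and `is_collapsed` are ours.

```python
def partition(V, D_A, D_B):
    """Split the ego-graph node set V into the four parts of equation (1):
    alpha (only A's thesaurus), beta (only B's), gamma (both), delta (neither)."""
    d_a, d_b = set(D_A) & set(V), set(D_B) & set(V)
    alpha = d_a - d_b
    beta = d_b - d_a
    gamma = d_a & d_b
    delta = set(V) - (d_a | d_b)
    return alpha, beta, gamma, delta

def is_collapsed(alpha, beta):
    """A pseudoword collapses onto one sense when one component is
    totally dominated by the other, i.e. alpha or beta is empty."""
    return not alpha or not beta
```

Collapsed pseudowords detected this way are excluded from evaluation, as in the text.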
{
"text": "In this case one word is totally dominant over the other, and the pseudoword actually collapses onto one sense. As already mentioned, we decided to exclude collapsed pseudowords from evaluation. To compute the BCubed F-score and NMI, we compare the ground truth clustering T = {\u03b1, \u03b2} to the clustering C \\(\u03b3 \u222a \u03b4) that we obtain from any algorithm under consideration. However, for the TOP2 score we want to look only at the two clusters C_A and C_B that best represent components A and B respectively. We define them as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudoword Evaluation Framework",
"sec_num": "3"
},
{
"text": "C_A = arg max_{C \u2208 C} |C \u2229 \u03b1|, C_B = arg max_{C \u2208 C} |C \u2229 \u03b2|.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudoword Evaluation Framework",
"sec_num": "3"
},
{
"text": "For C_A (respectively C_B) we define its precision or purity p_A (p_B) and its recall or completeness c_A (c_B) with respect to \u03b1 (\u03b2) as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudoword Evaluation Framework",
"sec_num": "3"
},
{
"text": "p_A = |C_A \u2229 \u03b1| / |C_A|, c_A = |C_A \u2229 \u03b1| / |\u03b1|.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudoword Evaluation Framework",
"sec_num": "3"
},
{
"text": "We take the respective harmonic means h(p_A, c_A) and h(p_B, c_B) and define the TOP2 score as their macro-average:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudoword Evaluation Framework",
"sec_num": "3"
},
{
"text": "TOP2 = (h(p_A, c_A) + h(p_B, c_B)) / 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudoword Evaluation Framework",
"sec_num": "3"
},
{
"text": "If it happens that C_A = C_B, we keep the best cluster for one component and take the second best for the other, according to which choice maximizes TOP2. If the clustering consists of only one cluster, we define either C_A = \u2205 or C_B = \u2205 and put the harmonic mean of its purity and completeness equal to 0. Therefore, in such a case the TOP2 will never be greater than 1/2. The motivation for the TOP2 score is that we know what we are looking for: namely, two clusters that represent A and B. The TOP2 score then gives us a measure of how well the clustering algorithm succeeds in correctly concentrating all the information in exactly two clusters with the least dispersion; this can be generalized to the case of more than two pseudosenses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudoword Evaluation Framework",
"sec_num": "3"
},
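Putting the definitions above together, a TOP2 sketch for a hard clustering (already restricted to \u03b1 \u222a \u03b4, i.e. with \u03b3 and \u03b4 removed) might look as follows; the function name `top2` is ours, and the tie-breaking for C_A = C_B follows the rule in the text.

```python
def top2(clusters, alpha, beta):
    """TOP2 score for a list of disjoint clusters (sets) over alpha | beta."""
    def h(p, c):
        # harmonic mean, 0 when both arguments are 0
        return 2 * p * c / (p + c) if p + c else 0.0

    def part(C, T):
        # harmonic mean of purity |C & T| / |C| and completeness |C & T| / |T|
        if not C:
            return 0.0
        inter = len(C & T)
        return h(inter / len(C), inter / len(T))

    by_a = sorted(clusters, key=lambda C: len(C & alpha), reverse=True)
    by_b = sorted(clusters, key=lambda C: len(C & beta), reverse=True)
    C_A, C_B = by_a[0], by_b[0]
    if C_A is not C_B:
        return (part(C_A, alpha) + part(C_B, beta)) / 2
    # C_A = C_B: keep the best cluster for one component and the second
    # best (the empty set, if only one cluster exists) for the other,
    # choosing whichever assignment maximizes the score
    second_a = by_a[1] if len(by_a) > 1 else set()
    second_b = by_b[1] if len(by_b) > 1 else set()
    return max((part(C_A, alpha) + part(second_b, beta)) / 2,
               (part(second_a, alpha) + part(C_B, beta)) / 2)
```

A single-cluster output leaves one component with the empty set, so the score stays at or below 1/2, as intended.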
{
"text": "In our experimental setting we will compare four graph-based clustering algorithms commonly applied in, or especially developed for, the task of WSI. They are: the Markov cluster algorithm (MCL) (van Dongen, 2000) ; Chinese Whispers (CW) (Biemann, 2006) ; MaxMax (MM) (Hope and Keller, 2013) ; and the gangplank clustering algorithm (GP) (Cecchini and Fersini, 2015). They are detailed in the following subsections. We remark that none of these algorithms sets a predefined number of clusters to be found. This is a critical property of WSI algorithms, since it is not known a priori whether a word is ambiguous in the underlying data collection and how many senses it might have.",
"cite_spans": [
{
"start": 195,
"end": 213,
"text": "(van Dongen, 2000)",
"ref_id": "BIBREF36"
},
{
"start": 238,
"end": 253,
"text": "(Biemann, 2006)",
"ref_id": "BIBREF5"
},
{
"start": 268,
"end": 291,
"text": "(Hope and Keller, 2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithms",
"sec_num": "4"
},
{
"text": "The Markov cluster algorithm (van Dongen, 2000) uses the concept of random walk on a graph, or Markov chain: the more densely intra-connected a region in the graph, the higher the probability to remain inside it starting from one of its nodes and moving randomly to another one. The strategy of the algorithm is then to perform a given number n of steps of the random walk, equivalent to taking the n-th power of the graph's adjacency matrix. Subsequently, entries of the matrix are raised to a given power to further increase strong connections and weaken less significant ones. This cycle is repeated an arbitrary number of times, and, as weaker connections tend to disappear, the resulting matrix is interpretable as a graph clustering. Not rooted in the NLP community, MCL was used for the task of WSI on co-occurrence graphs in (Widdows and Dorow, 2002) . Our implementation uses an expansion factor of 2 and an inflation factor of 1.4, which yielded the best results.",
"cite_spans": [
{
"start": 29,
"end": 47,
"text": "(van Dongen, 2000)",
"ref_id": "BIBREF36"
},
{
"start": 833,
"end": 858,
"text": "(Widdows and Dorow, 2002)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Markov Cluster Algorithm",
"sec_num": "4.1"
},
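The expansion/inflation cycle can be sketched compactly with NumPy, using the parameter values reported above; the function name `mcl` and the attractor-based cluster extraction are a common simplification, not necessarily van Dongen's exact procedure.

```python
import numpy as np

def mcl(adj, expansion=2, inflation=1.4, iters=50):
    """Minimal Markov cluster algorithm sketch on a weighted adjacency
    matrix: alternate random-walk expansion and entrywise inflation until
    the column-stochastic matrix settles into attractor blocks."""
    M = adj.astype(float) + np.eye(len(adj))      # self-loops stabilise the walk
    M /= M.sum(axis=0)                            # make columns stochastic
    for _ in range(iters):
        M = np.linalg.matrix_power(M, expansion)  # expansion: n walk steps
        M = M ** inflation                        # inflation: sharpen strong links
        M /= M.sum(axis=0)                        # renormalise columns
    # nodes sharing the same attractor row form one cluster
    clusters = {}
    for j in range(M.shape[1]):
        attractor = int(M[:, j].argmax())
        clusters.setdefault(attractor, set()).add(j)
    return list(clusters.values())
```

On a graph made of two disconnected pairs, the walk never crosses components, so two clusters emerge regardless of the inflation value.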
{
"text": "The Chinese Whispers algorithm was first described in (Biemann, 2006) . It is inspired by MCL as a simplified version of it and similarly simulates the flow of information in a graph. Initially, every node in the graph starts as a member of its own class; then, at each iteration every node assumes the prevalent class among those of its neighbours, measured by the weights on the edges incident to it. This algorithm is not deterministic and may not stabilize, as nodes are accessed in random order. However, it is extremely fast and quite successful at distinguishing denser subgraphs. The resulting clustering is generally relatively coarse. Besides its use for word sense induction, in (Biemann, 2006) CW was also used for the tasks of language separation and word class induction.",
"cite_spans": [
{
"start": 54,
"end": 69,
"text": "(Biemann, 2006)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Whispers",
"sec_num": "4.2"
},
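The label-propagation loop of Chinese Whispers can be sketched as below; the function name `chinese_whispers` and the fixed seed are ours (the algorithm itself is non-deterministic, as the text notes, since nodes are visited in random order).

```python
import random

def chinese_whispers(nodes, edges, iters=20, seed=0):
    """Minimal Chinese Whispers sketch: each node repeatedly adopts the
    class with the highest total edge weight among its neighbours.
    `edges` maps (u, v) -> weight for an undirected graph."""
    rng = random.Random(seed)
    label = {v: v for v in nodes}          # every node starts in its own class
    nbrs = {v: [] for v in nodes}
    for (u, v), w in edges.items():
        nbrs[u].append((v, w))
        nbrs[v].append((u, w))
    for _ in range(iters):
        order = list(nodes)
        rng.shuffle(order)                 # random update order
        for v in order:
            if not nbrs[v]:
                continue
            votes = {}
            for u, w in nbrs[v]:
                votes[label[u]] = votes.get(label[u], 0) + w
            label[v] = max(votes, key=votes.get)
    clusters = {}
    for v, l in label.items():
        clusters.setdefault(l, set()).add(v)
    return list(clusters.values())
```

Dense subgraphs quickly agree on a single label, which is why the resulting clustering tends to be coarse.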
{
"text": "MaxMax was originally described in (Hope and Keller, 2013 ) and applied to the task of WSI on weighted word co-occurrence graphs. It is a soft-clustering algorithm that rewrites the word graph G as an unweighted, directed graph, where edges are oriented by the principle of maximal affinity: the node u dominates v if the weight of (u, v) is maximal among all edges departing from v. Clusters are then defined as all the maximal quasi-strongly connected subgraphs of G (Ruohonen, 2013), each of which is represented by its root. Clusters can overlap because a node could be the descendant of two roots at the same time. The algorithm's complexity is linear in the number of the edges and its results are uniquely determined.",
"cite_spans": [
{
"start": 35,
"end": 57,
"text": "(Hope and Keller, 2013",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MaxMax",
"sec_num": "4.3"
},
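The maximal-affinity orientation and root-based cluster extraction of MaxMax can be sketched as follows; the function name `maxmax` is ours and the one-pass root marking is one common formulation of the procedure, with clusters allowed to overlap.

```python
def maxmax(nodes, edges):
    """Minimal MaxMax sketch: orient each undirected edge by maximal
    affinity (u -> v when w(u, v) is maximal among v's edges), pick roots,
    and return the descendant set of each root as a cluster."""
    nbrs = {v: {} for v in nodes}
    for (u, v), w in edges.items():
        nbrs[u][v] = w
        nbrs[v][u] = w
    children = {v: set() for v in nodes}   # children[u]: nodes u dominates
    for v in nodes:
        if nbrs[v]:
            m = max(nbrs[v].values())
            for u, w in nbrs[v].items():
                if w == m:
                    children[u].add(v)     # v has maximal affinity to u
    root = {v: True for v in nodes}
    for v in nodes:                        # root marking, one pass
        if root[v]:
            for c in children[v]:
                if c != v:
                    root[c] = False
    clusters = []
    for r in (v for v in nodes if root[v]):
        seen, stack = {r}, [r]
        while stack:                       # collect all descendants of r
            u = stack.pop()
            for c in children[u]:
                if c not in seen:
                    seen.add(c)
                    stack.append(c)
        clusters.append(seen)
    return clusters
```

Since every node follows only its strongest link, weak ties are ignored entirely, which explains the algorithm's fine-grained, divisive behaviour discussed in Section 5.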
{
"text": "The gangplank clustering algorithm was introduced in (Cecchini and Fersini, 2015), where its use for the task of WSI on co-occurrence graphs is shown. There, the concept of gangplank edges is introduced: they are edges that can be seen as weak links between nodes belonging to different, highly intra-connected subgraphs of a graph, and thus help deduce a cluster partitioning of the node set. In its proposed implementation, the computation of gangplank edges and the subsequent clustering of G is actually performed on a second-order graph of G, a distance graph D_G which represents the distances between nodes of G according to a weighted version of Jaccard distance adapted to node neighbourhoods. The gangplank algorithm is deterministic and behaves stably also on very dense or scale-free graphs. The resulting clustering tends to be relatively fine-grained. Table 2 summarizes the scores of BCubed F-measure (BC-F), NMI and TOP2 as mean scores over each possible pseudoword class, and Table 1 the overall mean scores per algorithm for the SSIM- and the co-occurrence-based data sets. The class of a pseudoword is the combination of the frequency classes of its two components, labelled from 1, comprising the least frequent words, to 5, comprising the most frequent words in the corpus. A total of 15 combinations are possible. Each has 45 pseudowords if the two words are of the same frequency class, and 100 otherwise. The case of having a collapsed pseudoword, discussed in Section 3, is more frequent for SSIMs than for co-occurrences. Formally, in the notation of (1), we say that one component of a pseudoword totally dominates the other one when either \u03b1 = \u2205 or \u03b2 = \u2205. This happens 249 times for SSIM-based graphs and 143 times for co-occurrence-based ones. We excluded all such pseudowords from evaluation, since they actually possess only one sense and thus cannot really be disambiguated. There is a clear and expected tendency for collapsed pseudowords to appear for very uneven [Table 1 here: Mean scores in percentages over all pseudowords for each clustering algorithm and the baseline, for our three metrics and for both data sets. The 95% confidence interval is also reported for each mean value. The best values on each data set and for each measure are boldfaced.]",
"cite_spans": [],
"ref_spans": [
{
"start": 865,
"end": 872,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 991,
"end": 999,
"text": "Table 1",
"ref_id": null
},
{
"start": 2431,
"end": 2438,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Gangplanks",
"sec_num": "4.4"
},
{
"text": "combinations of frequency classes, like the extreme case 1-5, where out of 100 pseudowords this happens 72 times for similarities and 84 times for co-occurrences. On the contrary, when the components belong to the same frequency class, this phenomenon never arises. This can be explained by the fact that LMI (see Section 2) is proportional to the frequency of a particular context or co-occurrence, so that highly frequent words tend to develop stronger similarities in their distributional thesauri, relegating sparser similarities of less frequent words to a marginal role or outweighing them altogether. Especially in the two highest frequency classes 4 and 5, there are terms that always come to dominate the graphs of their related pseudowords (like beer).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Data Set Analysis",
"sec_num": "5"
},
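The frequency dependence of LMI described above can be made concrete with a short sketch. This is an illustrative implementation of Lexicographer's Mutual Information (joint count times pointwise mutual information, in the spirit of Evert, 2004), not the paper's actual code; the function name and the toy counts are our own:

```python
import math

def lmi(f_ab, f_a, f_b, n):
    """Lexicographer's Mutual Information: joint count times PMI.
    f_ab: co-occurrence count of word a with context b,
    f_a, f_b: marginal counts, n: total number of observations.
    The leading f_ab factor makes the score grow with frequency."""
    if f_ab == 0:
        return 0.0
    return f_ab * math.log2((f_ab * n) / (f_a * f_b))

# Two pairs with identical PMI but different raw frequency: the more
# frequent pair gets a four times larger LMI, which is why highly
# frequent words come to dominate their pseudowords' graphs.
rare = lmi(2, 20, 20, 1000)       # 2 * log2(5)
frequent = lmi(8, 40, 40, 1000)   # 8 * log2(5)
```

Both pairs have the same association strength (PMI), yet the score of the frequent pair is four times higher, mirroring the dominance effect discussed in the text.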
{
"text": "Interestingly, we notice a drop of the NMI scores for similarities in the fields of Table 2a corresponding to the most skewed frequency class combinations, in particular 1-5, 2-5, 3-5, where some words tend to completely dominate their graphs, and clusterings tend to consist of a single big cluster, possibly accompanied by smaller, marginal ones. We also computed a most frequent score baseline (BSL), which yields just one single cluster for each ego word graph. Its NMI scores are always 0, as this measure heavily penalizes the asymmetry of having just one cluster in the output and two clusters in the ground truth. This, together with the fact that MaxMax, which is the most fine-grained among our examined algorithms, reaches NMI values that are on par with the other systems (or consistently better, in the case of co-occurrences) while regularly obtaining the lowest BC-F scores, leads us to claim that NMI is biased towards finegrained clusterings 8 . On the opposite side of the spectrum, the more coarse-grained systems tend to have very high BC-F scores close to the baseline, especially for the more skewed combinations. This depends on the fact that unbalanced graphs consist of nearly just one sense. Here the bias of BCubed measures becomes manifest: Due to their nature as averages over all single clustered elements, they stress the similarity between the internal structures of two clusterings, i.e. the distribution of elements inside each cluster, and disregard their external structures, i.e. their respective sizes and the distribution of cardinalities among clusters. The TOP2 measure, however, was defined so as to never assign a score greater than 0.5 in such occurrences. In fact, in the case of cooccurrences we see that the baseline achieves the best BC-F scores, but most of the time it is beaten by other systems in terms of TOP2 score. 
Overall, TOP2 seems to be the most suited measure for the evaluation of the task represented by our pseudoword data sets and is more in line with our expectations: higher scores when the ego word graph is more balanced, and much lower scores when the ego word graph is strongly skewed, without the excesses of NMI. We remark that scores on the whole are usually worse for co-occurrences than for similarities, both globally and for each frequency class combination. For co-occurrences, TOP2 never goes over 0.5. This is a strong indication that the structure of co-occurrence ego word graphs is different than that of SSIM-based ones, as already discussed in Section 2; in particular, they are denser and noisier, but generally more balanced. Remarkably, a coarse-grained algorithm like Chinese Whispers obtains its worst scores on co-occurrences, according to TOP2, suffering from its very unbalanced, nearly-BSL clusterings. However, this very characteristic makes Chinese Whispers the best system overall on the less dense SSIMs (and the other evaluation measures agree). At the same time, the more fine-grained GP and MCL seem to better adapt to the structure of co-occurrence graphs, while GP's performances clearly deteriorate on more unbalanced pseudowords for SSIMs. On the lower end of the spectrum, MaxMax shows a very constant but too divisive nature for our task of homonymy detection.",
"cite_spans": [],
"ref_spans": [
{
"start": 84,
"end": 92,
"text": "Table 2a",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results and Data Set Analysis",
"sec_num": "5"
},
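The NMI behaviour described above, always 0 for the one-cluster baseline and comparatively generous to fine-grained outputs, can be reproduced with a small sketch. This is a standard NMI implementation with the geometric-mean normalization (as in Strehl, 2002), not the authors' evaluation code:

```python
import math
from collections import Counter

def nmi(pred, gold):
    """Normalized mutual information between two labelings of the
    same elements, normalized by the geometric mean of the entropies.
    Defined as 0 when either labeling has only one cluster."""
    n = len(pred)
    cp, cg = Counter(pred), Counter(gold)
    joint = Counter(zip(pred, gold))
    h_p = -sum(c / n * math.log2(c / n) for c in cp.values())
    h_g = -sum(c / n * math.log2(c / n) for c in cg.values())
    mi = sum(c / n * math.log2((c * n) / (cp[a] * cg[b]))
             for (a, b), c in joint.items())
    denom = math.sqrt(h_p * h_g)
    return mi / denom if denom > 0 else 0.0

gold = [0] * 5 + [1] * 5
baseline = [0] * 10          # one-cluster output: NMI is 0
perfect = [1] * 5 + [0] * 5  # same partition, relabeled: NMI is 1
```

The single-cluster baseline scores exactly 0 regardless of the ground truth, which matches the BSL rows in Table 1.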
{
"text": "We briefly want to show the differences between the clusterings of our four systems (CW, MCL, MaxMax, GP) on the SSIM ego word graph of a same pseudoword. We chose catsup bufflehead: catsup (variant of ketchup) belongs to frequency class 2 and bufflehead (a kind of duck) to frequency class 1. Their graph has 488 nodes and a density of 0.548, above the global mean of 0.45. The node ratio is in favour of catsup at 3.05 : 1 against bufflehead, with respectively 111 against 339 exclusive terms, still being a quite balanced ego graph. Chinese Whispers finds two clusters which seem to cover correctly the two senses of bird or animal on one side, {hummingbird, woodpecker, dove, merganser,...}, and food on the other side: {polenta, egg, baguette, squab,...}. Its scores are very high, respectively 0.95 for BC-F, 0.80 for NMI and 0.93 for",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example of Clusterings",
"sec_num": "5.1"
},
{
"text": "The gangplank algorithm yields 5 clusters. One is clearly about the bird: {goldeneye, condor, peacock,...}. The other four have high precision, but lose recall for splitting the sense of food, e.g. in {puree, clove, dill,...} and {jelly, tablespoon, dripping,...}, and the distinction between them is not always clear. We obtain a BC-F of 0.66, a NMI of 0.51 and a TOP2 of 0.78. The Markov cluster algorithm with an inflation factor of 1.4 fails to make a distinction and finds only one cluster: {raptor, Parmesan, coffee, stork,...}. Its scores are the same of our trivial baseline: BC-F 0.77, NMI 0.0 and TOP2 0.41 (< 0.5, see section 3). MaxMax confirms its tendency of very finegrained clusterings and produces 22 clusters. Each has a very high precision, but some consist of only two or three elements, such as {gin, rum, brandy} and {cashmere, denim} and in general they make very narrow distinctions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TOP2.",
"sec_num": null
},
{
"text": "The biggest cluster {chili, chily, ginger, shallot,...} has 89 elements. We also find a cluster with bird names, but the overall scores are low: BC-F 0.27, NMI 0.38 and TOP2 0.45.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TOP2.",
"sec_num": null
},
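The BC-F scores quoted in these examples follow the BCubed scheme (Bagga and Baldwin, 1998; Amigó et al., 2009). A minimal sketch, assuming the common variant that averages per-element precision and recall over all elements and then takes their harmonic mean (the paper's exact variant may differ in details):

```python
from collections import Counter

def bcubed_f(pred, gold):
    """BCubed F-score for hard clusterings. pred and gold map each
    element to its predicted cluster label / gold sense label."""
    n = len(pred)
    p_size = Counter(pred.values())               # predicted cluster sizes
    g_size = Counter(gold.values())               # gold class sizes
    both = Counter((pred[e], gold[e]) for e in pred)
    # per-element precision: share of e's cluster sharing e's gold sense
    prec = sum(both[(pred[e], gold[e])] / p_size[pred[e]] for e in pred) / n
    # per-element recall: share of e's gold sense found in e's cluster
    rec = sum(both[(pred[e], gold[e])] / g_size[gold[e]] for e in pred) / n
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# A one-cluster output on a 9:1 skewed graph already scores about 0.90,
# illustrating why the BSL baseline reaches such high BC-F values.
gold = {i: ("a" if i < 9 else "b") for i in range(10)}
one_cluster = {i: "x" for i in range(10)}
```

Note how recall is perfect for the one-cluster output and precision is dragged down only by the minority sense, so BC-F rewards baseline-like clusterings on skewed graphs.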
{
"text": "The major contribution of this work is to present two new pseudoword ego word graph data sets for graph-based homonymy detection: one for context semantic similarities and one for co-occurrences. The data sets are modelled around 1225 pseudowords, each representing the combination of two monosemous words. We show that many ego word graphs are too skewed when the two components come from very different frequency classes, up to the point of actually collapsing on just one sense, but in general they represent a good approximation of homonymy. We evidence the biases of BCubed measures and NMI, respectively towards baseline-like clusterings (and BSL is the best performing system for co-occurrences in this sense) and finer clusterings. On the contrary, our proposed TOP2 metric seems to strike the right balance and to provide the most meaningful scores for interpretation. Chinese Whispers, which yields tendentially coarse clusterings, emerges as the best system overall for this task with regard to SSIM, and is closely followed by MCL, which is in turn the best system for co-occurrences, according to TOP2. The more fine-grained GP approach falls in-between. MaxMax systematically has the lowest scores, as its clusterings prove to be too fragmented for our task, and only achieves good NMI values, which are however biased.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
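The pseudoword construction summarized in the conclusions can be sketched in a few lines. This is a schematic illustration rather than the authors' actual pipeline; the function name and the merged-token format are our own choices:

```python
def merge_pseudoword(sentences, w1, w2):
    """Replace every occurrence of two monosemous words by a single
    artificial token; the replaced word is kept as the ground-truth
    sense label, so induced clusters can be scored against the a
    priori known two-way decomposition of the pseudoword."""
    pseudo = w1 + "_" + w2
    corpus, labels = [], []
    for sent in sentences:
        out = []
        for tok in sent.split():
            if tok in (w1, w2):
                labels.append(tok)   # true underlying sense
                out.append(pseudo)
            else:
                out.append(tok)
        corpus.append(" ".join(out))
    return corpus, labels

corpus, labels = merge_pseudoword(
    ["the catsup was sweet", "a bufflehead dove underwater"],
    "catsup", "bufflehead")
```

An ego word graph is then built around the merged token, and a clustering of that graph is compared against the recorded labels.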
{
"text": "These considerations lead us to identify Word Sense Discrimination 9 (WSD), commonly used as a synonym for Word Sense Induction, as an actually different, yet complementary task which necessitates different instruments, as exemplified by our double data set: whereas WSI is paradigmatic, WSD is syntagmatic. We deem that this distinction deserves further investigation. As a future work, beyond expanding our data sets we envision the implementation of consensus clustering (Ghaemi et al., 2009) and re-clustering techniques to improve results, and a more accurate analysis of the relation between creation of word graphs and algorithms' outputs.",
"cite_spans": [
{
"start": 474,
"end": 495,
"text": "(Ghaemi et al., 2009)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "A combination of the Leipzig Corpora Collection (LCC), http://corpora.uni-leipzig.de(Richter et al., 2006) and the Gigaword corpus(Parker et al., 2011).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "About relations of second and higher orders, cf. (Biemann and Quasthoff, 2009).5 A fundamental source on this topic is(De Saussure, 19951916.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use the implementation from https: //sourceforge.net/projects/jobimtext/ with parameters: -n 200 -N 200.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "NMI is equivalent to V-measure, as shown byRemus and Biemann (2013).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This bias is discussed more at length byLi et al. (2014).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Defined as \"determining for any two occurrences [of a word] whether they belong to the same sense or not\", afterSch\u00fctze (1998).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A comparison of extrinsic clustering evaluation metrics based on formal constraints",
"authors": [
{
"first": "Enrique",
"middle": [],
"last": "Amig\u00f3",
"suffix": ""
},
{
"first": "Julio",
"middle": [],
"last": "Gonzalo",
"suffix": ""
},
{
"first": "Javier",
"middle": [],
"last": "Artiles",
"suffix": ""
},
{
"first": "Felisa",
"middle": [],
"last": "Verdejo",
"suffix": ""
}
],
"year": 2009,
"venue": "Information retrieval",
"volume": "12",
"issue": "4",
"pages": "461--486",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Enrique Amig\u00f3, Julio Gonzalo, Javier Artiles, and Fe- lisa Verdejo. 2009. A comparison of extrinsic clustering evaluation metrics based on formal con- straints. Information retrieval, 12(4):461-486.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Algorithms for scoring coreference chains",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Bagga",
"suffix": ""
},
{
"first": "Breck",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the first international Conference on Language Resources and Evaluation (LREC'98), workshop on linguistic coreference",
"volume": "",
"issue": "",
"pages": "563--566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In Proceedings of the first international Conference on Language Resources and Evaluation (LREC'98), workshop on linguistic coreference, pages 563-566, Granada, Spain. European Language Resources Association.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Semisupervised learning with induced word senses for state of the art word sense disambiguation",
"authors": [
{
"first": "Osman",
"middle": [],
"last": "Ba\u015fkaya",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Artificial Intelligence Research",
"volume": "55",
"issue": "",
"pages": "1025--1058",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Osman Ba\u015fkaya and David Jurgens. 2016. Semi- supervised learning with induced word senses for state of the art word sense disambiguation. Journal of Artificial Intelligence Research, 55:1025-1058.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Networks generated from natural language text",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "Uwe",
"middle": [],
"last": "Quasthoff",
"suffix": ""
}
],
"year": 2009,
"venue": "Dynamics on and of complex networks",
"volume": "",
"issue": "",
"pages": "167--185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Biemann and Uwe Quasthoff. 2009. Networks generated from natural language text. In Dynam- ics on and of complex networks, pages 167-185. Springer.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Text: Now in 2D! a framework for lexical expansion with contextual similarity",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Riedl",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Language Modelling",
"volume": "1",
"issue": "1",
"pages": "55--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Biemann and Martin Riedl. 2013. Text: Now in 2D! a framework for lexical expansion with con- textual similarity. Journal of Language Modelling, 1(1):55-95.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Chinese whispers: an efficient graph clustering algorithm and its application to natural language processing problems",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the first workshop on graph based methods for natural language processing",
"volume": "",
"issue": "",
"pages": "73--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Biemann. 2006. Chinese whispers: an effi- cient graph clustering algorithm and its application to natural language processing problems. In Pro- ceedings of the first workshop on graph based meth- ods for natural language processing, pages 73-80, New York, New York, USA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Word sense induction: Tripletbased clustering and automatic evaluation",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Bordag",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "137--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Bordag. 2006. Word sense induction: Triplet- based clustering and automatic evaluation. In Pro- ceedings of the 11th Conference of the European Chapter of the Association for Computational Lin- guistics, pages 137-144, Trento, Italy. EACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Word sense discrimination: A gangplank algorithm",
"authors": [
{
"first": "Massimiliano",
"middle": [],
"last": "Flavio",
"suffix": ""
},
{
"first": "Elisabetta",
"middle": [],
"last": "Cecchini",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fersini",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Second Italian Conference on Computational Linguistics CLiC-it 2015",
"volume": "",
"issue": "",
"pages": "77--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Flavio Massimiliano Cecchini and Elisabetta Fersini. 2015. Word sense discrimination: A gangplank al- gorithm. In Proceedings of the Second Italian Con- ference on Computational Linguistics CLiC-it 2015, pages 77-81, Trento, Italy.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Generating typed dependency parses from phrase structure parses",
"authors": [
{
"first": "Marie-Catherine De",
"middle": [],
"last": "Marneffe",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the fifth international Conference on Language Resources and Evaluation (LREC'06)",
"volume": "",
"issue": "",
"pages": "449--454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine De Marneffe, Bill MacCartney, and Christopher Manning. 2006. Generating typed de- pendency parses from phrase structure parses. In Proceedings of the fifth international Conference on Language Resources and Evaluation (LREC'06), pages 449-454, Genoa, Italy. European Language Resources Association.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Cours de linguistique g\u00e9n\u00e9rale",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cours de lin- guistique g\u00e9n\u00e9rale. Payot&Rivage, Paris, France. Critical edition of 1st 1916 edition.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The statistics of word cooccurrences: word pairs and collocations",
"authors": [
{
"first": "Stefan",
"middle": [
"Evert"
],
"last": "",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Evert. 2004. The statistics of word cooccur- rences: word pairs and collocations. Ph.D. thesis, Universit\u00e4t Stuttgart, August.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Work on statistical methods for word sense disambiguation",
"authors": [
{
"first": "William",
"middle": [],
"last": "Gale",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1992,
"venue": "Technical Report of 1992 Fall Symposium -Probabilistic Approaches to Natural Language",
"volume": "",
"issue": "",
"pages": "54--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Gale, Kenneth Church, and David Yarowsky. 1992. Work on statistical methods for word sense disambiguation. In Technical Report of 1992 Fall Symposium -Probabilistic Approaches to Natural Language, pages 54-60, Cambridge, Massachusetts, USA. AAAI.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A survey: clustering ensembles techniques",
"authors": [
{
"first": "Reza",
"middle": [],
"last": "Ghaemi",
"suffix": ""
},
{
"first": "Md",
"middle": [],
"last": "Nasir Sulaiman",
"suffix": ""
},
{
"first": "Hamidah",
"middle": [],
"last": "Ibrahim",
"suffix": ""
},
{
"first": "Norwati",
"middle": [],
"last": "Mustapha",
"suffix": ""
}
],
"year": 2009,
"venue": "World Academy of Science, Engineering and Technology",
"volume": "50",
"issue": "",
"pages": "636--645",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reza Ghaemi, Md Nasir Sulaiman, Hamidah Ibrahim, Norwati Mustapha, et al. 2009. A survey: clustering ensembles techniques. World Academy of Science, Engineering and Technology, 50:636-645.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Distributional structure. Word",
"authors": [
{
"first": "Zellig",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1954,
"venue": "",
"volume": "10",
"issue": "",
"pages": "146--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig Harris. 1954. Distributional structure. Word, 10(2-3):146-162.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Topic-sensitive pagerank",
"authors": [
{
"first": "Taher",
"middle": [],
"last": "Haveliwala",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 11th international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "517--526",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taher Haveliwala. 2002. Topic-sensitive pagerank. In Proceedings of the 11th international conference on World Wide Web, pages 517-526, Honolulu, Hawaii, USA. ACM.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "MaxMax: a graphbased soft clustering algorithm applied to word sense induction",
"authors": [
{
"first": "David",
"middle": [],
"last": "Hope",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 14th International Conference on Computational Linguistics and Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "368--381",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Hope and Bill Keller. 2013. MaxMax: a graph- based soft clustering algorithm applied to word sense induction. In Proceedings of the 14th Interna- tional Conference on Computational Linguistics and Intelligent Text Processing, pages 368-381, Samos, Greece.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "SemEval-2013 task 13: Word sense induction for graded and non-graded senses",
"authors": [
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Klapaftis",
"suffix": ""
}
],
"year": 2013,
"venue": "*SEM 2013: The Second Joint Conference on Lexical and Computational Semantics",
"volume": "2",
"issue": "",
"pages": "290--299",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Jurgens and Ioannis Klapaftis. 2013. SemEval- 2013 task 13: Word sense induction for graded and non-graded senses. In *SEM 2013: The Second Joint Conference on Lexical and Computational Se- mantics, volume 2, pages 290-299, Atlanta, Geor- gia, USA. ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The sketch engine",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Rychl\u00fd",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Smr\u017e",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Tugwell",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Eleventh Euralex Conference",
"volume": "",
"issue": "",
"pages": "105--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Kilgarriff, Pavel Rychl\u00fd, Pavel Smr\u017e, and David Tugwell. 2004. The sketch engine. In Proceedings of the Eleventh Euralex Conference, pages 105-116, Lorient, France.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improved estimation of entropy for evaluation of word sense induction",
"authors": [
{
"first": "Linlin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Caroline",
"middle": [],
"last": "Sporleder",
"suffix": ""
}
],
"year": 2014,
"venue": "Computational Linguistics",
"volume": "40",
"issue": "3",
"pages": "671--685",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linlin Li, Ivan Titov, and Caroline Sporleder. 2014. Improved estimation of entropy for evaluation of word sense induction. Computational Linguistics, 40(3):671-685.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Semeval-2010 task 14: Word sense induction & disambiguation",
"authors": [
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Klapaftis",
"suffix": ""
},
{
"first": "Dmitriy",
"middle": [],
"last": "Dligach",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th international workshop on semantic evaluation",
"volume": "",
"issue": "",
"pages": "63--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suresh Manandhar, Ioannis Klapaftis, Dmitriy Dligach, and Sameer Pradhan. 2010. Semeval-2010 task 14: Word sense induction & disambiguation. In Pro- ceedings of the 5th international workshop on se- mantic evaluation, pages 63-68, Los Angeles, Cal- ifornia, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Speech and language processing",
"authors": [
{
"first": "James",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Martin and Daniel Jurafsky. 2000. Speech and language processing. Pearson, Upper Saddle River, New Jersey, USA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "WordNet: a lexical database for English",
"authors": [
{
"first": "George",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Miller. 1995. WordNet: a lexical database for English. Communications of the ACM, 38(11):39- 41.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Categorybased pseudowords",
"authors": [
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Marti",
"middle": [],
"last": "Hearst",
"suffix": ""
}
],
"year": 2003,
"venue": "Companion Volume of the Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HTL-NAACL) 2003 -Short Papers",
"volume": "",
"issue": "",
"pages": "70--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov and Marti Hearst. 2003. Category- based pseudowords. In Companion Volume of the Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HTL- NAACL) 2003 -Short Papers, pages 70-72, Ed- monton, Alberta, Canada. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "SemEval-2007 task 07: Coarsegrained English all-words task",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Litkowski",
"suffix": ""
},
{
"first": "Orin",
"middle": [],
"last": "Hargraves",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 4th International Workshop on Semantic Evaluations",
"volume": "",
"issue": "",
"pages": "30--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli, Kenneth Litkowski, and Orin Har- graves. 2007. SemEval-2007 task 07: Coarse- grained English all-words task. In Proceedings of the 4th International Workshop on Semantic Evalu- ations, pages 30-35, Prague, Czech Republic. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Word sense disambiguation: A survey",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "41",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Computing Surveys (CSUR), 41(2):10.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A new approach to pseudoword generation",
"authors": [
{
"first": "Lubom\u00edr",
"middle": [],
"last": "Otrusina",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Smr\u017e",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the seventh international Conference on Language Resources and Evaluation (LREC'10)",
"volume": "",
"issue": "",
"pages": "1195--1199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lubom\u00edr Otrusina and Pavel Smr\u017e. 2010. A new ap- proach to pseudoword generation. In Proceedings of the seventh international Conference on Language Resources and Evaluation (LREC'10), pages 1195- 1199. European Language Resources Association.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "English Gigaword Fifth Edition. Linguistic Data Consortium",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Parker",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Graff",
"suffix": ""
},
{
"first": "Junbo",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Kazuaki",
"middle": [],
"last": "Maeda",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011. English Gigaword Fifth Edition. Linguistic Data Consortium, Philadelphia, Pennsylvania, USA.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Paving the way to a large-scale pseudosenseannotated dataset",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Taher",
"suffix": ""
},
{
"first": "Pilehvar",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HTL-NAACL)",
"volume": "",
"issue": "",
"pages": "1100--1109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Taher Pilehvar and Roberto Navigli. 2013. Paving the way to a large-scale pseudosense- annotated dataset. In Proceedings of the 2013 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies (HTL-NAACL), pages 1100- 1109, Atlanta, Georgia, USA. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Three knowledge-free methods for automatic lexical chain extraction",
"authors": [
{
"first": "Steffen",
"middle": [],
"last": "Remus",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HTL-NAACL)",
"volume": "",
"issue": "",
"pages": "989--999",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steffen Remus and Chris Biemann. 2013. Three knowledge-free methods for automatic lexical chain extraction. In Proceedings of the 2013 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies (HTL-NAACL), pages 989-999, Atlanta, Georgia, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Exploiting the Leipzig Corpora Collection",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Richter",
"suffix": ""
},
{
"first": "Uwe",
"middle": [],
"last": "Quasthoff",
"suffix": ""
},
{
"first": "Erla",
"middle": [],
"last": "Hallsteinsd\u00f3ttir",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Fifth Slovenian and First International Language Technologies Conference, IS-LTC '06",
"volume": "",
"issue": "",
"pages": "68--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthias Richter, Uwe Quasthoff, Erla Hallsteinsd\u00f3ttir, and Chris Biemann. 2006. Exploiting the Leipzig Corpora Collection. In Proceedings of the Fifth Slovenian and First International Language Tech- nologies Conference, IS-LTC '06, pages 68-73, Ljubljana, Slovenia. Slovenian Language Technolo- gies Society.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Graph Theory. Tampereen teknillinen yliopisto. Originally titled Graafiteoria, lecture notes translated by",
"authors": [
{
"first": "",
"middle": [],
"last": "Keijo Ruohonen",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keijo Ruohonen. 2013. Graph Theory. Tampereen teknillinen yliopisto. Originally titled Graafiteoria, lecture notes translated by Tamminen, J., Lee, K.-C. and Pich\u00e9, R.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Dimensions of meaning",
"authors": [
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of Supercomputing'92",
"volume": "",
"issue": "",
"pages": "787--796",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinrich Sch\u00fctze. 1992. Dimensions of meaning. In Proceedings of Supercomputing'92, pages 787-796, Minneapolis, Minnesota, USA. ACM/IEEE.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Automatic word sense discrimination",
"authors": [
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational linguistics",
"volume": "24",
"issue": "1",
"pages": "97--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinrich Sch\u00fctze. 1998. Automatic word sense dis- crimination. Computational linguistics, 24(1):97- 123.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Relationship-based clustering and cluster ensembles for high-dimensional data mining",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Strehl",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Strehl. 2002. Relationship-based cluster- ing and cluster ensembles for high-dimensional data mining. Ph.D. thesis, The University of Texas at Austin, May.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "From frequency to meaning: Vector space models of semantics",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of artificial intelligence research",
"volume": "37",
"issue": "1",
"pages": "141--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Turney and Patrick Pantel. 2010. From fre- quency to meaning: Vector space models of se- mantics. Journal of artificial intelligence research, 37(1):141-188.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Graph clustering by flow simulation",
"authors": [
{
"first": "",
"middle": [],
"last": "Stijn Van Dongen",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stijn van Dongen. 2000. Graph clustering by flow sim- ulation. Ph.D. thesis, Universiteit Utrecht, May.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A graph model for unsupervised lexical acquisition",
"authors": [
{
"first": "Dominic",
"middle": [],
"last": "Widdows",
"suffix": ""
},
{
"first": "Beate",
"middle": [],
"last": "Dorow",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 19th international conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominic Widdows and Beate Dorow. 2002. A graph model for unsupervised lexical acquisition. In Pro- ceedings of the 19th international conference on Computational Linguistics, volume 1, pages 1-7, Taipei, Taiwan. Association for Computational Lin- guistics.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"num": null,
"text": "Mean scores per frequency class combination over both SSIM-based and the co-occurrencebased ego word graph data sets. The best values for each frequency class combination are highlighted.",
"html": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}