| { |
| "paper_id": "J19-3002", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T02:58:41.513564Z" |
| }, |
| "title": "", |
| "authors": [], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "", |
| "pdf_parse": { |
| "paper_id": "J19-3002", |
| "_pdf_hash": "", |
| "abstract": [], |
| "body_text": [ |
| { |
| "text": "of language. Namely, each symbol can refer to several meanings, mapping the space of objects to the space of communicative signs (de Saussure 1916) . For language processing applications, these symbols need to be represented in a computational format. The structure discovery paradigm (Biemann 2012) aims at inducing a system of linguistic symbols and relationships between them in an unsupervised way to enable processing of a wide variety of languages. Clustering algorithms are central and ubiquitous tools for such kinds of unsupervised structure discovery processes applied to natural language data. In this article, we present a new clustering algorithm, 1 which is especially suitable for processing graphs of linguistic data, because it performs disambiguation of symbols in the local context in order to subsequently globally cluster those disambiguated symbols.", |
| "cite_spans": [ |
| { |
| "start": 129, |
| "end": 147, |
| "text": "(de Saussure 1916)", |
| "ref_id": null |
| }, |
| { |
| "start": 285, |
| "end": 299, |
| "text": "(Biemann 2012)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "At the heart of our method lies the pre-processing of a graph on the basis of local pre-clustering. Breaking nodes that connect to several communities (i.e., hubs) into several local senses helps to better reach the goal of clustering, no matter which clustering algorithm is used. This results in a sparser sense-aware graphical representation of the input data. Such a representation allows the use of efficient hard clustering algorithms for performing fuzzy clustering.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
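The hub-splitting idea described above can be illustrated with a small sketch. This is not the authors' implementation: it replaces each node with one sense node per connected component of its ego network, using plain connected components as a deliberately simple stand-in for the hard clustering of neighborhoods that the method performs, and the toy synonymy graph (function names, data) is hypothetical.

```python
def ego_components(graph, node):
    """Connected components of the ego network of `node`
    (its neighborhood, excluding the ego itself)."""
    neighbors = set(graph[node])
    seen, components = set(), []
    for start in sorted(neighbors):
        if start in seen:
            continue
        component, stack = set(), [start]
        while stack:
            u = stack.pop()
            if u in component:
                continue
            component.add(u)
            seen.add(u)
            stack.extend((graph[u] & neighbors) - component)
        components.append(component)
    return components

def disambiguate(graph):
    """Split every node into sense nodes, one per ego-network component.
    Returns a mapping from (word, sense_id) to the neighbors of that sense."""
    senses = {}
    for node in graph:
        for i, component in enumerate(ego_components(graph, node)):
            senses[(node, i)] = component
    return senses

# A toy synonymy graph: "bank" bridges two communities, hence two senses.
graph = {
    "bank": {"river", "shore", "money", "credit"},
    "river": {"bank", "shore"},
    "shore": {"bank", "river"},
    "money": {"bank", "credit"},
    "credit": {"bank", "money"},
}
```

Running `disambiguate(graph)` yields two sense nodes for "bank" and a single sense node for every other word; a hard clustering of the resulting sense-aware graph then produces overlapping word clusters, which is the fuzzy-clustering effect described above.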
| { |
| "text": "The contributions presented in this article include:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "A meta-algorithm for graph clustering, called WATSET, performing a fuzzy clustering of the input graph using hard clustering methods in two subsequent steps (Section 3).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1.", |
| "sec_num": null |
| }, |
| { |
| "text": "A method for synset induction based on the WATSET algorithm applied to synonymy graphs weighted by word embeddings (Section 4).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "A method for semantic frame induction based on the WATSET algorithm applied as a triclustering algorithm to syntactic triples (Section 5).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "A method for semantic class induction based on the WATSET algorithm applied to a distributional thesaurus (Section 6).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4.", |
| "sec_num": null |
| }, |
| { |
| "text": "This article is organized as follows. Section 2 discusses the related work. Section 3 presents the WATSET algorithm in a more general fashion than previously introduced in Ustalov, , including an analysis of its computational complexity and run-time. We also describe a simplified version of WATSET that does not use the context similarity measure for propagating links in the original graph to the appropriate senses in the disambiguated graph. Three subsequent sections present different applications of the algorithm. Section 4 applies WATSET for unsupervised synset induction, referencing results by Ustalov, Panchenko, and Biemann. Section 5 shows frame induction with WATSET on the basis of a triclustering approach, as previously described by Ustalov et al. (2018) . Section 6 presents new experiments on semantic class induction with WATSET. Section 7 concludes with the final remarks and pointers for future work. Table 1 shows several examples of linguistic structures on which we conduct experiments described in this article. With the exception of the type of input graph and the hyper-parameters of the WATSET algorithm, the overall pipeline remains similar in every described application. For instance, in Section 4 the input of the clustering algorithm is a graph of ambiguous synonyms and the output is an induced linguistic structure that represents synsets. Thus, by varying the input graphs we show how using the same methodology on various types of linguistic structures can be induced in an unsupervised manner. This opens avenues for extraction of various meaningful structures from linguistic graphs in natural language processing (NLP) and other fields using the method presented in this article.", |
| "cite_spans": [ |
| { |
| "start": 750, |
| "end": 771, |
| "text": "Ustalov et al. (2018)", |
| "ref_id": "BIBREF133" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 923, |
| "end": 930, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "4.", |
| "sec_num": null |
| }, |
| { |
| "text": "We present surveys on graph clustering (Section 2.1), word sense induction (Section 2.2), lexical semantic frame induction (Section 2.3), and semantic class induction (Section 2.4), giving detailed explanations of algorithms used in our experiments and discussing related work on these topics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
| { |
| "text": "Graph clustering is a process of finding groups of strongly related vertices in a graph, which is a field of research in its own right with a large number of proposed approaches; see Schaeffer (2007) for a survey. Graph clustering methods are strongly related to the methods for finding communities in networks (Newman and Girvan 2004; Fortunato 2010) . In our work, we focus mostly on the algorithms, which have proven to be useful for processing of networks of linguistic data, such as word co-occurrence graphs, especially those that were used for induction of linguistic structures such as word senses. Markov Clustering (MCL; van Dongen 2000) is a hard clustering algorithm, that is, a method that partitions nodes of the graph in a set of disjoint clusters. This method is based on simulation of stochastic flow in graphs. MCL simulates random walks within a graph by the alternation of two operators, called expansion and inflation, which recompute the class labels. Notably, it has been successfully used for the word sense induction task (Dorow and Widdows 2003) .", |
| "cite_spans": [ |
| { |
| "start": 183, |
| "end": 199, |
| "text": "Schaeffer (2007)", |
| "ref_id": "BIBREF114" |
| }, |
| { |
| "start": 311, |
| "end": 335, |
| "text": "(Newman and Girvan 2004;", |
| "ref_id": "BIBREF87" |
| }, |
| { |
| "start": 336, |
| "end": 351, |
| "text": "Fortunato 2010)", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 1047, |
| "end": 1071, |
| "text": "(Dorow and Widdows 2003)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph Clustering", |
| "sec_num": "2.1" |
| }, |
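As a concrete illustration of the expansion and inflation operators, here is a compact pure-Python sketch of MCL. This is our reading of the procedure, not van Dongen's reference implementation; the self-loops and the inflation exponent of 2 follow common practice.

```python
def normalize_columns(m):
    """Rescale each column to sum to 1 (column-stochastic matrix)."""
    n = len(m)
    out = [row[:] for row in m]
    for j in range(n):
        s = sum(out[i][j] for i in range(n))
        for i in range(n):
            out[i][j] /= s
    return out

def mcl(adj, inflation=2.0, iterations=50):
    """Markov Clustering: alternate expansion (squaring the matrix,
    i.e., taking two steps of the random walk) and inflation
    (elementwise power, which strengthens intra-cluster flow)."""
    n = len(adj)
    # add self-loops, as is customary for MCL
    m = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    m = normalize_columns(m)
    for _ in range(iterations):
        # expansion: matrix product m @ m
        m = [[sum(m[i][k] * m[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
        # inflation: elementwise power, then renormalize columns
        m = normalize_columns([[v ** inflation for v in row] for row in m])
    # rows that keep probability mass are attractors;
    # their nonzero entries give the clusters
    clusters = set()
    for i in range(n):
        members = frozenset(j for j in range(n) if m[i][j] > 1e-6)
        if members:
            clusters.add(members)
    return clusters
```

On a graph of two triangles joined by a single bridge edge, the flow concentrates inside each triangle and the procedure recovers the two communities as disjoint clusters.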
| { |
| "text": "Chinese Whispers (CW; Biemann 2006 Biemann , 2012 ) is a hard clustering algorithm for weighted graphs, which can be considered as a special case of MCL with a simplified class update step. At each iteration, the labels of all the nodes are updated according to the majority of labels among the neighboring nodes. The algorithm has a hyperparameter that controls graph weights, which can be set to three values: (1) CW top sums over the neighborhood's classes; (2) CW lin downgrades the influence of a neighboring node by its degree; or (3) CW log by the logarithm of its degree.", |
| "cite_spans": [ |
| { |
| "start": 17, |
| "end": 21, |
| "text": "(CW;", |
| "ref_id": null |
| }, |
| { |
| "start": 22, |
| "end": 34, |
| "text": "Biemann 2006", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 35, |
| "end": 49, |
| "text": "Biemann , 2012", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph Clustering", |
| "sec_num": "2.1" |
| }, |
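A minimal reimplementation of the label-update rule makes the three weighting options concrete. This is an illustrative sketch, not Biemann's reference implementation; the variant names (`top`, `lin`, `log`) follow the description above, and `log(degree + 1)` is used to avoid division by zero for degree-one neighbors.

```python
import math
import random
from collections import defaultdict

def chinese_whispers(edges, weighting="top", iterations=30, seed=0):
    """Chinese Whispers: iteratively assign to each node the class with
    the highest total weight among its neighbors. `weighting` downgrades
    high-degree neighbors: 'lin' divides the edge weight by the neighbor's
    degree, 'log' by log(degree + 1); 'top' uses the weight as-is."""
    graph = defaultdict(dict)
    for u, v, w in edges:
        graph[u][v] = w
        graph[v][u] = w
    labels = {node: node for node in graph}  # every node starts as its own class
    rng = random.Random(seed)
    nodes = list(graph)
    for _ in range(iterations):
        rng.shuffle(nodes)  # process nodes in random order
        for node in nodes:
            scores = defaultdict(float)
            for neighbor, weight in graph[node].items():
                if weighting == "lin":
                    weight /= len(graph[neighbor])
                elif weighting == "log":
                    weight /= math.log(len(graph[neighbor]) + 1)
                scores[labels[neighbor]] += weight
            labels[node] = max(scores, key=scores.get)
    return labels
```

On two tightly connected triangles joined by a weak edge, the weak link never outweighs the in-triangle votes, so the two communities keep distinct labels.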
| { |
| "text": "MaxMax (Hope and Keller 2013a ) is a fuzzy clustering algorithm particularly designed for the word sense induction task. In a nutshell, pairs of nodes are grouped if they have a maximal mutual affinity. The algorithm starts by converting the undirected input graph into a directed graph by keeping the maximal affinity nodes of each node. Next, all nodes are marked as root nodes. Finally, for each root node, the following procedure is repeated: All transitive children of this root form a cluster and the roots are marked as non-root nodes; a root node together with all its transitive children form a fuzzy cluster.", |
| "cite_spans": [ |
| { |
| "start": 7, |
| "end": 29, |
| "text": "(Hope and Keller 2013a", |
| "ref_id": "BIBREF51" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph Clustering", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Clique Percolation Method (CPM) by Palla et al. (2005) is a fuzzy clustering algorithm, that is, a method that partitions nodes of a graph in a set of potentially overlapping clusters. The method is designed for unweighted graphs and builds up clusters from k-cliques corresponding to fully connected sub-graphs of k nodes. Although this method is only commonly used in social network analysis for clique detection, we decided to add it to the comparison, as synsets are essentially cliques of synonyms.", |
| "cite_spans": [ |
| { |
| "start": 35, |
| "end": 54, |
| "text": "Palla et al. (2005)", |
| "ref_id": "BIBREF93" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph Clustering", |
| "sec_num": "2.1" |
| }, |
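For intuition, the k-clique percolation procedure can be sketched in a few lines. This is a naive enumeration, exponential in graph size and suitable only for toy inputs (Palla et al.'s implementation is far more efficient); the function name and example graph are ours.

```python
from collections import defaultdict
from itertools import combinations

def k_clique_communities(edges, k=3):
    """Clique percolation: a community is the union of all k-cliques
    reachable from one another through k-cliques sharing k - 1 nodes.
    Naive enumeration; toy inputs only."""
    adjacency = defaultdict(set)
    for u, v in edges:
        adjacency[u].add(v)
        adjacency[v].add(u)
    cliques = [frozenset(c) for c in combinations(sorted(adjacency), k)
               if all(b in adjacency[a] for a, b in combinations(c, 2))]
    parent = {c: c for c in cliques}        # union-find over cliques
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]   # path halving
            c = parent[c]
        return c
    for c1, c2 in combinations(cliques, 2):
        if len(c1 & c2) == k - 1:           # adjacent cliques percolate
            parent[find(c1)] = find(c2)
    communities = defaultdict(set)
    for c in cliques:
        communities[find(c)].update(c)
    return {frozenset(nodes) for nodes in communities.values()}
```

Note that nodes belonging to no k-clique end up in no community at all, which is one way this method differs from the partitional algorithms above.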
| { |
| "text": "The Louvain method (Blondel et al. 2008 ) is a hard graph clustering method developed for identification of communities in large networks. The algorithm finds hierarchies of clusters in a recursive fashion. It is based on a greedy method that optimizes modularity of a partition of the network. First, it looks for small communities by optimizing modularity locally. Second, it aggregates nodes belonging to the same community and builds a new network whose nodes are the communities. These steps are repeated to maximize modularity of the clustering result.", |
| "cite_spans": [ |
| { |
| "start": 19, |
| "end": 39, |
| "text": "(Blondel et al. 2008", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph Clustering", |
| "sec_num": "2.1" |
| }, |
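Since the Louvain method is driven entirely by modularity, it helps to see the objective itself. Below is a direct, quadratic-time computation of Newman-Girvan modularity for an unweighted, undirected graph, written for clarity rather than speed; it is the objective only, not the Louvain method.

```python
from collections import defaultdict

def modularity(edges, partition):
    """Newman-Girvan modularity for an unweighted, undirected graph:
    Q = (1 / 2m) * sum over node pairs (i, j) in the same community of
        (A_ij - k_i * k_j / 2m),
    where m is the number of edges and k_i the degree of node i."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    m = len(edges)
    adjacency = {frozenset(e) for e in edges}
    q = 0.0
    for u in degree:                # ordered pairs, including u == v,
        for v in degree:            # exactly as in the formula above
            if partition[u] != partition[v]:
                continue
            a = 1.0 if frozenset((u, v)) in adjacency else 0.0
            q += a - degree[u] * degree[v] / (2.0 * m)
    return q / (2.0 * m)
```

Louvain's greedy phase moves a node to a neighboring community whenever the move increases Q; the aggregation phase then contracts each community into a single node and the two phases repeat on the smaller graph.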
| { |
| "text": "Word Sense Induction is an unsupervised knowledge-free approach to Word Sense Disambiguation (WSD): It uses neither handcrafted lexical resources nor hand-annotated sense-labeled corpora. Instead, it induces word sense inventories automatically from corpora. Unsupervised WSD methods fall into two main categories: context clustering and word ego network clustering.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Sense Induction", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Context clustering approaches, such as Pedersen and Bruce (1997) and Sch\u00fctze (1998) , represent an instance usually by a vector that characterizes its context, where the definition of context can vary greatly. These vectors of each instance are then clustered. Sch\u00fctze (1998) induced sparse sense vectors by clustering context vectors, using the expectation-maximization algorithm. This approach is fitted with a similarity-based WSD mechanism. Pantel and Lin (2002) used a two-staged Clustering by Committee algorithm. In the first stage, it uses average-link clustering to find small and tight clusters, which are used to iteratively identify committees from these clusters. Reisinger and Mooney (2010) presented a multi-prototype vector space. Sparse tf-idf vectors are clustered, using a parametric method fixing the same number of senses for all words. Sense vectors are centroids of the clusters.", |
| "cite_spans": [ |
| { |
| "start": 39, |
| "end": 64, |
| "text": "Pedersen and Bruce (1997)", |
| "ref_id": "BIBREF99" |
| }, |
| { |
| "start": 69, |
| "end": 83, |
| "text": "Sch\u00fctze (1998)", |
| "ref_id": "BIBREF116" |
| }, |
| { |
| "start": 261, |
| "end": 275, |
| "text": "Sch\u00fctze (1998)", |
| "ref_id": "BIBREF116" |
| }, |
| { |
| "start": 445, |
| "end": 466, |
| "text": "Pantel and Lin (2002)", |
| "ref_id": "BIBREF96" |
| }, |
| { |
| "start": 677, |
| "end": 704, |
| "text": "Reisinger and Mooney (2010)", |
| "ref_id": "BIBREF104" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Sense Induction", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Whereas most dense word vector models represent a word with a single vector and thus conflate senses (Mikolov et al. 2013; Pennington, Socher, and Manning 2014) , there are several approaches that produce word sense embeddings. Multi-prototype extensions of the Skip-Gram model (Mikolov et al. 2013 ) that use no predefined sense inventory learn one embedding word vector per one word sense and are commonly fitted with a disambiguation mechanism (Huang et al. 2012; Apidianaki and Sagot 2014; Neelakantan et al. 2014; Tian et al. 2014; Li and Jurafsky 2015; Bartunov et al. 2016; Cocos and Callison-Burch 2016; Pelevina et al. 2016; Thomason and Mooney 2017) . Huang et al. (2012) introduced multiple word prototypes for dense vector representations (embeddings). Their approach is based on a neural network architecture; during training, all contexts of the word are clustered. Apidianaki and Sagot (2014) use an aligned parallel corpus and WordNet for English to perform cross-lingual word sense disambiguation to produce French synsets. However, Cocos and Callison-Burch (2016) showed that it is possible to successfully perform a monolingual word sense induction using only such a paraphrase corpus as Paraphrase Database (Pavlick et al. 2015) . Tian et al. (2014) introduced a probabilistic extension of the Skip-Gram model (Mikolov et al. 2013 ) that learns multiple sense-aware prototypes weighted by their prior probability. These models use parametric clustering algorithms that produce a fixed number of senses per word.", |
| "cite_spans": [ |
| { |
| "start": 101, |
| "end": 122, |
| "text": "(Mikolov et al. 2013;", |
| "ref_id": "BIBREF82" |
| }, |
| { |
| "start": 123, |
| "end": 160, |
| "text": "Pennington, Socher, and Manning 2014)", |
| "ref_id": "BIBREF101" |
| }, |
| { |
| "start": 278, |
| "end": 298, |
| "text": "(Mikolov et al. 2013", |
| "ref_id": "BIBREF82" |
| }, |
| { |
| "start": 447, |
| "end": 466, |
| "text": "(Huang et al. 2012;", |
| "ref_id": "BIBREF56" |
| }, |
| { |
| "start": 467, |
| "end": 493, |
| "text": "Apidianaki and Sagot 2014;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 494, |
| "end": 518, |
| "text": "Neelakantan et al. 2014;", |
| "ref_id": "BIBREF86" |
| }, |
| { |
| "start": 519, |
| "end": 536, |
| "text": "Tian et al. 2014;", |
| "ref_id": "BIBREF125" |
| }, |
| { |
| "start": 537, |
| "end": 558, |
| "text": "Li and Jurafsky 2015;", |
| "ref_id": "BIBREF72" |
| }, |
| { |
| "start": 559, |
| "end": 580, |
| "text": "Bartunov et al. 2016;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 581, |
| "end": 611, |
| "text": "Cocos and Callison-Burch 2016;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 612, |
| "end": 633, |
| "text": "Pelevina et al. 2016;", |
| "ref_id": "BIBREF100" |
| }, |
| { |
| "start": 634, |
| "end": 659, |
| "text": "Thomason and Mooney 2017)", |
| "ref_id": "BIBREF124" |
| }, |
| { |
| "start": 662, |
| "end": 681, |
| "text": "Huang et al. (2012)", |
| "ref_id": "BIBREF56" |
| }, |
| { |
| "start": 880, |
| "end": 907, |
| "text": "Apidianaki and Sagot (2014)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 1050, |
| "end": 1081, |
| "text": "Cocos and Callison-Burch (2016)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 1227, |
| "end": 1248, |
| "text": "(Pavlick et al. 2015)", |
| "ref_id": "BIBREF98" |
| }, |
| { |
| "start": 1251, |
| "end": 1269, |
| "text": "Tian et al. (2014)", |
| "ref_id": "BIBREF125" |
| }, |
| { |
| "start": 1330, |
| "end": 1350, |
| "text": "(Mikolov et al. 2013", |
| "ref_id": "BIBREF82" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Sense Induction", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Neelakantan et al. 2014proposed a multi-sense extension of the Skip-Gram model, which was the first one to learn the number of senses by itself. During training, a new sense vector is allocated if the current context's similarity to existing senses is below some threshold. All previously mentioned sense embeddings were evaluated on the contextual word similarity task, each one improving upon previous models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Sense Induction", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Nieto Pi\u00f1a and Johansson (2015) presented another multi-prototype modification of the Skip-Gram model. Their approach outperforms that of Neelakantan et al. 2014, but requires the number of senses for each word to be set manually. Bartunov et al. (2016) introduced AdaGram, a non-parametric method for learning sense embeddings based on a Bayesian extension of the Skip-Gram model. The granularity of learned sense embeddings is controlled by the \u03b1 parameter. Li and Jurafsky (2015) proposed an approach for learning sense embeddings based on the Chinese Restaurant Process. A new sense is allocated if a new word context is significantly different from existing senses. The approach was tested on multiple NLP tasks, showing that sense embeddings can significantly improve the performance of part-of-speech tagging, semantic relationship identification, and semantic relatedness tasks, but yield no improvement for named entity recognition and sentiment analysis.", |
| "cite_spans": [ |
| { |
| "start": 6, |
| "end": 31, |
| "text": "Pi\u00f1a and Johansson (2015)", |
| "ref_id": "BIBREF89" |
| }, |
| { |
| "start": 231, |
| "end": 253, |
| "text": "Bartunov et al. (2016)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 460, |
| "end": 482, |
| "text": "Li and Jurafsky (2015)", |
| "ref_id": "BIBREF72" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Sense Induction", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Thomason and Mooney (2017) performed multi-modal word sense induction by combining both language and vision signals. In this approach, word embeddings are learned from the ImageNet corpus (Deng et al. 2009) and visual features are obtained from a deep neural network. Running a k-means algorithm on the joint feature set produces WordNet-like synsets.", |
| "cite_spans": [ |
| { |
| "start": 188, |
| "end": 206, |
| "text": "(Deng et al. 2009)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Sense Induction", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Word ego network clustering methods cluster graphs of words semantically related to the ambiguous word (Lin 1998; Pantel and Lin 2002; Widdows and Dorow 2002; Biemann 2006; Hope and Keller 2013a) . An ego network consists of a single node (ego), together with the nodes they are connected to (alters), and all the edges among those alters (Everett and Borgatti 2005) . In our case, such a network is a local neighborhood of one word. Nodes of the ego network can be (1) words semantically similar to the target word, as in our approach, or (2) context words relevant to the target, as in the UoS system (Hope and Keller 2013b) . Graph edges represent semantic relationships between words derived using corpus-based methods (e.g., distributional semantics) or gathered from dictionaries. The sense induction process using word graphs is explored by Widdows and Dorow (2002) , Biemann (2006) , and Hope and Keller (2013a) . Disambiguation of instances is performed by assigning the sense with the highest overlap between the instance's context words and the words of the sense cluster. V\u00e9ronis (2004) compiles a corpus with contexts of polysemous nouns using a search engine.", |
| "cite_spans": [ |
| { |
| "start": 103, |
| "end": 113, |
| "text": "(Lin 1998;", |
| "ref_id": "BIBREF73" |
| }, |
| { |
| "start": 114, |
| "end": 134, |
| "text": "Pantel and Lin 2002;", |
| "ref_id": "BIBREF96" |
| }, |
| { |
| "start": 135, |
| "end": 158, |
| "text": "Widdows and Dorow 2002;", |
| "ref_id": "BIBREF137" |
| }, |
| { |
| "start": 159, |
| "end": 172, |
| "text": "Biemann 2006;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 173, |
| "end": 195, |
| "text": "Hope and Keller 2013a)", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 339, |
| "end": 366, |
| "text": "(Everett and Borgatti 2005)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 603, |
| "end": 626, |
| "text": "(Hope and Keller 2013b)", |
| "ref_id": "BIBREF52" |
| }, |
| { |
| "start": 848, |
| "end": 872, |
| "text": "Widdows and Dorow (2002)", |
| "ref_id": "BIBREF137" |
| }, |
| { |
| "start": 875, |
| "end": 889, |
| "text": "Biemann (2006)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 896, |
| "end": 919, |
| "text": "Hope and Keller (2013a)", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 1084, |
| "end": 1098, |
| "text": "V\u00e9ronis (2004)", |
| "ref_id": "BIBREF134" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Sense Induction", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "A word graph is built by drawing edges between co-occurring words in the gathered corpus, where edges below a certain similarity threshold were discarded. His HyperLex algorithm detects hubs of this graph, which are interpreted as word senses. Disambiguation in this experiment is performed by computing the distance between context words and hubs in this graph.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Sense Induction", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Di Marco and Navigli (2013) present a comprehensive study of several graph-based WSI methods, including CW, HyperLex, and curvature clustering (Dorow et al. 2005) . Additionally, the authors propose two novel algorithms: Balanced Maximum Spanning Tree Clustering and Squares (B-MST), and Triangles and Diamonds (SquaT++). To construct graphs, authors use first-order and second-order relationships extracted from a background corpus as well as keywords from snippets. This research goes beyond intrinsic evaluations of induced senses and measures the impact of the WSI in the context of an information retrieval via clustering and diversifying Web search results. Depending on the data set, HyperLex, B-MST, or CW provided the best results. For a comparative study of graph clustering algorithms for word sense induction in a pseudoword evaluation confirming the effectiveness of CW, see Cecchini et al. (2018) .", |
| "cite_spans": [ |
| { |
| "start": 143, |
| "end": 162, |
| "text": "(Dorow et al. 2005)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 888, |
| "end": 910, |
| "text": "Cecchini et al. (2018)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Sense Induction", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Methods based on clustering of synonyms, such as our approach and Max-Max (Hope and Keller 2013a), induce the resource from an ambiguous graph of synonyms where edges are extracted from manually created resources. To the best of our knowledge, most experiments either used graph-based word sense induction applied to text-derived graphs or relied on a linking-based method that already assumes the availability of a WordNet-like resource. A notable exception is the ECO (Extraction, Clustering, Ontologization) approach by Gon\u00e7alo Oliveira and Gomes (2014) , which was applied to induce a WordNet of the Portuguese language called Onto.PT. 2 ECO is a fuzzy clustering algorithm that was used to induce synsets for a Portuguese WordNet from several available synonymy dictionaries. The algorithm starts by adding random noise to edge weights. Then, the approach applies Markov Clustering (Section 2.1) of this graph several times to estimate the probability of each word pair being in the same synset. Finally, candidate pairs over a certain threshold are added to output synsets. We compare this approach to five other state-of-the-art graph clustering algorithms described in Section 2.1 as the baselines.", |
| "cite_spans": [ |
| { |
| "start": 531, |
| "end": 556, |
| "text": "Oliveira and Gomes (2014)", |
| "ref_id": "BIBREF44" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Sense Induction", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Frame Semantics was originally introduced by Fillmore (1982) and further developed in the FrameNet project (Baker, Fillmore, and Lowe 1998) . FrameNet is a lexical resource composed of a collection of semantic frames, relationships between them, and a corpus of frame occurrences in text. This annotated corpus gave rise to the development of frame parsers using supervised learning (Gildea and Jurafsky 2002; Erk and Pad\u00f3 2006; Das et al. 2014, inter alia) , as well as its application to a wide range of tasks, ranging from answer extraction in Question Answering Lapata 2007) and Textual Entailment (Burchardt et al. 2009; Ben Aharon, Szpektor, and Dagan 2010) .", |
| "cite_spans": [ |
| { |
| "start": 45, |
| "end": 60, |
| "text": "Fillmore (1982)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 107, |
| "end": 139, |
| "text": "(Baker, Fillmore, and Lowe 1998)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 383, |
| "end": 409, |
| "text": "(Gildea and Jurafsky 2002;", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 410, |
| "end": 428, |
| "text": "Erk and Pad\u00f3 2006;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 429, |
| "end": 457, |
| "text": "Das et al. 2014, inter alia)", |
| "ref_id": null |
| }, |
| { |
| "start": 566, |
| "end": 582, |
| "text": "Lapata 2007) and", |
| "ref_id": "BIBREF118" |
| }, |
| { |
| "start": 583, |
| "end": 625, |
| "text": "Textual Entailment (Burchardt et al. 2009;", |
| "ref_id": null |
| }, |
| { |
| "start": 626, |
| "end": 663, |
| "text": "Ben Aharon, Szpektor, and Dagan 2010)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Frame Induction", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "However, frame-semantic resources are arguably expensive and time-consuming to build because of difficulties in defining the frames, their granularity and domain, as well as the complexity of the construction and annotation tasks. Consequently, such resources exist only for a few languages (Boas 2009) and even English is lacking domainspecific frame-based resources. Possible inroads are cross-lingual semantic annotation transfer (Pad\u00f3 and Lapata 2009; Hartmann, Eckle-Kohler, and Gurevych 2016) or linking FrameNet to other lexical-semantic or ontological resources (Narayanan et al. 2003; Tonelli and Pighin 2009; Laparra and Rigau 2010; Gurevych et al. 2012, inter alia) . One inroad for overcoming these issues is automatizing the process of FrameNet construction through unsupervised frame induction techniques, as investigated by the systems described next.", |
| "cite_spans": [ |
| { |
| "start": 291, |
| "end": 302, |
| "text": "(Boas 2009)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 433, |
| "end": 455, |
| "text": "(Pad\u00f3 and Lapata 2009;", |
| "ref_id": "BIBREF92" |
| }, |
| { |
| "start": 456, |
| "end": 498, |
| "text": "Hartmann, Eckle-Kohler, and Gurevych 2016)", |
| "ref_id": "BIBREF49" |
| }, |
| { |
| "start": 570, |
| "end": 593, |
| "text": "(Narayanan et al. 2003;", |
| "ref_id": "BIBREF85" |
| }, |
| { |
| "start": 594, |
| "end": 618, |
| "text": "Tonelli and Pighin 2009;", |
| "ref_id": "BIBREF128" |
| }, |
| { |
| "start": 619, |
| "end": 642, |
| "text": "Laparra and Rigau 2010;", |
| "ref_id": "BIBREF69" |
| }, |
| { |
| "start": 643, |
| "end": 676, |
| "text": "Gurevych et al. 2012, inter alia)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Frame Induction", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "LDA-Frames (Materna 2012 (Materna , 2013 is an approach to inducing semantic frames using a latent Dirichlet allocation (LDA) by Blei, Ng, and Jordan (2003) for generating semantic frames and their respective frame-specific semantic roles at the same time. The authors evaluated their approach against the CPA corpus (Hanks and Pustejovsky 2005) . Although Ritter, Mausam, and Etzioni (2010) have applied LDA for inducing structures similar to frames, their study is focused on the extraction of mutually related frame arguments.", |
| "cite_spans": [ |
| { |
| "start": 11, |
| "end": 24, |
| "text": "(Materna 2012", |
| "ref_id": "BIBREF77" |
| }, |
| { |
| "start": 25, |
| "end": 40, |
| "text": "(Materna , 2013", |
| "ref_id": "BIBREF78" |
| }, |
| { |
| "start": 129, |
| "end": 156, |
| "text": "Blei, Ng, and Jordan (2003)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 317, |
| "end": 345, |
| "text": "(Hanks and Pustejovsky 2005)", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 357, |
| "end": 391, |
| "text": "Ritter, Mausam, and Etzioni (2010)", |
| "ref_id": "BIBREF109" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Frame Induction", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "ProFinder (Cheung, Poon, and Vanderwende 2013) is another generative approach that also models both frames and roles as latent topics. The evaluation was performed on the in-domain information extraction task MUC-4 (Sundheim 1992 ) and on the text summarization task TAC-2010. 3 Modi, Titov, and Klementiev (2012) build on top of an unsupervised semantic role labeling model (Titov and Klementiev 2012) . The raw text of sentences from the FrameNet data is used for training. The FrameNet gold annotations are then used to evaluate the labeling of the obtained frames and roles, effectively clustering instances known during induction. Kawahara, Peterson, and Palmer (2014) harvest a huge collection of verbal predicates along with their argument instances and then apply the Chinese Restaurant Process clustering algorithm to group predicates with similar arguments. The approach was evaluated on the verb cluster data set of Korhonen, Krymolowski, and Marx (2003) .", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 46, |
| "text": "(Cheung, Poon, and Vanderwende 2013)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 215, |
| "end": 229, |
| "text": "(Sundheim 1992", |
| "ref_id": "BIBREF122" |
| }, |
| { |
| "start": 277, |
| "end": 278, |
| "text": "3", |
| "ref_id": null |
| }, |
| { |
| "start": 375, |
| "end": 402, |
| "text": "(Titov and Klementiev 2012)", |
| "ref_id": "BIBREF127" |
| }, |
| { |
| "start": 636, |
| "end": 673, |
| "text": "Kawahara, Peterson, and Palmer (2014)", |
| "ref_id": "BIBREF61" |
| }, |
| { |
| "start": 927, |
| "end": 965, |
| "text": "Korhonen, Krymolowski, and Marx (2003)", |
| "ref_id": "BIBREF64" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Frame Induction", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "These and some other related approaches (e.g., the one by O'Connor 2013), were all evaluated in completely different incomparable settings, and used different input corpora, making it difficult to judge their relative performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Frame Induction", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The problem of inducing semantic classes from text, also known as semantic lexicon induction, has also been extensively explored in previous works. This is because inducing semantic classes directly from text has the potential to avoid the limited coverage problems of knowledge bases like Freebase, DBpedia (Bizer et al. 2009) , or BabelNet (Navigli and Ponzetto 2012) , which rely on Wikipedia (Hovy, Navigli, and Ponzetto 2013) , as well as to allow for resource induction across domains (Hovy et al. 2011) . Information about semantic classes, in turn, has been shown to benefit such high-level NLP tasks as coreference (Ng 2007) .", |
| "cite_spans": [ |
| { |
| "start": 308, |
| "end": 327, |
| "text": "(Bizer et al. 2009)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 342, |
| "end": 369, |
| "text": "(Navigli and Ponzetto 2012)", |
| "ref_id": "BIBREF86" |
| }, |
| { |
| "start": 396, |
| "end": 430, |
| "text": "(Hovy, Navigli, and Ponzetto 2013)", |
| "ref_id": "BIBREF55" |
| }, |
| { |
| "start": 491, |
| "end": 509, |
| "text": "(Hovy et al. 2011)", |
| "ref_id": "BIBREF54" |
| }, |
| { |
| "start": 624, |
| "end": 633, |
| "text": "(Ng 2007)", |
| "ref_id": "BIBREF88" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Class Induction", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "Induction of semantic classes as a research direction in the field of NLP starts, to the best of our knowledge, with Lin and Pantel (2001) , where sets of similar words are clustered into concepts. This approach performs a hard clustering and does not label clusters, but these drawbacks are addressed by Pantel and Lin (2002) , where words can belong to several clusters, thus representing senses. Pantel and Ravichandran (2004) aggregate hypernyms per cluster, which come from Hearst (1992) patterns. Pattern-based approaches were further developed using graph-based methods using a PageRank-based weighting (Kozareva, Riloff, and Hovy 2008) , random walks (Talukdar et al. 2008) , or heuristic scoring (Qadir et al. 2015) . Other approaches use probabilistic graphical models, such as the ones proposed by Ritter, Mausam, and Etzioni (2010) and Hovy et al. (2011) . To ensure the overall quality of extraction pattern with minimal supervision, Thelen and Riloff (2002) explored a bootstrapping approach, later extended by McIntosh and Curran (2009) with bagging and distributional similarity to minimize the semantic drift problem of iterative bootstrapping algorithms.", |
| "cite_spans": [ |
| { |
| "start": 117, |
| "end": 138, |
| "text": "Lin and Pantel (2001)", |
| "ref_id": "BIBREF74" |
| }, |
| { |
| "start": 305, |
| "end": 326, |
| "text": "Pantel and Lin (2002)", |
| "ref_id": "BIBREF96" |
| }, |
| { |
| "start": 399, |
| "end": 429, |
| "text": "Pantel and Ravichandran (2004)", |
| "ref_id": "BIBREF97" |
| }, |
| { |
| "start": 479, |
| "end": 492, |
| "text": "Hearst (1992)", |
| "ref_id": "BIBREF50" |
| }, |
| { |
| "start": 610, |
| "end": 643, |
| "text": "(Kozareva, Riloff, and Hovy 2008)", |
| "ref_id": "BIBREF65" |
| }, |
| { |
| "start": 659, |
| "end": 681, |
| "text": "(Talukdar et al. 2008)", |
| "ref_id": "BIBREF123" |
| }, |
| { |
| "start": 705, |
| "end": 724, |
| "text": "(Qadir et al. 2015)", |
| "ref_id": "BIBREF103" |
| }, |
| { |
| "start": 809, |
| "end": 843, |
| "text": "Ritter, Mausam, and Etzioni (2010)", |
| "ref_id": "BIBREF109" |
| }, |
| { |
| "start": 848, |
| "end": 866, |
| "text": "Hovy et al. (2011)", |
| "ref_id": "BIBREF54" |
| }, |
| { |
| "start": 1025, |
| "end": 1051, |
| "text": "McIntosh and Curran (2009)", |
| "ref_id": "BIBREF79" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Class Induction", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "As an alternative to pattern-based methods, Panchenko et al. (2018b) show how to apply semantic classes to improve hypernymy extraction and taxonomy induction. Like in our experiments in Section 6, it uses a distributional thesaurus as input, as well as multiple pre-and post-processing stages to filter the input graph and disambiguate individual nodes. In contrast to Pachenko et al., here we directly apply the WATSET algorithm to obtain the resulting distributional semantic classes instead of using a sophisticated parametric pipeline that performs a sequence of clustering and pruning steps.", |
| "cite_spans": [ |
| { |
| "start": 44, |
| "end": 68, |
| "text": "Panchenko et al. (2018b)", |
| "ref_id": "BIBREF94" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Class Induction", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "Another related strain of research to semantic class induction is dedicated to the automatic set expansion task (Sarmento et al. 2007; Wang and Cohen 2008; Pantel et al. 2009; Rong et al. 2016; Shen et al. 2017) . In this task, a set of input lexical entries, such as words or entities, is provided (e.g., \"apple, mango, pear, banana\"). The system is expected to extend this initial set with relevant entries (such as other fruits in this case, e.g., \"peach\" and \"lemon\"). Besides the academic publications listed above, Google Sets was an industrial system for providing similar functionality. 4", |
| "cite_spans": [ |
| { |
| "start": 112, |
| "end": 134, |
| "text": "(Sarmento et al. 2007;", |
| "ref_id": "BIBREF113" |
| }, |
| { |
| "start": 135, |
| "end": 155, |
| "text": "Wang and Cohen 2008;", |
| "ref_id": "BIBREF135" |
| }, |
| { |
| "start": 156, |
| "end": 175, |
| "text": "Pantel et al. 2009;", |
| "ref_id": null |
| }, |
| { |
| "start": 176, |
| "end": 193, |
| "text": "Rong et al. 2016;", |
| "ref_id": "BIBREF110" |
| }, |
| { |
| "start": 194, |
| "end": 211, |
| "text": "Shen et al. 2017)", |
| "ref_id": "BIBREF119" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Class Induction", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "In this section, we present WATSET, a meta-algorithm for fuzzy graph clustering. Given a graph connecting potentially ambiguous objects (e.g., words), WATSET induces a set of unambiguous overlapping clusters (communities) by disambiguating and grouping the ambiguous objects. WATSET is a meta-algorithm that uses existing hard clustering algorithms for graphs to obtain a fuzzy clustering (e.g., soft clustering).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "WATSET, an Algorithm for Fuzzy Graph Clustering", |
| "sec_num": "3." |
| }, |
| { |
| "text": "In computational linguistics, graph clustering is used for addressing problems such as word sense induction (Biemann 2006) , lexical chain computing (Medelyan 2007) , Web search results diversification (Di Marco and Navigli 2013), sentiment analysis (Pang and Lee 2004) , and cross-lingual semantic relationship induction (Lewis and Steedman 2013b); more applications can be found in the book by Mihalcea and Radev (2011).", |
| "cite_spans": [ |
| { |
| "start": 108, |
| "end": 122, |
| "text": "(Biemann 2006)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 149, |
| "end": 164, |
| "text": "(Medelyan 2007)", |
| "ref_id": "BIBREF80" |
| }, |
| { |
| "start": 250, |
| "end": 269, |
| "text": "(Pang and Lee 2004)", |
| "ref_id": "BIBREF95" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "WATSET, an Algorithm for Fuzzy Graph Clustering", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Definitions. Let G = (V, E) be an undirected simple graph, 5 where V is a set of nodes and E \u2286 V 2 is a set of undirected edges. We denote a subset of nodes C i \u2286 V as a cluster. A graph clustering algorithm then is a function CLUSTER :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "WATSET, an Algorithm for Fuzzy Graph Clustering", |
| "sec_num": "3." |
| }, |
| { |
| "text": "(V, E) \u2192 C such that V = C i \u2208C C i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "WATSET, an Algorithm for Fuzzy Graph Clustering", |
| "sec_num": "3." |
| }, |
| { |
| "text": "We distinguish two classes of graph clustering algorithms: hard clustering algorithms (partitionings) produce non-overlapping clusters, that is, C i \u2229 C j = \u2205 \u21d0\u21d2 i = j, \u2200C i , C j \u2208 C, whereas fuzzy clustering algorithms permit cluster overlapping, that is, a node can be a member of several clusters in C.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "WATSET, an Algorithm for Fuzzy Graph Clustering", |
| "sec_num": "3." |
| }, |
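To make the distinction concrete, here is a minimal Python sketch (ours, not part of the article; the function name and toy data are hypothetical) that checks whether a given clustering of a node set is hard (a partition) or fuzzy (overlapping):

```python
def is_hard_clustering(nodes, clusters):
    """True iff the clusters are pairwise disjoint and cover all nodes."""
    seen = set()
    for c in clusters:
        if seen & c:          # a node appears in two clusters -> overlap
            return False
        seen |= c
    return seen == set(nodes)

nodes = {"bank", "riverbank", "building"}
hard = [{"bank", "riverbank"}, {"building"}]
fuzzy = [{"bank", "riverbank"}, {"bank", "building"}]

print(is_hard_clustering(nodes, hard))   # True
print(is_hard_clustering(nodes, fuzzy))  # False: "bank" is in two clusters
```

A fuzzy clustering is exactly one that fails this disjointness test while still covering the node set.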
| { |
| "text": "The outline of the WATSET algorithm showing the local step of word sense induction and context disambiguation, and the global step of sense graph constructing and clustering.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 1", |
| "sec_num": null |
| }, |
| { |
| "text": "WATSET constructs an intermediate representation of the input graph called a sense graph, which has been sketched as a \"disambiguated word graph\" in Biemann (2012) . This is achieved by node sense induction based on hard clustering of the input graph node neighborhoods. The sense graph has the edges established between the different senses of the input graph nodes. The global clusters of the input graph are obtained by applying a hard clustering algorithm to the sense graph; removal of the sense labels yields overlapping clusters.", |
| "cite_spans": [ |
| { |
| "start": 149, |
| "end": 163, |
| "text": "Biemann (2012)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Outline of WATSET, a Fuzzy Method for Local-Global Graph Clustering", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "An outline of our algorithm is depicted in Figure 1 . WATSET takes an undirected graph G = (V, E) as the input and outputs a set of clusters C. The algorithm has two steps: local and global. The local step, as described in Section 3.2, disambiguates the potentially ambiguous nodes in G. The global step, as described in Section 3.3, uses these disambiguated nodes to construct an intermediate sense graph G = (V, E ) and produce the overlapping clustering C. WATSET is parameterized by two graph partitioning algorithms Cluster Local and Cluster Global , and a context similarity measure sim. The complete pseudocode of WATSET is presented in Algorithm 1. For the sake of illustration, while describing the approach, we will provide examples with words and their synonyms. However, WATSET is not bound only to the lexical units and relationships, so our examples are given without loss of generality. Note also that WATSET can be applied for both unweighted and weighted graphs as soon as the underlying hard clustering algorithms Cluster Local and Cluster Global take edge weights into account.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 43, |
| "end": 51, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Outline of WATSET, a Fuzzy Method for Local-Global Graph Clustering", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The local step of WATSET discovers the node senses in the input graph and uses this information to discover which particular senses of the nodes were connected via the edges of the input graph G.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Local Step: Node Sense Induction and Disambiguation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "3.2.1 Node Sense Induction. We induce node senses using the word neighborhood clustering approach by Dorow and Widdows (2003) . In particular, we assume that the removal of the nodes participating in many triangles separates a graph into several Algorithm 1 WATSET, a Local-Global Meta-Algorithm for Fuzzy Graph Clustering. Input:", |
| "cite_spans": [ |
| { |
| "start": 101, |
| "end": 125, |
| "text": "Dorow and Widdows (2003)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Local Step: Node Sense Induction and Disambiguation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "graph G = (V, E),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Local Step: Node Sense Induction and Disambiguation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "hard clustering algorithms Cluster Local and Cluster Global , context similarity measure sim :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Local Step: Node Sense Induction and Disambiguation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "(ctx(a), ctx(b)) \u2192 R, \u2200 ctx(a), ctx(b) \u2286 V. Output: clusters C. 1: for all u \u2208 V do Local Step: Sense Induction 2: senses(u) \u2190 \u2205 3: V u \u2190 {v \u2208 V : {u, v} \u2208 E} Note that u / \u2208 V u 4: E u \u2190 {{v, w} \u2208 E : v, w \u2208 V u } 5: G u \u2190 (V u , E u ) 6: C u \u2190 Cluster Local (G u )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Local Step: Node Sense Induction and Disambiguation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Cluster the open neighborhood of u 7: ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Local Step: Node Sense Induction and Disambiguation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "for all C i u \u2208 C u do 8: ctx(u i ) \u2190 C i u 9: senses(u) \u2190 senses(u) \u222a {u i } 10: V \u2190 u\u2208V senses(u) Global", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Local Step: Node Sense Induction and Disambiguation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "for all v \u2208 ctx(\u00fb) do 14:v \u2190 arg max v \u2208senses(v) sim(ctx(\u00fb) \u222a {u}, ctx(v )) \u00fb is a sense of u \u2208 V 15: ctx(\u00fb) \u2190 ctx(\u00fb) \u222a {v} 16: E \u2190 {{\u00fb,v} \u2208 V 2 :v \u2208 ctx(\u00fb)} Global Step: Sense Graph Edges 17: G \u2190 (V, E ) Global Step: Sense Graph Construction 18: C \u2190 Cluster Global (G) Global Step: Sense Graph Clustering 19: C \u2190 {{u \u2208 V :\u00fb \u2208 C i } \u2286 V : C i \u2208 C}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Local Step: Node Sense Induction and Disambiguation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Remove the sense labels 20: return C connected components. Each component corresponds to the sense of the target node, so this procedure is executed for every node independently. Figure 2 ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 179, |
| "end": 187, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Local Step: Node Sense Induction and Disambiguation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Clustering the neighborhood of the node \"bank\" of the input graph results in two clusters treated as the non-disambiguated sense contexts: bank 1 = {streambank, riverbank, . . . } and {bank 2 = bank building, building, . . . }.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 2", |
| "sec_num": null |
| }, |
| { |
| "text": "Example of induced senses for the node \"bank\" and the corresponding clusters (contexts).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Table 2", |
| "sec_num": null |
| }, |
| { |
| "text": "bank 1 {streambank, riverbank, . . . } bank 2 {bank building, building, . . . } bank 3 {bank company, . . . } bank 4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense", |
| "sec_num": null |
| }, |
| { |
| "text": "{coin bank, penny bank, . . . } Given a node u \u2208 V, we extract its open neighborhood G u = (V u , E u ) from the input graph G, such that the target node u is not included into V u (lines 3-5):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "V u = {v \u2208 V : {u, v} \u2208 E} (1) E u = {{v, w} \u2208 E : v, w \u2208 V u }", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Sense", |
| "sec_num": null |
| }, |
| { |
| "text": "Then, we run a hard graph clustering algorithm on G u that assigns one node to one and only one cluster, yielding a clustering C u (line 6). We treat each obtained cluster C i u \u2208 C u \u2282 V u as representing a context for a different sense of the node u \u2208 V (lines 7-9). We denote, for example, bank 1 , bank 2 , and other labels as the node senses referred to as senses(bank). In the example in Table 2 , |senses(bank)| = 4. Given a sense u i \u2208 senses(u), we denote ctx(u i ) = C i u as a context of this sense of the node u \u2208 V. Execution of this procedure for all the words in V results in the set of senses for the global step (line 10):", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 394, |
| "end": 401, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Sense", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "V = u\u2208V senses(u)", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Sense", |
| "sec_num": null |
| }, |
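The open-neighborhood extraction of Equations (1)-(2) and the subsequent local clustering can be sketched in Python as follows. This is an illustration under our own assumptions, not the authors' code: connected components stand in for an arbitrary Cluster Local, and the toy edge list is hypothetical.

```python
from collections import defaultdict

def open_neighborhood(edges, u):
    """V_u and E_u from Equations (1)-(2): the subgraph around u, without u itself."""
    v_u = {v for a, b in edges for v in (a, b) if u in (a, b)} - {u}
    e_u = [(a, b) for a, b in edges if a in v_u and b in v_u]
    return v_u, e_u

def connected_components(nodes, edges):
    """A simple stand-in for Cluster_Local: one cluster per connected component."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    remaining, clusters = set(nodes), []
    while remaining:
        stack, comp = [remaining.pop()], set()
        while stack:
            n = stack.pop()
            comp.add(n)
            stack.extend(adj[n] & remaining)
            remaining -= adj[n]
        clusters.append(comp)
    return clusters

edges = [("bank", "riverbank"), ("bank", "streambank"),
         ("bank", "building"), ("bank", "bank building"),
         ("riverbank", "streambank"), ("building", "bank building")]
v_u, e_u = open_neighborhood(edges, "bank")
senses = connected_components(v_u, e_u)
print(sorted(map(sorted, senses)))
```

Removing "bank" disconnects its neighborhood into two components, {riverbank, streambank} and {building, bank building}, which become the contexts of two induced senses of "bank".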
| { |
| "text": "3.2.2 Disambiguation of Neighbors. Although at the previous step we have induced node senses and mapped them to the corresponding contexts (Table 2) , the elements of these contexts do not contain sense information. For example, the context of bank 2 in Figure 3 has two elements {bank building ? , building ? }, the sense labels of which are currently not known. We recover the sense labels of nodes in a context using the sense disambiguated approach proposed by Faralli et al. (2016) as follows. We represent each context as a vector in a vector space model (Salton, Wong, and Yang 1975) constructed for all the contexts. Because the graph G is simple (Section 3) and the context of any sense\u00fb \u2208 V does not include the corresponding node u \u2208 V (Table 2) , we temporarily put it into context during disambiguation. This prevents the situation of non-matching when the context of a candidate sense v \u2208 senses(v) has only one element and that element is u, that is, ctx(v ) = {u}. We intentionally perform this insertion temporarily only during matching to prevent self-referencing. When a context ctx(\u00fb) \u2282 V is transformed into a vector, we assign to each element v \u2208 ctx(\u00fb) of this vector a weight equal to the weight of the edge {u, v} \u2208 E of the input graph G. If G is unweighted, we assign 1 if and only if {u, v} \u2208 E, otherwise 0 is assigned. Table 3 shows an example of the context vectors used for disambiguating the word building in the context of the sense bank 2 in Figure 3 . In this example the vectors essentially represent one-hot encoding as the example input graph is unweighted. ", |
| "cite_spans": [ |
| { |
| "start": 465, |
| "end": 486, |
| "text": "Faralli et al. (2016)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 561, |
| "end": 590, |
| "text": "(Salton, Wong, and Yang 1975)", |
| "ref_id": "BIBREF112" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 139, |
| "end": 148, |
| "text": "(Table 2)", |
| "ref_id": null |
| }, |
| { |
| "start": 254, |
| "end": 262, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 747, |
| "end": 756, |
| "text": "(Table 2)", |
| "ref_id": null |
| }, |
| { |
| "start": 1349, |
| "end": 1356, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 1477, |
| "end": 1485, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Sense", |
| "sec_num": null |
| }, |
| { |
| "text": "Contexts for two different senses of the node \"bank\": only its senses bank 1 and bank 2 are currently known, whereas the other nodes in contexts need to be disambiguated.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 3", |
| "sec_num": null |
| }, |
| { |
| "text": "An example of context vectors for the node senses demonstrated in Figures 3 and 4 . Because the graph is unweighted, one-hot encoding has been used. For matching purposes, the word \"bank\" is temporarily added into ctx(bank 2 ).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 66, |
| "end": 81, |
| "text": "Figures 3 and 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Table 3", |
| "sec_num": null |
| }, |
| { |
| "text": "bank 2 1 1 1 0 0 building 1 1 1 0 1 0 building 2 0 0 0 0 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense bank bank building building construction edifice", |
| "sec_num": null |
| }, |
| { |
| "text": "Then, given a sense\u00fb \u2208 V of a node u \u2208 V and the context of this sense ctx(\u00fb) \u2282 V, we disambiguate each node v \u2208 ctx(\u00fb). For that, we find the sensev \u2208 senses(v) the context ctx(v) \u2282 V, which maximizes the similarity to the target context ctx(\u00fb). We compute the similarity using a context similarity measure sim : (ctx(a), ctx(b)) \u2192 R, \u2200ctx(a), ctx(b) \u2286 V. 6 Typical choices for the similarity measure are dot product, cosine similarity, Jaccard index, etc. Hence, we disambiguate each context element v \u2208 ctx(\u00fb):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense bank bank building building construction edifice", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "v = arg max v \u2208senses(v) sim(ctx(\u00fb) \u222a {u}, ctx(v ))", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Sense bank bank building building construction edifice", |
| "sec_num": null |
| }, |
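Equation (4) can be illustrated with a short Python sketch (ours, not the authors' code; the toy contexts approximate the Table 3 example and are hypothetical). With cosine similarity over one-hot context vectors, the element "building" in the context of bank 2 resolves to building 1 with a score of 2/3:

```python
from math import sqrt

def cos(a, b):
    """Cosine similarity of two sets under one-hot (unweighted) encoding."""
    if not a or not b:
        return 0.0
    return len(a & b) / (sqrt(len(a)) * sqrt(len(b)))

def disambiguate(target_ctx, u, senses_of_v):
    """Equation (4): pick the sense of v whose context best matches ctx(u-hat)."""
    query = target_ctx | {u}  # the target node u is added temporarily, for matching only
    return max(senses_of_v, key=lambda s: cos(query, senses_of_v[s]))

ctx_bank2 = {"building", "bank building"}
senses_building = {
    "building_1": {"bank", "bank building", "construction"},
    "building_2": {"edifice"},
}
best = disambiguate(ctx_bank2, "bank", senses_building)
print(best)  # building_1 (cosine 2/3 vs. 0 for building_2)
```

Temporarily adding "bank" to the query set mirrors the paper's insertion trick, which prevents a candidate context such as {bank} from failing to match.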
| { |
| "text": "An example in Figure 4 illustrates the node sense disambiguation process. The context of the sense bank 2 is ctx(bank 2 ) = {building, bank building} and the disambiguation target is building. Having chosen cosine similarity as the context similarity measure, we compute the similarity between ctx(bank 2 \u222a {bank}) and the context of every sense of building in Table 3 : cos(ctx(bank 2 ) \u222a {bank}, ctx(building 1 )) = 2 3 and cos(ctx(bank 2 ) \u222a {bank}, ctx(building 2 )) = 0. Therefore, for the word building in the", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 14, |
| "end": 22, |
| "text": "Figure 4", |
| "ref_id": null |
| }, |
| { |
| "start": 361, |
| "end": 368, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Sense bank bank building building construction edifice", |
| "sec_num": null |
| }, |
| { |
| "text": "Matching the meaning of the ambiguous node \"building\" in the context of the sense bank 2 . For matching purposes, the word \"bank\" is temporarily added into ctx(bank 2 ). context of bank 2 , its first sense, building 1 , should be used because its similarity value is higher.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 4", |
| "sec_num": null |
| }, |
| { |
| "text": "Finally, we construct a disambiguated context ctx(\u00fb) \u2282 V that is a sense-aware representation of ctx(\u00fb). This disambiguated context indicates which node senses were connected to\u00fb \u2208 V in the input graph G. For that, in lines 13-15, we apply the disambiguation procedure defined in Equation 4for every node v \u2208 ctx(\u00fb):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 4", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "ctx(\u00fb) = {v \u2208 V : v \u2208 ctx(\u00fb)}", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Figure 4", |
| "sec_num": null |
| }, |
| { |
| "text": "As the result of the local step, for each node u \u2208 V in the input graph, we induce the senses(u) \u2282 V of nodes and provide each sense\u00fb \u2208 V with a disambiguated context ctx(\u00fb) \u2286 V.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 4", |
| "sec_num": null |
| }, |
| { |
| "text": "The global step of WATSET constructs an intermediate sense graph expressing the connections between the node senses discovered at the local step. We assume that the nodes V of the sense graph are non-ambiguous, so running a hard clustering algorithm on this graph outputs clusters C covering the set of nodes V of the input graph G. 3, we construct the sense graph G = (V, E ) by establishing undirected edges between the senses connected through the disambiguated contexts (lines 16-17):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Global Step: Sense Graph Construction and Clustering", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "E = {{\u00fb,v} \u2208 V 2 :v \u2208 ctx(\u00fb)}", |
| "eq_num": "(6)" |
| } |
| ], |
| "section": "Sense Graph Construction. Using the set of node senses defined in Equation", |
| "sec_num": "3.3.1" |
| }, |
| { |
| "text": "Note that this edge construction approach disambiguates the edges E such that if a pair of nodes was connected in the input graph G, then the corresponding sense nodes will be connected in the sense graph G. As a result, the constructed sense graph G is a sense-aware representation of the input graph G. In the event G is weighted, we assign each edge {\u00fb,v} \u2208 E the same weight as the edge {u, v} \u2208 E has in the input graph. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense Graph Construction. Using the set of node senses defined in Equation", |
| "sec_num": "3.3.1" |
| }, |
| { |
| "text": "Clustering of the sense graph G yields two clusters, {bank 1 , streambank 3 , riverbank 2 , . . . } and {bank 2 , bankbuilding 1 , building 2 , . . . }; if one removes the sense labels, the clusters will overlap, resulting in a soft clustering of the input graph G.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "Clustering. Running a hard clustering algorithm on G produces the set of sense-aware clusters C; each sense-aware cluster C i \u2208 C is a subset of V (line 18). In order to obtain the set of clusters C that covers the set of nodes V of the input graph G, we simply remove the sense labels from the elements of clusters C (line 19):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense Graph", |
| "sec_num": "3.3.2" |
| }, |
| { |
| "text": "C = {{u \u2208 V :\u00fb \u2208 C i } \u2286 V : C i \u2208 C} (7)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense Graph", |
| "sec_num": "3.3.2" |
| }, |
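The label-removal step of Equation (7) is simple to sketch in Python. In this illustration (ours, with illustrative sense identifiers), sense nodes are modeled as (word, sense id) pairs; dropping the identifiers turns two disjoint sense clusters into overlapping word clusters:

```python
def remove_sense_labels(sense_clusters):
    """Equation (7): project clusters of sense nodes (word, sense_id) back to plain words."""
    return [{word for word, _ in cluster} for cluster in sense_clusters]

# Two disjoint clusters of the sense graph, as in the Figure 5 example
sense_clusters = [
    {("bank", 1), ("streambank", 3), ("riverbank", 2)},
    {("bank", 2), ("bank building", 1), ("building", 2)},
]
clusters = remove_sense_labels(sense_clusters)
print(clusters)  # "bank" now appears in both clusters: a soft clustering
```

This is the sense in which WATSET obtains a fuzzy clustering from hard clustering algorithms only: the overlap is introduced solely by projecting sense nodes back to their underlying words.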
| { |
| "text": "Figure 5 illustrates the sense graph and its clustering in the example of the node \"bank.\" The construction of a sense graph requires disambiguation of the input graph nodes. Note that traditional approaches to graph-based sense induction, such as the ones proposed by V\u00e9ronis (2004) , Biemann (2006) , and Hope and Keller (2013a), do not perform this step, but perform only local clustering of the graph because they do not aim at a global representation of clusters.", |
| "cite_spans": [ |
| { |
| "start": 269, |
| "end": 283, |
| "text": "V\u00e9ronis (2004)", |
| "ref_id": "BIBREF134" |
| }, |
| { |
| "start": 286, |
| "end": 300, |
| "text": "Biemann (2006)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense Graph", |
| "sec_num": "3.3.2" |
| }, |
| { |
| "text": "As the result of the global step, a set of clusters C of the input graph G is obtained, using an intermediate sense-aware graph G. The presented local-global graph clustering approach, WATSET, makes it possible to naturally achieve a soft clustering of a graph using hard clustering algorithms only.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense Graph", |
| "sec_num": "3.3.2" |
| }, |
| { |
| "text": "The original WATSET algorithm, as previously published and described in Section 3.1, has context construction and disambiguation steps. These steps involve computation of a context similarity measure, which needs to be chosen as a hyper-parameter of the algorithm (Section 3.2.2). In this section, we propose a simplified version of WATSET (Algorithm 2) that requires no context similarity measure, which leads to faster computation in practice with less hyperparameter tuning. As our experiments throughout this article show, this simplified version demonstrates similar performance to the original WATSET algorithm.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Simplified WATSET", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "In the input graph G a pair of nodes {u, v} \u2208 V 2 can be incident to one and only one edge. Otherwise, these nodes are not connected. Because of the use of a hard clustering algorithm for node sense induction (Section 2.2), in any pair of nodes {u, v} \u2208 E, the node v can appear in the context of only one sense of u and vice versa. Therefore, we can omit the context disambiguation step (Section 3.2.2) by tracking the node sense identifiers produced during sense induction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Simplified WATSET", |
| "sec_num": "3.4" |
| }, |
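The bookkeeping of this simplified variant (line 11 of Algorithm 2) can be sketched as follows. This is our own illustration with hypothetical sense identifiers: it shows how sense-aware edges are produced directly from a senses[u][v] map, with no similarity computation at all.

```python
def sense_edges(edges, senses):
    """Line 11 of Algorithm 2: lift each input edge {u, v} to the sense nodes
    (u, senses[u][v]) and (v, senses[v][u]) without any context matching."""
    return {((u, senses[u][v]), (v, senses[v][u])) for u, v in edges}

# senses[u][v] = i means v lies in the context of the i-th sense of u
senses = {
    "bank": {"riverbank": 1, "streambank": 1, "building": 2},
    "riverbank": {"bank": 1, "streambank": 1},
    "streambank": {"bank": 1, "riverbank": 1},
    "building": {"bank": 1},
}
edges = [("bank", "riverbank"), ("bank", "streambank"),
         ("riverbank", "streambank"), ("bank", "building")]
print(sorted(sense_edges(edges, senses)))
```

Because each neighbor belongs to exactly one sense context under hard local clustering, the lookup is unambiguous, which is precisely why the simplified version can skip the disambiguation step.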
| { |
| "text": "Algorithm 2 Simplified WATSET. Input: graph G = (V, E), hard clustering algorithms Cluster Local and Cluster Global . Output: clusters C.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Simplified WATSET", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "1: V \u2190 \u2205 2: for all u \u2208 V do Local Step: Sense Induction 3: V u \u2190 {v \u2208 V : {u, v} \u2208 E} Note that u / \u2208 V u 4: E u \u2190 {{v, w} \u2208 E : v, w \u2208 V u } 5: G u \u2190 (V u , E u ) 6: C u \u2190 Cluster Local (G u )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Simplified WATSET", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Cluster the open neighborhood of u 7:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Simplified WATSET", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "for all C i u \u2208 C u do 8: for all v \u2208 C i u do 9: senses[u][v] \u2190 i Node v is connected to the i-th sense of u 10: V \u2190 V \u222a {u i } 11: E \u2190 {{u senses[u][v] , v senses[v][u] } \u2208 V 2 : {u, v} \u2208 E} Global Step: Sense Graph Edges 12: G \u2190 (V, E )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Simplified WATSET", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Global", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Simplified WATSET", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Step: Sense Graph Construction 13: C \u2190 Cluster Global (G)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Simplified WATSET", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "\u25b7 Global Step: Sense Graph Clustering\n14:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Simplified WATSET", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "C \u2190 {{u \u2208 V : \u00fb \u2208 C^i} \u2286 V : C^i \u2208 C}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Simplified WATSET", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "\u25b7 Remove the sense labels\n15: return C\n\nGiven a pair {u, v} \u2208 E, we reuse the sense information from Table 2 to determine which sense context of \u00fb \u2208 V contains v. We denote this as senses", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 98, |
| "end": 105, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Simplified WATSET", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "[u][v] \u2208 N, which indicates that v \u2208 ctx(u^senses[u][v])", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Simplified WATSET", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": ", that is, the fact that node v is connected to the node u in the specified sense u^senses[u][v]. Following the example in Figure 2, if the context of bank^1 contains the word streambank, then the context of one of the senses of streambank must contain the word bank (e.g., streambank^3). This information allows us to create Table 4, which allows us to produce the set of sense-aware edges by simultaneously retrieving the corresponding sense identifiers:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 125, |
| "end": 133, |
| "text": "Figure 2", |
| "ref_id": null |
| }, |
| { |
| "start": 330, |
| "end": 337, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Simplified WATSET", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "E = {{u^senses[u][v], v^senses[v][u]} \u2208 V^2 : {u, v} \u2208 E}", |
| "eq_num": "(8)" |
| } |
| ], |
| "section": "Simplified WATSET", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "This allows us to construct the sense graph G in linear time O(|E|) by querying the node sense index to disambiguate the input edges E in a deterministic way. Other steps are identical to the original WATSET algorithm (Section 3.1). Simplified WATSET is presented in Algorithm 2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Simplified WATSET", |
| "sec_num": "3.4" |
| }, |
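To make the flow of Algorithm 2 concrete, here is a minimal Python sketch of Simplified WATSET. It substitutes connected components for Cluster_Local and Cluster_Global (the paper uses hard clustering algorithms such as Chinese Whispers or MCL); all function names and data layouts are illustrative, not taken from the reference implementation.

```python
from collections import defaultdict

def components(nodes, edges):
    """Stand-in hard clustering: connected components. A real run would
    plug in Chinese Whispers or MCL here instead."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, clusters = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            seen.add(x)
            stack.extend(adj[x] - comp)
        clusters.append(comp)
    return clusters

def simplified_watset(nodes, edges, cluster_local=components, cluster_global=components):
    """Sketch of Algorithm 2: induce senses locally, then cluster the
    deterministically built sense graph globally."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    senses = {}       # senses[u][v] = id of the sense of u whose cluster contains v
    sense_nodes = set()
    for u in nodes:   # local step: cluster the open neighborhood (ego network) of u
        nu = adj[u]
        eu = [(v, w) for v, w in edges if v in nu and w in nu]
        senses[u] = {}
        for i, cluster in enumerate(cluster_local(nu, eu)):
            for v in cluster:
                senses[u][v] = i
            sense_nodes.add((u, i))
    # global step: every original edge {u, v} links exactly one sense of u
    # to exactly one sense of v, so no context disambiguation is needed
    sense_edges = [((u, senses[u][v]), (v, senses[v][u])) for u, v in edges]
    clusters = cluster_global(sense_nodes, sense_edges)
    return [{word for word, _ in c} for c in clusters]  # remove the sense labels
```

On the running example, the ambiguous node bank ends up in two clusters, one per induced sense, while all other nodes keep a single sense.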
| { |
| "text": "We analyze the computational complexity of the separate routines of WATSET and then present the overall complexity compared with other hard and soft clustering algorithms. Our analysis is based on the assumption that the context similarity measure in Equation (4) can be computed in linear time with respect to the number of dimensions d \u2208 N. For instance, such measures as cosine and Jaccard satisfy this requirement. In all our experiments throughout this article, we use the cosine similarity measure: sim(ctx(a), ctx(b)) = cos(ctx(a), ctx(b)), \u2200ctx(a), ctx(b) \u2286 V. Provided that the context vectors are normalized, the complexity of such a measure is bounded by the complexity of an inner product of two vectors, which is O(|ctx(a) \u222a ctx(b)|).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithmic Complexity", |
| "sec_num": "3.5" |
| }, |
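The linear-time bound on the similarity computation can be illustrated for the special case of unweighted (binary) context vectors represented as sets, where cosine reduces to overlap / sqrt(|a| * |b|). This is a sketch under that simplifying assumption, not the paper's weighted-context implementation:

```python
from math import sqrt

def cos_sets(a, b):
    """Cosine similarity of two binary context vectors represented as
    sets; runs in O(min(|a|, |b|)) <= O(|a union b|) time."""
    if not a or not b:
        return 0.0
    small, large = (a, b) if len(a) <= len(b) else (b, a)
    overlap = sum(1 for x in small if x in large)
    return overlap / sqrt(len(a) * len(b))
```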
| { |
| "text": "Because the running time of our algorithm depends on the task-specific choice of two hard clustering algorithms, Cluster_Local and Cluster_Global, we report algorithm-specific analysis on two hard clustering algorithms that are popular in computational linguistics: CW by Biemann (2006) and MCL by van Dongen (2000). Given a graph G = (V, E), the computational complexity is O(|E|) for CW and O(|V|^3) for MCL. 7 Additionally, we denote by deg_max the maximum degree of a node in G. Note that although, in general, deg_max is bounded by |V|, in real graphs derived from natural language the degree distribution follows a power law. The degree is small for the majority of the nodes in a graph, making average running times acceptable in practice, as presented in Section 3.5.5.", |
| "cite_spans": [ |
| { |
| "start": 272, |
| "end": 286, |
| "text": "Biemann (2006)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithmic Complexity", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "Induction. This operation is executed for every node of the input graph G, that is, |V| times. By definition of an undirected graph, the maximum number of neighbors of a node in G is deg max and the maximum number of edges in a neighborhood is", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Node Sense", |
| "sec_num": "3.5.1" |
| }, |
| { |
| "text": "deg_max (deg_max \u2212 1) / 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Node Sense", |
| "sec_num": "3.5.1" |
| }, |
| { |
| "text": ". Thus, this operation takes O(|V| deg_max^2) steps with CW and O(|V| deg_max^3) steps with MCL.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Node Sense", |
| "sec_num": "3.5.1" |
| }, |
| { |
| "text": "3.5.2 Disambiguation of Neighbors. Let senses_max be the maximum number of senses for a node and ctx_max be the maximum size of a node sense context. This operation takes O(|V| \u00d7 senses_max \u00d7 ctx_max) steps to iterate over all the node sense contexts. At each iteration, it scans all the senses of each ambiguous node in the context and computes the similarity between its context and the candidate sense context in linear time (Section 3.5). This requires O(senses_max \u00d7 ctx_max) steps per node in the context. Therefore, the whole operation takes O(|V| \u00d7 senses_max^2 \u00d7 ctx_max^2) steps. Because the maximum number of node senses is observed in the special case when the neighborhood contains no edges, senses_max \u2264 deg_max. Because the maximum context size is observed in the special case when the neighborhood is a fully connected graph, ctx_max \u2264\n\nTable 5. Computational complexity of graph clustering algorithms, where |V| is the number of vertices, |E| is the number of edges, and deg_max is the maximum degree of a vertex. For brevity, we do not insert rows corresponding to Simplified WATSET (Algorithm 2), which does not require the O(|V| deg_max^4) term related to context disambiguation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 874, |
| "end": 881, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Node Sense", |
| "sec_num": "3.5.1" |
| }, |
| { |
| "text": "Chinese Whispers (Biemann 2006): hard, O(|E|). Louvain method (Blondel et al. 2008): hard, O(|V| log |V|). Clique Percolation (Palla et al. 2005): soft, O(2^|V|).", |
| "cite_spans": [ |
| { |
| "start": 17, |
| "end": 30, |
| "text": "(Biemann 2006", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 46, |
| "end": 67, |
| "text": "(Blondel et al. 2008)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 108, |
| "end": 127, |
| "text": "(Palla et al. 2005)", |
| "ref_id": "BIBREF93" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm Hard or Soft Computational Complexity", |
| "sec_num": null |
| }, |
| { |
| "text": "WATSET[CW, CW]: soft, O(|V|^2 deg_max^2 + |V| deg_max^4). WATSET[CW, MCL]: soft, O(|V|^3 deg_max^3 + |V| deg_max^4). WATSET[MCL, CW]: soft, O(|V|^2 deg_max^2 + |V| deg_max^4). WATSET[MCL, MCL]: soft, O(|V|^3 deg_max^3 + |V| deg_max^4).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm Hard or Soft Computational Complexity", |
| "sec_num": null |
| }, |
| { |
| "text": "deg_max. Thus, disambiguation of all the node sense contexts takes O(|V| deg_max^4) steps. Note that because the simplified version of WATSET, as described in Section 3.4, does not perform context disambiguation, this term should be taken into account only for the original version of WATSET (Algorithm 1).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm Hard or Soft Computational Complexity", |
| "sec_num": null |
| }, |
| { |
| "text": "Clustering. Like the input graph G, the sense graph G is undirected, so it has at most |V| deg_max nodes and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense Graph", |
| "sec_num": "3.5.3" |
| }, |
| { |
| "text": "|V| deg_max (|V| deg_max \u2212 1) / 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense Graph", |
| "sec_num": "3.5.3" |
| }, |
| { |
| "text": "edges. Thus, this operation takes O(|V|^2 deg_max^2) steps with CW and O(|V|^3 deg_max^3) steps with MCL.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense Graph", |
| "sec_num": "3.5.3" |
| }, |
| { |
| "text": "3.5.4 Overall Complexity. Table 5 presents a comparison of WATSET to other hard and soft graph clustering algorithms popular in computational linguistics, 8 such as CW by Biemann (2006), MCL by van Dongen (2000), and MaxMax by Hope and Keller (2013a). Additionally, we compare WATSET with several graph clustering algorithms that are popular in network science, such as the Louvain method by Blondel et al. (2008) and CPM by Palla et al. (2005). The notation WATSET[MCL, CW] means using MCL for local clustering and CW for global clustering (cf. the discussion on graph clustering algorithms in Section 2.1).", |
| "cite_spans": [ |
| { |
| "start": 171, |
| "end": 185, |
| "text": "Biemann (2006)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 226, |
| "end": 249, |
| "text": "Hope and Keller (2013a)", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 392, |
| "end": 413, |
| "text": "Blondel et al. (2008)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 425, |
| "end": 444, |
| "text": "Palla et al. (2005)", |
| "ref_id": "BIBREF93" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 26, |
| "end": 33, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Sense Graph", |
| "sec_num": "3.5.3" |
| }, |
| { |
| "text": "The analysis shows that the most time-consuming operations in WATSET are sense graph clustering and context disambiguation. Although the overall computational complexity of our meta-algorithm is higher than that of the other methods, its compute-intensive operations, such as node sense induction and context disambiguation, are executed for every node independently, so the algorithm can easily be run in a parallel or a distributed way to reduce the running time.\n\nTable 6. Parameters of the co-occurrence graphs for different corpus sizes in the Leipzig Corpora Collection, where |V| is the number of vertices, |E| is the number of edges, and deg_max is the maximum degree of a vertex; time is measured in minutes.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 328, |
| "end": 335, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Sense Graph", |
| "sec_num": "3.5.3" |
| }, |
| { |
| "text": "3.5.5 An Empirical Evaluation of Average Running Times. In order to evaluate the running time of WATSET in a real-world scenario, we applied it to the clustering of co-occurrence graphs. Word clusters discovered from co-occurrence graphs are sets of semantically related polysemous words, so we ran our sense-aware clustering algorithm to obtain overlapping word clusters. We used the English word co-occurrence graphs from the Leipzig Corpora Collection by Goldhahn, Eckart, and Quasthoff (2012) because it is partitioned into corpora of different sizes. 9 We evaluated the graphs corresponding to five different English corpus sizes: 10K, 30K, 100K, 300K, and 1M tokens (Table 6). The measurements were made independently among the graphs using the WATSET[CW, CW] algorithm, which has the lowest complexity bound of O(|V|^2 deg_max^2 + |V| deg_max^4). Because our implementation of WATSET in the Java programming language, as described in Section 7, is multi-threaded and runs the node sense induction and context disambiguation steps in parallel, we study the effect of the number of available central processing unit (CPU) cores on the overall running time. The single-threaded setup that uses only one CPU core will be referred to as sequential, while the multi-threaded setup that uses all the CPU cores available on the machine will be referred to as parallel.", |
| "cite_spans": [ |
| { |
| "start": 462, |
| "end": 500, |
| "text": "Goldhahn, Eckart, and Quasthoff (2012)", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 560, |
| "end": 561, |
| "text": "9", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 678, |
| "end": 685, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "|V|", |
| "sec_num": null |
| }, |
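Since the local step treats each ego network independently, it parallelizes trivially. The paper's multi-threaded implementation is in Java; the following is only a hypothetical Python sketch of the same idea using a thread pool, with illustrative names:

```python
from concurrent.futures import ThreadPoolExecutor

def induce_senses_parallel(nodes, neighborhood, cluster_local, workers=4):
    """Run the local step (node sense induction) for all nodes in parallel.
    Each ego network is independent, so the work is embarrassingly parallel.
    `neighborhood` maps a node to its neighbors; `cluster_local` clusters them."""
    def per_node(u):
        return u, cluster_local(neighborhood(u))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(per_node, nodes))
```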
| { |
| "text": "For each graph, we ran WATSET five times. Following Hork\u00fd et al. (2015), the first three runs were used off-record to warm up the Java virtual machine, and the next two runs were used for the actual measurement. We used the following computational node for this experiment: two Intel Xeon E5-2630 v4 CPUs, 256 GB of ECC RAM, Ubuntu 16.04.4 LTS (Linux 4.13.0, x86_64), Oracle Java 8b121; 40 logical cores were available in total. Table 6 reports the mean running time and the standard deviation for both setups, sequential and parallel. Figure 6 shows a polynomial growth of O(|V|^2.52), which is smaller than the worst case of O(|V|^2 deg_max^2 + |V| deg_max^4). This is because in co-occurrence graphs, as well as in many other real-world graphs that exhibit scale-free small-world properties (Steyvers and Tenenbaum 2005), the degree distribution among nodes is strongly right-skewed. This makes WATSET useful for processing real-world graphs. Both Table 6 and Figure 6 clearly confirm that WATSET scales well and can be parallelized on multiple CPU cores, which makes it possible to process very large graphs.", |
| "cite_spans": [ |
| { |
| "start": 52, |
| "end": 71, |
| "text": "Hork\u00fd et al. (2015)", |
| "ref_id": "BIBREF53" |
| }, |
| { |
| "start": 796, |
| "end": 825, |
| "text": "(Steyvers and Tenenbaum 2005)", |
| "ref_id": "BIBREF121" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 422, |
| "end": 429, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 529, |
| "end": 537, |
| "text": "Figure 6", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 954, |
| "end": 961, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 966, |
| "end": 974, |
| "text": "Figure 6", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "|V|", |
| "sec_num": null |
| }, |
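An empirical exponent such as the O(|V|^2.52) growth reported here can be estimated from (|V|, running time) measurements by a least-squares fit on a log-log scale. A small sketch of that estimation; the function name and interface are ours, and the test data below is synthetic:

```python
from math import log

def growth_exponent(sizes, times):
    """Least-squares slope of log(time) against log(|V|): if the running
    time grows as c * |V|^k, the fitted slope estimates k."""
    xs = [log(n) for n in sizes]
    ys = [log(t) for t in times]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```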
| { |
| "text": "A synset is a set of mutual synonyms, which can be represented as a clique in a graph whose nodes are words and whose edges are synonymy relationships. Synsets represent word senses and are the building blocks of thesauri and lexical ontologies, such as WordNet (Fellbaum 1998). These resources are crucial for many NLP applications that require common sense reasoning, such as information retrieval (Gong, Cheang, and Hou U 2005), sentiment analysis (Montejo-R\u00e1ez et al. 2014), and question answering (Kwok, Etzioni, and Weld 2001; Zhou et al. 2013).", |
| "cite_spans": [ |
| { |
| "start": 247, |
| "end": 262, |
| "text": "(Fellbaum 1998)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 386, |
| "end": 416, |
| "text": "(Gong, Cheang, and Hou U 2005)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 489, |
| "end": 519, |
| "text": "(Kwok, Etzioni, and Weld 2001;", |
| "ref_id": "BIBREF67" |
| }, |
| { |
| "start": 520, |
| "end": 537, |
| "text": "Zhou et al. 2013)", |
| "ref_id": "BIBREF140" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Unsupervised Synset Induction", |
| "sec_num": "4." |
| }, |
| { |
| "text": "For most languages, no manually constructed resource is available that is comparable to the English WordNet in terms of coverage and quality (Braslavski et al. 2016). For instance, Kiselev, Porshnev, and Mukhin (2015) present a comparative analysis of lexical resources available for the Russian language, concluding that no resource comparable to WordNet in terms of completeness and availability exists for Russian. This lack of linguistic resources for many languages strongly motivates the development of new methods for the automatic construction of WordNet-like resources. In this section, we apply WATSET to unsupervised synset induction from a synonymy graph and compare it with state-of-the-art graph clustering algorithms run on the same task.", |
| "cite_spans": [ |
| { |
| "start": 141, |
| "end": 165, |
| "text": "(Braslavski et al. 2016)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 182, |
| "end": 218, |
| "text": "Kiselev, Porshnev, and Mukhin (2015)", |
| "ref_id": "BIBREF62" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Unsupervised Synset Induction", |
| "sec_num": "4." |
| }, |
| { |
| "text": "Wikipedia, 10 Wiktionary, 11 OmegaWiki, 12 and other collaboratively created resources contain a large amount of lexical semantic information, yet they are designed to be human-readable rather than formally structured. Although semantic relationships can be automatically extracted using tools such as DKPro JWKTL 13 by Zesch, M\u00fcller, and Gurevych (2008) and Wikokit 14 by Krizhanovsky and Smirnov (2013), the words in these relationships are not disambiguated. For instance, the synonymy pairs {bank, streambank} and {bank, banking company} will be connected via the word \"bank,\" although they refer to different senses. This problem stems from the fact that articles in Wiktionary and similar resources list \"undisambiguated\" synonyms. They are easy to disambiguate for humans while reading a dictionary article but can be a source of errors for language processing systems.", |
| "cite_spans": [ |
| { |
| "start": 309, |
| "end": 343, |
| "text": "Zesch, M\u00fcller, and Gurevych (2008)", |
| "ref_id": null |
| }, |
| { |
| "start": 362, |
| "end": 393, |
| "text": "Krizhanovsky and Smirnov (2013)", |
| "ref_id": "BIBREF66" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Synonymy Graph Construction and Clustering", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Although large-scale automatically constructed lexical semantic resources like BabelNet (Navigli and Ponzetto 2012) are available, they contain synsets with relationships other than synonymity. For instance, in BabelNet 4.0, the synset for bank as an institution contains, among other things, non-synonyms like Monetary intermediation and Moneylenders. 15 A synonymy dictionary can be perceived as a graph, where the nodes correspond to lexical units (words) and the edges connect pairs of nodes for which the synonymy relationship holds. Because such a graph can easily be obtained for an arbitrary language, we expect that constructing and clustering a sense-aware representation of a synonymy graph yields plausible synsets covering polysemous words. 4.1.1 Synonymy Graph Construction. Given a synonymy dictionary, we construct the synonymy graph G = (V, E) as follows. The set of nodes V includes every lexical unit appearing in the input dictionary. An edge in the set of edges E \u2286 V^2 is established if and only if a pair of words is listed as synonyms in the input synonymy dictionary. To enhance our representation with the contextual semantic similarity between synonyms, we assigned every edge {u, v} \u2208 E a weight equal to the cosine similarity of Skip-Gram word embeddings (Mikolov et al. 2013). As a result, we obtained a weighted synonymy graph G.", |
| "cite_spans": [ |
| { |
| "start": 89, |
| "end": 116, |
| "text": "(Navigli and Ponzetto 2012)", |
| "ref_id": "BIBREF86" |
| }, |
| { |
| "start": 352, |
| "end": 354, |
| "text": "15", |
| "ref_id": null |
| }, |
| { |
| "start": 1308, |
| "end": 1329, |
| "text": "(Mikolov et al. 2013)", |
| "ref_id": "BIBREF82" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Synonymy Graph Construction and Clustering", |
| "sec_num": "4.1" |
| }, |
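The graph construction above can be sketched as follows, assuming an in-memory word-to-vector lookup; the fallback weight of 1.0 for out-of-vocabulary words is our assumption for the sketch, not a detail from the paper:

```python
import math

def build_synonymy_graph(pairs, embedding):
    """Build a weighted synonymy graph: nodes are words, and an edge
    {u, v} is weighted by the cosine similarity of the word embeddings.
    `embedding` maps a word to a vector; word pairs without vectors fall
    back to a constant weight (an assumption, not from the paper)."""
    def cos(x, y):
        dot = sum(a * b for a, b in zip(x, y))
        nx = math.sqrt(sum(a * a for a in x))
        ny = math.sqrt(sum(b * b for b in y))
        return dot / (nx * ny) if nx and ny else 0.0
    nodes, edges = set(), {}
    for u, v in pairs:
        nodes.update((u, v))
        if u in embedding and v in embedding:
            w = cos(embedding[u], embedding[v])
        else:
            w = 1.0  # constant fallback weight for out-of-vocabulary words
        edges[frozenset((u, v))] = w
    return nodes, edges
```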
| { |
| "text": "Clustering. Because the graph G contains both monosemous and polysemous words without indication of the particular senses, we run WATSET to obtain a soft clustering C of the synonymy graph G. Since our algorithm explicitly induces and clusters the word senses, the elements of the clusters C are by definition synsets, that is, sets of words that are synonymous with each other.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Synonymy Graph", |
| "sec_num": "4.1.2" |
| }, |
| { |
| "text": "We conduct our experiments on resources from two different languages. We evaluate our approach on two data sets for English to demonstrate its performance in a resource-rich language. Additionally, we evaluate it on two Russian data sets, because Russian is a good example of an under-resourced language with a clear need for synset induction (Kiselev, Porshnev, and Mukhin 2015) .", |
| "cite_spans": [ |
| { |
| "start": 343, |
| "end": 379, |
| "text": "(Kiselev, Porshnev, and Mukhin 2015)", |
| "ref_id": "BIBREF62" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "4.2.1 Experimental Set-Up. We compare WATSET with five popular graph clustering methods presented in Section 2.1: CW, MCL, MaxMax, ECO, and CPM. The first two are hard clustering algorithms, and the last three are soft clustering methods like our own. Although the hard clustering algorithms are able to discover clusters that correspond to synsets composed of unambiguous words, they can produce wrong results in the presence of lexical ambiguity, when a node should belong to several synsets. In our experiments, we use CW and MCL also as the underlying algorithms for local and global clustering in WATSET, so our comparison will show the difference between the \"plain\" underlying algorithms and their utilization in WATSET. We also report the performance of Simplified WATSET (Section 3.4).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In our experiments, we rely on our own implementation of MaxMax and ECO, as reference implementations are not available. For CW, 16 MCL, 17 and CPM, 18 available implementations have been used. During the evaluation, we discard clusters containing 150 or more words, as such large clusters can hardly represent any meaningful synset. Only the clusters produced by the MaxMax algorithm were actually affected by this threshold.", |
| "cite_spans": [ |
| { |
| "start": 149, |
| "end": 151, |
| "text": "18", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Quality Measure. To evaluate the quality of the induced synsets, we transform them into synonymy pairs and compute precision, recall, and F1-score on the basis of the overlap of these synonymy pairs with the synonymy pairs from the gold standard data sets. The F1-score calculated this way is known as the paired F-score (Manandhar et al. 2010; Hope and Keller 2013a). Let C be the set of obtained synsets and C_G be the set of gold synsets. Given a synset containing n > 1 words, we generate n(n \u2212 1)/2 pairs of synonyms, so we transform C into a set of pairs P and C_G into a set of gold pairs P_G. We then compute the numbers of positive and negative answers as follows:", |
| "cite_spans": [ |
| { |
| "start": 322, |
| "end": 344, |
| "text": "(Manandhar et al. 2010", |
| "ref_id": "BIBREF76" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "TP = |P \u2229 P_G| (9)\nFP = |P \\ P_G| (10)\nFN = |P_G \\ P| (11)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "where TP is the number of true positives, FP is the number of false positives, and FN is the number of false negatives. As a result, we use the standard definitions of precision as Pr = TP / (TP + FP), recall as Re = TP / (TP + FN), and F1-score as F1 = 2 \u00b7 Pr \u00b7 Re / (Pr + Re). The advantage of this measure compared with other cluster evaluation measures, such as fuzzy B-Cubed (Jurgens and Klapaftis 2013) and normalized modified purity (Kawahara, Peterson, and Palmer 2014), is its straightforward interpretability.", |
| "cite_spans": [ |
| { |
| "start": 424, |
| "end": 461, |
| "text": "(Kawahara, Peterson, and Palmer 2014)", |
| "ref_id": "BIBREF61" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.2" |
| }, |
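The paired F-score can be computed directly from these definitions by expanding each synset into its unordered synonym pairs and intersecting the induced and gold pair sets. A self-contained sketch (names are ours):

```python
from itertools import combinations

def paired_f1(induced, gold):
    """Paired F-score: expand each synset into unordered synonym pairs,
    then score the overlap of the induced and gold pair sets."""
    def pairs(synsets):
        return {frozenset(p) for s in synsets for p in combinations(sorted(s), 2)}
    p, g = pairs(induced), pairs(gold)
    tp, fp, fn = len(p & g), len(p - g), len(g - p)  # TP = |P ∩ P_G|, etc.
    pr = tp / (tp + fp) if tp + fp else 0.0
    re = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * pr * re / (pr + re) if pr + re else 0.0
    return pr, re, f1
```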
| { |
| "text": "Statistical Testing. We evaluate the statistical significance of the experimental results using McNemar's test (1947). Given the results of two algorithms, we build a 2 \u00d7 2 contingency table and compute the p-value of the test using the Statsmodels toolkit (Seabold and Perktold 2010). 19 Since the null hypothesis of McNemar's test is that the results of both algorithms are similar, against the alternative that they are not, we use the p-value of this test to assess the significance of the difference between F1-scores (Dror et al. 2018). We consider the performance of one algorithm to be higher than that of another if its F1-score is larger and the corresponding p-value is smaller than the significance level of 0.01.", |
| "cite_spans": [ |
| { |
| "start": 98, |
| "end": 119, |
| "text": "McNemar's test (1947)", |
| "ref_id": null |
| }, |
| { |
| "start": 260, |
| "end": 287, |
| "text": "(Seabold and Perktold 2010)", |
| "ref_id": "BIBREF117" |
| }, |
| { |
| "start": 290, |
| "end": 292, |
| "text": "19", |
| "ref_id": null |
| }, |
| { |
| "start": 539, |
| "end": 557, |
| "text": "(Dror et al. 2018)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.2" |
| }, |
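The paper uses the Statsmodels implementation of this test; for illustration, a self-contained sketch of the chi-square variant with continuity correction is given below. Which exact variant Statsmodels was run with is an assumption on our part (its default is the exact binomial form), and only the two discordant cells of the 2 x 2 table enter the statistic:

```python
from math import erf, sqrt

def mcnemar(b, c):
    """McNemar's test on the discordant counts of a 2 x 2 contingency
    table: b = items only algorithm A got right, c = items only
    algorithm B got right. Returns (statistic, p-value) using the
    chi-square approximation (1 degree of freedom) with continuity
    correction."""
    if b + c == 0:
        return 0.0, 1.0
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    # chi2(1) survival function via the standard normal distribution:
    # P(X > stat) = 2 * (1 - Phi(sqrt(stat)))
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(sqrt(stat) / sqrt(2.0))))
    return stat, p
```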
| { |
| "text": "Gold Standards. We conduct our evaluation on four lexical semantic resources for two different natural languages. Statistics of the gold standard data sets are presented in Table 7. We report the number of lexical units (# words), synsets (# synsets), and generated synonymy pairs (# pairs). We use WordNet, 20 a popular English lexical database constructed by expert lexicographers (Fellbaum 1998). WordNet contains general vocabulary and appears to be the de facto gold standard in similar tasks (Hope and Keller 2013a). We used WordNet 3.1 to derive the synonymy pairs from synsets. Additionally, to compare against an automatically constructed lexical resource, we use BabelNet, 21 a large-scale multilingual semantic network based on WordNet, Wikipedia, and other resources (Navigli and Ponzetto 2012). We retrieved all the synonymy pairs from the BabelNet 3.7 synsets marked as English, using the BabelNet Extract tool.", |
| "cite_spans": [ |
| { |
| "start": 386, |
| "end": 401, |
| "text": "(Fellbaum 1998)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 778, |
| "end": 805, |
| "text": "(Navigli and Ponzetto 2012)", |
| "ref_id": "BIBREF86" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 171, |
| "end": 178, |
| "text": "Table 7", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "As a lexical ontology for Russian, we use RuWordNet 22 by Loukachevitch et al. (2016), containing both general vocabulary and domain-specific synsets related to sport, finance, economics, and so forth. Up to one half of the words in this resource are multi-word expressions (Kiselev, Porshnev, and Mukhin 2015), which is due to the coverage of domain-specific vocabulary. RuWordNet is a WordNet-like version of the RuThes thesaurus that is constructed in the traditional way, namely by a small group of expert lexicographers (Loukachevitch 2011). In addition, we use Yet Another RussNet 23 (YARN) by Braslavski et al. (2016) as another gold standard for Russian. The resource is constructed using crowdsourcing and mostly covers general vocabulary. In particular, non-expert users are allowed to edit synsets in a collaborative way, loosely supervised by a team of project curators. Because of the ongoing development of the resource, we selected as the silver standard only those synsets that were edited at least eight times, in order to filter out noisy incomplete synsets. 24 We do not use BabelNet for evaluating the Russian synsets, as our manual inspection during prototyping showed that the quality of its Russian synsets is, on average, much lower than that of its English subset.", |
| "cite_spans": [ |
| { |
| "start": 58, |
| "end": 85, |
| "text": "Loukachevitch et al. (2016)", |
| "ref_id": "BIBREF75" |
| }, |
| { |
| "start": 275, |
| "end": 311, |
| "text": "(Kiselev, Porshnev, and Mukhin 2015)", |
| "ref_id": "BIBREF62" |
| }, |
| { |
| "start": 527, |
| "end": 547, |
| "text": "(Loukachevitch 2011)", |
| "ref_id": null |
| }, |
| { |
| "start": 603, |
| "end": 627, |
| "text": "Braslavski et al. (2016)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 1079, |
| "end": 1081, |
| "text": "24", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Input Data. For each language, we constructed a synonymy graph using openly available synonymy dictionaries. The statistics of the graphs used as input in the further experiments are shown in Table 8. For English, synonyms were extracted from the English Wiktionary, 25 which is currently the largest Wiktionary in terms of lexical coverage, using the DKPro JWKTL tool by Zesch, M\u00fcller, and Gurevych (2008). The English words were extracted from the Wiktionary dump.", |
| "cite_spans": [ |
| { |
| "start": 393, |
| "end": 427, |
| "text": "Zesch, M\u00fcller, and Gurevych (2008)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 196, |
| "end": 203, |
| "text": "Table 8", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "For Russian, synonyms from three sources were combined to improve lexical coverage of the input dictionary and to enforce confidence in jointly observed synonyms: (1) synonyms listed in the Russian Wiktionary extracted using the Wikokit tool by Krizhanovsky and Smirnov (2013) ; (2) the dictionary of Abramov (1999) ; and (3) the Universal Dictionary of Concepts (Dikonov 2013) . Whereas the two latter resources are specific to Russian, Wiktionary is available for most languages. Note that the same input synonymy dictionaries were used by authors of YARN to construct synsets using crowdsourcing. The results on the YARN data set show how closely an automatic synset induction method can approximate manually created synsets provided the same starting material. 26 Because of the vocabulary differences between the input data and the gold standard data sets, we use the intersection between the lexicon of the gold standard and the united lexicon of all the compared configurations of the algorithms during all the experiments in this section.", |
| "cite_spans": [ |
| { |
| "start": 245, |
| "end": 276, |
| "text": "Krizhanovsky and Smirnov (2013)", |
| "ref_id": "BIBREF66" |
| }, |
| { |
| "start": 301, |
| "end": 315, |
| "text": "Abramov (1999)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 363, |
| "end": 377, |
| "text": "(Dikonov 2013)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 765, |
| "end": 767, |
| "text": "26", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We tuned the hyper-parameters for such methods as CPM (Palla et al. 2005) and ECO (Gon\u00e7alo Oliveira and Gomes 2014) on the evaluation data set. We do not perform any tuning of WATSET because the underlying local and global clustering algorithms, CW and MCL, are parameter-free, so we use the default configurations of these algorithms and their variations. By CPM k = 3 we denote that this method showed the best performance using the threshold value of k = 3. For ECO, we found the threshold value of \u03b8 = 0.05 to yield the best results, as opposed to the value of \u03b8 = 0.2 suggested by Gon\u00e7alo Oliveira and Gomes (2014) . We also study the performance impact of different edge-weighting approaches for the same input graph. For that, we present the results of running the same algorithms in three different setups: ones, which assigns every edge the constant weight of 1; count, which weights the edge {u, v} \u2208 E with the number of times the synonymy pair appeared in the input dictionary; and sim, which uses the cosine similarity between word embeddings, as described in Section 4.1.1. For English, we use the widely used 300-dimensional word embeddings trained on the 100 billion token Google News corpus. 27 For Russian, we use the 500-dimensional embeddings from the Russian Distributional Thesaurus trained on a 12.9 billion token corpus of books, which yielded state-of-the-art performance on a shared task on Russian semantic similarity. Figure 7 presents an overview of the evaluation results on both data sets. Because the synonymy graph construction step is the same for all the experiments, we start our analysis with a comparison of the different edge-weighting approaches introduced in Section 4.2.2: constant values (ones), frequencies (count), and semantic similarity scores (sim) based on word vector similarity. Results across various configurations and methods indicate that using weights based on the similarity scores provided by word embeddings is the best strategy for all methods except MaxMax on the English data sets. However, its performance using the ones weighting does not exceed that of the other methods using the sim weighting. Therefore, we report all further results on the basis of the sim weights. For most algorithms, the choice of the edge-weighting scheme has a stronger impact on the results for Russian. The CW algorithm, however, remains sensitive to the weighting on the English data set as well, due to its randomized nature.", |
| "cite_spans": [ |
| { |
| "start": 54, |
| "end": 73, |
| "text": "(Palla et al. 2005)", |
| "ref_id": "BIBREF93" |
| }, |
| { |
| "start": 579, |
| "end": 604, |
| "text": "Oliveira and Gomes (2014)", |
| "ref_id": "BIBREF44" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1422, |
| "end": 1430, |
| "text": "Figure 7", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Parameter Tuning.", |
| "sec_num": "4.2.2" |
| }, |
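The three edge-weighting setups (ones, count, and sim) can be sketched as follows; `edge_weight`, `embeddings`, and `pair_counts` are hypothetical names introduced here for illustration and are not part of the original experimental code.

```python
import numpy as np

def edge_weight(u, v, scheme, embeddings=None, pair_counts=None):
    """Weight the synonymy edge {u, v} under one of the three schemes."""
    if scheme == "ones":    # every edge gets the constant weight of 1
        return 1.0
    if scheme == "count":   # number of times the pair appears in the input dictionary
        return pair_counts[frozenset((u, v))]
    if scheme == "sim":     # cosine similarity of the word embeddings
        vu, vv = embeddings[u], embeddings[v]
        return float(np.dot(vu, vv) / (np.linalg.norm(vu) * np.linalg.norm(vv)))
    raise ValueError(scheme)
```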
| { |
| "text": "Comparison of the synset induction methods on data sets for English. All methods rely on the similarity edge weighting (sim); the best configuration of each method in terms of F 1 -score is shown for each data set. Results are sorted by F 1 -score on BabelNet; top three values of each measure are boldfaced, and statistically significant results are marked with an asterisk ( * ). Simplified WATSET is denoted as WATSET \u00a7. Tables 9 and 10 present the evaluation results for both languages. For each method, we show the best configuration in terms of F 1 -score. One may note that the granularity of the resulting synsets, especially for Russian, varies greatly, ranging from 4,000 synsets for the CPM k = 3 method to 67,645 induced by the ECO method. Both tables report the number of words, synsets, and synonyms after pruning huge clusters larger than 150 words. Without this pruning, the MaxMax and CPM methods tend to discover giant components, obtaining almost zero precision because we generate all possible pairs of nodes in such clusters. The other methods did not exhibit such behavior.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Table 9", |
| "sec_num": null |
| }, |
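The pruning of oversized clusters mentioned above is straightforward to sketch; the 150-word threshold is the one stated in the text, while the function name is ours.

```python
def prune_clusters(clusters, max_size=150):
    """Drop clusters larger than max_size words (e.g., giant components)."""
    return [cluster for cluster in clusters if len(cluster) <= max_size]
```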
| { |
| "text": "The disambiguation of the input graph performed by the WATSET method splits nodes belonging to several local communities into several nodes, significantly facilitating the clustering task, which is otherwise complicated by the presence of hubs that wrongly link semantically unrelated nodes. Table 10 presents the results on the data sets for Russian sorted by F 1 -score on Yet Another RussNet (YARN); top three values of each measure are boldfaced and statistically significant results are marked with an asterisk ( * ); Simplified WATSET is denoted as WATSET \u00a7. WATSET robustly outperformed all other methods according to the F 1 -score on all the data sets for English (Table 9) and Russian (Table 10) : its best configuration [\u2022, MCL] has significantly outperformed all the other algorithms (p \u226a 0.01). Interestingly, in all the cases, the toughest competitor was a hard clustering algorithm, MCL (van Dongen 2000). We observed that the \"plain\" MCL successfully groups monosemous words, but isolates the neighborhoods of polysemous words, which results in a recall drop in comparison to WATSET. CW operates more quickly due to its simplified update step. On the same graph, CW tends to produce larger clusters than MCL. This leads to a higher recall of \"plain\" CW as compared with \"plain\" MCL, at the cost of lower precision. Although MCL demonstrated highly competitive results, the best configuration of WATSET statistically significantly outperformed it on all the data sets.", |
| "cite_spans": [ |
| { |
| "start": 682, |
| "end": 686, |
| "text": "MCL]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 332, |
| "end": 340, |
| "text": "Table 10", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 649, |
| "end": 658, |
| "text": "(Table 9)", |
| "ref_id": null |
| }, |
| { |
| "start": 671, |
| "end": 681, |
| "text": "(Table 10)", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": null |
| }, |
| { |
| "text": "Using MCL instead of CW for sense induction in WATSET expectedly produced more fine-grained senses. However, at the global clustering step, these senses erroneously tend to form coarse-grained synsets connecting unrelated senses of ambiguous words. This explains the generally higher recall of WATSET [MCL, \u2022] . Despite the randomized nature of CW, the variance across runs does not affect the overall ranking. The relative ranking of the node-degree weighting schemes of CW (top/lin/log) can change, while the rank of the best CW configuration compared with the other methods remains the same.", |
| "cite_spans": [ |
| { |
| "start": 305, |
| "end": 313, |
| "text": "[MCL, \u2022]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": null |
| }, |
| { |
| "text": "The MaxMax algorithm showed mixed results. On the one hand, it outputs large clusters uniting more than a hundred nodes. This inevitably leads to a high recall, as is clearly seen in the results for Russian, because such synsets still fall under our cluster size threshold of 150 words. Its synsets on the English data sets are even larger and have been pruned, which resulted in low recall. On the other hand, smaller synsets having at most 10-15 words were identified correctly. MaxMax appears to be extremely sensitive to edge weighting, which also complicates its application in practice.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": null |
| }, |
| { |
| "text": "The CPM algorithm showed unsatisfactory results, emitting giant components encompassing thousands of words. Such clusters were automatically pruned, but the remaining clusters are largely correct synsets, which is confirmed by the high precision values. When increasing the minimal number of elements in the clique k, recall improves, but at the cost of a dramatic drop in precision. We suppose that the network structure assumptions exploited by CPM do not accurately model the structure of our synonymy graphs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": null |
| }, |
| { |
| "text": "Finally, the ECO method yielded the worst results because most of the cluster candidates failed to pass the constant threshold used for estimating whether a pair of words should be included in the same cluster. Most synsets produced by this method were trivial (i.e., containing only a single word). The remaining synsets for both languages have at most three words, which were connected by chance due to the edge noising procedure used in this method, resulting in low recall.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": null |
| }, |
| { |
| "text": "The results obtained on all gold standards ( Figure 7) show similar trends in terms of the relative ranking of the methods. Yet the absolute scores on YARN and RuWordNet are substantially different because of the inherent differences between these data sets. RuWordNet is more domain-specific in terms of vocabulary, so our input set of generic synonymy dictionaries has limited coverage on this data set. On the other hand, recall calculated on YARN is substantially higher, as this resource was manually built on the basis of the synonymy dictionaries used in our experiments. Table 11 presents examples of the obtained synsets of various sizes for the top WATSET configuration on English. As one might observe, the quality of the results is high. Because in this configuration we assigned edge weights based on the cosine similarity between Skip-Gram word vectors (Mikolov et al. 2013) , we should note that such an approach assigns high similarity values not just to synonymous words, but also to antonymous and, more generally, any lexically related words. This is a common problem of lexical embedding spaces, which we tried to avoid by explicitly using a synonymy dictionary as input. For example, \"audio play\" and \"radio play,\" or \"accusative\" and \"oblique,\" are semantically related expressions, but not synonyms. Such a problem can be addressed using techniques such as retrofitting (Faruqui et al. 2015 ) and contextualization (Peters et al. 2018) .", |
| "cite_spans": [ |
| { |
| "start": 864, |
| "end": 885, |
| "text": "(Mikolov et al. 2013)", |
| "ref_id": "BIBREF82" |
| }, |
| { |
| "start": 1393, |
| "end": 1413, |
| "text": "(Faruqui et al. 2015", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 1438, |
| "end": 1458, |
| "text": "(Peters et al. 2018)", |
| "ref_id": "BIBREF102" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 45, |
| "end": 54, |
| "text": "Figure 7)", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 562, |
| "end": 570, |
| "text": "Table 11", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": null |
| }, |
| { |
| "text": "However, one limitation of all the approaches considered in this section is the dependence on the completeness of the input dictionary of synonyms. In some parts of the input synonymy graph, important bridges between words can be missing, leading to smaller-than-desired synsets. A promising extension of the present methodology is using distributional models to enhance the connectivity of the graph by cautiously adding extra relationships.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": null |
| }, |
| { |
| "text": "Cross-Resource Evaluation. In order to estimate the upper bound of precision, recall, and F 1 -score in our synset induction experiments, we conducted a cross-resource evaluation between the gold-standard data sets used (Table 12) . Similarly to the experimental setup described in Section 4.2.1, we transformed the synsets from every data set into sets of synonymy pairs. Then, for every pair of gold standard data sets, we computed the pairwise precision, recall, and F 1 -score by assessing the synset-induced synonymy pairs of one data set against the pairs of the other data set. As a result, we see that the low absolute numbers in the evaluation are due to an inherent vocabulary mismatch between the input dictionaries of synonyms and the gold data sets: no single resource for Russian obtains high recall scores on another one. Surprisingly, even BabelNet, which integrates most of the available lexical resources, still does not reach a recall substantially larger than 50%. 29 Note that the results of this cross-data set evaluation are not directly comparable to the results in Table 10 , since in our experiments we use much smaller input dictionaries than those used by BabelNet. Our cross-resource evaluation demonstrates that, unlike WordNet and BabelNet, which are built on a similar conceptual basis, RuWordNet and YARN have very different structures, so an algorithm that shows good results on one will likely not perform well on the other.", |
| "cite_spans": [ |
| { |
| "start": 976, |
| "end": 978, |
| "text": "29", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 220, |
| "end": 230, |
| "text": "(Table 12)", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 1077, |
| "end": 1085, |
| "text": "Table 10", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": null |
| }, |
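A minimal sketch of the evaluation procedure described above, which expands synsets into synonymy pairs and computes pairwise precision, recall, and F1; the function names are our own illustrative assumptions.

```python
from itertools import combinations

def synonymy_pairs(synsets):
    """Expand synsets into the set of all unordered word pairs they induce."""
    pairs = set()
    for synset in synsets:
        pairs.update(frozenset(p) for p in combinations(sorted(synset), 2))
    return pairs

def pairwise_prf(predicted, gold):
    """Precision, recall, and F1 of predicted synonymy pairs against gold pairs."""
    pred, gold = synonymy_pairs(predicted), synonymy_pairs(gold)
    tp = len(pred & gold)  # pairs found in both resources
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For instance, a predicted synset {big, large, huge} scored against a gold synset {big, large} yields three predicted pairs with one true positive, so precision is 1/3 and recall is 1.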
| { |
| "text": "In this section, our goal is to investigate the applicability of our graph clustering technique to a different task. Namely, we explore how semantic frames, which are more complex linguistic structures than synsets, can be induced from text using WATSET. A semantic frame is a central concept of the Frame Semantics theory (Fillmore 1982) . A frame is a structure that describes a certain situation or action (e.g., \"Dining\" or \"Kidnapping\") in terms of the participants involved in these actions, which fill the semantic roles of this frame, and the words commonly describing such situations. Figure 8 illustrates a part of the \"Kidnapping\" semantic frame from the FrameNet resource. 30 Recent years have seen much work on frame semantics, enabled by the availability of a large set of frame definitions, as well as a manually annotated text corpus provided by the FrameNet project (Baker, Fillmore, and Lowe 1998) . FrameNet data enabled the development of wide-coverage frame parsers using supervised learning (Gildea and Jurafsky 2002; Erk and Pad\u00f3 2006; Das et al. 2014 , inter alia), as well as its application to a wide range of tasks, ranging from answer extraction in Question Answering (Shen and Lapata 2007) and Textual Entailment (Burchardt et al. 2009; Ben Aharon, Szpektor, and Dagan 2010) to event-based predictions of stock markets (Xie et al. 2013) .", |
| "cite_spans": [ |
| { |
| "start": 311, |
| "end": 326, |
| "text": "(Fillmore 1982)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 660, |
| "end": 662, |
| "text": "30", |
| "ref_id": null |
| }, |
| { |
| "start": 858, |
| "end": 890, |
| "text": "(Baker, Fillmore, and Lowe 1998)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 988, |
| "end": 1014, |
| "text": "(Gildea and Jurafsky 2002;", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 1015, |
| "end": 1033, |
| "text": "Erk and Pad\u00f3 2006;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 1034, |
| "end": 1049, |
| "text": "Das et al. 2014", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 1171, |
| "end": 1193, |
| "text": "(Shen and Lapata 2007)", |
| "ref_id": "BIBREF118" |
| }, |
| { |
| "start": 1324, |
| "end": 1341, |
| "text": "(Xie et al. 2013)", |
| "ref_id": "BIBREF138" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 569, |
| "end": 577, |
| "text": "Figure 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Application to Unsupervised Semantic Frame Induction", |
| "sec_num": "5." |
| }, |
| { |
| "text": "However, frame-semantic resources are arguably expensive and time-consuming to build due to difficulties in defining the frames, their granularity, and their domain. The complexity of the frame construction and annotation tasks requires expertise in the underlying knowledge. Consequently, such resources exist only for a few languages (Boas 2009) , and even English lacks domain-specific frame-based resources. Possible inroads are cross-lingual semantic annotation transfer (Pad\u00f3 and Lapata 2009; Figure 8 Definition, examples, core semantic roles, and frame-invoking lexical units of the semantic frame \"Kidnapping\" from the FrameNet resource.", |
| "cite_spans": [ |
| { |
| "start": 329, |
| "end": 340, |
| "text": "(Boas 2009)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 472, |
| "end": 494, |
| "text": "(Pad\u00f3 and Lapata 2009;", |
| "ref_id": "BIBREF92" |
| }, |
| { |
| "start": 495, |
| "end": 503, |
| "text": "Figure 8", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Unsupervised Semantic Frame Induction", |
| "sec_num": "5." |
| }, |
| { |
| "text": "Hartmann, Eckle-Kohler, and Gurevych 2016) or linking FrameNet to other lexical-semantic or ontological resources (Narayanan et al. 2003; Tonelli and Pighin 2009; Laparra and Rigau 2010; Gurevych et al. 2012, inter alia) . But whereas the arguably simpler task of PropBank-based Semantic Role Labeling has been successfully addressed by unsupervised approaches (Lang and Lapata 2010; Titov and Klementiev 2011), fully unsupervised frame-based semantic annotation poses far more challenges, starting with the preliminary step of automatically inducing a set of semantic frame definitions that would drive a subsequent text annotation. We aim at overcoming these issues by automating the process of FrameNet construction through unsupervised frame induction techniques using WATSET.", |
| "cite_spans": [ |
| { |
| "start": 113, |
| "end": 136, |
| "text": "(Narayanan et al. 2003;", |
| "ref_id": "BIBREF85" |
| }, |
| { |
| "start": 137, |
| "end": 161, |
| "text": "Tonelli and Pighin 2009;", |
| "ref_id": "BIBREF128" |
| }, |
| { |
| "start": 162, |
| "end": 185, |
| "text": "Laparra and Rigau 2010;", |
| "ref_id": "BIBREF69" |
| }, |
| { |
| "start": 186, |
| "end": 219, |
| "text": "Gurevych et al. 2012, inter alia)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Unsupervised Semantic Frame Induction", |
| "sec_num": "5." |
| }, |
| { |
| "text": "According to our statistics on the dependency-parsed FrameNet corpus of over 150 thousand sentences (Bauer, F\u00fcrstenau, and Rambow 2012) , the SUBJ and OBJ relationships are the two most common shortest paths between frame-evoking elements (FEEs) and their roles, accounting for 13.5% of the instances of a heavy-tailed distribution of over 11,000 different paths that occur three times or more in the FrameNet data. Although this might seem a simplification that does not cover prepositional phrases and frames filling the roles of other frames in a nested fashion, we argue that the overall frame inventory can be induced on the basis of this restricted set of constructions, leaving other paths and more complex instances for further work. Thus, we expect the triples obtained from such a Web-scale corpus as DepCC (Panchenko et al. 2018a) to cover most core arguments sufficiently. In contrast to recent approaches like the one by Jauhar and Hovy (2017) , the approach we describe in this section induces semantic frames without any supervision, yet captures only two core roles: the subject and the object of a frame triggered by verbal predicates. Note that it is not generally correct to expect that the SVO triples obtained by a dependency parser are necessarily the core arguments of a predicate. Such roles can be implicit, that is, unexpressed in a given context (Schenk and Chiarcos 2016), so additional syntactic relationships between frame elements could be taken into account (Kallmeyer, QasemiZadeh, and Cheung 2018) . Table 13 shows an example of a tricluster of lexical units corresponding to the \"Kidnapping\" frame from FrameNet. We cast the frame induction problem as a triclustering task (Zhao and Zaki 2005; Ignatov et al. 2015) . Triclustering is a generalization of the traditional clustering and biclustering problems (Mirkin 1996, page 144) , aiming at simultaneously clustering objects along three dimensions, namely, subject, verb, and object in our case (cf. Table 13 ). First, triclustering allows us to avoid the prevalent pipelined architecture of frame induction approaches, for example, the one by Kawahara, Peterson, and Palmer (2014) , where two independent clusterings are needed. Second, benchmarking frame induction as triclustering against other methods on dependency triples makes it possible to abstract the evaluation of frame induction algorithms away from other factors, for example, the input corpus or pre-processing steps, thus allowing a fair comparison of different induction models.", |
| "cite_spans": [ |
| { |
| "start": 100, |
| "end": 135, |
| "text": "(Bauer, F\u00fcrstenau, and Rambow 2012)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 811, |
| "end": 835, |
| "text": "(Panchenko et al. 2018a)", |
| "ref_id": "BIBREF94" |
| }, |
| { |
| "start": 932, |
| "end": 954, |
| "text": "Jauhar and Hovy (2017)", |
| "ref_id": "BIBREF58" |
| }, |
| { |
| "start": 1593, |
| "end": 1634, |
| "text": "(Kallmeyer, QasemiZadeh, and Cheung 2018)", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 1697, |
| "end": 1717, |
| "text": "(Zhao and Zaki 2005;", |
| "ref_id": "BIBREF139" |
| }, |
| { |
| "start": 1718, |
| "end": 1738, |
| "text": "Ignatov et al. 2015)", |
| "ref_id": "BIBREF57" |
| }, |
| { |
| "start": 1827, |
| "end": 1850, |
| "text": "(Mirkin 1996, page 144)", |
| "ref_id": null |
| }, |
| { |
| "start": 2116, |
| "end": 2153, |
| "text": "Kawahara, Peterson, and Palmer (2014)", |
| "ref_id": "BIBREF61" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1379, |
| "end": 1387, |
| "text": "Table 13", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 1972, |
| "end": 1980, |
| "text": "Table 13", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Application to Unsupervised Semantic Frame Induction", |
| "sec_num": "5." |
| }, |
| { |
| "text": "We focused on a simple setup for semantic frame induction using two roles and SVO triples, arguing that it still can be useful as frame roles are primarily expressed by subjects and objects, giving rise to semantic structures extracted in an unsupervised way with high coverage. Thus, given a vocabulary V and a set of SVO triples T \u2286 V 3 from a syntactically analyzed corpus, our approach for frame induction, called Triframes, constructs a triple graph and clusters it using the WATSET algorithm described in Section 3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Frame Induction as a Triclustering Task", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Triframes reduces the frame induction problem to a simpler graph clustering problem. The algorithm has three steps: construction, clustering, and extraction. The triple graph construction step, as described in Section 5.1.1, uses a d-dimensional word embedding model v \u2208 V \u2192 v \u2208 R d to embed triples in a dense vector space for establishing edges between them. The graph clustering step, as described in Section 5.1.2, uses a clustering algorithm like WATSET to obtain sets of triples corresponding to the instances of the semantic frames. The final aggregation step, as described in Section 5.1.3, transforms the discovered triple clusters into frame-semantic representations. Triframes is parameterized by the number of nearest neighbors k \u2208 N for establishing edges and a graph clustering algorithm Cluster. The complete pseudocode of Triframes is presented in Algorithm 3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Frame Induction as a Triclustering Task", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "5.1.1 SVO Triple Similarity Graph Construction. We construct the triple graph G = (T, E) in which the triples are connected to each other according to the semantic similarity of their elements: subjects, verbs, and objects. To express similarity, we embed the triples using", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Frame Induction as a Triclustering Task", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Algorithm 3 Unsupervised Semantic Frame Induction from Subject-Verb-Object Triples. Input: a set of SVO triples T \u2286 V^3, an embedding model v \u2208 V \u2192 v \u2208 R^d, the number of nearest neighbors k \u2208 N, a graph clustering algorithm Cluster. Output: a set of triframes F. 1: for all (s, p, o) \u2208 T do 2: t \u2190 s \u2295 p \u2295 o (embed each triple) 3: E \u2190 {(t, t') \u2208 T^2 : t' \u2208 NN_k(t), t' \u2260 t} (construct edges using nearest neighbors) 4: G \u2190 (T, E) 5: F \u2190 \u2205 6: for all C_i \u2208 Cluster(G) do (cluster the graph) 7: f_s \u2190 {s \u2208 V : (s, v, o) \u2208 C_i} (aggregate subjects) 8: f_v \u2190 {v \u2208 V : (s, v, o) \u2208 C_i} (aggregate verbs) 9: f_o \u2190 {o \u2208 V : (s, v, o) \u2208 C_i} (aggregate objects) 10: F \u2190 F \u222a {(f_s, f_v, f_o)} 11: return F", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Frame Induction as a Triclustering Task", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "distributional representations of words. In particular, we use a word embedding model to map every triple t = (s, p, o) \u2208 T to a (3d)-dimensional vector t = s \u2295 p \u2295 o (lines 1-2). Such a representation enables computing the distance between the triples as a whole rather than between their individual elements. The use of distributional models like Skip-Gram (Mikolov et al. 2013) makes it possible to take into account the contextual information of the whole triple. The concatenation of the vectors of the words forming the triples leads to the creation of a |T| \u00d7 3d matrix of triple embeddings. Figure 9 illustrates this idea: We expect structurally similar triples to be located close to each other in this dense vector space, and dissimilar triples to be located far away from each other.", |
| "cite_spans": [ |
| { |
| "start": 361, |
| "end": 382, |
| "text": "(Mikolov et al. 2013)", |
| "ref_id": "BIBREF82" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 593, |
| "end": 601, |
| "text": "Figure 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Frame Induction as a Triclustering Task", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Given a triple t \u2208 T, we denote by NN k (t) \u2286 T \\ {t} the set of the k \u2208 N nearest neighbors of its concatenated embedding in the formed vector space. Then, we use the triple embeddings to generate the undirected graph G = (T, E) by constructing the edge set E \u2286 T 2 . For that, we retrieve the k nearest neighbors of each triple vector t \u2208 R 3d", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Frame Induction as a Triclustering Task", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Concatenation of the vectors corresponding to the triple elements (subjects, verbs, and objects) expresses the structural similarity of the triples. We then establish cosine similarity-weighted edges between the corresponding triples, connecting only the triples that appear among each other's k nearest neighbors (lines 3-4):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "E = {(t, t') \u2208 T 2 : t' \u2208 NN k (t)}", |
| "eq_num": "(12)" |
| } |
| ], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": "As a result, the constructed triple graph G has a clustered structure in which the clusters are sets of SVO triples representing the same frame.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
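Under the assumption of an in-memory word-embedding lookup `emb` and a brute-force cosine k-NN search (the original work may well use an approximate index instead), the triple-graph construction can be sketched as:

```python
import numpy as np

def triple_graph(triples, emb, k):
    """Build the SVO triple graph: nodes are triples, edges connect k-NN triples."""
    # Embed each triple as the concatenation s (+) p (+) o; L2-normalize the rows
    # so that dot products between rows are cosine similarities.
    X = np.array([np.concatenate([emb[s], emb[p], emb[o]]) for s, p, o in triples])
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    sims = X @ X.T                    # pairwise cosine similarities between triples
    np.fill_diagonal(sims, -np.inf)   # a triple is not its own neighbor
    edges = {}
    for i in range(len(triples)):
        for j in np.argsort(-sims[i])[:k]:  # k nearest neighbors of triple i
            edges[frozenset((i, int(j)))] = float(sims[i, j])
    return edges  # undirected, cosine-weighted edges between triple indices
```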
| { |
| "text": "Clustering. We assume that the triples representing similar contexts fill similar roles, which is explicitly encoded by the concatenation of the corresponding vectors of the words constituting the triple (Figure 9) . We use the WATSET algorithm to obtain the clustering of the SVO triple graph G (line 6). As described in Section 3, our algorithm treats the SVO triples as the vertices T of the input graph G = (T, E), induces their senses ( Figure 10) , and constructs an intermediate sense-aware representation that is clustered using a hard clustering algorithm like CW (Biemann 2006) . WATSET is a suitable algorithm for this problem because of its performance on the related synset induction task (Section 4), its fuzzy nature, and its ability to find the number of frames automatically.", |
| "cite_spans": [ |
| { |
| "start": 573, |
| "end": 587, |
| "text": "(Biemann 2006)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 204, |
| "end": 214, |
| "text": "(Figure 9)", |
| "ref_id": null |
| }, |
| { |
| "start": 442, |
| "end": 452, |
| "text": "Figure 10)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Similarity Graph", |
| "sec_num": "5.1.2" |
| }, |
| { |
| "text": "Finally, for each cluster C i \u2208 C, we aggregate the subjects, the verbs, and the objects of the contained triples into separate sets (lines 7-9). As a result, each cluster is transformed into a triframe, which is a triple that is composed of the subjects f s \u2286 V, the verbs f v \u2286 V, and the objects f o \u2286 V. For example, the triples shown in Figure 9 will form a triframe ({man, people, woman}, {make, earn}, {profit, money}).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 342, |
| "end": 350, |
| "text": "Figure 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Aggregating Triframes.", |
| "sec_num": "5.1.3" |
| }, |
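The aggregation step that turns each triple cluster into a triframe (lines 7-10 of Algorithm 3) can be sketched as follows; the function name is ours.

```python
def aggregate_triframes(clusters):
    """Turn each cluster of SVO triples into a triframe (subjects, verbs, objects)."""
    frames = []
    for cluster in clusters:
        subjects = {s for s, v, o in cluster}  # project onto the subject slot
        verbs = {v for s, v, o in cluster}     # project onto the verb slot
        objects = {o for s, v, o in cluster}   # project onto the object slot
        frames.append((subjects, verbs, objects))
    return frames
```

On the example from the text, the cluster {(man, make, profit), (people, make, profit), (woman, earn, money)} yields the triframe ({man, people, woman}, {make, earn}, {profit, money}).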
| { |
| "text": "Currently, there is no universally accepted approach for evaluating unsupervised frame induction methods. All the previously developed methods were evaluated in completely different, incomparable setups and used different input corpora (Titov and Klementiev 2012; Materna 2013; O'Connor 2013, etc.) . We propose a unified methodology by treating the complex multi-stage frame induction task as a straightforward triple clustering task.", |
| "cite_spans": [ |
| { |
| "start": 235, |
| "end": 262, |
| "text": "(Titov and Klementiev 2012;", |
| "ref_id": "BIBREF127" |
| }, |
| { |
| "start": 263, |
| "end": 276, |
| "text": "Materna 2013;", |
| "ref_id": "BIBREF78" |
| }, |
| { |
| "start": 277, |
| "end": 297, |
| "text": "O'Connor 2013, etc.)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "5.2.1 Experimental Setup. We compare our method, Triframes WATSET, to several available state-of-the-art baselines applicable to our data set of triples (Section 2.3). LDA-Frames by Materna (2012 Materna ( , 2013 ) is a frame induction method based on topic modeling. Higher-Order Skip-Gram (HOSG) by Cotterell et al. (2017) generalizes the Skip-Gram model (Mikolov et al. 2013) by extending it from word-context co-occurrence matrices to tensors factorized with a polyadic decomposition. In our case, this tensor consisted of SVO triple counts. NOAC by Egurnov, Ignatov, and Mephu Nguifo (2017) is an extension of the Object-Attribute-Condition (OAC) triclustering algorithm by Ignatov et al. (2015) to numerically weighted triples. This incremental algorithm searches for dense regions in triadic data. Also, we use five simple baselines. In the Triadic baselines, independent word embeddings of subject, object, and verb are concatenated and then clustered using k-means (Hartigan and Wong 1979) and spectral clustering (Shi and Malik 2000) . In Triframes CW, instead of WATSET, we use CW, a hard graph clustering algorithm (Biemann 2006) . We also evaluate the performance of Simplified WATSET (Section 3.4). Finally, two trivial baselines are Singletons that creates a single cluster per instance and Whole that creates one cluster for all elements.", |
| "cite_spans": [ |
| { |
| "start": 182, |
| "end": 195, |
| "text": "Materna (2012", |
| "ref_id": "BIBREF77" |
| }, |
| { |
| "start": 196, |
| "end": 212, |
| "text": "Materna ( , 2013", |
| "ref_id": "BIBREF78" |
| }, |
| { |
| "start": 301, |
| "end": 324, |
| "text": "Cotterell et al. (2017)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 357, |
| "end": 378, |
| "text": "(Mikolov et al. 2013)", |
| "ref_id": "BIBREF82" |
| }, |
| { |
| "start": 563, |
| "end": 595, |
| "text": "Ignatov, and Mephu Nguifo (2017)", |
| "ref_id": null |
| }, |
| { |
| "start": 679, |
| "end": 700, |
| "text": "Ignatov et al. (2015)", |
| "ref_id": "BIBREF57" |
| }, |
| { |
| "start": 974, |
| "end": 998, |
| "text": "(Hartigan and Wong 1979)", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 1023, |
| "end": 1043, |
| "text": "(Shi and Malik 2000)", |
| "ref_id": "BIBREF120" |
| }, |
| { |
| "start": 1127, |
| "end": 1141, |
| "text": "(Biemann 2006)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Quality Measure. Following the approach for verb class evaluation by Kawahara, Peterson, and Palmer (2014) , we use normalized modified purity (nmPU) and normalized inverse purity (niPU) as the quality measures for overlapping clusterings. Given the clustering C and the gold clustering C G , normalized modified purity quantifies the clustering precision as the average of the weighted overlap \u03b4 C i (C i \u2229 C j G ) between each cluster C i \u2208 C and the gold cluster C i G \u2208 C G , which maximizes the overlap with C i :", |
| "cite_spans": [ |
| { |
| "start": 69, |
| "end": 106, |
| "text": "Kawahara, Peterson, and Palmer (2014)", |
| "ref_id": "BIBREF61" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "nmPU = 1 |C| |C| i\u2208N:|C i |>1 max 1\u2264j\u2264|C G | \u03b4 C i (C i \u2229 C j G )", |
| "eq_num": "(13)" |
| } |
| ], |
| "section": "Evaluation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "where the weighted overlap is the sum of the weights", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "C i,v for each word v \u2208 C i in i-th cluster: \u03b4 C i (C i \u2229 C j G ) = v\u2208C i \u2229C j G C i,v", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": ". Note that nmPU counts all the singleton clusters as wrong. Similarly, normalized inverse purity (collocation) quantifies the clustering recall:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "niPU = 1 |C G | |G| j=1 max 1\u2264i\u2264|C| \u03b4 C j G (C i \u2229 C j G )", |
| "eq_num": "(14)" |
| } |
| ], |
| "section": "Evaluation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Then, nmPU and niPU are combined together as the harmonic mean to yield the overall clustering F 1 -score, computed as F 1 = 2 nmPU\u2022niPU nmPU+niPU , which we use to rank the approaches.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "5.2" |
| }, |
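The two purity measures and their harmonic mean can be sketched directly from Equations (13) and (14). This is a simplified illustration, assuming unit word weights normalised by cluster size for the weighted overlap δ (the full measure allows arbitrary per-word weights):

```python
def delta(ref, other):
    # Weighted overlap of `ref` with `other`; here with unit word weights
    # normalised by the size of `ref` (a simplifying assumption).
    return len(ref & other) / len(ref)

def nmpu(clusters, gold):
    # Eq. (13): singleton clusters contribute nothing but still count in |C|,
    # so they are effectively penalised as wrong.
    score = sum(max(delta(c, g) for g in gold)
                for c in clusters if len(c) > 1)
    return score / len(clusters)

def nipu(clusters, gold):
    # Eq. (14): recall-like inverse purity, averaged over the gold clusters.
    score = sum(max(delta(g, c) for c in clusters) for g in gold)
    return score / len(gold)

def f1_score(clusters, gold):
    # Harmonic mean of nmPU and niPU, used to rank the approaches.
    p, r = nmpu(clusters, gold), nipu(clusters, gold)
    return 2 * p * r / (p + r) if p + r else 0.0
```

For example, with clusters [{a, b}, {c}] against the single gold cluster {a, b, c}, the singleton {c} is counted as wrong, so nmPU is 0.5 while niPU is 2/3.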
| { |
| "text": "Our framework can be extended to the evaluation of more than two roles by generating more roles per frame. Currently, given a set of gold triples generated from the FrameNet, each triple element has a role-for example, \"Victim,\" \"Predator,\" and \"FEE.\" We use a fuzzy clustering evaluation measure that operates not on triples, but instead on a set of tuples. Consider for instance a gold triple (Freddy: Predator, kidnap: FEE, kid: Victim). It will be converted to three pairs (Freddy, Predator), (kidnap, FEE), (kid, Victim). Each cluster in both C and C G is transformed into a union of all constituent typed pairs. The quality measures are finally calculated between these two sets of tuples corresponding to C and C G . Note that one can easily pull in more than two core roles by adding to this gold standard set of tuples other roles of the frame, e.g., {(forest, Location)}. In our experiments, we focused on two main roles as our contribution is related to the application of triclustering methods. However, if more advanced methods of clustering are used, yielding clusters of arbitrary modality (n-clustering), one could also use our evaluation scheme.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "5.2" |
| }, |
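The conversion from typed triples to sets of (word, role) tuples described above is a simple flattening step; a minimal sketch (the function name is illustrative):

```python
def cluster_to_typed_pairs(cluster_of_triples):
    """Convert a cluster of typed SVO triples into the union of its
    (word, role) tuples, on which the purity measures are then computed."""
    return {pair for triple in cluster_of_triples for pair in triple}
```

The gold triple (Freddy: Predator, kidnap: FEE, kid: Victim) thus becomes the three pairs (Freddy, Predator), (kidnap, FEE), and (kid, Victim); further roles such as (forest, Location) could be added to the same tuple set.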
| { |
| "text": "Testing. Because the normalization term of the quality measures used in this experiment does not allow us to compute a contingency table, we cannot directly apply the McNemar's test or a location test to evaluate the statistical significance of the results as we did in our synset induction experiment (Section 4.2.1). Thus, we have applied a bootstrapping approach for statistical significance evaluation as follows. Given a set of clusters C and a set of gold standard clusters C G , we bootstrap an N-sized distribution of F 1 -scores. On each iteration, we take a sample C with replacements of |C| elements from C. Then, we compute nmPU, niPU, and F 1 on C against the gold standard clustering C G . Finally, for each pair of compared algorithms we use a two-tailed t-test (Welch 1947) from the Apache Commons Math library 31 to assess the significance of the difference in means between the corresponding bootstrap F 1 -score distributions. Thus, we consider the performance of one algorithm to be higher than the performance of another if both the p-value of the t-test is smaller than the significance level of 0.01 and the mean bootstrap F 1 -score of the first method is larger than that of the second. Because of a high computational complexity of bootstrapping (Dror et al. 2018) , we had to limit the value of N to 5,000 in the frame induction experiment and to 10,000 in the verb clustering experiment.", |
| "cite_spans": [ |
| { |
| "start": 777, |
| "end": 789, |
| "text": "(Welch 1947)", |
| "ref_id": "BIBREF136" |
| }, |
| { |
| "start": 1272, |
| "end": 1290, |
| "text": "(Dror et al. 2018)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical", |
| "sec_num": null |
| }, |
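The bootstrap procedure can be sketched as below. This is an illustration, not the paper's implementation: `score_fn` is a hypothetical stand-in for the F1 computation against the gold clustering, and Welch's t statistic is computed directly here rather than through the Apache Commons Math library the paper uses.

```python
import random
import statistics

def bootstrap_scores(clusters, gold, score_fn, n_iter=1000, seed=0):
    # Resample the system clusters with replacement (|C| draws per sample)
    # and re-score each sample against the gold clustering.
    rng = random.Random(seed)
    return [score_fn([rng.choice(clusters) for _ in range(len(clusters))], gold)
            for _ in range(n_iter)]

def welch_t(xs, ys):
    # Welch's t statistic for two independent samples with unequal variances.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    vx, vy = statistics.variance(xs), statistics.variance(ys)
    return (mx - my) / (vx / len(xs) + vy / len(ys)) ** 0.5
```

Two methods would then be compared by testing the difference in means of their bootstrap score distributions at the 0.01 significance level.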
| { |
| "text": "Gold Standard Data Sets. We constructed a gold standard set of triclusters. Each tricluster corresponds to a FrameNet frame, similarly to the one illustrated in Table 13 . We extracted frame annotations from the over 150,000 sentences from FrameNet 1.7 (Baker, Fillmore, and Lowe 1998) . We used the frame, FEE, and argument labels in this data set to generate triples in the form (word i : role 1 , word j : FEE, word k : role 2 ), where word i/j/k corresponds to the roles and FEE in the sentence. We omitted roles expressed by multiple words as we use dependency parses, where one node represents a single word only.", |
| "cite_spans": [ |
| { |
| "start": 253, |
| "end": 285, |
| "text": "(Baker, Fillmore, and Lowe 1998)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 161, |
| "end": 169, |
| "text": "Table 13", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Statistical", |
| "sec_num": null |
| }, |
| { |
| "text": "For the sentences where more than two roles are present, all possible triples were generated. For instance, consider the sentence \"Two men kidnapped a soccer club employee at the train station,\" where \"men\" has the semantic role of Perpetrator, \"employee\" has the semantic role of Victim, \"station\" has the semantic role of Place, and the word \"kidnapped\" is a frame-evoking lexical element (see Figure 8) . In this sentence containing three semantic roles, the following triples will be generated: (men: Perpetrator, kidnap: FEE, employee: Victim), (men: Perpetrator, kidnap: FEE, station: Place), (employee: Victim, kidnap: FEE, station: Place). Sentences with less than two semantic roles were not considered. Finally, for each frame, we selected only two roles that are the most frequently co-occurring in the FrameNet annotated texts. This has left us with about 10 5 instances for the evaluation. For purposes of the evaluation, we operate on the (Korhonen et al. 2003) 246 110 62", |
| "cite_spans": [ |
| { |
| "start": 953, |
| "end": 975, |
| "text": "(Korhonen et al. 2003)", |
| "ref_id": "BIBREF64" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 396, |
| "end": 405, |
| "text": "Figure 8)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Statistical", |
| "sec_num": null |
| }, |
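The triple generation over sentences with two or more annotated roles amounts to enumerating role pairs around the frame-evoking element. A minimal sketch, assuming roles are given as (word, role) pairs (the function name is illustrative):

```python
from itertools import combinations

def generate_triples(fee, roles):
    """Produce every (role_i, FEE, role_j) triple for a sentence with two
    or more annotated semantic roles; sentences with fewer are skipped."""
    if len(roles) < 2:
        return []
    return [(a, fee, b) for a, b in combinations(roles, 2)]
```

Applied to the kidnapping example, with roles (men, Perpetrator), (employee, Victim), (station, Place) and the FEE (kidnap, FEE), this yields exactly the three triples listed above.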
| { |
| "text": "intersection of triples from DepCC and FrameNet. Experimenting on the full set of DepCC triples is only possible for several methods that scale well (WATSET, CW, k-means), but is prohibitively expensive for other methods (LDA-Frames, NOAC) because of the input data size combined with the complexity of these algorithms. During prototyping, we found that removing the triples containing pronouns from both the input and the gold standard data set dramatically reduces the number of instances without the change of ranks in the evaluation results. Thus, we decided to perform our experiments on the whole data set without such a filtering. In addition to the frame induction evaluation, where subjects, objects, and verbs are evaluated together, we also used a data set of polysemous verb classes introduced by Korhonen, Krymolowski, and Marx (2003) and used by Kawahara, Peterson, and Palmer (2014) . Statistics of both data sets are summarized in Table 14 . Note that the polysemous verb data set is rather small, whereas the FrameNet triples set is fairly large, enabling reliable comparisons.", |
| "cite_spans": [ |
| { |
| "start": 810, |
| "end": 848, |
| "text": "Korhonen, Krymolowski, and Marx (2003)", |
| "ref_id": "BIBREF64" |
| }, |
| { |
| "start": 861, |
| "end": 898, |
| "text": "Kawahara, Peterson, and Palmer (2014)", |
| "ref_id": "BIBREF61" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 948, |
| "end": 956, |
| "text": "Table 14", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Statistical", |
| "sec_num": null |
| }, |
| { |
| "text": "Input Data. In our evaluation, we use subject-verb-object triples from the DepCC data set (Panchenko et al. 2018a ), 32 which is a dependency-parsed version of the Common Crawl corpus, and the standard 300-dimensional Skip-Gram word embedding model trained on Google News corpus (Mikolov et al. 2013) . All the evaluated algorithms are executed on the same set of triples, eliminating variations due to different corpora or pre-processing.", |
| "cite_spans": [ |
| { |
| "start": 90, |
| "end": 113, |
| "text": "(Panchenko et al. 2018a", |
| "ref_id": "BIBREF94" |
| }, |
| { |
| "start": 279, |
| "end": 300, |
| "text": "(Mikolov et al. 2013)", |
| "ref_id": "BIBREF82" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical", |
| "sec_num": null |
| }, |
| { |
| "text": "2 Parameter Tuning. We tested various hyper-parameters of each of these algorithms and report the best results overall per frame induction algorithm. We run 500 iterations of the LDA-Frames model with the default parameters (Materna 2013) . For HOSG by Cotterell et al. (2017) , we trained three vector arrays (for subjects, verbs, and objects) on the 108,073 SVO triples from the FrameNet corpus, using the implementation provided by the authors. 33 Training was performed with 5 negative samples, 300-dimensional vectors, and 10 epochs. We constructed an embedding of a triple by concatenating embeddings for subjects, verbs, and objects, and clustered them using k-means with the number of clusters set to 10,000 (this value provided the best performance). We tested several configurations of the NOAC method by Egurnov, Ignatov, and Mephu Nguifo (2017) , varying the minimum density of the cluster: The density of 0.25 led to the best results. For our Triframes method, we tried different values of k \u2208 {5, 10, 30, 100}, while the best results were obtained on k = 30 for both Triframes WATSET and CW. Both Triadic baselines show the best results on k = 500.", |
| "cite_spans": [ |
| { |
| "start": 224, |
| "end": 238, |
| "text": "(Materna 2013)", |
| "ref_id": "BIBREF78" |
| }, |
| { |
| "start": 253, |
| "end": 276, |
| "text": "Cotterell et al. (2017)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 448, |
| "end": 450, |
| "text": "33", |
| "ref_id": null |
| }, |
| { |
| "start": 824, |
| "end": 856, |
| "text": "Ignatov, and Mephu Nguifo (2017)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5.2.", |
| "sec_num": null |
| }, |
| { |
| "text": "Frame evaluation results on the triples from the FrameNet 1.7 corpus (Baker, Fillmore, and Lowe 1998) . The results are sorted by descending order of the Frame F 1 -score. Best results are boldfaced and statistically significant results are marked with an asterisk ( * ). Simplified WATSET is denoted as WATSET \u00a7. 5.2.3 Results and Discussion. We perform two experiments to evaluate our approach: (1) a frame induction experiment on the FrameNet annotated corpus by Bauer, F\u00fcrstenau, and Rambow (2012) ; (2) the polysemous verb clustering experiment on the data set by Korhonen, Krymolowski, and Marx (2003) . The first is based on the newly introduced frame induction evaluation scheme (cf. Section 5.2.1). The second one evaluates the quality of verb clusters only on a standard data set from prior work.", |
| "cite_spans": [ |
| { |
| "start": 69, |
| "end": 101, |
| "text": "(Baker, Fillmore, and Lowe 1998)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 466, |
| "end": 501, |
| "text": "Bauer, F\u00fcrstenau, and Rambow (2012)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 569, |
| "end": 607, |
| "text": "Korhonen, Krymolowski, and Marx (2003)", |
| "ref_id": "BIBREF64" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Table 15", |
| "sec_num": null |
| }, |
| { |
| "text": "Frame Induction Experiment. In Table 15 and Figure 11 , the results of the experiment are presented. Triframes based on WATSET clustering outperformed the other methods on both Verb F 1 and overall Frame F 1 . The HOSG-based clustering proved to be the most competitive baseline, yielding decent scores according to all four measures. The NOAC approach captured the frame grouping of slot fillers well but failed to establish good verb clusters. Note that NOAC and HOSG use only the graph of syntactic triples and do not rely on pre-trained word embeddings. This suggests a high complementarity of signals based on distributional similarity and global structure of the triple graph. Finally, the simpler Triadic baselines relying on hard clustering algorithms showed low performance, similar to that of LDA-Frames, justifying the more elaborate WATSET method. Although we, due to computational reasons (Section 5.2.1), have statistically evaluated only Frame F 1 results, we found all the results except HOSG to be statistically significant (p 0.01). Although triples are intuitively less ambiguous than words, still some frequent and generic triples like (she, make, it) can act as hubs in the graph, making it difficult to split it into semantically plausible clusters. The poor results of the CW hard clustering algorithm illustrate this. Because the hubs are ambiguous (i.e., can belong to multiple clusters), the use of the WATSET fuzzy clustering algorithm that splits the hubs by disambiguating them leads to the best results (see Table 15 ). We found that on average, WATSET tends to create smaller clusters than its closest competitors, HOSG and NOAC. For instance, an average frame produced by Triframes WATSET[CW top , CW top ] has 2.87 \u00b1 4.60 subjects, 3.77 \u00b1 16.31 verbs, and 3.27 \u00b1 6.31 objects. NOAC produced on average 8.95 \u00b1 15.05 subjects, 133.94 \u00b1 227.60 verbs, and 15.17 \u00b1 18.37 objects per frame. 
HOSG produced on average 3.00 \u00b1 4.20 subjects, 6.49 \u00b1 12.15 verbs, and 2.81 \u00b1 4.89", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 31, |
| "end": 39, |
| "text": "Table 15", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 44, |
| "end": 53, |
| "text": "Figure 11", |
| "ref_id": null |
| }, |
| { |
| "start": 1538, |
| "end": 1546, |
| "text": "Table 15", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": null |
| }, |
| { |
| "text": "F 1 -score values measured on the FrameNet Corpus (Bauer, F\u00fcrstenau, and Rambow 2012) . Each block corresponds to the top performance of the method in Table 15. objects per frame. We conclude that WATSET was producing smaller clusters in general, which appear to be meaningful yet insufficiently coarse-grained, according to the gold standard verb data set used.", |
| "cite_spans": [ |
| { |
| "start": 50, |
| "end": 85, |
| "text": "(Bauer, F\u00fcrstenau, and Rambow 2012)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 151, |
| "end": 160, |
| "text": "Table 15.", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 11", |
| "sec_num": null |
| }, |
| { |
| "text": "Verb Clustering Experiment. Table 16 presents the evaluation results on the second data set for the best models identified in the first data set. The LDA-Frames yielded the best results with our approach performing comparably in terms of the F 1 -score. We attribute the low performance of the Triframes method based on CW clustering (Triframes CW) to its hard partitioning output, whereas the evaluation data set contains fuzzy clusters. The simplified version of WATSET has statistically significantly outperformed all other approaches. Although the LDA-Frames algorithm showed a higher value of F 1 than the original version of WATSET in this experiment, we found that its sampled F 1 -score is 44.98 \u00b1 0.04, while Triframes WATSET[CW top , CW top ] showed 47.88 \u00b1 0.01. Thus, we infer that our method has demonstrated non-significantly lower performance on this verb clustering task. In turn, the NOAC approach showed significantly worse results than both LDA-Frames and our approach (p 0.01). Different rankings in Tables 15 and 16 also suggest that frame induction cannot simply be treated as verb clustering and requires a separate task.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 28, |
| "end": 36, |
| "text": "Table 16", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 11", |
| "sec_num": null |
| }, |
| { |
| "text": "Manual Evaluation of the Induced Frames. In addition to the experiments based on gold standard lexical resources, we also performed a manual evaluation. In particular, we assessed the quality of the frames produced by the Triframes WATSET[CW top , CW top ] approach using n = 30 nearest neighbors for constructing a triple graph, which showed the best performance during automatic evaluation (Tables 15 and 16) .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 392, |
| "end": 410, |
| "text": "(Tables 15 and 16)", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 11", |
| "sec_num": null |
| }, |
| { |
| "text": "To prepare the data for a manual annotation, we sampled 100 random frames and manually annotated them with three different annotators. For the convenience of the annotators, before drawing a sample we removed pronouns and prepositions from the frame elements while keeping them containing at least two different lexical units. This is to remove rather meaningful triples, for example, (her, make, it), which are, however, present in large amounts in the FrameNet gold standard data set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 11", |
| "sec_num": null |
| }, |
| { |
| "text": "Evaluation results on the data set of polysemous verb classes by Korhonen, Krymolowski, and Marx (2003) . The results are sorted by the descending order of F 1 -score. Best results are boldfaced and statistically significant results are marked with an asterisk ( * ). Simplified WATSET is denoted as WATSET \u00a7. In this study, annotators were instructed to annotate a frame as \"good\" if its elements (SVO) generally make sense together and each element is a reasonable set of lexical units. In total, the annotators judged 63 frames out of 100 to be good with a Fleiss (1971) \u03ba agreement of 0.816. 34 Although this is a rather general definition, the high agreement rate seems to suggest that it still provides a meaningful definition shared across annotators. Figure 12 presents examples of \"good\" frames, that is, those which are labeled as semantically plausible by our annotators. Figure 13 shows examples of \"bad\" frames according to the same criteria. These frames are available for download. 35", |
| "cite_spans": [ |
| { |
| "start": 65, |
| "end": 103, |
| "text": "Korhonen, Krymolowski, and Marx (2003)", |
| "ref_id": "BIBREF64" |
| }, |
| { |
| "start": 560, |
| "end": 573, |
| "text": "Fleiss (1971)", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 596, |
| "end": 598, |
| "text": "34", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 759, |
| "end": 768, |
| "text": "Figure 12", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 883, |
| "end": 892, |
| "text": "Figure 13", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Table 16", |
| "sec_num": null |
| }, |
| { |
| "text": "In this section, we investigate the applicability of our graph clustering technique in another unsupervised resource induction task. The first two experiments investigated the acquisition of two linguistic symbolic structures from two different types of graphsnamely, synsets induced from graphs of synonyms (Section 4) and semantic frames induced from graphs of distributionally related syntactic triples (Section 5). In this section, we show how WATSET can be used to induce a third type of structure, namely, semantic classes from a graph of distributionally related words, also known as a distributional thesaurus (or DT) (see Lin 1998; Biermann and Riedl 2013) . In the context of this article, semantic classes will be considered as semantically plausible groups of words or word senses that have some common semantic feature.", |
| "cite_spans": [ |
| { |
| "start": 631, |
| "end": 640, |
| "text": "Lin 1998;", |
| "ref_id": "BIBREF73" |
| }, |
| { |
| "start": 641, |
| "end": 665, |
| "text": "Biermann and Riedl 2013)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Unsupervised Distributional Semantic Class Induction", |
| "sec_num": "6." |
| }, |
| { |
| "text": "The following sections will provide details of this experiment. In particular, Section 6.1 presents two data sets that are used as gold standard clustering in the experiments. Section 6.2 presents the input graphs that are clustered using our approach to induce semantic structure. Finally, in Section 6.3 results of the experiments are presented and discussed comparing them to the baseline clustering algorithms.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Unsupervised Distributional Semantic Class Induction", |
| "sec_num": "6." |
| }, |
| { |
| "text": "A semantic class is a set of words that share the same semantic feature (Kozareva, Riloff, and Hovy 2008) . Depending on the definition of the notion of the semantic feature, the granularity and sizes of semantic classes may vary greatly. Examples of concrete semantic classes include sets of animals (dog, cat, . . . ), vehicles (car, motorcycle, . . . ), and fruit trees (apple tree, peach tree, . . . ). In this experiment, we use a gold standard derived from a reference lexicographical database, namely, WordNet (Fellbaum 1998 ).", |
| "cite_spans": [ |
| { |
| "start": 72, |
| "end": 105, |
| "text": "(Kozareva, Riloff, and Hovy 2008)", |
| "ref_id": "BIBREF65" |
| }, |
| { |
| "start": 517, |
| "end": 531, |
| "text": "(Fellbaum 1998", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Classes in Lexical Semantic Resources", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "A summary of the noun semantic classes in WordNet supersenses (Ciaramita and Johnson 2003) .", |
| "cite_spans": [ |
| { |
| "start": 62, |
| "end": 90, |
| "text": "(Ciaramita and Johnson 2003)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 14", |
| "sec_num": null |
| }, |
| { |
| "text": "This allows us to benchmark the ability of WATSET to reconstruct the semantic lexicon of such a reliable reference resource that has been widely used in NLP for many decades.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 14", |
| "sec_num": null |
| }, |
| { |
| "text": "6.1.1 WordNet Supersenses. The first data set used in our experiments consists of 26 broad semantic classes, also known as supersenses in the literature (Ciaramita and Johnson 2003) : person, communication, artifact, act, group, food, cognition, possession, location, substance, state, time, attribute, object, process, tops, phenomenon, event, quantity, motive, animal, body, feeling, shape, plant, and relation.", |
| "cite_spans": [ |
| { |
| "start": 153, |
| "end": 181, |
| "text": "(Ciaramita and Johnson 2003)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 14", |
| "sec_num": null |
| }, |
| { |
| "text": "This system of broad semantic categories was used by lexicographers who originally constructed WordNet to thematically order the synsets; Figure 14 shows the distribution of the 82,115 noun synsets from WordNet 3.1 across the supersenses. In our experiments in this section, these classes are used as gold standard clustering of word senses as recorded in WordNet. One can observe a Zipfian-like power-law (Zipf 1949) distribution with a few clusters, such as artifact and person, accounting for a large fraction of all nouns in the resource. Overall, in this experiment we decided to focus on nouns, as the input distributional thesauri used in this experiment (as presented in Section 6.2) are most studied for modeling of noun semantics (Panchenko et al. 2016b) .", |
| "cite_spans": [ |
| { |
| "start": 406, |
| "end": 417, |
| "text": "(Zipf 1949)", |
| "ref_id": "BIBREF141" |
| }, |
| { |
| "start": 740, |
| "end": 764, |
| "text": "(Panchenko et al. 2016b)", |
| "ref_id": "BIBREF94" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 138, |
| "end": 147, |
| "text": "Figure 14", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 14", |
| "sec_num": null |
| }, |
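Grouping noun synsets into the 26 supersense classes reduces to reading off WordNet's lexicographer file names (e.g., 'noun.animal' for dog.n.01). A minimal pure-Python sketch; `lexnames`, a mapping from synset identifier to lexicographer file name as exposed for instance by NLTK's `Synset.lexname()`, is an assumed input, and the function name is illustrative:

```python
def group_by_supersense(lexnames):
    # Group noun synsets into supersense classes by their lexicographer
    # file names (e.g. 'noun.animal' -> class 'animal').
    classes = {}
    for synset, lexname in lexnames.items():
        pos, supersense = lexname.split('.')
        if pos == 'noun':  # this experiment focuses on nouns
            classes.setdefault(supersense, set()).add(synset)
    return classes
```

The resulting dictionary directly gives the gold standard clustering of word senses into supersenses used in this section.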
| { |
| "text": "The WordNet supersenses were applied later also for word sense disambiguation as a system of broad sense labels (Flekova and Gurevych 2016) . For BabelNet, there is a similar data set called BabelDomains (Camacho-Collados and Navigli 2017) produced by automatically labeling BabelNet synsets with 32 different domains based on the topics of Wikipedia featured articles. Despite the larger size, however, BabelDomains provides only a silver standard (being semi-automatically created). We thus opt in the following to use WordNet supersenses only, because they provide instead a gold standard created by human experts. 6.1.2 Flat Cuts of the WordNet Taxonomy. The second type of semantic classes used in our study are more semantically specific and defined as subtrees of WordNet at some fixed", |
| "cite_spans": [ |
| { |
| "start": 112, |
| "end": 139, |
| "text": "(Flekova and Gurevych 2016)", |
| "ref_id": "BIBREF40" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 14", |
| "sec_num": null |
| }, |
| { |
| "text": "Relationship between the number of semantic classes and path length from the WordNet (Fellbaum 1998 ) root. We have chosen d \u2208 {4, 5, 6} for our experiments.", |
| "cite_spans": [ |
| { |
| "start": 85, |
| "end": 99, |
| "text": "(Fellbaum 1998", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 15", |
| "sec_num": null |
| }, |
| { |
| "text": "path length of d steps from the root node. We used the following procedure to gather these semantic classes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 15", |
| "sec_num": null |
| }, |
| { |
| "text": "First, we find a set of synsets that are located an exact distance of d edges from the root node. Each such starting node (e.g., the synset plant material.n.01) identifies one semantic class. This starting node and all its descendants (e.g., cork.n.01, coca.n.03, ethyl alcohol.n.1, methylated spirit.n.01, and so on, in the case of the plant material example) are included in the semantic class. Finally, we remove semantic classes that contain only one element as our goal is to create a gold standard data set for clustering. Figure 15 illustrates distribution of the number of semantic classes as a function of the path length from the root. As one may observe, the largest number of clusters is obtained for the path length d of 7. In our experiments, we use three versions of these WordNet \"taxonomy cuts,\" which correspond to d \u2208 {4, 5, 6}, because the cluster sizes generated at these levels are already substantially larger than those from the supersense data set while providing a complementary evaluation at different levels of granularities. Although at some levels, such as d = 2, the number of semantic classes is similar to the number of supersenses (Ciaramita and Johnson 2003) , there is no one-to-one relationship between them. As Richardson, Smeaton, and Murphy (1994) point out, this cut-based derivative resource might bias toward the concepts belonging to shallow hierarchies: the node for \"horse\" is 10 levels from the root, whereas the node for \"cow\" is 13 levels deep. However, we believe that it adds an additional perspective to our evaluation while keeping the interpretability at the same time. Examples of the extracted semantic classes are presented in Table 17 .", |
| "cite_spans": [ |
| { |
| "start": 1165, |
| "end": 1193, |
| "text": "(Ciaramita and Johnson 2003)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 1249, |
| "end": 1287, |
| "text": "Richardson, Smeaton, and Murphy (1994)", |
| "ref_id": "BIBREF105" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 529, |
| "end": 538, |
| "text": "Figure 15", |
| "ref_id": null |
| }, |
| { |
| "start": 1684, |
| "end": 1692, |
| "text": "Table 17", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 15", |
| "sec_num": null |
| }, |
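The cut procedure described above can be sketched in pure Python. The toy hypernymy tree, synset names, and the `semantic_classes` helper below are illustrative stand-ins, not the actual WordNet API:

```python
from collections import deque

def semantic_classes(children, root, d):
    """Cut a taxonomy at depth d: every node exactly d edges below the
    root starts one semantic class containing itself and all of its
    descendants; singleton classes are discarded."""
    # BFS from the root to find all nodes at distance exactly d.
    depth = {root: 0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for child in children.get(node, ()):
            if child not in depth:
                depth[child] = depth[node] + 1
                queue.append(child)
    starts = [n for n, k in depth.items() if k == d]

    classes = []
    for start in starts:
        # Collect the starting node and all of its descendants.
        members, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for child in children.get(node, ()):
                if child not in members:
                    members.add(child)
                    queue.append(child)
        if len(members) > 1:  # drop singletons: useless as gold clusters
            classes.append(members)
    return classes

# Toy hypernymy tree with hypothetical synset names.
tree = {
    "entity": ["substance"],
    "substance": ["plant_material"],
    "plant_material": ["cork", "coca"],
}
classes = semantic_classes(tree, "entity", 2)
# one class: plant_material together with its descendants cork and coca
```

At d = 3 the starting nodes are leaves, so every candidate class is a singleton and is discarded, mirroring the filtering step above.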
| { |
| "text": "A distributional thesaurus (Lin 1998) is an undirected graph of semantically related words, with edges such as {Python, Perl}. We base our approach on the distributional hypothesis (Firth 1957; Turney and Pantel 2010; Clark 2015) to generate graphs of semantically related words for this experiment. The graphs represent k nearest neighboring of words that are semantically related to each other in a vector space. More specifically, the dimensions of the vector space represent salient syntactic dependencies of each word extracted using a dependency parser. For this, we use the JoBimText framework for computation of count-based distributional models from raw text collections (Biemann and Riedl 2013) . 36 Although similar graphs could be derived also from neural distributional models, such as Word2Vec (Mikolov et al. 2013) , it was shown in Riedl (2016) and Riedl and Biemann (2017) that the quality of syntactically-based graphs is generally superior.", |
| "cite_spans": [ |
| { |
| "start": 27, |
| "end": 37, |
| "text": "(Lin 1998)", |
| "ref_id": "BIBREF73" |
| }, |
| { |
| "start": 181, |
| "end": 193, |
| "text": "(Firth 1957;", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 194, |
| "end": 217, |
| "text": "Turney and Pantel 2010;", |
| "ref_id": "BIBREF129" |
| }, |
| { |
| "start": 218, |
| "end": 229, |
| "text": "Clark 2015)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 680, |
| "end": 704, |
| "text": "(Biemann and Riedl 2013)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 707, |
| "end": 709, |
| "text": "36", |
| "ref_id": null |
| }, |
| { |
| "start": 808, |
| "end": 829, |
| "text": "(Mikolov et al. 2013)", |
| "ref_id": "BIBREF82" |
| }, |
| { |
| "start": 848, |
| "end": 860, |
| "text": "Riedl (2016)", |
| "ref_id": "BIBREF107" |
| }, |
| { |
| "start": 865, |
| "end": 889, |
| "text": "Riedl and Biemann (2017)", |
| "ref_id": "BIBREF108" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construction of a Distributional Thesaurus", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "The JoBimText framework involves several steps. First, it takes an unlabeled input text corpus and performs dependency parsing so as to extract features representing each word. Each word is represented by a bag of syntactic dependencies such as conj and(Ruby, \u2022) or prep in(code, \u2022), extracted from the dependencies of MaltParser (Nivre, Hall, and Nilsson 2006) , which are further collapsed using the tool by Ruppert et al. (2015) in the notation of Stanford Dependencies (de Marneffe, MacCartney, and Manning 2006).", |
| "cite_spans": [ |
| { |
| "start": 330, |
| "end": 361, |
| "text": "(Nivre, Hall, and Nilsson 2006)", |
| "ref_id": "BIBREF90" |
| }, |
| { |
| "start": 410, |
| "end": 431, |
| "text": "Ruppert et al. (2015)", |
| "ref_id": "BIBREF111" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construction of a Distributional Thesaurus", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Next, semantically related words are computed for each word in the input corpus. Features of each word are weighted and ranked using the Local Mutual Information measure (Evert 2005) . Subsequently, these word representations are pruned, keeping 1,000 most salient features per word (fpw) and 1,000 most salient words per feature (wpf), where fpw and wpf are the parameters specific to the JoBimText framework. The pruning reduces computational complexity and noise. Finally, word similarities are computed as the number of common features for two words. This is, again, followed by a pruning step in which for every word, only the k of 200 most similar terms are kept. The ensemble of all of these words is the distributional thesaurus, which is used in the following experiments. Note that each word in such a thesaurus (i.e., a graph of semantically related words) is potentially ambiguous.", |
| "cite_spans": [ |
| { |
| "start": 170, |
| "end": 182, |
| "text": "(Evert 2005)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construction of a Distributional Thesaurus", |
| "sec_num": "6.2" |
| }, |
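The weighting-pruning-overlap pipeline can be sketched as follows. For brevity, raw feature counts stand in for Local Mutual Information weighting, and the tiny `fpw` and `k` values replace the 1,000 and 200 used in the actual experiment; all words and dependency features are hypothetical:

```python
from collections import Counter

def build_dt(word_features, fpw=2, k=2):
    """Toy distributional thesaurus: keep the fpw most salient features
    per word, score word pairs by the number of shared features, and
    keep the k most similar neighbours per word."""
    # Prune each word's representation to its fpw strongest features.
    pruned = {
        w: set(f for f, _ in Counter(feats).most_common(fpw))
        for w, feats in word_features.items()
    }
    dt = {}
    for w, feats in pruned.items():
        # Similarity of two words = number of common features.
        sims = Counter()
        for v, vfeats in pruned.items():
            if v != w:
                sims[v] = len(feats & vfeats)
        dt[w] = [v for v, s in sims.most_common(k) if s > 0]
    return dt

features = {  # hypothetical collapsed dependency features
    "Python": ["conj_and(Ruby,.)", "prep_in(code,.)"],
    "Perl":   ["conj_and(Ruby,.)", "prep_in(code,.)"],
    "java":   ["prep_in(code,.)", "amod(hot,.)"],
    "coffee": ["amod(hot,.)", "dobj(drink,.)"],
}
dt = build_dt(features)
```

Here "Python" and "Perl" share two features and end up as each other's nearest neighbours, while "java" is linked to both the programming-language and the beverage side of the graph, which is exactly the kind of ambiguity discussed below.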
| { |
| "text": "An example of the lexical unit \"java\" and a part of its neighborhood in a distributional thesaurus. This polysemous word is not disambiguated, so it acts as a hub between two different senses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 16", |
| "sec_num": null |
| }, |
| { |
| "text": "The last stage of the JoBimText approach performs induction of senses, although here we do not use output of this stage, but instead apply the WATSET algorithm to the distributional thesaurus with ambiguous word entries. The process of computation of a distributional thesaurus using the JoBimText framework is described in greater detail in Biemann et al. (2018, Section 4.1) .", |
| "cite_spans": [ |
| { |
| "start": 342, |
| "end": 376, |
| "text": "Biemann et al. (2018, Section 4.1)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 16", |
| "sec_num": null |
| }, |
| { |
| "text": "As an input corpus, we use a text collection of about 9.3 billion tokens that consists of a concatenation of Wikipedia, 37 ukWaC (Ferraresi et al. 2008) , Gigaword (Graff and Cieri 2003) , and LCC (Richter et al. 2006) corpora. Given the large size of these corpora, the graphs are built using an implementation of the JoBimText framework in Apache Spark, 38 which enables efficient distributed computation of large text collection on a distributed computational cluster. 39 Figure 16 shows an example from the obtained distributional thesaurus. As in the experiments described in Sections 4 and 5, we assume that polysemous nodes serve as hubs that connect different unrelated clusters.", |
| "cite_spans": [ |
| { |
| "start": 129, |
| "end": 152, |
| "text": "(Ferraresi et al. 2008)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 164, |
| "end": 186, |
| "text": "(Graff and Cieri 2003)", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 197, |
| "end": 218, |
| "text": "(Richter et al. 2006)", |
| "ref_id": "BIBREF106" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 475, |
| "end": 484, |
| "text": "Figure 16", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 16", |
| "sec_num": null |
| }, |
| { |
| "text": "We cast the semantic class induction problem as a task of clustering distributionally related graphs of words and word senses, which is conceptually similar to our synset induction task in Section 4. Figure 17 shows an example of the sense graph (Section 3.3) built by WATSET before running a global clustering algorithm, which induces the senseaware semantic classes based on the distributional thesaurus example in Figure 16 . ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 200, |
| "end": 209, |
| "text": "Figure 17", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 417, |
| "end": 426, |
| "text": "Figure 16", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "An example of the sense graph built by WATSET for two senses of the lexical unit \"java\" using CW log for local clustering. In contrast to Figure 16 , in this disambiguated distributional thesaurus the node corresponding to the lexical unit \"java\" is split: java 11 is connected to programming languages and java 17 is connected to drinks.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 138, |
| "end": 147, |
| "text": "Figure 16", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 17", |
| "sec_num": null |
| }, |
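The local step that splits the hub node can be sketched as follows. For brevity, connected components of the ego network stand in for the Chinese Whispers clustering that WATSET actually applies, and the toy neighbourhood of "java" is hypothetical:

```python
def split_senses(graph, node):
    """Induce senses of `node` from its ego network: take the subgraph
    over the node's neighbours (with the node itself removed) and
    return its connected components, one per sense.  WATSET uses a hard
    clustering algorithm such as Chinese Whispers here; plain connected
    components are a simplified stand-in."""
    neighbours = set(graph[node])
    senses, seen = [], set()
    for start in graph[node]:
        if start in seen:
            continue
        component, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v in component:
                continue
            component.add(v)
            seen.add(v)
            # Only follow edges that stay inside the ego network.
            stack.extend(u for u in graph[v]
                         if u in neighbours and u not in component)
        senses.append(component)
    return senses

# Toy neighbourhood of the ambiguous word "java" (hypothetical edges).
g = {
    "java":   ["python", "ruby", "coffee", "tea"],
    "python": ["java", "ruby"],
    "ruby":   ["java", "python"],
    "coffee": ["java", "tea"],
    "tea":    ["java", "coffee"],
}
senses = split_senses(g, "java")
# two senses: one for programming languages, one for drinks
```

Removing "java" itself disconnects the programming-language neighbours from the drink neighbours, which corresponds to the split into java 11 and java 17 in the figure.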
| { |
| "text": "6.3.1 Experimental Set-Up. Similarly to our synset induction experiment (Section 4.2.1), we study the performance of clustering algorithms by comparing the clustering of the same input distributional thesaurus to a gold standard clustering. We used the same implementations and algorithms as all other experiments reported in this paper, such as MCL by van Dongen (2000), CW by Biemann (2006) , and MaxMax (Hope and Keller 2013a). We did not evaluate such algorithms as CPM (Palla et al. 2005) and ECO (Gon\u00e7alo Oliveira and Gomes 2014) because of their poor performance shown on the synset induction task.", |
| "cite_spans": [ |
| { |
| "start": 378, |
| "end": 392, |
| "text": "Biemann (2006)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 474, |
| "end": 493, |
| "text": "(Palla et al. 2005)", |
| "ref_id": "BIBREF93" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 17", |
| "sec_num": null |
| }, |
| { |
| "text": "Input Data. We use the distributional thesaurus as described in Section 6.2. Because the original distributional thesaurus graph has approximately 600 million edges, we pruned it by removing all the edges having the minimal weight (i.e., 0.001 in our case). Also, because of the difference in lexicons between the gold standards and the input graph, we performed additional pruning by removing all the edges connecting words missing the gold standard lexicons. As a result, we obtained four different pruned input graphs (Table 18 ). We performed no parameter tuning in this experiment, so we report the bestperforming configuration of each method among other ones.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 521, |
| "end": 530, |
| "text": "(Table 18", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 17", |
| "sec_num": null |
| }, |
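The two pruning steps can be sketched as follows; the edge list, weights, and lexicon below are hypothetical:

```python
def prune(edges, min_weight, lexicon):
    """Keep an edge only if its weight exceeds the minimal weight and
    both endpoints occur in the gold-standard lexicon."""
    return [
        (u, v, w) for u, v, w in edges
        if w > min_weight and u in lexicon and v in lexicon
    ]

edges = [  # hypothetical weighted edges of a distributional thesaurus
    ("python", "perl", 0.4),
    ("python", "snake", 0.001),   # minimal weight: dropped
    ("coffee", "qwerty", 0.2),    # "qwerty" missing from the lexicon
]
lexicon = {"python", "perl", "snake", "coffee"}
pruned = prune(edges, 0.001, lexicon)
# keeps only ("python", "perl", 0.4)
```

Because each gold standard has its own lexicon, running this filter once per gold standard yields one pruned input graph per evaluation data set, matching the four graphs reported in Table 18.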
| { |
| "text": "Gold Standard. We use two different kinds of semantic classes for evaluation purposes. Both of the semantic class types used are based on the WordNet lexical database Table 18 Properties of the input data sets used in the semantic class induction experiment compared with the original distributional thesaurus (DT) by Biemann and Riedl (2013) .", |
| "cite_spans": [ |
| { |
| "start": 318, |
| "end": 342, |
| "text": "Biemann and Riedl (2013)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 167, |
| "end": 175, |
| "text": "Table 18", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 17", |
| "sec_num": null |
| }, |
| { |
| "text": "# of nodes # of edges Unpruned (Biemann and Riedl 2013) 4,430,170 595,916,414 Supersenses (Ciaramita 2003) 37,937 6,944,731 (Fellbaum 1998) yet they have widely different granularities. First, we use the WordNet supersenses data set by Ciaramita and Johnson (2003) . Second, we use our path-based gold standards of lengths 4, 5, and 6, as described in Section 6.1.", |
| "cite_spans": [ |
| { |
| "start": 31, |
| "end": 55, |
| "text": "(Biemann and Riedl 2013)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 90, |
| "end": 106, |
| "text": "(Ciaramita 2003)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 124, |
| "end": 139, |
| "text": "(Fellbaum 1998)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 236, |
| "end": 264, |
| "text": "Ciaramita and Johnson (2003)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DT Pruning Method", |
| "sec_num": null |
| }, |
| { |
| "text": "Quality Measure. In the synset induction experiment (Section 4.2.1) we use the pairwise F 1 -score (Manandhar et al. 2010) as the performance indicator. However, because the average size of a cluster in this experiment is much higher (Table 18 and Figure 14) , we found that the enumeration of 2-combinations of semantic class elements is not computationally tractable in reasonable time on relatively large data sets like the ones we use in this experiment. For example, a cluster of 10,000 elements needs to be transformed into a sufficiently large set of 1 2 \u00d7 10 5 \u00d7 (10 5 \u2212 1) \u2248 5 \u00d7 10 9 pairs, which is inconvenient for processing. Therefore, we used the same quality measure as in our unsupervised lexical semantic frame induction experiment (Section 5.2.1), namely, normalized modified purity (nmPU), and normalized inverse purity (niPU), as defined by Kawahara, Peterson, and Palmer (2014) .", |
| "cite_spans": [ |
| { |
| "start": 99, |
| "end": 122, |
| "text": "(Manandhar et al. 2010)", |
| "ref_id": "BIBREF76" |
| }, |
| { |
| "start": 861, |
| "end": 898, |
| "text": "Kawahara, Peterson, and Palmer (2014)", |
| "ref_id": "BIBREF61" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 234, |
| "end": 243, |
| "text": "(Table 18", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 248, |
| "end": 258, |
| "text": "Figure 14)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "DT Pruning Method", |
| "sec_num": null |
| }, |
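The intractability argument is easy to verify: the pairwise F1-score enumerates all 2-combinations of each cluster, so a single large cluster already produces billions of pairs.

```python
import math

def pair_count(cluster_size):
    """Number of unordered pairs the pairwise F1-score has to enumerate
    for a single cluster: C(n, 2) = n * (n - 1) / 2."""
    return math.comb(cluster_size, 2)

small = pair_count(10)       # 45 pairs: trivially cheap
large = pair_count(10**5)    # about 5e9 pairs for one cluster alone
```

Since the purity-based measures score each cluster element directly instead of enumerating pairs, they avoid this quadratic blow-up.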
| { |
| "text": "Statistical Testing. Because the chosen quality measure does not allow the computation of a contingency table, we use exactly the same procedure for statistical testing as in the experiment on lexical semantic frame induction (Section 5.2.1). Due to a high computational complexity of the bootstrapping statistical testing procedure (Dror et al. 2018) , we limited the number of samples N to 5, 000 in this experiment.", |
| "cite_spans": [ |
| { |
| "start": 333, |
| "end": 351, |
| "text": "(Dror et al. 2018)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DT Pruning Method", |
| "sec_num": null |
| }, |
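The testing procedure can be sketched as a percentile bootstrap over per-item score differences. This is a simplified stand-in for the procedure described by Dror et al. (2018), and the score lists below are hypothetical:

```python
import random

def bootstrap_p(scores_a, scores_b, n_samples=5000, seed=0):
    """One-sided bootstrap test sketch: resample per-item score
    differences with replacement and estimate how often the observed
    mean advantage of system A disappears under resampling."""
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    vanished = 0
    for _ in range(n_samples):
        sample = [rng.choice(diffs) for _ in diffs]
        if sum(sample) / len(sample) <= 0:  # advantage vanished
            vanished += 1
    return vanished / n_samples  # small value => significant advantage

# Hypothetical per-cluster scores of two systems on the same data.
p = bootstrap_p([0.8, 0.7, 0.9, 0.85], [0.6, 0.65, 0.7, 0.6])
```

With N = 5,000 resamples per comparison, the cost grows linearly in both the number of samples and the data set size, which is why the number of samples was capped in this experiment.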
| { |
| "text": "Comparison to Baselines. Table 19 shows the evaluation results on the WordNet supersenses data set. We found that our approach, WATSET[CW lin , CW log ], shows statistically significantly better results with respect to F 1 -score (p 0.01) than all the methods, apart from Simplified WATSET in the same configuration. The experimental results in Table 20 obtained on different variations of our WordNet-based gold standard, as described in Section 6.1, confirm a high performance of WATSET on all the evaluation data sets. Thus, results of experiments on these four types of semantic classes of greatly variable granularity (from 26 classes for the supersenses to 11,274 classes for the flat cut with d = 6) lead to similar conclusions about the advantage of the WATSET approach, as compared to the baseline clustering algorithms. Table 21 shows examples of the obtained semantic classes of various sizes for the best WATSET configuration on the WordNet supersenses data set. During error analysis", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 25, |
| "end": 33, |
| "text": "Table 19", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 345, |
| "end": 353, |
| "text": "Table 20", |
| "ref_id": null |
| }, |
| { |
| "start": 830, |
| "end": 838, |
| "text": "Table 21", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion.", |
| "sec_num": "6.3.2" |
| }, |
| { |
| "text": "Comparison of the graph clustering methods against the WordNet supersenses data set by Ciaramita and Johnson (2003) ; best configurations of each method in terms of F 1 -scores are shown. Results are sorted by F 1 -score; top values of each measure are boldfaced, and statistically significant results are marked with an asterisk ( * ). Simplified WATSET is denoted as WATSET \u00a7. ", |
| "cite_spans": [ |
| { |
| "start": 87, |
| "end": 115, |
| "text": "Ciaramita and Johnson (2003)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Table 19", |
| "sec_num": null |
| }, |
| { |
| "text": "Evaluation results on path-limited versions of WordNet by 4, 5, and 6; best configurations of each method in terms of F 1 -scores are shown. Results are sorted by F 1 -score on the d = 6 WordNet slice; top values of each measure are boldfaced. Simplified WATSET is denoted as WATSET \u00a7. Table 21 Sample semantic classes induced by the WATSET[CW lin , CW log ] method, according to the WordNet supersenses data set by Ciaramita and Johnson (2003) .", |
| "cite_spans": [ |
| { |
| "start": 416, |
| "end": 444, |
| "text": "Ciaramita and Johnson (2003)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 286, |
| "end": 294, |
| "text": "Table 21", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Table 20", |
| "sec_num": null |
| }, |
| { |
| "text": "Size Semantic Class 7 dye, switch-hitter, dimaggio, hitter, gwynn, three-hitter, muser 13 worm, octopus, pike, anguillidae, congridae, conger, anguilliformes, eel, marine, grouper, muraenidae, moray, elver 16 gothic, excelsior, roman, microgramma, stymie, dingbat, italic, century, trajan, outline, twentieth, bodoni, serif, lydian, headline, goudy 20 nickel, steel, alloy, chrome, titanium, cent, farthing, cobalt, brass, denomination, fineness, paisa, copperware, dime, cupronickel, centavo, avo, threepence, coin, centime 23 prochlorperazine, nicotine, tadalafil, billionth, ricin, pravastatin, multivitamin, milligram, anticoagulation, carcinogen, microgram, niacin, l-dopa, lowering, arsenic, morphine, nevirapine, caffeine, ritonavir, aspirin, neostigmine, rem, milliwatt 54 integer, calculus, theta, pyx, curvature, saturation, predicate, . . . 40 more words. . . , viscosity, brightness, variance, lattice, polynomial, rho, determinant 369 electronics, siren, dinky, banjo, luo, shawm, shaker, helicon, rhodes, conducting, . . . 349 more words. . . , narrator, paradiddle, clavichord, chord, consonance, sextet, zither, cantor, viscera, axiom 1,093 egg, pinworm, forager, decidua, psittacus, chimera, coursing, silkworm, spirochete, radicle, . . . 1073 more words. . . , earthworm, annelida, integument, pisum, biter, wilt, heartwood, shellfish, swarm, cryptomonad we found two primary causes of errors: incorrectly identified edges and overly specific sense contexts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": null |
| }, |
| { |
| "text": "Because we performed only a minimal pruning of the input distributional thesaurus, this contains many edges with low weights that typically represent mistakenly recognized relationships between words. Such edges, when appearing between two disjoint meaningful clusters, act as hubs, which WATSET puts in both clusters. For example, a sense graph in Figure 17 has a node soap 18 incorrectly connected to a drinksrelated node java 17 instead of the node java 11 , which is more related to programming languages. 40 Reliable distinction between \"legitimate\" polysemous nodes and incorrectly placed hubs is a direction for future work.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 349, |
| "end": 358, |
| "text": "Figure 17", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": null |
| }, |
| { |
| "text": "The node sense induction approach of WATSET, as described in Section 2.2, takes into account only the neighborhood of the target node, which is a first-order ego network (Everett and Borgatti 2005) . As we observe throughout all the experiments in this article, WATSET tends to produce more fine-grained senses than one might expect. These fine-grained senses, in turn, lead to the global clustering algorithm to include incoherent nodes to clusters as in Table 21 . We believe that taking into account additional features, such as second-order ego networks, to induce coarse-grained senses could potentially improve the overall performance of our algorithm (at a higher computational cost).", |
| "cite_spans": [ |
| { |
| "start": 170, |
| "end": 197, |
| "text": "(Everett and Borgatti 2005)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 456, |
| "end": 464, |
| "text": "Table 21", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": null |
| }, |
| { |
| "text": "We found a generally poor performance of MCL in this experiment due to its tendency to produce fine-grained clusters by isolating hubs from their neighborhoods. Although this behavior improved the results on the synset induction task (Section 4.2.3), our distributional thesaurus is a more complex resource as it expresses semantic relationships other than synonymity, so the incorrectly identified edges affect MCL as well as WATSET.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": null |
| }, |
| { |
| "text": "Impact of Distributional Thesaurus Pruning on Ambiguity. In order to study the effect of pruning, we performed another experiment on a DT that was pruned using a relatively high edge weight threshold of 0.01, which is 10 times larger than the minimal threshold we used in the experiment described in Section 6.3. A manual inspection of the pruned graph showed that most, if not all, nodes were either monosemeous words or proper nouns, so hard clustering algorithms should have an advantage in this scenario. Table 22 confirms that in this setup soft clustering algorithms, such as WATSET and MaxMax, are clearly outperformed by hard clustering algorithms, which are more suitable for processing monosemous word graphs. Because our algorithm explicitly performs node sense induction to produce fine-grained clusters, we found that an average semantic class produced by WATSET[CW top , CW top ] has 10.77 \u00b1 187.37 words, whereas CW log produced semantic classes of 133.46 \u00b1 1, 317.97 words on average.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 509, |
| "end": 517, |
| "text": "Table 22", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": null |
| }, |
| { |
| "text": "To summarize, in contrast with synonymy dictionaries, whose completeness and availability are limited (Section 4.2.3), a distributional thesaurus can be constructed for any language provided with a relatively large text corpus. However, we found that they need to be carefully pruned to reduce the error rate of clustering algorithms (Panchenko et al. 2018b ).", |
| "cite_spans": [ |
| { |
| "start": 334, |
| "end": 357, |
| "text": "(Panchenko et al. 2018b", |
| "ref_id": "BIBREF94" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": null |
| }, |
| { |
| "text": "Comparison of the graph clustering methods on the pruned DT with an edge threshold of 0.01 against the WordNet supersenses data set by Ciaramita and Johnson (2003) ; best configurations of each method in terms of F 1 -scores are shown. Results are sorted by F 1 -score; top values of each measure are boldfaced. Simplified WATSET is denoted as WATSET \u00a7. ", |
| "cite_spans": [ |
| { |
| "start": 135, |
| "end": 163, |
| "text": "Ciaramita and Johnson (2003)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Table 22", |
| "sec_num": null |
| }, |
| { |
| "text": "In this article, we presented WATSET, a generic meta-algorithm for fuzzy graph clustering. This algorithm creates an intermediate representation of the input graph that naturally reflects the \"ambiguity\" of its nodes. Then, it uses hard clustering to discover clusters in this \"disambiguated\" intermediate graph. This enables straightforward semantic-aware grouping of relevant objects together. We refer to WATSET as a metaalgorithm because it does not perform graph clustering per se. Instead, it encapsulates the existing clustering algorithms and builds a sense-aware representation of the input graph, which we call a sense graph. Although we use the sense graph in this article exclusively for clustering, we believe that it can be useful for more applications. The experiments show that our algorithm performs fuzzy graph clustering with a high accuracy. This is empirically confirmed by successfully applying WATSET to complex language processing, including tasks like unsupervised induction of synsets from a synonymy graph, semantic frames from dependency triples, as well as semantic class induction from a distributional thesaurus. In all cases, the algorithm successfully handled the ambiguity of underlying linguistic objects, yielding the state-of-the-art results in the respective tasks. WATSET is computationally tractable and its local steps can easily be run in parallel.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7." |
| }, |
| { |
| "text": "As future work we plan to apply WATSET to other types of linguistic networks to address more natural language processing tasks, such as taxonomy induction based on networks of noisy hypernyms extracted from text (Panchenko et al. 2016a) . Additionally, an interesting future challenge is the development of a scalable graph clustering algorithm that can natively run in a parallel distributed manner (e.g., on a large distributed computational cluster). The currently available algorithms, such as MCL (van Dongen 2000) and CW (Biemann 2006) , cannot be trivially implemented in such a fully distributed environment, limiting the scale of language graph they can be applied to. Another direction of future work is using WATSET in downstream applications. We believe that our algorithm can successfully detect structure in a wide range of different linguistic and non-linguistic data sets, which can help in processing out-ofvocabulary items or resource-poor languages or domains without explicit supervision.", |
| "cite_spans": [ |
| { |
| "start": 212, |
| "end": 236, |
| "text": "(Panchenko et al. 2016a)", |
| "ref_id": "BIBREF94" |
| }, |
| { |
| "start": 527, |
| "end": 541, |
| "text": "(Biemann 2006)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7." |
| }, |
| { |
| "text": "Implementation. We offer an efficient open source multi-threaded implementation of WATSET (Algorithm 1) in the Java programming language. 41 It uses a thread pool to simultaneously perform local steps, such as node sense induction (lines 1-9, one word per thread) and context disambiguation (lines 11-15, one sense per thread). Our implementation includes Simplified WATSET (Algorithm 2) and also features both a command-line interface and an application programming interface for integration into other graph and language processing pipelines in a generic way. Additionally, we bundle with it our own implementations of Markov Clustering (van Dongen 2000), Chinese Whispers (Biemann 2006) , and MaxMax (Hope and Keller 2013a) algorithms. Also, we offer an implementation of the Triframes frame induction approach 42 and an implementation of the semantic class induction approach. 43 The data sets produced during this study are available on Zenodo. 44 ", |
| "cite_spans": [ |
| { |
| "start": 138, |
| "end": 140, |
| "text": "41", |
| "ref_id": null |
| }, |
| { |
| "start": 675, |
| "end": 689, |
| "text": "(Biemann 2006)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 881, |
| "end": 883, |
| "text": "43", |
| "ref_id": null |
| }, |
| { |
| "start": 950, |
| "end": 952, |
| "text": "44", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7." |
| }, |
| { |
| "text": "This article builds upon and expands on andUstalov et al. (2018).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://ontopt.dei.uc.pt.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://tac.nist.gov/2010/Summarization.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://web.archive.org/web/20110327090414/http://labs.google.com/sets. 5 A simple graph has no loops, i.e., u = v, \u2200{u, v} \u2208 E. We use this property for context disambiguation in Section 3.2.2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For the sake of brevity, by context similarity we mean similarity between context vectors in a sparse vector space model(Salton, Wong, and Yang 1975).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Although MCL can be implemented more efficiently than O(|V| 3 ), cf. van Dongen (2000, page 125), we would like to use the consistent worst case scenario notation for all the mentioned clustering algorithms.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Our survey was based on Mihalcea and Radev (2011), Di Marco and, andLewis and Steedman (2013a).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://wortschatz.uni-leipzig.de/en/download.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.wikipedia.org. 11 http://www.wiktionary.org. 12 http://www.omegawiki.org. 13 https://dkpro.github.io/dkpro-jwktl. 14 https://github.com/componavt/wikokit. 15 https://babelnet.org/synset?word=bn:00008364n.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/uhh-lt/chinese-whispers. 17 https://micans.org/mcl/. 18 https://networkx.github.io.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://www.statsmodels.org/. 20 https://wordnet.princeton.edu. 21 https://www.babelnet.org. 22 https://ruwordnet.ru/en. 23 https://russianword.net/en.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In YARN, an edit operation can be an addition or a removal of a synset element; an average synset in our data set contains 6.77 \u00b1 3.54 words. 25 We used the Wiktionary dumps of February 1, 2017. 26 We used the YARN dumps of February 7, 2017.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://code.google.com/archive/p/word2vec/. 28 https://doi.org/10.5281/zenodo.163857.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We used BabelNet 3.7 extracting all 3,497,327 synsets that were marked as Russian. 30 https://framenet.icsi.berkeley.edu/fndrupal/luIndex.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://commons.apache.org/proper/commons-math/.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://www.inf.uni-hamburg.de/en/inst/ab/lt/resources/data/depcc.html. 33 https://github.com/azpoliak/skip-gram-tensor.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We used the DKPro Agreement toolkit byMeyer et al. (2014) to compute the inter-annotator agreement.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The examples are from the file triw2v-watset-n30-top-top-triples.txt, which is available in the \"Downloads\" section of our GitHub repository at https://github.com/uhh-lt/triframes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.jobimtext.org.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://doi.org/10.5281/zenodo.229904. 38 https://spark.apache.org. 39 https://github.com/uhh-lt/josimtext.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Strictly speaking, SOAP (Simple Object Access Protocol) is not a programming language, so the presence of this node in the graphs demonstrated in Figures 16 and 17 is a mistake.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/nlpub/watset-java.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) foundation under the \"JOIN-T\" and \"ACQuA\" projects, the Deutscher Akademischer Austauschdienst (DAAD), and the Russian Foundation for Basic Research (RFBR) under the project no. 16-37-00354 \u043c\u043e\u043b_\u0430. We also thank Andrew Krizhanovsky for providing a parsed Wiktionary, Natalia Loukachevitch for the provided RuWordNet data set, Mikhail Chernoskutov for early discussions on computational complexity of WATSET, and Denis Shirgin, who actually suggested the WATSET name. Furthermore, we thank Dmitry Egurnov, Dmitry Ignatov, and Dmitry Gnatyshak for help in operating the NOAC method using the multimodal clustering toolbox. We are grateful to Ryan Cotterell and Adam Poliak for a discussion and an implementation of the High-Order Skip Gram (HOSG) method. We thank Bonaventura Coppolla for discussions and preliminary work on graph-based frame induction and Andrei Kutuzov, who conducted experiments with the HOSG-based baseline related to the frame induction experiment. We thank Stefano Faralli for early work on graph-based sense disambiguation. We thank Rotem Dror for discussion of the theoretical background underpinning the statistical testing approach that we use in this article. We are grateful to Federico Nanni and Gregor Wiedemann for proofreading this article. Finally, we thank three anonymous reviewers for insightful comments on the present article.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "\u0421\u043b\u043e\u0432\u0430\u0440\u044c \u0440\u0443\u0441\u0441\u043a\u0438\u0445 \u0441\u0438\u043d\u043e\u043d\u0438\u043c\u043e\u0432 \u0438 \u0441\u0445\u043e\u0434\u043d\u044b\u0445 \u043f\u043e \u0441\u043c\u044b\u0441\u043b\u0443 \u0432\u044b\u0440\u0430\u0436\u0435\u043d\u0438\u0439 [The Dictionary of Russian Synonyms and Semantically Related Expressions", |
| "authors": [ |
| { |
| "first": "Nikolay", |
| "middle": [], |
| "last": "Abramov", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abramov, Nikolay. 1999. \u0421\u043b\u043e\u0432\u0430\u0440\u044c \u0440\u0443\u0441\u0441\u043a\u0438\u0445 \u0441\u0438\u043d\u043e\u043d\u0438\u043c\u043e\u0432 \u0438 \u0441\u0445\u043e\u0434\u043d\u044b\u0445 \u043f\u043e \u0441\u043c\u044b\u0441\u043b\u0443 \u0432\u044b\u0440\u0430\u0436\u0435\u043d\u0438\u0439 [The Dictionary of Russian Synonyms and Semantically Related Expressions], 7th edition. \u0420\u0443\u0441\u0441\u043a\u0438\u0435 \u0441\u043b\u043e\u0432\u0430\u0440\u0438 [Russian Dictionaries], Moscow. In Russian.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Data-driven synset induction and disambiguation for wordnet development", |
| "authors": [ |
| { |
| "first": "Marianna", |
| "middle": [], |
| "last": "Apidianaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Beno\u00eet", |
| "middle": [], |
| "last": "Sagot", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Language Resources and Evaluation", |
| "volume": "48", |
| "issue": "4", |
| "pages": "655--677", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Apidianaki, Marianna and Beno\u00eet Sagot. 2014. Data-driven synset induction and disambiguation for wordnet development. Language Resources and Evaluation, 48(4):655-677.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "The Berkeley FrameNet project", |
| "authors": [ |
| { |
| "first": "Collin", |
| "middle": [ |
| "F" |
| ], |
| "last": "Baker", |
| "suffix": "" |
| }, |
| { |
| "first": "Charles", |
| "middle": [ |
| "J" |
| ], |
| "last": "Fillmore", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [ |
| "B" |
| ], |
| "last": "Lowe", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "86--90", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Baker, Collin F., Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics -Volume 1, pages 86-90, Montr\u00e9al.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Breaking sticks and ambiguities with adaptive skip-gram", |
| "authors": [ |
| { |
| "first": "Sergey", |
| "middle": [], |
| "last": "Bartunov", |
| "suffix": "" |
| }, |
| { |
| "first": "Dmitry", |
| "middle": [], |
| "last": "Kondrashkin", |
| "suffix": "" |
| }, |
| { |
| "first": "Anton", |
| "middle": [], |
| "last": "Osokin", |
| "suffix": "" |
| }, |
| { |
| "first": "Dmitry", |
| "middle": [ |
| "P" |
| ], |
| "last": "Vetrov", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "51", |
| "issue": "", |
| "pages": "130--138", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bartunov, Sergey, Dmitry Kondrashkin, Anton Osokin, and Dmitry P. Vetrov. 2016. Breaking sticks and ambiguities with adaptive skip-gram. Journal of Machine Learning Research, 51:130-138.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The dependency-parsed Framenet corpus", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Bauer", |
| "suffix": "" |
| }, |
| { |
| "first": "Hagen", |
| "middle": [], |
| "last": "F\u00fcrstenau", |
| "suffix": "" |
| }, |
| { |
| "first": "Owen", |
| "middle": [], |
| "last": "Rambow", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "3861--3867", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bauer, Daniel, Hagen F\u00fcrstenau, and Owen Rambow. 2012. The dependency-parsed Framenet corpus. In Proceedings of the Eighth International Conference on Language Resources and Evaluation, pages 3861-3867, Istanbul.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Generating entailment rules from FrameNet", |
| "authors": [ |
| { |
| "first": "Roni", |
| "middle": [], |
| "last": "Ben Aharon", |
| "suffix": "" |
| }, |
| { |
| "first": "Idan", |
| "middle": [], |
| "last": "Szpektor", |
| "suffix": "" |
| }, |
| { |
| "first": "Ido", |
| "middle": [], |
| "last": "Dagan", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the ACL 2010 Conference Short Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "241--246", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ben Aharon, Roni, Idan Szpektor, and Ido Dagan. 2010. Generating entailment rules from FrameNet. In Proceedings of the ACL 2010 Conference Short Papers, pages 241-246, Uppsala.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Chinese Whispers: An efficient graph clustering algorithm and its application to natural language processing problems", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Biemann", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the First Workshop on Graph Based Methods for Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "73--80", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Biemann, Chris. 2006. Chinese Whispers: An efficient graph clustering algorithm and its application to natural language processing problems. In Proceedings of the First Workshop on Graph Based Methods for Natural Language Processing, pages 73-80, New York, NY.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Structure Discovery in Natural Language. Theory and Applications of Natural Language Processing", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Biemann", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Biemann, Chris. 2012. Structure Discovery in Natural Language. Theory and Applications of Natural Language Processing. Springer Berlin Heidelberg.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "A framework for enriching lexical semantic resources with distributional semantics", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Biemann", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefano", |
| "middle": [], |
| "last": "Faralli", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Panchenko", |
| "suffix": "" |
| }, |
| { |
| "first": "Simone", |
| "middle": [ |
| "Paolo" |
| ], |
| "last": "Ponzetto", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Natural Language Engineering", |
| "volume": "24", |
| "issue": "2", |
| "pages": "265--312", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Biemann, Chris, Stefano Faralli, Alexander Panchenko, and Simone Paolo Ponzetto. 2018. A framework for enriching lexical semantic resources with distributional semantics. Natural Language Engineering, 24(2):265-312.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Text: Now in 2D! A framework for lexical expansion with contextual similarity", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Biemann", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Riedl", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Journal of Language Modelling", |
| "volume": "1", |
| "issue": "1", |
| "pages": "55--95", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Biemann, Chris and Martin Riedl. 2013. Text: Now in 2D! A framework for lexical expansion with contextual similarity. Journal of Language Modelling, 1(1):55-95.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "DBpedia-A crystallization point for the Web of Data", |
| "authors": [ |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Bizer", |
| "suffix": "" |
| }, |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Lehmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Georgi", |
| "middle": [], |
| "last": "Kobilarov", |
| "suffix": "" |
| }, |
| { |
| "first": "S\u00f6ren", |
| "middle": [], |
| "last": "Auer", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Becker", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Cyganiak", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Hellmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Journal of Web Semantics", |
| "volume": "7", |
| "issue": "3", |
| "pages": "154--165", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bizer, Christian, Jens Lehmann, Georgi Kobilarov, S\u00f6ren Auer, Christian Becker, Richard Cyganiak, and Sebastian Hellmann. 2009. DBpedia-A crystallization point for the Web of Data. Journal of Web Semantics, 7(3):154-165.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Latent Dirichlet allocation", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [ |
| "M" |
| ], |
| "last": "Blei", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "I" |
| ], |
| "last": "Jordan", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "3", |
| "issue": "", |
| "pages": "993--1022", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Blei, David M., Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Fast unfolding of communities in large networks", |
| "authors": [ |
| { |
| "first": "Vincent", |
| "middle": [ |
| "D" |
| ], |
| "last": "Blondel", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean-Loup", |
| "middle": [], |
| "last": "Guillaume", |
| "suffix": "" |
| }, |
| { |
| "first": "Renaud", |
| "middle": [], |
| "last": "Lambiotte", |
| "suffix": "" |
| }, |
| { |
| "first": "Etienne", |
| "middle": [], |
| "last": "Lefebvre", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Journal of Statistical Mechanics: Theory and Experiment", |
| "volume": "", |
| "issue": "10", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Blondel, Vincent D., Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. 2008. Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 2008(10):P10008.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Multilingual FrameNets in Computational Lexicography: Methods and Applications. Trends in Linguistics. Studies and Monographs", |
| "authors": [ |
| { |
| "first": "Hans", |
| "middle": [ |
| "C" |
| ], |
| "last": "Boas", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Boas, Hans C. 2009. Multilingual FrameNets in Computational Lexicography: Methods and Applications. Trends in Linguistics. Studies and Monographs. Mouton de Gruyter.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Assessing the impact of frame semantics on textual entailment", |
| "authors": [ |
| { |
| "first": "Pavel", |
| "middle": [], |
| "last": "Braslavski", |
| "suffix": "" |
| }, |
| { |
| "first": "Dmitry", |
| "middle": [], |
| "last": "Ustalov", |
| "suffix": "" |
| }, |
| { |
| "first": "Mikhail", |
| "middle": [], |
| "last": "Mukhin", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuri", |
| "middle": [], |
| "last": "Kiselev", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 8th Global WordNet Conference", |
| "volume": "15", |
| "issue": "", |
| "pages": "527--550", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Braslavski, Pavel, Dmitry Ustalov, Mikhail Mukhin, and Yuri Kiselev. 2016. YARN: Spinning-in-progress. In Proceedings of the 8th Global WordNet Conference, pages 58-65, Bucharest. Burchardt, Aljoscha, Marco Pennacchiotti, Stefan Thater, and Manfred Pinkal. 2009. Assessing the impact of frame semantics on textual entailment. Natural Language Engineering, 15(4):527-550.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "BabelDomains: Large-scale domain labeling of lexical resources", |
| "authors": [ |
| { |
| "first": "Jose", |
| "middle": [], |
| "last": "Camacho-Collados", |
| "suffix": "" |
| }, |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 15th Conference of the European Chapter", |
| "volume": "2", |
| "issue": "", |
| "pages": "223--228", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Camacho-Collados, Jose and Roberto Navigli. 2017. BabelDomains: Large-scale domain labeling of lexical resources. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 223-228, Valencia.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "A comparison of graph-based word sense induction clustering algorithms in a pseudoword evaluation framework", |
| "authors": [ |
| { |
| "first": "Flavio", |
| "middle": [ |
| "Massimiliano" |
| ], |
| "last": "Cecchini", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Riedl", |
| "suffix": "" |
| }, |
| { |
| "first": "Elisabetta", |
| "middle": [], |
| "last": "Fersini", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Biemann", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cecchini, Flavio Massimiliano, Martin Riedl, Elisabetta Fersini, and Chris Biemann. 2018. A comparison of graph-based word sense induction clustering algorithms in a pseudoword evaluation framework.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Language Resources and Evaluation", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "733--770", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Language Resources and Evaluation, 733-770.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Probabilistic Frame Induction", |
| "authors": [ |
| { |
| "first": "Jackie", |
| "middle": [ |
| "C K" |
| ], |
| "last": "Cheung", |
| "suffix": "" |
| }, |
| { |
| "first": "Hoifung", |
| "middle": [], |
| "last": "Poon", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucy", |
| "middle": [], |
| "last": "Vanderwende", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "837--846", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cheung, Jackie C. K., Hoifung Poon, and Lucy Vanderwende. 2013. Probabilistic Frame Induction. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 837-846, Atlanta, GA.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Supersense tagging of unknown nouns in WordNet", |
| "authors": [ |
| { |
| "first": "Massimiliano", |
| "middle": [], |
| "last": "Ciaramita", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "168--175", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ciaramita, Massimiliano and Mark Johnson. 2003. Supersense tagging of unknown nouns in WordNet. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 168-175, Sapporo.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Vector Space Models of Lexical Meaning", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Clark, Stephen. 2015. Vector Space Models of Lexical Meaning, 2nd edition. John Wiley & Sons, Inc.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Clustering paraphrases by word sense", |
| "authors": [ |
| { |
| "first": "Anne", |
| "middle": [], |
| "last": "Cocos", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Callison-Burch", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "1463--1472", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cocos, Anne and Chris Callison-Burch. 2016. Clustering paraphrases by word sense. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1463-1472, San Diego, CA.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Explaining and generalizing skip-gram through exponential family principal component analysis", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Cotterell", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Poliak", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 15th Conference of the European Chapter", |
| "volume": "2", |
| "issue": "", |
| "pages": "175--181", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cotterell, Ryan, Adam Poliak, Benjamin Van Durme, and Jason Eisner. 2017. Explaining and generalizing skip-gram through exponential family principal component analysis. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 175-181, Valencia.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Frame-semantic parsing", |
| "authors": [ |
| { |
| "first": "Dipanjan", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Desai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Andr\u00e9", |
| "middle": [ |
| "F", |
| "T" |
| ], |
| "last": "Martins", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathan", |
| "middle": [], |
| "last": "Schneider", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Computational Linguistics", |
| "volume": "40", |
| "issue": "1", |
| "pages": "9--56", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Das, Dipanjan, Desai Chen, Andr\u00e9 F. T. Martins, Nathan Schneider, and Noah A. Smith. 2014. Frame-semantic parsing. Computational Linguistics, 40(1):9-56.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Clustering and diversifying Web search results with graph-based word sense induction", |
| "authors": [ |
| { |
| "first": "Jia", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Dong", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Li-Jia", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Fei-Fei ; De Marneffe", |
| "suffix": "" |
| }, |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Marie-Catherine", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Maccartney", |
| "suffix": "" |
| }, |
| { |
| "first": ". ; Di", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Antonio", |
| "middle": [], |
| "last": "Marco", |
| "suffix": "" |
| }, |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| } |
| ], |
| "year": 1916, |
| "venue": "Proceedings of the Fifth International Conference on Language Resources and Evaluation", |
| "volume": "39", |
| "issue": "", |
| "pages": "709--754", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Deng, Jia, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, Miami Beach, FL. de Marneffe, Marie-Catherine, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the Fifth International Conference on Language Resources and Evaluation, pages 449-454, Genoa. de Saussure, Ferdinand. 1916. Cours de linguistique g\u00e9n\u00e9rale. Payot, Paris, France. Di Marco, Antonio and Roberto Navigli. 2013. Clustering and diversifying Web search results with graph-based word sense induction. Computational Linguistics, 39(3):709-754.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Development of lexical basis for the Universal Dictionary of UNL Concepts", |
| "authors": [ |
| { |
| "first": "Vyachelav", |
| "middle": [ |
| "G" |
| ], |
| "last": "Dikonov", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Computational Linguistics and Intellectual Technologies: Papers from the Annual International Conference", |
| "volume": "12", |
| "issue": "", |
| "pages": "212--221", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dikonov, Vyachelav G. 2013. Development of lexical basis for the Universal Dictionary of UNL Concepts. In Computational Linguistics and Intellectual Technologies: Papers from the Annual International Conference \"Dialogue,\" volume 12(19), pages 212-221, Moscow. van Dongen, Stijn. 2000. Graph Clustering by Flow Simulation. Ph.D. thesis, University of Utrecht.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Discovering corpus-specific word senses", |
| "authors": [ |
| { |
| "first": "Beate", |
| "middle": [], |
| "last": "Dorow", |
| "suffix": "" |
| }, |
| { |
| "first": "Dominic", |
| "middle": [], |
| "last": "Widdows", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the Tenth Conference on European Chapter", |
| "volume": "2", |
| "issue": "", |
| "pages": "79--82", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dorow, Beate and Dominic Widdows. 2003. Discovering corpus-specific word senses. In Proceedings of the Tenth Conference on European Chapter of the Association for Computational Linguistics -Volume 2, pages 79-82, Budapest.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Using curvature and Markov clustering in graphs for lexical acquisition and word sense discrimination", |
| "authors": [ |
| { |
| "first": "Beate", |
| "middle": [], |
| "last": "Dorow", |
| "suffix": "" |
| }, |
| { |
| "first": "Dominic", |
| "middle": [], |
| "last": "Widdows", |
| "suffix": "" |
| }, |
| { |
| "first": "Katarina", |
| "middle": [], |
| "last": "Ling", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean-Pierre", |
| "middle": [], |
| "last": "Eckmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Danilo", |
| "middle": [], |
| "last": "Sergi", |
| "suffix": "" |
| }, |
| { |
| "first": "Elisha", |
| "middle": [], |
| "last": "Moses", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the MEANING-2005 Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dorow, Beate, Dominic Widdows, Katarina Ling, Jean-Pierre Eckmann, Danilo Sergi, and Elisha Moses. 2005. Using curvature and Markov clustering in graphs for lexical acquisition and word sense discrimination. In Proceedings of the MEANING-2005 Workshop, Trento.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "The hitchhiker's guide to testing statistical significance in natural language processing", |
| "authors": [ |
| { |
| "first": "Rotem", |
| "middle": [], |
| "last": "Dror", |
| "suffix": "" |
| }, |
| { |
| "first": "Gili", |
| "middle": [], |
| "last": "Baumer", |
| "suffix": "" |
| }, |
| { |
| "first": "Segev", |
| "middle": [], |
| "last": "Shlomov", |
| "suffix": "" |
| }, |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "31--47", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dror, Rotem, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker's guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383-1392, Melbourne. Egurnov, Dmitry, Dmitry Ignatov, and Engelbert Mephu Nguifo. 2017. Mining triclusters of similar values in triadic real-valued contexts. In 14th International Conference on Formal Concept Analysis - Supplementary Proceedings, pages 31-47, Rennes.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "SHALMANESER -A toolchain for shallow semantic parsing", |
| "authors": [ |
| { |
| "first": "Katrin", |
| "middle": [], |
| "last": "Erk", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Pad\u00f3", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the Fifth International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "527--532", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Erk, Katrin and Sebastian Pad\u00f3. 2006. SHALMANESER -A toolchain for shallow semantic parsing. In Proceedings of the Fifth International Conference on Language Resources and Evaluation, pages 527-532, Genoa.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Ego network betweenness", |
| "authors": [ |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Everett", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [ |
| "P" |
| ], |
| "last": "Borgatti", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Social Networks", |
| "volume": "27", |
| "issue": "1", |
| "pages": "31--38", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Everett, Martin and Stephen P. Borgatti. 2005. Ego network betweenness. Social Networks, 27(1):31-38.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "The Statistics of Word Cooccurrences: Word Pairs and Collocations", |
| "authors": [ |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Evert", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Evert, Stefan. 2005. The Statistics of Word Cooccurrences: Word Pairs and Collocations. Ph.D. thesis, University of Stuttgart.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Linked disambiguated distributional semantic networks", |
| "authors": [ |
| { |
| "first": "Stefano", |
| "middle": [], |
| "last": "Faralli", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Panchenko", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Biemann", |
| "suffix": "" |
| }, |
| { |
| "first": "Simone", |
| "middle": [ |
| "Paolo" |
| ], |
| "last": "Ponzetto", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "The Semantic Web -ISWC 2016: 15th International Semantic Web Conference, Proceedings, Part II", |
| "volume": "", |
| "issue": "", |
| "pages": "56--64", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Faralli, Stefano, Alexander Panchenko, Chris Biemann, and Simone Paolo Ponzetto. 2016. Linked disambiguated distributional semantic networks, In The Semantic Web - ISWC 2016: 15th International Semantic Web Conference, Proceedings, Part II, pages 56-64, Kobe.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Retrofitting word vectors to semantic lexicons", |
| "authors": [ |
| { |
| "first": "Manaal", |
| "middle": [], |
| "last": "Faruqui", |
| "suffix": "" |
| }, |
| { |
| "first": "Jesse", |
| "middle": [], |
| "last": "Dodge", |
| "suffix": "" |
| }, |
| { |
| "first": "Sujay", |
| "middle": [], |
| "last": "Kumar Jauhar", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "1606--1615", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Faruqui, Manaal, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1606-1615, Denver, CO.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "WordNet: An Electronic Database", |
| "authors": [ |
| { |
| "first": "Christiane", |
| "middle": [], |
| "last": "Fellbaum", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fellbaum, Christiane. 1998. WordNet: An Electronic Database. MIT Press.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Introducing and evaluating ukWaC, a very large Web-derived corpus of English", |
| "authors": [ |
| { |
| "first": "Adriano", |
| "middle": [], |
| "last": "Ferraresi", |
| "suffix": "" |
| }, |
| { |
| "first": "Eros", |
| "middle": [], |
| "last": "Zanchetta", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| }, |
| { |
| "first": "Silvia", |
| "middle": [], |
| "last": "Bernardini", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 4th Web as Corpus Workshop (WAC-4): Can We Beat Google?", |
| "volume": "", |
| "issue": "", |
| "pages": "47--54", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ferraresi, Adriano, Eros Zanchetta, Marco Baroni, and Silvia Bernardini. 2008. Introducing and evaluating ukWaC, a very large Web-derived corpus of English. In Proceedings of the 4th Web as Corpus Workshop (WAC-4): Can We Beat Google?, pages 47-54, Marrakech.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "In Frame semantics, In Linguistics in the Morning Calm", |
| "authors": [ |
| { |
| "first": "Charles", |
| "middle": [ |
| "J" |
| ], |
| "last": "Fillmore", |
| "suffix": "" |
| } |
| ], |
| "year": 1982, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "111--137", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fillmore, Charles J. 1982. In Frame semantics, In Linguistics in the Morning Calm, Hanshin Publishing Co., pages 111-137, Seoul.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "A synopsis of linguistic theory 1930-1955", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [ |
| "R" |
| ], |
| "last": "Firth", |
| "suffix": "" |
| } |
| ], |
| "year": 1957, |
| "venue": "Studies in Linguistic Analysis", |
| "volume": "", |
| "issue": "", |
| "pages": "1--32", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Firth, John R. 1957. A synopsis of linguistic theory 1930-1955, In Studies in Linguistic Analysis, Blackwell, Oxford, UK, pages 1-32.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Measuring nominal scale agreement among many raters", |
| "authors": [ |
| { |
| "first": "Joseph", |
| "middle": [ |
| "L" |
| ], |
| "last": "Fleiss", |
| "suffix": "" |
| } |
| ], |
| "year": 1971, |
| "venue": "Psychological Bulletin", |
| "volume": "76", |
| "issue": "5", |
| "pages": "378--382", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fleiss, Joseph L. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378-382.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Supersense embeddings: A unified model for supersense interpretation, prediction, and utilization", |
| "authors": [ |
| { |
| "first": "Lucie", |
| "middle": [], |
| "last": "Flekova", |
| "suffix": "" |
| }, |
| { |
| "first": "Iryna", |
| "middle": [], |
| "last": "Gurevych", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "2029--2041", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Flekova, Lucie and Iryna Gurevych. 2016. Supersense embeddings: A unified model for supersense interpretation, prediction, and utilization. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2029-2041, Berlin.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Community detection in graphs", |
| "authors": [ |
| { |
| "first": "Santo", |
| "middle": [], |
| "last": "Fortunato", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Physics Reports", |
| "volume": "486", |
| "issue": "3", |
| "pages": "75--174", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fortunato, Santo. 2010. Community detection in graphs. Physics Reports, 486(3):75-174.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Automatic labeling of semantic roles", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Computational Linguistics", |
| "volume": "28", |
| "issue": "3", |
| "pages": "245--288", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gildea, Daniel and Martin Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245-288.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "Building large monolingual dictionaries at the Leipzig Corpora Collection: From 100 to 200 languages", |
| "authors": [ |
| { |
| "first": "Dirk", |
| "middle": [], |
| "last": "Goldhahn", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Eckart", |
| "suffix": "" |
| }, |
| { |
| "first": "Uwe", |
| "middle": [], |
| "last": "Quasthoff", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "759--765", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Goldhahn, Dirk, Thomas Eckart, and Uwe Quasthoff. 2012. Building large monolingual dictionaries at the Leipzig Corpora Collection: From 100 to 200 languages. In Proceedings of the Eighth International Conference on Language Resources and Evaluation, pages 759-765, Istanbul.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "ECO and Onto.PT: A flexible approach for creating a Portuguese wordnet automatically", |
| "authors": [ |
| { |
| "first": "Hugo", |
| "middle": [], |
| "last": "Gon\u00e7alo Oliveira", |
| "suffix": "" |
| }, |
| { |
| "first": "Paulo", |
| "middle": [], |
| "last": "Gomes", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Language Resources and Evaluation", |
| "volume": "48", |
| "issue": "2", |
| "pages": "373--393", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gon\u00e7alo Oliveira, Hugo and Paolo Gomes. 2014. ECO and Onto.PT: A flexible approach for creating a Portuguese wordnet automatically. Language Resources and Evaluation, 48(2):373-393.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Web query expansion by WordNet", |
| "authors": [ |
| { |
| "first": "Zhiguo", |
| "middle": [], |
| "last": "Gong", |
| "suffix": "" |
| }, |
| { |
| "first": "Chan", |
| "middle": [], |
| "last": "Wa Cheang", |
| "suffix": "" |
| }, |
| { |
| "first": "Leong Hou", |
| "middle": [], |
| "last": "U", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 16th International Conference on Database and Expert Systems Applications", |
| "volume": "", |
| "issue": "", |
| "pages": "166--175", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gong, Zhiguo, Chan Wa Cheang, and Leong Hou U. 2005. Web query expansion by WordNet. In Proceedings of the 16th International Conference on Database and Expert Systems Applications, pages 166-175, Copenhagen.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "UBY -A large-scale unified lexical-semantic resource based on LMF", |
| "authors": [ |
| { |
| "first": "Iryna", |
| "middle": [], |
| "last": "Gurevych", |
| "suffix": "" |
| }, |
| { |
| "first": "Judith", |
| "middle": [], |
| "last": "Eckle-Kohler", |
| "suffix": "" |
| }, |
| { |
| "first": "Silvana", |
| "middle": [], |
| "last": "Hartmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Matuschek", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [ |
| "M" |
| ], |
| "last": "Meyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Wirth", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "580--590", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Graff, David and Christopher Cieri. 2003. English Gigaword. https: //catalog.ldc.upenn.edu/ldc2003t05. Gurevych, Iryna, Judith Eckle-Kohler, Silvana Hartmann, Michael Matuschek, Christian M. Meyer, and Christian Wirth. 2012. UBY -A large-scale unified lexical-semantic resource based on LMF. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 580-590, Avignon.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "A pattern dictionary for natural language processing", |
| "authors": [ |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Hanks", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Pustejovsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "10", |
| "issue": "", |
| "pages": "63--82", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hanks, Patrick and James Pustejovsky. 2005. A pattern dictionary for natural language processing. Revue Fran\u00e7aise de linguistique appliqu\u00e9e, 10(2):63-82.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "Algorithm AS 136: A k-means clustering algorithm", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [ |
| "A" |
| ], |
| "last": "Hartigan", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "Anthony" |
| ], |
| "last": "Wong", |
| "suffix": "" |
| } |
| ], |
| "year": 1979, |
| "venue": "Journal of the Royal Statistical Society. Series C (Applied Statistics)", |
| "volume": "28", |
| "issue": "1", |
| "pages": "100--108", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hartigan, John A. and M. Anthony Wong. 1979. Algorithm AS 136: A k-means clustering algorithm. Journal of the Royal Statistical Society. Series C (Applied Statistics), 28(1):100-108.", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "Generating Training Data for Semantic Role Labeling based on Label Transfer from Linked Lexical Resources", |
| "authors": [ |
| { |
| "first": "Silvana", |
| "middle": [], |
| "last": "Hartmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Judith", |
| "middle": [], |
| "last": "Eckle-Kohler", |
| "suffix": "" |
| }, |
| { |
| "first": "Iryna", |
| "middle": [], |
| "last": "Gurevych", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "4", |
| "issue": "", |
| "pages": "197--213", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hartmann, Silvana, Judith Eckle-Kohler, and Iryna Gurevych. 2016. Generating Training Data for Semantic Role Labeling based on Label Transfer from Linked Lexical Resources. Transactions of the Association for Computational Linguistics, 4:197-213.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "Automatic acquisition of hyponyms from large text corpora", |
| "authors": [ |
| { |
| "first": "Marti", |
| "middle": [ |
| "A" |
| ], |
| "last": "Hearst", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of the 14th Conference on Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "539--545", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hearst, Marti A. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th Conference on Computational Linguistics -Volume 2, pages 539-545, Nantes.", |
| "links": null |
| }, |
| "BIBREF51": { |
| "ref_id": "b51", |
| "title": "MaxMax: A graph-based soft clustering algorithm applied to word sense induction", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Hope", |
| "suffix": "" |
| }, |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Keller", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Computational Linguistics and Intelligent Text Processing: 14th International Conference, CICLing", |
| "volume": "", |
| "issue": "", |
| "pages": "368--381", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hope, David and Bill Keller. 2013a, MaxMax: A graph-based soft clustering algorithm applied to word sense induction, In Computational Linguistics and Intelligent Text Processing: 14th International Conference, CICLing 2013, pages 368-381, Samos.", |
| "links": null |
| }, |
| "BIBREF52": { |
| "ref_id": "b52", |
| "title": "UoS: A graph-based system for graded word sense induction", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Hope", |
| "suffix": "" |
| }, |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Keller", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation", |
| "volume": "2", |
| "issue": "", |
| "pages": "689--694", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hope, David and Bill Keller. 2013b. UoS: A graph-based system for graded word sense induction. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 689-694, Atlanta, GA.", |
| "links": null |
| }, |
| "BIBREF53": { |
| "ref_id": "b53", |
| "title": "DOs and DON'Ts of conducting performance measurements in Java", |
| "authors": [ |
| { |
| "first": "Vojt\u011bch", |
| "middle": [], |
| "last": "Hork\u00fd", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Libi\u010d", |
| "suffix": "" |
| }, |
| { |
| "first": "Antonin", |
| "middle": [], |
| "last": "Steinhauser", |
| "suffix": "" |
| }, |
| { |
| "first": "Petr", |
| "middle": [], |
| "last": "T\u016fma", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 6th ACM/SPEC International Conference on Performance Engineering", |
| "volume": "", |
| "issue": "", |
| "pages": "337--340", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hork\u00fd, Vojt\u011bch, Peter Libi\u010d, Antonin Steinhauser, and Petr T\u016fma. 2015. DOs and DON'Ts of conducting performance measurements in Java. In Proceedings of the 6th ACM/SPEC International Conference on Performance Engineering, pages 337-340, Austin, TX.", |
| "links": null |
| }, |
| "BIBREF54": { |
| "ref_id": "b54", |
| "title": "Unsupervised discovery of domain-specific knowledge from text", |
| "authors": [ |
| { |
| "first": "Dirk", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Chunliang", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Anselmo", |
| "middle": [], |
| "last": "Pe\u00f1as", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "1466--1475", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hovy, Dirk, Chunliang Zhang, Eduard Hovy, and Anselmo Pe\u00f1as. 2011. Unsupervised discovery of domain-specific knowledge from text. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1466-1475, Portland, OR.", |
| "links": null |
| }, |
| "BIBREF55": { |
| "ref_id": "b55", |
| "title": "Collaboratively built semi-structured content and artificial intelligence: The story so far", |
| "authors": [ |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| }, |
| { |
| "first": "Simone", |
| "middle": [ |
| "Paolo" |
| ], |
| "last": "Ponzetto", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Artificial Intelligence", |
| "volume": "194", |
| "issue": "", |
| "pages": "2--27", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hovy, Eduard, Roberto Navigli, and Simone Paolo Ponzetto. 2013. Collaboratively built semi-structured content and artificial intelligence: The story so far. Artificial Intelligence, 194:2-27.", |
| "links": null |
| }, |
| "BIBREF56": { |
| "ref_id": "b56", |
| "title": "Improving word representations via global context and multiple word prototypes", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [ |
| "H" |
| ], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers", |
| "volume": "1", |
| "issue": "", |
| "pages": "873--882", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Huang, Eric H., Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers -Volume 1, pages 873-882, Jeju Island.", |
| "links": null |
| }, |
| "BIBREF57": { |
| "ref_id": "b57", |
| "title": "Triadic formal concept analysis and triclustering: searching for optimal patterns", |
| "authors": [ |
| { |
| "first": "Dmitry", |
| "middle": [ |
| "I" |
| ], |
| "last": "Ignatov", |
| "suffix": "" |
| }, |
| { |
| "first": "Dmitry", |
| "middle": [ |
| "V" |
| ], |
| "last": "Gnatyshak", |
| "suffix": "" |
| }, |
| { |
| "first": "Sergei", |
| "middle": [ |
| "O" |
| ], |
| "last": "Kuznetsov", |
| "suffix": "" |
| }, |
| { |
| "first": "Boris", |
| "middle": [ |
| "G" |
| ], |
| "last": "Mirkin", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Machine Learning", |
| "volume": "101", |
| "issue": "", |
| "pages": "271--302", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ignatov, Dmitry I., Dmitry V. Gnatyshak, Sergei O. Kuznetsov, and Boris G. Mirkin. 2015. Triadic formal concept analysis and triclustering: searching for optimal patterns. Machine Learning, 101(1-3):271-302.", |
| "links": null |
| }, |
| "BIBREF58": { |
| "ref_id": "b58", |
| "title": "Embedded semantic lexicon induction with joint global and local optimization", |
| "authors": [ |
| { |
| "first": "Sujay", |
| "middle": [ |
| "Kumar" |
| ], |
| "last": "Jauhar", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017)", |
| "volume": "", |
| "issue": "", |
| "pages": "209--219", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jauhar, Sujay Kumar and Eduard Hovy. 2017. Embedded semantic lexicon induction with joint global and local optimization. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017), pages 209-219, Vancouver.", |
| "links": null |
| }, |
| "BIBREF59": { |
| "ref_id": "b59", |
| "title": "SemEval-2013 Task 13: Word sense induction for graded and non-graded senses", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Jurgens", |
| "suffix": "" |
| }, |
| { |
| "first": "Ioannis", |
| "middle": [], |
| "last": "Klapaftis", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation", |
| "volume": "2", |
| "issue": "", |
| "pages": "290--299", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jurgens, David and Ioannis Klapaftis. 2013. SemEval-2013 Task 13: Word sense induction for graded and non-graded senses. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 290-299, Atlanta, GA.", |
| "links": null |
| }, |
| "BIBREF60": { |
| "ref_id": "b60", |
| "title": "Coarse lexical frame acquisition at the syntax-semantics interface using a latent-variable PCFG model", |
| "authors": [ |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Kallmeyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Behrang", |
| "middle": [], |
| "last": "Qasemizadeh", |
| "suffix": "" |
| }, |
| { |
| "first": "Jackie Chi Kit", |
| "middle": [], |
| "last": "Cheung", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics", |
| "volume": "", |
| "issue": "", |
| "pages": "130--141", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kallmeyer, Laura, Behrang QasemiZadeh, and Jackie Chi Kit Cheung. 2018. Coarse lexical frame acquisition at the syntax-semantics interface using a latent-variable PCFG model. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 130-141, New Orleans, LA.", |
| "links": null |
| }, |
| "BIBREF61": { |
| "ref_id": "b61", |
| "title": "A step-wise usage-based method for inducing polysemy-aware verb classes", |
| "authors": [ |
| { |
| "first": "Daisuke", |
| "middle": [], |
| "last": "Kawahara", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [ |
| "W" |
| ], |
| "last": "Peterson", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1030--1040", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kawahara, Daisuke, Daniel W. Peterson, and Martha Palmer. 2014. A step-wise usage-based method for inducing polysemy-aware verb classes. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics Volume 1: Long Papers, pages 1030-1040, Baltimore, MD.", |
| "links": null |
| }, |
| "BIBREF62": { |
| "ref_id": "b62", |
| "title": "\u0421\u043e\u0432\u0440\u0435\u043c\u0435\u043d\u043d\u043e\u0435 \u0441\u043e\u0441\u0442\u043e\u044f\u043d\u0438\u0435 \u044d\u043b\u0435\u043a\u0442\u0440\u043e\u043d\u043d\u044b\u0445 \u0442\u0435\u0437\u0430\u0443\u0440\u0443\u0441\u043e\u0432 \u0440\u0443\u0441\u0441\u043a\u043e\u0433\u043e \u044f\u0437\u044b\u043a\u0430: \u043a\u0430\u0447\u0435\u0441\u0442\u0432\u043e, \u043f\u043e\u043b\u043d\u043e\u0442\u0430 \u0438 \u0434\u043e\u0441\u0442\u0443\u043f\u043d\u043e\u0441\u0442\u044c [Current Status of Russian Electronic Thesauri: Quality, Completeness and Availability", |
| "authors": [ |
| { |
| "first": "Yuri", |
| "middle": [], |
| "last": "Kiselev", |
| "suffix": "" |
| }, |
| { |
| "first": "Sergey", |
| "middle": [ |
| "V" |
| ], |
| "last": "Porshnev", |
| "suffix": "" |
| }, |
| { |
| "first": "Mikhail", |
| "middle": [], |
| "last": "Mukhin", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kiselev, Yuri, Sergey V. Porshnev, and Mikhail Mukhin. 2015. \u0421\u043e\u0432\u0440\u0435\u043c\u0435\u043d\u043d\u043e\u0435 \u0441\u043e\u0441\u0442\u043e\u044f\u043d\u0438\u0435 \u044d\u043b\u0435\u043a\u0442\u0440\u043e\u043d\u043d\u044b\u0445 \u0442\u0435\u0437\u0430\u0443\u0440\u0443\u0441\u043e\u0432 \u0440\u0443\u0441\u0441\u043a\u043e\u0433\u043e \u044f\u0437\u044b\u043a\u0430: \u043a\u0430\u0447\u0435\u0441\u0442\u0432\u043e, \u043f\u043e\u043b\u043d\u043e\u0442\u0430 \u0438 \u0434\u043e\u0441\u0442\u0443\u043f\u043d\u043e\u0441\u0442\u044c [Current Status of Russian Electronic Thesauri: Quality, Completeness and Availability].", |
| "links": null |
| }, |
| "BIBREF64": { |
| "ref_id": "b64", |
| "title": "Clustering polysemic subcategorization frame distributions semantically", |
| "authors": [ |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuval", |
| "middle": [], |
| "last": "Krymolowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Zvika", |
| "middle": [], |
| "last": "Marx", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "64--71", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Korhonen, Anna, Yuval Krymolowski, and Zvika Marx. 2003. Clustering polysemic subcategorization frame distributions semantically. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics -Volume 1, pages 64-71, Sapporo.", |
| "links": null |
| }, |
| "BIBREF65": { |
| "ref_id": "b65", |
| "title": "Semantic class learning from the Web with hyponym pattern linkage graphs", |
| "authors": [ |
| { |
| "first": "Zornitsa", |
| "middle": [], |
| "last": "Kozareva", |
| "suffix": "" |
| }, |
| { |
| "first": "Ellen", |
| "middle": [], |
| "last": "Riloff", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of ACL-08: HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "1048--1056", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kozareva, Zornitsa, Ellen Riloff, and Eduard Hovy. 2008. Semantic class learning from the Web with hyponym pattern linkage graphs. In Proceedings of ACL-08: HLT, pages 1048-1056, Columbus, OH.", |
| "links": null |
| }, |
| "BIBREF66": { |
| "ref_id": "b66", |
| "title": "An approach to automated construction of a general-purpose lexical ontology based on Wiktionary", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [ |
| "A" |
| ], |
| "last": "Krizhanovsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [ |
| "V" |
| ], |
| "last": "Smirnov", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Journal of Computer and Systems Sciences International", |
| "volume": "52", |
| "issue": "2", |
| "pages": "215--225", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Krizhanovsky, Andrew A. and Alexander V. Smirnov. 2013. An approach to automated construction of a general-purpose lexical ontology based on Wiktionary. Journal of Computer and Systems Sciences International, 52(2):215-225.", |
| "links": null |
| }, |
| "BIBREF67": { |
| "ref_id": "b67", |
| "title": "Scaling question answering to the Web", |
| "authors": [ |
| { |
| "first": "Cody", |
| "middle": [], |
| "last": "Kwok", |
| "suffix": "" |
| }, |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [ |
| "S" |
| ], |
| "last": "Weld", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "ACM Transactions on Information Systems", |
| "volume": "19", |
| "issue": "3", |
| "pages": "242--262", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kwok, Cody, Oren Etzioni, and Daniel S. Weld. 2001. Scaling question answering to the Web. ACM Transactions on Information Systems, 19(3):242-262.", |
| "links": null |
| }, |
| "BIBREF68": { |
| "ref_id": "b68", |
| "title": "Unsupervised Induction of Semantic Roles", |
| "authors": [ |
| { |
| "first": "Joel", |
| "middle": [], |
| "last": "Lang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "939--947", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lang, Joel and Mirella Lapata. 2010. Unsupervised Induction of Semantic Roles. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 939-947, Los Angeles, CA.", |
| "links": null |
| }, |
| "BIBREF69": { |
| "ref_id": "b69", |
| "title": "Proceedings of the Seventh International Conference on Language Resources and Evaluation", |
| "authors": [ |
| { |
| "first": "Egoitz", |
| "middle": [], |
| "last": "Laparra", |
| "suffix": "" |
| }, |
| { |
| "first": "German", |
| "middle": [], |
| "last": "Rigau", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "1214--1219", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Laparra, Egoitz and German Rigau. 2010. eXtended WordFrameNet. In Proceedings of the Seventh International Conference on Language Resources and Evaluation, pages 1214-1219, Valletta.", |
| "links": null |
| }, |
| "BIBREF70": { |
| "ref_id": "b70", |
| "title": "Combined distributional and logical semantics", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Transactions of the Association of Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "179--192", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lewis, Mike and Mark Steedman. 2013a. Combined distributional and logical semantics. Transactions of the Association of Computational Linguistics, 1:179-192.", |
| "links": null |
| }, |
| "BIBREF71": { |
| "ref_id": "b71", |
| "title": "Unsupervised induction of cross-lingual semantic relations", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "681--692", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lewis, Mike and Mark Steedman. 2013b. Unsupervised induction of cross-lingual semantic relations. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 681-692, Seattle, WA.", |
| "links": null |
| }, |
| "BIBREF72": { |
| "ref_id": "b72", |
| "title": "Do multi-sense embeddings improve natural language understanding?", |
| "authors": [ |
| { |
| "first": "Jiwei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1722--1732", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Li, Jiwei and Dan Jurafsky. 2015. Do multi-sense embeddings improve natural language understanding? In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1722-1732, Lisbon.", |
| "links": null |
| }, |
| "BIBREF73": { |
| "ref_id": "b73", |
| "title": "An information-theoretic definition of similarity", |
| "authors": [ |
| { |
| "first": "Dekang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the Fifteenth International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "296--304", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lin, Dekang. 1998. An information-theoretic definition of similarity. In Proceedings of the Fifteenth International Conference on Machine Learning, pages 296-304, Madison, WI.", |
| "links": null |
| }, |
| "BIBREF74": { |
| "ref_id": "b74", |
| "title": "Induction of semantic classes from natural language text", |
| "authors": [ |
| { |
| "first": "Dekang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", |
| "volume": "", |
| "issue": "", |
| "pages": "317--322", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lin, Dekang and Patrick Pantel. 2001. Induction of semantic classes from natural language text. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 317-322, San Francisco, CA. Loukachevitch, Natalia V. 2011. \u0422\u0435\u0437\u0430\u0443\u0440\u0443\u0441\u044b \u0432 \u0437\u0430\u0434\u0430\u0447\u0430\u0445 \u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0446\u0438\u043e\u043d\u043d\u043e\u0433\u043e \u043f\u043e\u0438\u0441\u043a\u0430 [Thesauri in Information Retrieval Tasks].", |
| "links": null |
| }, |
| "BIBREF75": { |
| "ref_id": "b75", |
| "title": "Creating Russian WordNet by conversion", |
| "authors": [ |
| { |
| "first": "Natalia", |
| "middle": [ |
| "V" |
| ], |
| "last": "Loukachevitch", |
| "suffix": "" |
| }, |
| { |
| "first": "German", |
| "middle": [], |
| "last": "Lashevich", |
| "suffix": "" |
| }, |
| { |
| "first": "Anastasia", |
| "middle": [ |
| "A" |
| ], |
| "last": "Gerasimova", |
| "suffix": "" |
| }, |
| { |
| "first": "Vladimir", |
| "middle": [ |
| "V" |
| ], |
| "last": "Ivanov", |
| "suffix": "" |
| }, |
| { |
| "first": "Boris", |
| "middle": [ |
| "V" |
| ], |
| "last": "Dobrov", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Computational Linguistics and Intellectual Technologies: Papers from the Annual Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "405--415", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Loukachevitch, Natalia V., German Lashevich, Anastasia A. Gerasimova, Vladimir V. Ivanov, and Boris V. Dobrov. 2016. Creating Russian WordNet by conversion. In Computational Linguistics and Intellectual Technologies: Papers from the Annual Conference \"Dialogue 2016,\" pages 405-415, Moscow.", |
| "links": null |
| }, |
| "BIBREF76": { |
| "ref_id": "b76", |
| "title": "SemEval-2010 Task 14: Word sense induction & disambiguation", |
| "authors": [ |
| { |
| "first": "Suresh", |
| "middle": [], |
| "last": "Manandhar", |
| "suffix": "" |
| }, |
| { |
| "first": "Ioannis", |
| "middle": [], |
| "last": "Klapaftis", |
| "suffix": "" |
| }, |
| { |
| "first": "Dmitriy", |
| "middle": [], |
| "last": "Dligach", |
| "suffix": "" |
| }, |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Pradhan", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 5th International Workshop on Semantic Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "63--68", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Manandhar, Suresh, Ioannis Klapaftis, Dmitriy Dligach, and Sameer Pradhan. 2010. SemEval-2010 Task 14: Word sense induction & disambiguation. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 63-68, Uppsala.", |
| "links": null |
| }, |
| "BIBREF77": { |
| "ref_id": "b77", |
| "title": "LDA-frames: An unsupervised approach to generating semantic frames", |
| "authors": [ |
| { |
| "first": "Ji\u0159\u00ed", |
| "middle": [], |
| "last": "Materna", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "13th International Conference, Proceedings, Part I", |
| "volume": "", |
| "issue": "", |
| "pages": "376--387", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Materna, Ji\u0159\u00ed. 2012. In LDA-frames: An unsupervised approach to generating semantic frames, In 13th International Conference, Proceedings, Part I, pages 376-387, New Delhi.", |
| "links": null |
| }, |
| "BIBREF78": { |
| "ref_id": "b78", |
| "title": "Parameter estimation for LDA-frames", |
| "authors": [ |
| { |
| "first": "Ji\u0159\u00ed", |
| "middle": [], |
| "last": "Materna", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "482--486", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Materna, Ji\u0159\u00ed. 2013. Parameter estimation for LDA-frames. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 482-486, Atlanta, GA.", |
| "links": null |
| }, |
| "BIBREF79": { |
| "ref_id": "b79", |
| "title": "Note on the sampling error of the difference between correlated proportions or percentages", |
| "authors": [ |
| { |
| "first": "Quinn", |
| "middle": [], |
| "last": "McNemar", |
| "suffix": "" |
| } |
| ], |
| "year": 1947, |
| "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
| "volume": "12", |
| "issue": "", |
| "pages": "153--157", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "McIntosh, Tara and James R. Curran. 2009. Reducing semantic drift with bagging and distributional similarity. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 396-404, Suntec. McNemar, Quinn. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153-157.", |
| "links": null |
| }, |
| "BIBREF80": { |
| "ref_id": "b80", |
| "title": "Computing lexical chains with graph clustering", |
| "authors": [ |
| { |
| "first": "Olena", |
| "middle": [], |
| "last": "Medelyan", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 45th Annual Meeting of the ACL: Student Research Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "85--90", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Medelyan, Olena. 2007. Computing lexical chains with graph clustering. In Proceedings of the 45th Annual Meeting of the ACL: Student Research Workshop, pages 85-90, Prague.", |
| "links": null |
| }, |
| "BIBREF81": { |
| "ref_id": "b81", |
| "title": "DKPro agreement: An open-source Java library for measuring inter-rater agreement", |
| "authors": [ |
| { |
| "first": "Christian", |
| "middle": [ |
| "M" |
| ], |
| "last": "Meyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Margot", |
| "middle": [], |
| "last": "Mieskes", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Stab", |
| "suffix": "" |
| }, |
| { |
| "first": "Iryna", |
| "middle": [], |
| "last": "Gurevych", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "105--109", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Meyer, Christian M., Margot Mieskes, Christian Stab, and Iryna Gurevych. 2014. DKPro agreement: An open-source Java library for measuring inter-rater agreement. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: System Demonstrations, pages 105-109, Dublin. Mihalcea, Rada and Dragomir Radev. 2011. Graph-based Natural Language Processing and Information Retrieval. Cambridge University Press, Cambridge, UK.", |
| "links": null |
| }, |
| "BIBREF82": { |
| "ref_id": "b82", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "26", |
| "issue": "", |
| "pages": "3111--3119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mikolov, Tomas, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality, In Advances in Neural Information Processing Systems 26, pages 3111-3119, Las Vegas, NV.", |
| "links": null |
| }, |
| "BIBREF83": { |
| "ref_id": "b83", |
| "title": "Clustering algorithms: A review", |
| "authors": [ |
| { |
| "first": "Boris", |
| "middle": [], |
| "last": "Mirkin", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Mathematical Classification and Clustering", |
| "volume": "", |
| "issue": "", |
| "pages": "109--168", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mirkin, Boris. 1996. Clustering algorithms: A review, In Mathematical Classification and Clustering. Springer US, Boston, MA, pages 109-168.", |
| "links": null |
| }, |
| "BIBREF84": { |
| "ref_id": "b84", |
| "title": "Ranked WordNet graph for sentiment polarity classification in Twitter", |
| "authors": [ |
| { |
| "first": "Ashutosh", |
| "middle": [], |
| "last": "Modi", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Titov", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Klementiev", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure", |
| "volume": "28", |
| "issue": "", |
| "pages": "93--107", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Modi, Ashutosh, Ivan Titov, and Alexandre Klementiev. 2012. Unsupervised induction of frame-semantic representations. In Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure, pages 1-7, Montr\u00e9al. Montejo-R\u00e1ez, Arturo, Eugenio Mart\u00ednez-C\u00e1mara, M. Teresa Mart\u00edn-Valdivia, and L. Alfonso Ure\u00f1a L\u00f3pez. 2014. Ranked WordNet graph for sentiment polarity classification in Twitter. Computer Speech & Language, 28(1):93-107.", |
| "links": null |
| }, |
| "BIBREF85": { |
| "ref_id": "b85", |
| "title": "FrameNet meets the semantic Web: Lexical semantics for the Web", |
| "authors": [ |
| { |
| "first": "Srini", |
| "middle": [], |
| "last": "Narayanan", |
| "suffix": "" |
| }, |
| { |
| "first": "Collin", |
| "middle": [], |
| "last": "Baker", |
| "suffix": "" |
| }, |
| { |
| "first": "Charles", |
| "middle": [], |
| "last": "Fillmore", |
| "suffix": "" |
| }, |
| { |
| "first": "Miriam", |
| "middle": [], |
| "last": "Petruck", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "The Semantic Web -ISWC 2003: Second International Semantic Web Conference, Proceedings", |
| "volume": "", |
| "issue": "", |
| "pages": "771--787", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Narayanan, Srini, Collin Baker, Charles Fillmore, and Miriam Petruck. 2003. FrameNet meets the semantic Web: Lexical semantics for the Web. In The Semantic Web -ISWC 2003: Second International Semantic Web Conference, Proceedings. pages 771-787, Sanibel Island, FL.", |
| "links": null |
| }, |
| "BIBREF86": { |
| "ref_id": "b86", |
| "title": "BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network", |
| "authors": [ |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| }, |
| { |
| "first": "Simone", |
| "middle": [], |
| "last": "Paolo Ponzetto ; Neelakantan", |
| "suffix": "" |
| }, |
| { |
| "first": "Arvind", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeevan", |
| "middle": [], |
| "last": "Shankar", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Passos", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "193", |
| "issue": "", |
| "pages": "1059--1069", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Navigli, Roberto and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence, 193:217-250. Neelakantan, Arvind, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2014. Efficient non-parametric estimation of multiple embeddings per word in vector space. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1059-1069, Doha.", |
| "links": null |
| }, |
| "BIBREF87": { |
| "ref_id": "b87", |
| "title": "Finding and evaluating community structure in networks", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [ |
| "E J" |
| ], |
| "last": "Newman", |
| "suffix": "" |
| }, |
| { |
| "first": "Michelle", |
| "middle": [], |
| "last": "Girvan", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Physical Review E", |
| "volume": "69", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Newman, Mark E. J. and Michelle Girvan. 2004. Finding and evaluating community structure in networks. Physical Review E, 69(2):026113.", |
| "links": null |
| }, |
| "BIBREF88": { |
| "ref_id": "b88", |
| "title": "Semantic class induction and coreference resolution", |
| "authors": [ |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "536--543", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ng, Vincent. 2007. Semantic class induction and coreference resolution. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 536-543, Prague.", |
| "links": null |
| }, |
| "BIBREF89": { |
| "ref_id": "b89", |
| "title": "A simple and efficient method to generate word sense representations", |
| "authors": [ |
| { |
| "first": "Luis", |
| "middle": [], |
| "last": "Nieto Pi\u00f1a", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Johansson", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "465--472", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nieto Pi\u00f1a, Luis and Richard Johansson. 2015. A simple and efficient method to generate word sense representations. In Proceedings of the International Conference Recent Advances in Natural Language Processing, pages 465-472, Hissar.", |
| "links": null |
| }, |
| "BIBREF90": { |
| "ref_id": "b90", |
| "title": "MaltParser: A data-driven parser-generator for dependency parsing", |
| "authors": [ |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Nilsson", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the Fifth International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "2216--2219", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nivre, Joakim, Johan Hall, and Jens Nilsson. 2006. MaltParser: A data-driven parser-generator for dependency parsing. In Proceedings of the Fifth International Conference on Language Resources and Evaluation, pages 2216-2219, Genoa.", |
| "links": null |
| }, |
| "BIBREF91": { |
| "ref_id": "b91", |
| "title": "Learning frames from text with an unsupervised latent variable model", |
| "authors": [ |
| { |
| "first": "Brendan", |
| "middle": [], |
| "last": "O'connor", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Machine Learning Department", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "O'Connor, Brendan. 2013. Learning frames from text with an unsupervised latent variable model. Technical report, Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA.", |
| "links": null |
| }, |
| "BIBREF92": { |
| "ref_id": "b92", |
| "title": "Cross-lingual annotation projection of semantic roles", |
| "authors": [ |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Pad\u00f3", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Journal of Artificial Intelligence Research", |
| "volume": "36", |
| "issue": "1", |
| "pages": "307--340", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pad\u00f3, Sebastian and Mirella Lapata. 2009. Cross-lingual annotation projection of semantic roles. Journal of Artificial Intelligence Research, 36(1):307-340.", |
| "links": null |
| }, |
| "BIBREF93": { |
| "ref_id": "b93", |
| "title": "Uncovering the overlapping community structure of complex networks in nature and society", |
| "authors": [ |
| { |
| "first": "Gergely", |
| "middle": [], |
| "last": "Palla", |
| "suffix": "" |
| }, |
| { |
| "first": "Imre", |
| "middle": [], |
| "last": "Derenyi", |
| "suffix": "" |
| }, |
| { |
| "first": "Illes", |
| "middle": [], |
| "last": "Farkas", |
| "suffix": "" |
| }, |
| { |
| "first": "Tamas", |
| "middle": [], |
| "last": "Vicsek", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Nature", |
| "volume": "435", |
| "issue": "", |
| "pages": "814--818", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Palla, Gergely, Imre Derenyi, Illes Farkas, and Tamas Vicsek. 2005. Uncovering the overlapping community structure of complex networks in nature and society. Nature, 435:814-818.", |
| "links": null |
| }, |
| "BIBREF94": { |
| "ref_id": "b94", |
| "title": "Dmitry Ustalov, Nikolay Arefyev, Denis Paperno, Natalia Konstantinova, Natalia Loukachevitch, and Chris Biemann. 2017. Human and machine judgements for Russian semantic relatedness", |
| "authors": [ |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Panchenko", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefano", |
| "middle": [], |
| "last": "Faralli", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugen", |
| "middle": [], |
| "last": "Ruppert", |
| "suffix": "" |
| }, |
| { |
| "first": "Steffen", |
| "middle": [], |
| "last": "Remus", |
| "suffix": "" |
| }, |
| { |
| "first": "Hubert", |
| "middle": [], |
| "last": "Naets", |
| "suffix": "" |
| }, |
| { |
| "first": "Cedrick", |
| "middle": [], |
| "last": "Fairon", |
| "suffix": "" |
| }, |
| { |
| "first": "Simone", |
| "middle": [ |
| "Paolo" |
| ], |
| "last": "Ponzetto", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Biemann", |
| "suffix": "" |
| }, |
| { |
| "first": ";", |
| "middle": [], |
| "last": "Panchenko", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugen", |
| "middle": [], |
| "last": "Ruppert", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefano", |
| "middle": [], |
| "last": "Faralli", |
| "suffix": "" |
| }, |
| { |
| "first": ";", |
| "middle": [], |
| "last": "Miyazaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Panchenko", |
| "suffix": "" |
| }, |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Simon", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Riedl", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Biemann", |
| "suffix": "" |
| }, |
| { |
| "first": ";", |
| "middle": [], |
| "last": "Panchenko", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Dmitry", |
| "middle": [], |
| "last": "Ustalov", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefano", |
| "middle": [], |
| "last": "Faralli", |
| "suffix": "" |
| }, |
| { |
| "first": "Simone", |
| "middle": [ |
| "Paolo" |
| ], |
| "last": "Ponzetto", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Biemann", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Analysis of Images, Social Networks and Texts: 5th International Conference, Revised Selected Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "1541--1551", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Panchenko, Alexander, Stefano Faralli, Eugen Ruppert, Steffen Remus, Hubert Naets, Cedrick Fairon, Simone Paolo Ponzetto, and Chris Biemann. 2016a. TAXI at SemEval-2016 Task 13: A taxonomy induction method based on lexico-syntactic patterns, substrings and focused crawling. In Proceedings of the 10th International Workshop on Semantic Evaluation, pages 1320-1327, San Diego, CA. Panchenko, Alexander, Eugen Ruppert, Stefano Faralli, Simone Paolo Ponzetto, and Chris Biemann. 2018a. Building a Web-scale dependency-parsed corpus from common crawl. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, pages 1816-1823, Miyazaki. Panchenko, Alexander, Johannes Simon, Martin Riedl, and Chris Biemann. 2016b. Noun sense induction and disambiguation using graph-based distributional semantics. In Proceedings of the 13th Conference on Natural Language Processing, pages 192-202, Bochum. Panchenko, Alexander, Dmitry Ustalov, Nikolay Arefyev, Denis Paperno, Natalia Konstantinova, Natalia Loukachevitch, and Chris Biemann. 2017. Human and machine judgements for Russian semantic relatedness. In Analysis of Images, Social Networks and Texts: 5th International Conference, Revised Selected Papers, pages 221-235, Yekaterinburg. Panchenko, Alexander, Dmitry Ustalov, Stefano Faralli, Simone Paolo Ponzetto, and Chris Biemann. 2018b. Improving hypernymy extraction with distributional semantic classes. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, pages 1541-1551, Miyazaki.", |
| "links": null |
| }, |
| "BIBREF95": { |
| "ref_id": "b95", |
| "title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts", |
| "authors": [ |
| { |
| "first": "Bo", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "Lillian", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics Main Volume", |
| "volume": "", |
| "issue": "", |
| "pages": "938--947", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pang, Bo and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics Main Volume, pages 271-278, Barcelona. Pantel, Patrick, Eric Crestan, Arkady Borkovsky, Ana-Maria Popescu, and Vishnu Vyas. 2009. Web-scale distributional similarity and entity set expansion. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 938-947, Singapore.", |
| "links": null |
| }, |
| "BIBREF96": { |
| "ref_id": "b96", |
| "title": "Discovering word senses from text", |
| "authors": [ |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| }, |
| { |
| "first": "Dekang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", |
| "volume": "", |
| "issue": "", |
| "pages": "613--619", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pantel, Patrick and Dekang Lin. 2002. Discovering word senses from text. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 613-619, Edmonton.", |
| "links": null |
| }, |
| "BIBREF97": { |
| "ref_id": "b97", |
| "title": "Automatically labeling semantic classes", |
| "authors": [ |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| }, |
| { |
| "first": "Deepak", |
| "middle": [], |
| "last": "Ravichandran", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "321--328", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pantel, Patrick and Deepak Ravichandran. 2004. Automatically labeling semantic classes. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 321-328, Boston, MA.", |
| "links": null |
| }, |
| "BIBREF98": { |
| "ref_id": "b98", |
| "title": "PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification", |
| "authors": [ |
| { |
| "first": "Ellie", |
| "middle": [], |
| "last": "Pavlick", |
| "suffix": "" |
| }, |
| { |
| "first": "Pushpendre", |
| "middle": [], |
| "last": "Rastogi", |
| "suffix": "" |
| }, |
| { |
| "first": "Juri", |
| "middle": [], |
| "last": "Ganitkevitch", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Callison-Burch", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "2", |
| "issue": "", |
| "pages": "425--430", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pavlick, Ellie, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 425-430, Beijing.", |
| "links": null |
| }, |
| "BIBREF99": { |
| "ref_id": "b99", |
| "title": "Distinguishing word senses in untagged text", |
| "authors": [ |
| { |
| "first": "Ted", |
| "middle": [], |
| "last": "Pedersen", |
| "suffix": "" |
| }, |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Bruce", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the Second Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "197--207", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pedersen, Ted and Rebecca Bruce. 1997. Distinguishing word senses in untagged text. In Proceedings of the Second Conference on Empirical Methods in Natural Language Processing, pages 197-207, Providence, RI.", |
| "links": null |
| }, |
| "BIBREF100": { |
| "ref_id": "b100", |
| "title": "Making sense of word embeddings", |
| "authors": [ |
| { |
| "first": "Maria", |
| "middle": [], |
| "last": "Pelevina", |
| "suffix": "" |
| }, |
| { |
| "first": "Nikolay", |
| "middle": [], |
| "last": "Arefiev", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Biemann", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Panchenko", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 1st Workshop on Representation Learning for NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "174--183", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pelevina, Maria, Nikolay Arefiev, Chris Biemann, and Alexander Panchenko. 2016. Making sense of word embeddings. In Proceedings of the 1st Workshop on Representation Learning for NLP, pages 174-183, Berlin.", |
| "links": null |
| }, |
| "BIBREF101": { |
| "ref_id": "b101", |
| "title": "GloVe: Global vectors for word representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1532--1543", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pennington, Jeffrey, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532-1543, Doha.", |
| "links": null |
| }, |
| "BIBREF102": { |
| "ref_id": "b102", |
| "title": "Deep contextualized word representations", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Peters", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Neumann", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Iyyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Gardner", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "2227--2237", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peters, Matthew, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, LA.", |
| "links": null |
| }, |
| "BIBREF103": { |
| "ref_id": "b103", |
| "title": "Semantic lexicon induction from Twitter with pattern relatedness and flexible term length", |
| "authors": [ |
| { |
| "first": "Ashequl", |
| "middle": [], |
| "last": "Qadir", |
| "suffix": "" |
| }, |
| { |
| "first": "Pablo", |
| "middle": [ |
| "N" |
| ], |
| "last": "Mendes", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gruhl", |
| "suffix": "" |
| }, |
| { |
| "first": "Neal", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI-15", |
| "volume": "", |
| "issue": "", |
| "pages": "2432--2439", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Qadir, Ashequl, Pablo N. Mendes, Daniel Gruhl, and Neal Lewis. 2015. Semantic lexicon induction from Twitter with pattern relatedness and flexible term length. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI-15, pages 2432-2439, Austin, TX.", |
| "links": null |
| }, |
| "BIBREF104": { |
| "ref_id": "b104", |
| "title": "Multi-prototype vector-space models of word meaning", |
| "authors": [ |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Reisinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "109--117", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Reisinger, Joseph and Raymond J. Mooney. 2010. Multi-prototype vector-space models of word meaning. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 109-117, Los Angeles, CA.", |
| "links": null |
| }, |
| "BIBREF105": { |
| "ref_id": "b105", |
| "title": "Using WordNet as a knowledge base for measuring semantic similarity between words. Working Paper CA-1294", |
| "authors": [ |
| { |
| "first": "Ray", |
| "middle": [], |
| "last": "Richardson", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [ |
| "F" |
| ], |
| "last": "Smeaton", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Murphy", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richardson, Ray, Alan F. Smeaton, and John Murphy. 1994. Using WordNet as a knowledge base for measuring semantic similarity between words. Working Paper CA-1294, School of Computer Applications, Dublin City University, Dublin, Ireland.", |
| "links": null |
| }, |
| "BIBREF106": { |
| "ref_id": "b106", |
| "title": "Exploiting the Leipzig corpora collection", |
| "authors": [ |
| { |
| "first": "Matthias", |
| "middle": [], |
| "last": "Richter", |
| "suffix": "" |
| }, |
| { |
| "first": "Uwe", |
| "middle": [], |
| "last": "Quasthoff", |
| "suffix": "" |
| }, |
| { |
| "first": "Erla", |
| "middle": [], |
| "last": "Hallsteinsd\u00f3ttir", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Biemann", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of 5th Slovenian and 1st International Language Technologies Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richter, Matthias, Uwe Quasthoff, Erla Hallsteinsd\u00f3ttir, and Chris Biemann. 2006. Exploiting the Leipzig corpora collection. In Proceedings of 5th Slovenian and 1st International Language Technologies Conference, Ljubljana. http://nl.ijs.si/isjt06/proc/13_Richter.pdf", |
| "links": null |
| }, |
| "BIBREF107": { |
| "ref_id": "b107", |
| "title": "Unsupervised Methods for Learning and Using Semantics of Natural Language", |
| "authors": [ |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Riedl", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Riedl, Martin. 2016. Unsupervised Methods for Learning and Using Semantics of Natural Language. Ph.D. thesis, Technische Universit\u00e4t Darmstadt.", |
| "links": null |
| }, |
| "BIBREF108": { |
| "ref_id": "b108", |
| "title": "There's no 'count or predict' but task-based selection for distributional models", |
| "authors": [ |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Riedl", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Biemann", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 12th International Conference on Computational Semantics -Short papers", |
| "volume": "", |
| "issue": "", |
| "pages": "264--272", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Riedl, Martin and Chris Biemann. 2017. There's no 'count or predict' but task-based selection for distributional models. In Proceedings of the 12th International Conference on Computational Semantics -Short papers, pages 264-272, Montpellier.", |
| "links": null |
| }, |
| "BIBREF109": { |
| "ref_id": "b109", |
| "title": "A latent Dirichlet allocation method for selectional preferences", |
| "authors": [ |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Ritter", |
| "suffix": "" |
| }, |
| { |
| "first": "Mausam", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "424--434", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ritter, Alan, Mausam, and Oren Etzioni. 2010. A latent Dirichlet allocation method for selectional preferences. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 424-434, Uppsala.", |
| "links": null |
| }, |
| "BIBREF110": { |
| "ref_id": "b110", |
| "title": "EgoSet: Exploiting word Ego-networks and user-generated ontology for multifaceted set expansion", |
| "authors": [ |
| { |
| "first": "Xin", |
| "middle": [], |
| "last": "Rong", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhe", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Qiaozhu", |
| "middle": [], |
| "last": "Mei", |
| "suffix": "" |
| }, |
| { |
| "first": "Eytan", |
| "middle": [], |
| "last": "Adar", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the Ninth ACM International Conference on Web Search and Data Mining", |
| "volume": "", |
| "issue": "", |
| "pages": "645--654", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rong, Xin, Zhe Chen, Qiaozhu Mei, and Eytan Adar. 2016. EgoSet: Exploiting word Ego-networks and user-generated ontology for multifaceted set expansion. In Proceedings of the Ninth ACM International Conference on Web Search and Data Mining, pages 645-654, San Francisco, CA.", |
| "links": null |
| }, |
| "BIBREF111": { |
| "ref_id": "b111", |
| "title": "Rule-based dependency parse collapsing and propagation for German and English", |
| "authors": [ |
| { |
| "first": "Eugen", |
| "middle": [], |
| "last": "Ruppert", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonas", |
| "middle": [], |
| "last": "Klesy", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Riedl", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Biemann", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the International Conference of the German Society for Computational Linguistics and Language Technology", |
| "volume": "", |
| "issue": "", |
| "pages": "58--66", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ruppert, Eugen, Jonas Klesy, Martin Riedl, and Chris Biemann. 2015. Rule-based dependency parse collapsing and propagation for German and English. In Proceedings of the International Conference of the German Society for Computational Linguistics and Language Technology, pages 58-66, Duisburg and Essen.", |
| "links": null |
| }, |
| "BIBREF112": { |
| "ref_id": "b112", |
| "title": "A vector space model for automatic indexing", |
| "authors": [ |
| { |
| "first": "Gerard", |
| "middle": [], |
| "last": "Salton", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Wong", |
| "suffix": "" |
| }, |
| { |
| "first": "Chungshu", |
| "middle": [ |
| "S" |
| ], |
| "last": "Yang", |
| "suffix": "" |
| } |
| ], |
| "year": 1975, |
| "venue": "Communications of the ACM", |
| "volume": "18", |
| "issue": "11", |
| "pages": "613--620", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Salton, Gerard, Andrew Wong, and Chungshu S. Yang. 1975. A vector space model for automatic indexing. Communications of the ACM, 18(11):613-620.", |
| "links": null |
| }, |
| "BIBREF113": { |
| "ref_id": "b113", |
| "title": "\"More like these\": Growing entity classes from seeds", |
| "authors": [ |
| { |
| "first": "Luis", |
| "middle": [], |
| "last": "Sarmento", |
| "suffix": "" |
| }, |
| { |
| "first": "Valentin", |
| "middle": [], |
| "last": "Jijkuon", |
| "suffix": "" |
| }, |
| { |
| "first": "Maarten", |
| "middle": [], |
| "last": "de Rijke", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugenio", |
| "middle": [], |
| "last": "Oliveira", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the Sixteenth ACM Conference on Information and Knowledge Management", |
| "volume": "", |
| "issue": "", |
| "pages": "959--962", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sarmento, Luis, Valentin Jijkuon, Maarten de Rijke, and Eugenio Oliveira. 2007. \"More like these\": Growing entity classes from seeds. In Proceedings of the Sixteenth ACM Conference on Information and Knowledge Management, pages 959-962, Lisbon.", |
| "links": null |
| }, |
| "BIBREF114": { |
| "ref_id": "b114", |
| "title": "Graph clustering", |
| "authors": [ |
| { |
| "first": "Satu", |
| "middle": [ |
| "Elisa" |
| ], |
| "last": "Schaeffer", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Computer Science Review", |
| "volume": "1", |
| "issue": "1", |
| "pages": "27--64", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Schaeffer, Satu Elisa. 2007. Graph clustering. Computer Science Review, 1(1):27-64.", |
| "links": null |
| }, |
| "BIBREF115": { |
| "ref_id": "b115", |
| "title": "Unsupervised learning of prototypical fillers for implicit semantic role labeling", |
| "authors": [ |
| { |
| "first": "Niko", |
| "middle": [], |
| "last": "Schenk", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Chiarcos", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "1473--1479", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Schenk, Niko and Christian Chiarcos. 2016. Unsupervised learning of prototypical fillers for implicit semantic role labeling. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1473-1479, San Diego, CA.", |
| "links": null |
| }, |
| "BIBREF116": { |
| "ref_id": "b116", |
| "title": "Automatic word sense discrimination", |
| "authors": [ |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Computational Linguistics", |
| "volume": "24", |
| "issue": "1", |
| "pages": "97--123", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sch\u00fctze, Hinrich. 1998. Automatic word sense discrimination. Computational Linguistics, 24(1):97-123.", |
| "links": null |
| }, |
| "BIBREF117": { |
| "ref_id": "b117", |
| "title": "Statsmodels: Econometric and statistical modeling with Python", |
| "authors": [ |
| { |
| "first": "Skipper", |
| "middle": [], |
| "last": "Seabold", |
| "suffix": "" |
| }, |
| { |
| "first": "Josef", |
| "middle": [], |
| "last": "Perktold", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 9th Python in Science Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "57--61", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Seabold, Skipper and Josef Perktold. 2010. Statsmodels: Econometric and statistical modeling with Python. In Proceedings of the 9th Python in Science Conference, pages 57-61, Austin, TX.", |
| "links": null |
| }, |
| "BIBREF118": { |
| "ref_id": "b118", |
| "title": "Using semantic roles to improve question answering", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "12--21", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shen, Dan and Mirella Lapata. 2007. Using semantic roles to improve question answering. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 12-21, Prague.", |
| "links": null |
| }, |
| "BIBREF119": { |
| "ref_id": "b119", |
| "title": "SetExpan: Corpus-based set expansion via context feature selection and rank ensemble", |
| "authors": [ |
| { |
| "first": "Jiaming", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Zeqiu", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Dongming", |
| "middle": [], |
| "last": "Lei", |
| "suffix": "" |
| }, |
| { |
| "first": "Jingbo", |
| "middle": [], |
| "last": "Shang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiang", |
| "middle": [], |
| "last": "Ren", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiawei", |
| "middle": [], |
| "last": "Han", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Proceedings, Part I", |
| "volume": "", |
| "issue": "", |
| "pages": "288--304", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shen, Jiaming, Zeqiu Wu, Dongming Lei, Jingbo Shang, Xiang Ren, and Jiawei Han. 2017. SetExpan: Corpus-based set expansion via context feature selection and rank ensemble. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Proceedings, Part I, pages 288-304, Skopje.", |
| "links": null |
| }, |
| "BIBREF120": { |
| "ref_id": "b120", |
| "title": "Normalized cuts and image segmentation", |
| "authors": [ |
| { |
| "first": "Jianbo", |
| "middle": [], |
| "last": "Shi", |
| "suffix": "" |
| }, |
| { |
| "first": "Jitendra", |
| "middle": [], |
| "last": "Malik", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", |
| "volume": "22", |
| "issue": "8", |
| "pages": "888--905", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shi, Jianbo and Jitendra Malik. 2000. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888-905.", |
| "links": null |
| }, |
| "BIBREF121": { |
| "ref_id": "b121", |
| "title": "The large-scale structure of semantic networks: Statistical analyses and a model of semantic growth", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steyvers", |
| "suffix": "" |
| }, |
| { |
| "first": "Joshua", |
| "middle": [ |
| "B" |
| ], |
| "last": "Tenenbaum", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Cognitive Science", |
| "volume": "29", |
| "issue": "1", |
| "pages": "41--78", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Steyvers, Mark and Joshua B. Tenenbaum. 2005. The large-scale structure of semantic networks: Statistical analyses and a model of semantic growth. Cognitive Science, 29(1):41-78.", |
| "links": null |
| }, |
| "BIBREF122": { |
| "ref_id": "b122", |
| "title": "Overview of the fourth message understanding evaluation and conference", |
| "authors": [ |
| { |
| "first": "Beth", |
| "middle": [ |
| "M" |
| ], |
| "last": "Sundheim", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of the 4th Conference on Message Understanding", |
| "volume": "", |
| "issue": "", |
| "pages": "3--21", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sundheim, Beth M. 1992. Overview of the fourth message understanding evaluation and conference. In Proceedings of the 4th Conference on Message Understanding, pages 3-21, McLean, VA.", |
| "links": null |
| }, |
| "BIBREF123": { |
| "ref_id": "b123", |
| "title": "A bootstrapping method for learning semantic lexicons using extraction pattern contexts", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Thelen", |
| "suffix": "" |
| }, |
| { |
| "first": "Ellen", |
| "middle": [], |
| "last": "Riloff", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "214--221", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Talukdar, Partha Pratim, Joseph Reisinger, Marius Pasca, Deepak Ravichandran, Rahul Bhagat, and Fernando Pereira. 2008. Weakly-supervised acquisition of labeled class instances using graph random walks. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 582-590, Honolulu, HI. Thelen, Michael and Ellen Riloff. 2002. A bootstrapping method for learning semantic lexicons using extraction pattern contexts. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, pages 214-221, Philadelphia, PA.", |
| "links": null |
| }, |
| "BIBREF124": { |
| "ref_id": "b124", |
| "title": "Multi-modal word synset induction", |
| "authors": [ |
| { |
| "first": "Jesse", |
| "middle": [], |
| "last": "Thomason", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 26th International Joint Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "4116--4122", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomason, Jesse and Raymond J. Mooney. 2017. Multi-modal word synset induction. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 4116-4122, Melbourne.", |
| "links": null |
| }, |
| "BIBREF125": { |
| "ref_id": "b125", |
| "title": "A probabilistic model for learning multi-prototype word embeddings", |
| "authors": [ |
| { |
| "first": "Fei", |
| "middle": [], |
| "last": "Tian", |
| "suffix": "" |
| }, |
| { |
| "first": "Hanjun", |
| "middle": [], |
| "last": "Dai", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiang", |
| "middle": [], |
| "last": "Bian", |
| "suffix": "" |
| }, |
| { |
| "first": "Bin", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Rui", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Enhong", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Tie-Yan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "151--160", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tian, Fei, Hanjun Dai, Jiang Bian, Bin Gao, Rui Zhang, Enhong Chen, and Tie-Yan Liu. 2014. A probabilistic model for learning multi-prototype word embeddings. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 151-160, Dublin.", |
| "links": null |
| }, |
| "BIBREF126": { |
| "ref_id": "b126", |
| "title": "A Bayesian model for unsupervised semantic parsing", |
| "authors": [ |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Titov", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Klementiev", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "1445--1455", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Titov, Ivan and Alexandre Klementiev. 2011. A Bayesian model for unsupervised semantic parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1445-1455, Portland, OR.", |
| "links": null |
| }, |
| "BIBREF127": { |
| "ref_id": "b127", |
| "title": "A Bayesian approach to unsupervised semantic role induction", |
| "authors": [ |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Titov", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Klementiev", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "12--22", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Titov, Ivan and Alexandre Klementiev. 2012. A Bayesian approach to unsupervised semantic role induction. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 12-22, Avignon.", |
| "links": null |
| }, |
| "BIBREF128": { |
| "ref_id": "b128", |
| "title": "New features for FrameNet -WordNet mapping", |
| "authors": [ |
| { |
| "first": "Sara", |
| "middle": [], |
| "last": "Tonelli", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniele", |
| "middle": [], |
| "last": "Pighin", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "219--227", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tonelli, Sara and Daniele Pighin. 2009. New features for FrameNet -WordNet mapping. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning, pages 219-227, Boulder, CO.", |
| "links": null |
| }, |
| "BIBREF129": { |
| "ref_id": "b129", |
| "title": "From frequency to meaning: Vector space models of semantics", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [ |
| "D" |
| ], |
| "last": "Turney", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Journal of Artificial Intelligence Research", |
| "volume": "37", |
| "issue": "", |
| "pages": "141--188", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Turney, Peter D. and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37:141-188.", |
| "links": null |
| }, |
| "BIBREF130": { |
| "ref_id": "b130", |
| "title": "Fighting with the sparsity of the synonymy dictionaries for automatic synset induction", |
| "authors": [ |
| { |
| "first": "Dmitry", |
| "middle": [], |
| "last": "Ustalov", |
| "suffix": "" |
| }, |
| { |
| "first": "Mikhail", |
| "middle": [], |
| "last": "Chernoskutov", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Biemann", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Panchenko", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Analysis of Images, Social Networks and Texts: 6th International Conference, Revised Selected Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "94--105", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ustalov, Dmitry, Mikhail Chernoskutov, Chris Biemann, and Alexander Panchenko. 2017. Fighting with the sparsity of the synonymy dictionaries for automatic synset induction. In Analysis of Images, Social Networks and Texts: 6th International Conference, Revised Selected Papers, pages 94-105, Moscow.", |
| "links": null |
| }, |
| "BIBREF131": { |
| "ref_id": "b131", |
| "title": "A tool for effective extraction of synsets and semantic relations from BabelNet", |
| "authors": [ |
| { |
| "first": "Dmitry", |
| "middle": [], |
| "last": "Ustalov", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Panchenko", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2017 Siberian Symposium on Data Science and Engineering", |
| "volume": "", |
| "issue": "", |
| "pages": "10--13", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ustalov, Dmitry and Alexander Panchenko. 2017. A tool for effective extraction of synsets and semantic relations from BabelNet. In Proceedings of the 2017 Siberian Symposium on Data Science and Engineering, pages 10-13, Novosibirsk.", |
| "links": null |
| }, |
| "BIBREF132": { |
| "ref_id": "b132", |
| "title": "Watset: Automatic induction of synsets from a graph of synonyms", |
| "authors": [ |
| { |
| "first": "Dmitry", |
| "middle": [], |
| "last": "Ustalov", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Panchenko", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Biemann", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1579--1590", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ustalov, Dmitry, Alexander Panchenko, and Chris Biemann. 2017. Watset: Automatic induction of synsets from a graph of synonyms. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1579-1590, Vancouver.", |
| "links": null |
| }, |
| "BIBREF133": { |
| "ref_id": "b133", |
| "title": "Unsupervised semantic frame induction using triclustering", |
| "authors": [ |
| { |
| "first": "Dmitry", |
| "middle": [], |
| "last": "Ustalov", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Panchenko", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrei", |
| "middle": [], |
| "last": "Kutuzov", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Biemann", |
| "suffix": "" |
| }, |
| { |
| "first": "Simone", |
| "middle": [ |
| "Paolo" |
| ], |
| "last": "Ponzetto", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "55--62", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ustalov, Dmitry, Alexander Panchenko, Andrei Kutuzov, Chris Biemann, and Simone Paolo Ponzetto. 2018. Unsupervised semantic frame induction using triclustering. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 55-62, Melbourne.", |
| "links": null |
| }, |
| "BIBREF134": { |
| "ref_id": "b134", |
| "title": "HyperLex: Lexical cartography for information retrieval", |
| "authors": [ |
| { |
| "first": "Jean", |
| "middle": [], |
| "last": "V\u00e9ronis", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Computer Speech & Language", |
| "volume": "18", |
| "issue": "3", |
| "pages": "223--252", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "V\u00e9ronis, Jean. 2004. HyperLex: Lexical cartography for information retrieval. Computer Speech & Language, 18(3):223-252.", |
| "links": null |
| }, |
| "BIBREF135": { |
| "ref_id": "b135", |
| "title": "Iterative set expansion of named entities using the Web", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [ |
| "C" |
| ], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [ |
| "W" |
| ], |
| "last": "Cohen", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Eighth IEEE International Conference on Data Mining", |
| "volume": "", |
| "issue": "", |
| "pages": "1091--1096", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wang, Richard C. and William W. Cohen. 2008. Iterative set expansion of named entities using the Web. In 2008 Eighth IEEE International Conference on Data Mining, pages 1091-1096, Pisa.", |
| "links": null |
| }, |
| "BIBREF136": { |
| "ref_id": "b136", |
| "title": "The generalization of 'Student's' problem when several different population variances are involved", |
| "authors": [ |
| {
| "first": "Bernard",
| "middle": [
| "Lewis"
| ],
| "last": "Welch",
| "suffix": ""
| }
| ], |
| "year": 1947, |
| "venue": "Biometrika", |
| "volume": "34", |
| "issue": "1-2", |
| "pages": "28--35", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Welch, Bernard Lewis. 1947. The generalization of 'Student's' problem when several different population variances are involved. Biometrika, 34(1-2):28-35.", |
| "links": null |
| }, |
| "BIBREF137": { |
| "ref_id": "b137", |
| "title": "A graph model for unsupervised lexical acquisition", |
| "authors": [ |
| { |
| "first": "Dominic", |
| "middle": [], |
| "last": "Widdows", |
| "suffix": "" |
| }, |
| { |
| "first": "Beate", |
| "middle": [], |
| "last": "Dorow", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 19th International Conference on Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1--7", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Widdows, Dominic and Beate Dorow. 2002. A graph model for unsupervised lexical acquisition. In Proceedings of the 19th International Conference on Computational Linguistics -Volume 1, pages 1-7, Taipei.", |
| "links": null |
| }, |
| "BIBREF138": { |
| "ref_id": "b138", |
| "title": "Extracting lexical semantic knowledge from Wikipedia and Wiktionary", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Xie", |
| "suffix": "" |
| }, |
| { |
| "first": "Rebecca", |
| "middle": [ |
| "J" |
| ], |
| "last": "Boyi", |
| "suffix": "" |
| }, |
| { |
| "first": "Leon", |
| "middle": [], |
| "last": "Passonneau", |
| "suffix": "" |
| }, |
| { |
| "first": "Germ\u00e1n", |
| "middle": [ |
| "G" |
| ], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Creamer", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1646--1652", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xie, Boyi, Rebecca J. Passonneau, Leon Wu, and Germ\u00e1n G. Creamer. 2013. Semantic frames to predict stock price movement. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 873-883, Sofia. Zesch, Torsten, Christof M\u00fcller, and Iryna Gurevych. 2008. Extracting lexical semantic knowledge from Wikipedia and Wiktionary. In Proceedings of the 6th International Conference on Language Resources and Evaluation, pages 1646-1652, Marrakech.", |
| "links": null |
| }, |
| "BIBREF139": { |
| "ref_id": "b139", |
| "title": "TRICLUSTER: An effective algorithm for mining coherent clusters in 3d microarray Data", |
| "authors": [ |
| { |
| "first": "Lizhuang", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohammed", |
| "middle": [ |
| "J" |
| ], |
| "last": "Zaki", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 2005 ACM SIGMOD International Conference on Management of Data", |
| "volume": "", |
| "issue": "", |
| "pages": "694--705", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhao, Lizhuang and Mohammed J. Zaki. 2005. TRICLUSTER: An effective algorithm for mining coherent clusters in 3d microarray Data. In Proceedings of the 2005 ACM SIGMOD International Conference on Management of Data, pages 694-705, New York, NY.", |
| "links": null |
| }, |
| "BIBREF140": { |
| "ref_id": "b140", |
| "title": "Improving question retrieval in community question answering using world knowledge", |
| "authors": [ |
| { |
| "first": "Guangyou", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Fang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Daojian", |
| "middle": [], |
| "last": "Zeng", |
| "suffix": "" |
| },
| {
| "first": "Jun",
| "middle": [],
| "last": "Zhao",
| "suffix": ""
| }
| ],
| "year": 2013, |
| "venue": "Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "2239--2245", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhou, Guangyou, Yang Liu, Fang Liu, Daojian Zeng, and Jun Zhao. 2013. Improving question retrieval in community question answering using world knowledge. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, pages 2239-2245, Beijing.", |
| "links": null |
| }, |
| "BIBREF141": { |
| "ref_id": "b141", |
| "title": "Human Behavior and the Principle of Least Effort", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [ |
| "K" |
| ], |
| "last": "Zipf", |
| "suffix": "" |
| } |
| ], |
| "year": 1949, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zipf, George K. 1949. Human Behavior and the Principle of Least Effort. Addison-Wesley, Menlo Park, CA.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF1": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Log-log plots showing growth of the empirical average running time in number of nodes (left) and number of edges (right) of two WATSET[CW top , CW top ] setups: sequential and parallel. The dashed line is fitted to the running time data of the sequential version of WATSET, showing polynomial growth in O(|V| 2.52 ) and O(|E| 1.63 ), respectively." |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Impact of the different graph-weighting schemas on the performance of synset induction. Each bar corresponds to the top performance of a method inTables 9 and 10." |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Example of two senses associated with a triple (government, run, market)." |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Examples of \"good\" frames produced by the Triframes WATSET[CW top , CW top ] method as labeled by our annotators; frame identifiers are present in the first column; pronouns and prepositions are omitted." |
| }, |
| "FIGREF5": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Examples of \"bad\" frames produced by the Triframes WATSET[CW top , CW top ] method as labeled by our annotators; frame identifiers are present in the first column, pronouns and prepositions are omitted." |
| }, |
| "TABREF0": { |
| "content": "<table><tr><td>Input Nodes</td><td>Input Edges</td><td>Output Linguistic Structure</td><td>See</td></tr><tr><td>Polysemous words</td><td>Synonymy</td><td>Synsets composed of</td><td>\u00a7 4</td></tr><tr><td/><td>relationships</td><td>disambiguated words</td><td/></tr><tr><td>Subject-Verb-Object</td><td>Most distributionally</td><td>Lexical semantic frames</td><td>\u00a7 5</td></tr><tr><td>(SVO) triples</td><td>similar SVO triples</td><td/><td/></tr><tr><td>Polysemous words</td><td>Most distributionally</td><td>Semantic classes composed</td><td>\u00a7 6</td></tr><tr><td/><td>similar words</td><td>of disambiguated words</td><td/></tr></table>", |
| "text": "Various types of input linguistic graphs clustered by the WATSET algorithm and the corresponding induced output symbolic linguistic structures.", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF4": { |
| "content": "<table><tr><td>Source</td><td>Target</td><td>Index</td></tr><tr><td>bank</td><td>streambank</td><td>1</td></tr><tr><td/><td>riverbank</td><td>1</td></tr><tr><td/><td>streamside</td><td>1</td></tr><tr><td/><td>building</td><td>2</td></tr><tr><td/><td>bank building</td><td>2</td></tr><tr><td>streambank</td><td>bank</td><td>3</td></tr><tr><td/><td>riverbank</td><td>3</td></tr></table>", |
| "text": "Node sense identifier tracking in Simplified WATSET, according toFigure 2.", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF7": { |
| "content": "<table><tr><td>Resource</td><td>Language</td><td colspan=\"2\"># words # synsets</td><td># pairs</td></tr><tr><td>WordNet BabelNet</td><td>English</td><td colspan=\"3\">148,730 11,710,137 6,667,855 28,822,400 117,659 152,254</td></tr><tr><td>RuWordNet YARN</td><td>Russian</td><td>110,242 9,141</td><td>49,492 2,210</td><td>278,381 48,291</td></tr></table>", |
| "text": "Statistics of the gold standard data sets used in our experiments.", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF8": { |
| "content": "<table><tr><td>Language</td></tr></table>", |
| "text": "Statistics of the input data sets used in our experiments.", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF9": { |
| "content": "<table/>", |
| "text": "). 28 4.2.3 Results and Discussion.", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF12": { |
| "content": "<table/>", |
| "text": ". In particular, on WordNet for English, WATSET[CW log , MCL] has statistically significantly outperformed all other methods (p 0.01), including different configurations of our algorithm. On BabelNet for English, WATSET[MCL, MCL] showed a similar behavior (p 0.01). On RuWordNet for Russian, Simplified WATSET[MCL, CW lin ] statistically significantly outperformed all other algorithms, including highly competitive MCL and MaxMax (p 0.01). Similarly, on YARN for Russian, Simplified WATSET[CW lin ,", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF13": { |
| "content": "<table><tr><td colspan=\"2\">Size Synset</td></tr><tr><td>2</td><td>decimal point, dot</td></tr><tr><td>2</td><td>wall socket, power point</td></tr><tr><td>3</td><td>gullet, throat, food pipe</td></tr><tr><td>3</td><td>CAT, computed axial tomography, CT</td></tr><tr><td>4</td><td>microwave meal, ready meal, TV dinner, frozen dinner</td></tr><tr><td>4</td><td>mock strawberry, false strawberry, gurbir, Indian strawberry</td></tr><tr><td>5</td><td>objective case, accusative case, oblique case, object case, accusative</td></tr><tr><td>5</td><td>discipline, sphere, area, domain, sector</td></tr><tr><td>6</td><td>radio theater, dramatized audiobook, audio theater, radio play, radio drama, audio</td></tr><tr><td/><td>play</td></tr><tr><td>6</td><td>integrator, reconciler, consolidator, mediator, harmonizer, uniter</td></tr><tr><td>7</td><td>invite, motivate, entreat, ask for, incentify, ask out, encourage</td></tr><tr><td>7</td><td>curtail, craw, yield, riding crop, harvest, crop, hunting crop</td></tr></table>", |
| "text": "Sample synsets induced by the WATSET[MCL, MCL] method for English using the sim weighting approach.", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF14": { |
| "content": "<table><tr><td>Input Synsets</td><td>Gold Synsets</td><td>Language</td><td>Pr</td><td>Re</td><td>F 1</td></tr><tr><td>BabelNet WordNet</td><td>WordNet BabelNet</td><td>English</td><td colspan=\"3\">72.93 99.76 84.26 99.79 69.86 82.18</td></tr><tr><td>YARN BabelNet</td><td>RuWordNet RuWordNet</td><td>Russian</td><td colspan=\"3\">16.36 16.21 16.28 34.84 40.87 37.61</td></tr><tr><td>RuWordNet BabelNet</td><td>YARN YARN</td><td>Russian</td><td colspan=\"3\">66.96 12.13 20.54 51.53 10.89 17.98</td></tr></table>", |
| "text": "Performance of lexical resources cross-evaluated against each other.", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF16": { |
| "content": "<table><tr><td>Output: a set of triframes F.</td><td/></tr><tr><td>1: for all t = (s, p, o) \u2208 T do</td><td>Embed the triples</td></tr><tr><td>2:</td><td/></tr></table>", |
| "text": "the number of nearest neighbors k \u2208 N, a graph clustering algorithm Cluster.", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF17": { |
| "content": "<table><tr><td>Data set</td><td colspan=\"3\"># instances # unique # clusters</td></tr><tr><td>FrameNet Triples (Bauer et al. 2012)</td><td>99,744</td><td>94,170</td><td>383</td></tr><tr><td>Polysemous Verb Classes</td><td/><td/><td/></tr></table>", |
| "text": "Statistics of the evaluation data sets.", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF18": { |
| "content": "<table><tr><td/><td/><td>Verb</td><td/><td>Subject</td><td/><td>Object</td><td>Frame</td></tr><tr><td/><td colspan=\"7\">nmPU niPU F 1 nmPU niPU F 1 nmPU niPU F 1 nmPU niPU</td><td>F 1</td></tr><tr><td>HOSG (Cotterell et al. 2017) NOAC (Egurnov et al. 2017) Triadic Spectral Triadic k-Means LDA-Frames (Materna 2013)</td><td colspan=\"8\">44.41 68.43 53.86 52.84 74.53 61.83 54.73 74.05 62.94 55.74 50.45 52.96 20.73 88.38 33.58 57.00 80.11 66.61 57.32 81.13 67.18 44.01 63.21 51.89 * 49.62 24.90 33.15 50.07 41.07 45.13 50.50 41.82 45.75 52.05 28.60 36.91 * 63.87 23.16 33.99 63.15 38.20 47.60 63.98 37.43 47.23 63.64 24.11 34.97 * 26.11 66.92 37.56 17.28 83.26 28.62 20.80 90.33 33.81 18.80 71.17 29.75 *</td></tr><tr><td>Triframes CW</td><td colspan=\"8\">7.75 6.48 7.06 3.70 14.07 5.86 51.91 76.92 61.99 21.67 26.50 23.84</td></tr><tr><td>Singletons</td><td>0</td><td>18.03 0</td><td>0</td><td>20.56 0</td><td>0</td><td>17.35 0</td><td colspan=\"2\">81.44 15.50 26.04</td></tr><tr><td>Whole</td><td colspan=\"8\">7.35 100.0 13.70 5.62 97.40 10.63 4.24 98.01 8.14 5.07 98.75 9.65</td></tr></table>", |
| "text": "Triframes WATSET[CW top , CW top ] 42.84 88.35 57.70 54.22 81.40 65.09 53.04 83.25 64.80 55.19 60.81 57.87 * Triframes WATSET \u00a7[CW top , CW top ] 42.70 87.41 57.37 54.29 78.92 64.33 52.87 83.47 64.74 55.12 59.92 57.42 * Triframes WATSET[MCL, MCL] 52.60 70.07 60.09 55.70 74.51 63.74 54.14 78.70 64.15 60.93 52.44 56.37 * Triframes WATSET \u00a7[MCL, MCL] 55.13 69.58 61.51 55.10 76.02 63.89 54.27 78.48 64.17 60.56 52.16 56.05", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF20": { |
| "content": "<table><tr><td/><td>Subjects: wine, act, power</td></tr><tr><td># 8</td><td>Verbs: hearten, bringObjects: right, good, school, there, thousand</td></tr><tr><td># 1057</td><td>Subjects: parent, scientist, officer, event Verbs: promise, pledge Objects: parent, be, good, government, client, minister, people, coach</td></tr><tr><td># 1657</td><td>Subjects: people, doctor Verbs: spell, steal, tell, say, know Objects: egg, food, potato</td></tr></table>", |
| "text": ", discourage, encumber, . . . 432 more verbs. . . , build, chew, unsettle, snap", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF21": { |
| "content": "<table><tr><td colspan=\"2\">Root Synset Child Synsets</td><td/><td/><td/></tr><tr><td>rock.n.02</td><td>aphanite.n.01,</td><td colspan=\"2\">caliche.n.02,</td><td colspan=\"2\">claystone.n.01,</td><td>dolomite.n.01,</td></tr><tr><td/><td>emery stone.n.01,</td><td/><td colspan=\"2\">fieldstone.n.01,</td><td>gravel.n.01,</td><td>ballast.n.02,</td></tr><tr><td colspan=\"2\">bank gravel.ntoxin.n.01 animal toxin.n.01,</td><td/><td colspan=\"2\">venom.n.01,</td><td>kokoi venom.n.01,</td></tr><tr><td colspan=\"3\">snake venom.naxis.n.01 coordinate axis.n.01,</td><td colspan=\"2\">x-axis.n.01,</td><td>y-axis.n.01,</td><td>z-axis.n.01,</td></tr><tr><td/><td>major axis.n</td><td/><td/><td/></tr></table>", |
| "text": "Examples of semantic classes extracted from WordNet hierarchy of synsets for the path length d = 5 from the root synset. .01, shingle.n.02, greisen.n.01, igneous rock.n.01, adesite.n.01, andesite.n.01, . . . 63 more entries. . . , tufa.n.01 .01, anatoxin.n.01, botulin.n.01, cytotoxin.n.01, enterotoxin.n.01, nephrotoxin.n.01, endotoxin.n.01, exotoxin.n.01, . . . 19 more entries. . . , ricin.n.01 .01, minor axis.n.01, optic axis.n.01, principal axis.n.01, semimajor axis.n.01, semiminor axis.n.01", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF25": { |
| "content": "<table><tr><td>d = 4</td><td/><td>d = 5</td><td/><td>d = 6</td><td/></tr><tr><td>nmPU niPU</td><td>F 1</td><td>nmPU niPU</td><td>F 1</td><td>nmPU niPU</td><td>F 1</td></tr></table>", |
| "text": "WATSET \u00a7[CW lin , CW top ] 47.43 42.63 44.90 45.26 42.67 43.93 40.20 44.37 42.18 WATSET[CW lin , CW top ] 47.38 42.65 44.89 44.86 43.03 43.93 40.07 44.14 42.01 CW lin 34.09 40.98 37.22 34.92 40.65 37.57 31.84 41.89 36.18 CW log 29.00 44.85 35.23 29.63 44.72 35.64 26.00 46.36 33.31 MCL 54.90 19.63 28.92 45.32 22.59 30.15 38.38 26.96 31.67 MaxMax 59.29 6.93 12.42 52.65 10.14 17.01 47.28 13.69 21.23", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| } |
| } |
| } |
| } |