{
"paper_id": "E17-1008",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:52:15.342900Z"
},
"title": "Distinguishing Antonyms and Synonyms in a Pattern-based Neural Network",
"authors": [
{
"first": "Kim",
"middle": [
"Anh"
],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e4t Stuttgart Pfaffenwaldring 5B",
"location": {
"postCode": "70569",
"settlement": "Stuttgart",
"country": "Germany"
}
},
"email": "nguyenkh@ims.uni-stuttgart.de"
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e4t Stuttgart Pfaffenwaldring 5B",
"location": {
"postCode": "70569",
"settlement": "Stuttgart",
"country": "Germany"
}
},
"email": "schulte@ims.uni-stuttgart.de"
},
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Vu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e4t Stuttgart Pfaffenwaldring 5B",
"location": {
"postCode": "70569",
"settlement": "Stuttgart",
"country": "Germany"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Distinguishing between antonyms and synonyms is a key task to achieve high performance in NLP systems. While they are notoriously difficult to distinguish by distributional co-occurrence models, pattern-based methods have proven effective to differentiate between the relations. In this paper, we present a novel neural network model AntSynNET that exploits lexico-syntactic patterns from syntactic parse trees. In addition to the lexical and syntactic information, we successfully integrate the distance between the related words along the syntactic path as a new pattern feature. The results from classification experiments show that AntSynNET improves the performance over prior pattern-based methods.",
"pdf_parse": {
"paper_id": "E17-1008",
"_pdf_hash": "",
"abstract": [
{
"text": "Distinguishing between antonyms and synonyms is a key task to achieve high performance in NLP systems. While they are notoriously difficult to distinguish by distributional co-occurrence models, pattern-based methods have proven effective to differentiate between the relations. In this paper, we present a novel neural network model AntSynNET that exploits lexico-syntactic patterns from syntactic parse trees. In addition to the lexical and syntactic information, we successfully integrate the distance between the related words along the syntactic path as a new pattern feature. The results from classification experiments show that AntSynNET improves the performance over prior pattern-based methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Antonymy and synonymy represent lexical semantic relations that are central to the organization of the mental lexicon (Miller and Fellbaum, 1991) . While antonymy is defined as the oppositeness between words, synonymy refers to words that are similar in meaning (Deese, 1965; Lyons, 1977) . From a computational point of view, distinguishing between antonymy and synonymy is important for NLP applications such as Machine Translation and Textual Entailment, which go beyond a general notion of semantic relatedness and require identifying specific semantic relations. However, due to interchangeable substitution, antonyms and synonyms often occur in similar contexts, which makes it challenging to automatically distinguish between them.",
"cite_spans": [
{
"start": 118,
"end": 145,
"text": "(Miller and Fellbaum, 1991)",
"ref_id": "BIBREF12"
},
{
"start": 262,
"end": 275,
"text": "(Deese, 1965;",
"ref_id": "BIBREF2"
},
{
"start": 276,
"end": 288,
"text": "Lyons, 1977)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Two families of approaches to differentiate between antonyms and synonyms are predominant in NLP. Both make use of distributional vector representations, relying on the distributional hypothesis (Harris, 1954; Firth, 1957) , that words with similar distributions have related meanings: co-occurrence models and pattern-based models. These distributional semantic models (DSMs) offer a means to represent meaning vectors of words or word pairs, and to determine their semantic relatedness (Turney and Pantel, 2010) .",
"cite_spans": [
{
"start": 195,
"end": 209,
"text": "(Harris, 1954;",
"ref_id": "BIBREF5"
},
{
"start": 210,
"end": 222,
"text": "Firth, 1957)",
"ref_id": "BIBREF4"
},
{
"start": 488,
"end": 513,
"text": "(Turney and Pantel, 2010)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In co-occurrence models, each word is represented by a weighted feature vector, where features typically correspond to words that co-occur in particular contexts. When using word embeddings, these models rely on neural methods to represent words as low-dimensional vectors. To create the word embeddings, the models either make use of neural-based techniques, such as the skip-gram model (Mikolov et al., 2013) , or use matrix factorization (Pennington et al., 2014) that builds word embeddings by factorizing word-context co-occurrence matrices. In comparison to standard co-occurrence vector representations, word embeddings address the problematic sparsity of word vectors and have achieved impressive results in many NLP tasks such as word similarity (e.g., Pennington et al. (2014) ), relation classification (e.g., ), and antonym-synonym distinction (e.g., Nguyen et al. (2016) ).",
"cite_spans": [
{
"start": 387,
"end": 409,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF11"
},
{
"start": 440,
"end": 465,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF16"
},
{
"start": 760,
"end": 784,
"text": "Pennington et al. (2014)",
"ref_id": "BIBREF16"
},
{
"start": 861,
"end": 881,
"text": "Nguyen et al. (2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In pattern-based models, vector representations make use of lexico-syntactic surface patterns to distinguish between the relations of word pairs. For example, Justeson and Katz (1991) suggested that adjectival opposites co-occur with each other in specific linear sequences, such as between X and Y. Hearst (1992) determined surface patterns, e.g., X such as Y, to identify nominal hypernyms. Lin et al. (2003) proposed two textual patterns indicating semantic incompatibility, from X to Y and either X or Y, to distinguish opposites from semantically similar words. Roth and Schulte im Walde (2014) proposed a method that combined patterns with discourse markers for classifying paradigmatic relations including antonymy, synonymy, and hypernymy. Recently, Schwartz et al. (2015) used two prominent patterns from Lin et al. (2003) to learn word embeddings that distinguished antonyms from similar words in determining degrees of similarity and word analogy.",
"cite_spans": [
{
"start": 159,
"end": 183,
"text": "Justeson and Katz (1991)",
"ref_id": "BIBREF8"
},
{
"start": 300,
"end": 313,
"text": "Hearst (1992)",
"ref_id": "BIBREF6"
},
{
"start": 393,
"end": 410,
"text": "Lin et al. (2003)",
"ref_id": "BIBREF9"
},
{
"start": 758,
"end": 780,
"text": "Schwartz et al. (2015)",
"ref_id": "BIBREF21"
},
{
"start": 814,
"end": 831,
"text": "Lin et al. (2003)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present a novel pattern-based neural method AntSynNET to distinguish antonyms from synonyms. We hypothesize that antonymous word pairs co-occur with each other in lexico-syntactic patterns within a sentence more often than would be expected by synonymous pairs. This hypothesis is inspired by corpus-based studies on antonymy and synonymy. Among others, Charles and Miller (1989) suggested that adjectival opposites co-occur in patterns; Fellbaum (1995) stated that nominal and verbal opposites co-occur in the same sentence significantly more often than chance; Lin et al. (2003) argued that if two words appear in clear antonym patterns, they are unlikely to represent a synonymous pair.",
"cite_spans": [
{
"start": 371,
"end": 396,
"text": "Charles and Miller (1989)",
"ref_id": "BIBREF1"
},
{
"start": 455,
"end": 470,
"text": "Fellbaum (1995)",
"ref_id": "BIBREF3"
},
{
"start": 580,
"end": 597,
"text": "Lin et al. (2003)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We start out by inducing patterns between X and Y from a large-scale web corpus, where X and Y represent two words of an antonym or synonym word pair, and the pattern is derived from the simple paths between X and Y in a syntactic parse tree. Each node in the simple path combines lexical and syntactic information; in addition, we suggest a novel feature for the patterns, i.e., the distance between the two words along the syntactic path. All pattern features are fed into a recurrent neural network with long short-term memory (LSTM) units (Hochreiter and Schmidhuber, 1997) , which encode the patterns as vector representations. Afterwards, the vector representations of the patterns are used in a classifier to distinguish between antonyms and synonyms. The results from experiments show that AntSynNET improves the performance over prior pattern-based methods. Furthermore, the implementation of our models is made publicly available 1 .",
"cite_spans": [
{
"start": 543,
"end": 577,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is organized as follows: In Section 2, we present previous work distinguishing antonyms and synonyms. Section 3 describes our proposed AntSynNET model. We present the induction of the patterns (Section 3.1), describe the recurrent neural network with long short-term memory units which is used to encode patterns within a vector representation (Section 3.2), and describe two models to classify antonyms and synonyms: the pure pattern-based model (Section 3.3.1) and the combined model (Section 3.3.2). After introducing two baselines in Section 4, we describe our dataset, experimental settings, results of our methods, the effects of the newly proposed distance feature, and the effects of the various types of word embeddings. Section 6 concludes the paper. 1 https://github.com/nguyenkh/AntSynNET",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Pattern-based methods: Regarding the task of antonym-synonym distinction, there exist a variety of approaches which rely on patterns. Lin et al. (2003) used bilingual dependency triples and patterns to extract distributionally similar words. They relied on clear antonym patterns such as from X to Y and either X or Y in a post-processing step to distinguish antonyms from synonyms. The main idea is that if two words X and Y appear in one of these patterns, they are unlikely to represent a synonymous pair. Schulte im Walde and K\u00f6per (2013) proposed a method to distinguish between the paradigmatic relations antonymy, synonymy and hypernymy in German, based on automatically acquired word patterns. Roth and Schulte im Walde (2014) combined general lexico-syntactic patterns with discourse markers as indicators for the same relations, both for German and for English. They assumed that if two phrases frequently co-occur with a specific discourse marker, then the discourse relation expressed by the corresponding marker should also indicate the relation between the words in the affected phrases. By using the raw corpus and a fixed list of discourse markers, the model can easily be extended to other languages. More recently, Schwartz et al. (2015) presented a symmetric pattern-based model for word vector representation in which antonyms are assigned to dissimilar vector representations. Unlike the previous pattern-based methods, which used the standard distribution of patterns, Schwartz et al. used patterns to learn word embeddings. They derived this representation with the incorporation of a thesaurus and latent semantic analysis, by assigning signs to the entries in the co-occurrence matrix on which latent semantic analysis operates, such that synonyms would tend to have positive cosine similarities, and antonyms would tend to have negative cosine similarities. Scheible et al. 
(2013) showed that the distributional difference between antonyms and synonyms can be identified via a simple word space model by using appropriate features. Instead of taking into account all words in a window of a certain size for feature extraction, the authors experimented with only words of a certain part-of-speech, and restricted distributions. Santus et al. (2014) proposed a different method to distinguish antonyms from synonyms by identifying the most salient dimensions of meaning in vector representations and reporting a new average-precision-based distributional measure and an entropy-based measure. Ono et al. (2015) trained supervised word embeddings for the task of identifying antonymy. They proposed two models to learn word embeddings: the first model relied on thesaurus information; the second model made use of distributional information and thesaurus information. More recently, Nguyen et al. (2016) proposed two methods to distinguish antonyms from synonyms: in the first method, the authors improved the quality of weighted feature vectors by strengthening those features that are most salient in the vectors, and by putting less emphasis on those that are of minor importance when distinguishing degrees of similarity between words. In the second method, the lexical contrast information was integrated into the skip-gram model (Mikolov et al., 2013) to learn word embeddings. This model successfully predicted degrees of similarity and identified antonyms and synonyms.",
"cite_spans": [
{
"start": 134,
"end": 151,
"text": "Lin et al. (2003)",
"ref_id": "BIBREF9"
},
{
"start": 1231,
"end": 1253,
"text": "Schwartz et al. (2015)",
"ref_id": "BIBREF21"
},
{
"start": 1889,
"end": 1911,
"text": "Scheible et al. (2013)",
"ref_id": "BIBREF19"
},
{
"start": 2258,
"end": 2278,
"text": "Santus et al. (2014)",
"ref_id": "BIBREF18"
},
{
"start": 2522,
"end": 2539,
"text": "Ono et al. (2015)",
"ref_id": "BIBREF15"
},
{
"start": 3260,
"end": 3282,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we describe the AntSynNET model, using a pattern-based LSTM for distinguishing antonyms from synonyms. We first present the induction of patterns from a parsed corpus (Section 3.1). Section 3.2 then describes how we utilize the recurrent neural network with long short-term memory units to encode the patterns as vector representations. Finally, we present the AntSynNET model and two approaches to classify antonyms and synonyms (Section 3.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AntSynNET: LSTM-based Antonym-Synonym Distinction",
"sec_num": "3"
},
{
"text": "Corpus-based studies on antonymy have suggested that opposites co-occur with each other within a sentence significantly more often than would be expected by chance. Our method thus makes use of patterns as the main indicators of word pair co-occurrence, to enforce a distinction between antonyms and synonyms. Figure 1 shows a syntactic parse tree of the sentence \"My old village has been provided with the new services\". Following the characterization of trees in graph theory, any two nodes (vertices) of a tree are connected by a simple path (i.e., one unique path). The simple path is the shortest path between any two nodes in a tree and does not contain repeated nodes. In the example, the lexico-syntactic tree pattern of the antonymous pair old-new is determined by finding the simple path (in red) from the lemma old to the lemma new. It focuses on the most relevant information and ignores irrelevant information which does not appear in the simple path (i.e., has, been). Node Representation: The path patterns make use of four features to represent each node in the syntax tree: lemma, part-of-speech (POS) tag, dependency label and distance label. The lemma feature captures the lexical information of words in the sentence, while the POS and dependency features capture the morpho-syntactic information of the sentence. The distance label measures the path distance between the target word nodes in the syntactic tree. Each step between a parent and a child node represents a distance of 1; and the ancestor nodes of the remaining nodes in the path are represented by a distance of 0. For example, the node provided is an ancestor node of the simple path from old to new. The distances from the node provided to the nodes village and old are 1 and 2, respectively. The vector representation of each node concatenates the four feature vectors as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 310,
"end": 318,
"text": "Figure 1",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Induction of Patterns",
"sec_num": "3.1"
},
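The simple-path extraction described above can be sketched as follows; the parent-pointer representation of the parse tree and the function name are hypothetical illustrations, not taken from the AntSynNET implementation:

```python
def simple_path(parent, x, y):
    """Unique simple path between two nodes of a tree, found via their
    lowest common ancestor (LCA); `parent` maps each node to its head."""
    ancestors = []                 # x, its head, ..., the root
    node = x
    while node is not None:
        ancestors.append(node)
        node = parent.get(node)
    climb = []                     # walk up from y until we meet an ancestor of x
    node = y
    while node not in ancestors:
        climb.append(node)
        node = parent.get(node)
    lca = node
    return ancestors[:ancestors.index(lca)] + [lca] + list(reversed(climb))

# Toy head map for "My old village has been provided with the new services"
parent = {"old": "village", "village": "provided", "with": "provided",
          "services": "with", "new": "services", "provided": None}
path = simple_path(parent, "old", "new")
print(path)  # ['old', 'village', 'provided', 'with', 'services', 'new']
```

The distance label of each node can then be derived from its position on the path relative to the ancestor node (here, provided), matching the example in the text.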
{
"text": "v_node = [v_lemma \u2295 v_pos \u2295 v_dep \u2295 v_dist], where v_lemma, v_pos, v_dep, and v_dist are the feature vectors of the lemma, POS tag, dependency label, and distance label, respectively; \u2295 denotes the concatenation operation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Induction of Patterns",
"sec_num": "3.1"
},
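A minimal sketch of the node representation, using the dimensionalities from Section 5.2 (100 for lemma embeddings, 10 each for POS, dependency, and distance labels); the random embedding values here are placeholders, whereas the paper initializes lemma vectors from dLCE:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical embedding lookups for one node of the path.
v_lemma = rng.normal(size=100)  # lexical information (dLCE in the paper)
v_pos   = rng.normal(size=10)   # POS tag
v_dep   = rng.normal(size=10)   # dependency label
v_dist  = rng.normal(size=10)   # distance label

# v_node = [v_lemma ⊕ v_pos ⊕ v_dep ⊕ v_dist], with ⊕ = concatenation
v_node = np.concatenate([v_lemma, v_pos, v_dep, v_dist])
print(v_node.shape)  # (130,)
```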
{
"text": "Pattern Representation: For a pattern p which is constructed by the sequence of nodes n 1 , n 2 , ..., n k , the pattern representation of p is a sequence of vectors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Induction of Patterns",
"sec_num": "3.1"
},
{
"text": "p = [n_1, n_2, ..., n_k].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Induction of Patterns",
"sec_num": "3.1"
},
{
"text": "The pattern vector v p is then encoded by applying a recurrent neural network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Induction of Patterns",
"sec_num": "3.1"
},
{
"text": "A recurrent neural network (RNN) is suitable for modeling sequential data by a vector representation. In our methods, we use a long short-term memory (LSTM) network, a variant of a recurrent neural network, to encode patterns, for the following reasons. Given a sequence of words p = [n_1, n_2, ..., n_k] as input data, an RNN processes each word n_t at a time, and returns a state vector h_k for the complete input sequence. For each time step t, the RNN updates an internal memory state h_t which depends on the current input n_t and the previous state h_{t-1}. Yet, if the input sequence contains long-term dependencies, an RNN faces the problem of vanishing or exploding gradients, leading to difficulties in training the model. LSTM units address these problems. The underlying idea of an LSTM is to use an adaptive gating mechanism to decide on the degree to which LSTM units keep the previous state and memorize the extracted features of the current input. More specifically, an LSTM comprises four components: an input gate i_t, a forget gate f_t, an output gate o_t, and a memory cell c_t. The state of an LSTM at each time step t is formalized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Network with Long Short-Term Memory Units",
"sec_num": "3.2"
},
{
"text": "i_t = \u03c3(W_i \u00b7 x_t + U_i \u00b7 h_{t-1} + b_i); f_t = \u03c3(W_f \u00b7 x_t + U_f \u00b7 h_{t-1} + b_f); o_t = \u03c3(W_o \u00b7 x_t + U_o \u00b7 h_{t-1} + b_o); g_t = tanh(W_c \u00b7 x_t + U_c \u00b7 h_{t-1} + b_c); c_t = i_t \u2297 g_t + f_t \u2297 c_{t-1}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Network with Long Short-Term Memory Units",
"sec_num": "3.2"
},
{
"text": "W and U refer to weight matrices that project information between two layers; b is a layer-specific vector of bias terms; \u03c3 denotes the sigmoid function. The output of an LSTM at time step t is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Network with Long Short-Term Memory Units",
"sec_num": "3.2"
},
{
"text": "h_t = o_t \u2297 tanh(c_t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Network with Long Short-Term Memory Units",
"sec_num": "3.2"
},
{
"text": "where \u2297 denotes element-wise multiplication. In our methods, we rely on the last state h_k to represent the vector v_p of a pattern p = [n_1, n_2, ..., n_k].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Network with Long Short-Term Memory Units",
"sec_num": "3.2"
},
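The gate equations above can be sketched in NumPy; the parameter layout (dicts keyed by gate name) and dimensions are our own illustrative choices, not the paper's released implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step: input, forget, output gates and memory cell."""
    i_t = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])
    f_t = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])
    o_t = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])
    g_t = np.tanh(W["c"] @ x_t + U["c"] @ h_prev + b["c"])
    c_t = i_t * g_t + f_t * c_prev   # element-wise (⊗) gating of the cell
    h_t = o_t * np.tanh(c_t)         # unit output
    return h_t, c_t

def encode_pattern(nodes, W, U, b, dim):
    """Run the LSTM over a node-vector sequence; the last state h_k encodes the pattern."""
    h, c = np.zeros(dim), np.zeros(dim)
    for x_t in nodes:
        h, c = lstm_step(x_t, h, c, W, U, b)
    return h

rng = np.random.default_rng(1)
dim_in, dim_h = 130, 50  # 130 = node vector size from Section 3.1 sketch
W = {k: rng.normal(scale=0.1, size=(dim_h, dim_in)) for k in "ifoc"}
U = {k: rng.normal(scale=0.1, size=(dim_h, dim_h)) for k in "ifoc"}
b = {k: np.zeros(dim_h) for k in "ifoc"}
v_p = encode_pattern([rng.normal(size=dim_in) for _ in range(6)], W, U, b, dim_h)
print(v_p.shape)  # (50,)
```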
{
"text": "In this section, we present two models to distinguish antonyms from synonyms. The first model makes use of patterns to classify antonyms and synonyms, by using an LSTM to encode patterns as vector representations and then feeding those vectors to a logistic regression layer (Section 3.3.1). The second model creates combined vector representations of word pairs, which concatenate the vectors of the words and the patterns (Section 3.3.2). Figure 2 : Illustration of the AntSynNET model. Each word pair is represented by several patterns, and each pattern represents a path in the graph of the syntactic tree. Patterns consist of several nodes where each node is represented by a vector with four features: lemma, POS, dependency label, and distance label. The mean pooling of the pattern vectors is the vector representation of each word pair, which is then fed to the logistic regression layer to classify antonyms and synonyms.",
"cite_spans": [],
"ref_spans": [
{
"start": 441,
"end": 449,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Proposed AntSynNET Model",
"sec_num": "3.3"
},
{
"text": "[Figure 2: two LSTM chains encode example path patterns (X/ADJ/amod/0 from/ADP/prep/1 Y/ADJ/pobj/2 and X/ADJ/conj/1 world/NOUN/pobj/0 Y/ADJ/amod/1) into pattern vectors v_p; each node vector concatenates v_lemma, v_pos, v_dep, and v_dist; the pattern vectors are mean-pooled and fed to a logistic regression layer.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM",
"sec_num": null
},
{
"text": "In this model, we make use of a recurrent neural network with LSTM units to encode patterns containing a sequence of nodes. Figure 2 illustrates the AntSynNET model. Given a word pair (x, y), we induce patterns for (x, y) from a corpus, where each pattern represents a path from x to y (cf. Section 3.1). We then feed each pattern p of the word pair (x, y) into an LSTM to obtain v p , the vector representation of the pattern p (cf. Section 3.2). For each word pair (x, y), the vector representation of (x, y) is computed as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 132,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pattern-based AntSynNET",
"sec_num": "3.3.1"
},
{
"text": "v_xy = \u2211_{p \u2208 P(x,y)} v_p \u00b7 c_p / \u2211_{p \u2208 P(x,y)} c_p (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pattern-based AntSynNET",
"sec_num": "3.3.1"
},
{
"text": "v_xy refers to the vector of the word pair (x, y); P(x, y) is the set of patterns corresponding to the pair (x, y); c_p is the frequency of the pattern p.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pattern-based AntSynNET",
"sec_num": "3.3.1"
},
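Equation 1, the frequency-weighted mean pooling over a pair's pattern vectors, can be sketched as follows (a hypothetical helper, not the released code):

```python
import numpy as np

def pool_patterns(pattern_vectors, pattern_counts):
    """v_xy = sum_p(v_p * c_p) / sum_p(c_p): patterns that occur more often
    for the pair (x, y) contribute more to its representation."""
    vs = np.asarray(pattern_vectors, dtype=float)
    cs = np.asarray(pattern_counts, dtype=float)
    return (vs * cs[:, None]).sum(axis=0) / cs.sum()

# Two toy pattern vectors, one seen 3 times and one seen once.
v_xy = pool_patterns([[1.0, 0.0], [0.0, 1.0]], [3, 1])
print(v_xy)  # [0.75 0.25]
```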
{
"text": "The vector v xy is then fed into a logistic regression layer whose target is the class label associated with the pair (x, y). Finally, the pair (x, y) is predicted as positive (i.e., antonymous) word pair if the probability of the prediction for v xy is larger than 0.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pattern-based AntSynNET",
"sec_num": "3.3.1"
},
{
"text": "Inspired by the supervised distributional concatenation method in Baroni et al. (2012) and the integrated path-based and distributional method for hypernymy detection in Shwartz et al. (2016) , we take into account the patterns and distribution of target pairs to create their combined vector representations. Given a word pair (x, y), the combined vector representation of the pair (x, y) is determined by using both the co-occurrence distribution of the words and the syntactic path patterns:",
"cite_spans": [
{
"start": 66,
"end": 86,
"text": "Baroni et al. (2012)",
"ref_id": "BIBREF0"
},
{
"start": 170,
"end": 191,
"text": "Shwartz et al. (2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combined AntSynNET",
"sec_num": "3.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v_comb(x,y) = [v_x \u2295 v_xy \u2295 v_y]",
"eq_num": "(2)"
}
],
"section": "Combined AntSynNET",
"sec_num": "3.3.2"
},
{
"text": "v_comb(x,y) refers to the combined vector of the word pair (x, y); v_x and v_y are the vectors of word x and word y, respectively; v_xy is the vector of the pattern that corresponds to the pair (x, y), cf. Section 3.3.1. Similar to the pattern-based model, the combined vector v_comb(x,y) is fed into the logistic regression layer to classify antonyms and synonyms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combined AntSynNET",
"sec_num": "3.3.2"
},
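A sketch of the combined representation (Equation 2) and the logistic-regression decision rule; the weights below are untrained placeholders, not learned parameters:

```python
import numpy as np

def combined_vector(v_x, v_xy, v_y):
    """Equation 2: v_comb(x,y) = [v_x ⊕ v_xy ⊕ v_y] (⊕ = concatenation)."""
    return np.concatenate([v_x, v_xy, v_y])

def is_antonym(v_comb, w, bias):
    """Logistic regression layer: predict 'antonymous' when P > 0.5."""
    prob = 1.0 / (1.0 + np.exp(-(w @ v_comb + bias)))
    return prob > 0.5

v = combined_vector(np.ones(3), np.ones(2), np.ones(3))
print(v.shape)  # (8,)
print(is_antonym(v, np.zeros(8), 2.0))  # True, since sigmoid(2.0) > 0.5
```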
{
"text": "To compare AntSynNET with baseline models for pattern-based classification of antonyms and synonyms, we introduce two pattern-based baseline methods: the distributional method (Section 4.1), and the distributed method (Section 4.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models",
"sec_num": "4"
},
{
"text": "As a first baseline, we apply the approach by Roth and Schulte im Walde (2014), henceforth R&SiW. They used a vector space model to represent pairs of words by a combination of standard lexico-syntactic patterns and discourse markers. In addition to the patterns, the discourse markers added information to express discourse relations, which in turn may indicate the specific semantic relation between the two words in a word pair. For example, contrast relations might indicate antonymy, whereas elaborations may indicate synonymy or hyponymy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Baseline",
"sec_num": "4.1"
},
{
"text": "Michael Roth, the first author of R&SiW, kindly computed the relation classification results of the pattern-discourse model for our test sets. The weights between marker-based and pattern-based models were tuned on the validation sets; other hyperparameters were set exactly as described by the R&SiW method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Baseline",
"sec_num": "4.1"
},
{
"text": "The SP method proposed by Schwartz et al. (2015) uses symmetric patterns for generating word embeddings. In this work, the authors applied an unsupervised algorithm for the automatic extraction of symmetric patterns from plain text. The symmetric patterns were defined as a sequence of 3-5 tokens consisting of exactly two wildcards and 1-3 words. The patterns were filtered based on their frequencies, such that the resulting pattern set contained 11 patterns. For generating word embeddings, a matrix of co-occurrence counts between patterns and words in the vocabulary was computed, using positive point-wise mutual information. The sparsity problem of vector representations was addressed by smoothing. For antonym representation, the authors relied on two patterns suggested by Lin et al. (2003) to construct word embeddings containing an antonym parameter that can be turned on in order to represent antonyms as dissimilar, and that can be turned off to represent antonyms as similar.",
"cite_spans": [
{
"start": 26,
"end": 48,
"text": "Schwartz et al. (2015)",
"ref_id": "BIBREF21"
},
{
"start": 783,
"end": 800,
"text": "Lin et al. (2003)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributed Baseline",
"sec_num": "4.2"
},
{
"text": "To apply the SP method to our data, we make use of the pre-trained SP embeddings 2 with 500 dimensions 3 . We calculate the cosine similarity of word pairs and then use a Support Vector Machine with Radial Basis Function kernel to classify antonyms and synonyms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributed Baseline",
"sec_num": "4.2"
},
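The baseline classification step described above can be sketched with scikit-learn; the embeddings below are toy stand-ins for the pre-trained SP embeddings, and the tiny training set is purely illustrative:

```python
import numpy as np
from sklearn.svm import SVC

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy stand-ins: under SP embeddings, antonyms should be dissimilar vectors.
emb = {"hot": np.array([1.0, 0.2]), "cold": np.array([-1.0, 0.1]),
       "big": np.array([0.8, 0.6]), "large": np.array([0.7, 0.65])}
pairs = [("hot", "cold", 1), ("big", "large", 0)]  # 1 = antonym, 0 = synonym

# One feature per pair: the cosine similarity of the two word vectors.
X = [[cosine(emb[a], emb[b])] for a, b, _ in pairs]
y = [label for _, _, label in pairs]
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[cosine(emb["hot"], emb["cold"])]]))  # [1]
```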
{
"text": "For training the models, neural networks require a large amount of training data. We use the existing large-scale antonym and synonym pairs previously used by Nguyen et al. (2016) . Originally, the data pairs were collected from WordNet (Miller, 1995) and Wordnik 4 .",
"cite_spans": [
{
"start": 159,
"end": 179,
"text": "Nguyen et al. (2016)",
"ref_id": "BIBREF14"
},
{
"start": 237,
"end": 251,
"text": "(Miller, 1995)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "5.1"
},
{
"text": "In order to induce patterns for the word pairs in the dataset, we identify the sentences in the corpus that contain the word pair. Thereafter, we extract all patterns for the word pair. We filter out all patterns which occur less than five times; and we only take into account word pairs that contain at least five patterns for training, validating and testing. For the proportion of positive and negative pairs, we keep a ratio of 1:1 positive (antonym) to negative (synonym) pairs in the dataset. In order to create the sets of training, testing and validation data, we perform random splitting with 70% train, 25% test, and 5% validation sets. The final dataset contains the number of word pairs according to word classes described in Table 1 . Moreover, Table 2 shows the average number of patterns for each word pair in our dataset. Table 2 : Average number of patterns per word pair across word classes.",
"cite_spans": [],
"ref_spans": [
{
"start": 738,
"end": 745,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 758,
"end": 765,
"text": "Table 2",
"ref_id": null
},
{
"start": 838,
"end": 845,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "5.1"
},
{
"text": "We use the English Wikipedia dump 5 from June 2016 as the corpus resource for our methods and baselines. For parsing the corpus, we rely on spaCy 6 . For the lemma embeddings, we rely on the word embeddings of the dLCE model (Nguyen et al., 2016) which is the state-of-the-art vector representation for distinguishing antonyms from synonyms. We re-implemented this state-of-the-art model on Wikipedia with 100 dimensions, and then make use of the dLCE word embeddings for initializing the lemma embeddings. The embeddings of POS tags, dependency labels, distance labels, and out-of-vocabulary lemmas are initialized randomly. The number of dimensions is set to 10 for the embeddings of POS tags, dependency labels and distance labels. We use the validation sets to tune the number of dimensions for these labels. For optimization, we rely on the cross-entropy loss function and Stochastic Gradient Descent with the Adadelta update rule (Zeiler, 2012) . For training, we use the Theano framework (Theano Development Team, 2016) . Regularization is applied by a dropout of 0.5 on each component's embeddings (the dropout rate is tuned on the validation set). We train the models for 40 epochs and update all embeddings during training. Table 3 shows the significant 8 performance of our models in comparison to the baselines. Concerning adjectives, the two proposed models significantly outperform the two baselines: The performance of the baselines is around .72 for F1, and the corresponding results for the combined AntSynNET model achieve an improvement of >.06. Regarding nouns, the improvement of the new methods is just .02 F1 in comparison to the R&SiW baseline, but we achieve a much better performance in comparison to the SP baseline, an increase of .37 F1. Regarding verbs, we do not outperform the more advanced R&SiW baseline in terms of the F1 score, but we obtain higher recall scores. In comparison to the SP baseline, our models still show a clear F1 improvement. 
Overall, our proposed models achieve comparatively high recall scores in comparison to the two baselines. This strengthens our hypothesis that antonymous pairs are more likely than synonymous pairs to co-occur in patterns within a sentence: high recall indicates that the models retrieve most of the relevant information (antonymous pairs) expressed by the patterns. To investigate the lower precision of the two proposed models, we randomly sampled 5 pairs from each population: true positives, true negatives, false positives, and false negatives. We then compared the overlap of patterns for the true predictions (true positive and true negative pairs) and the false predictions (false positive and false negative pairs). We found that there is no overlap between the patterns of true predictions, whereas the patterns of false predictions overlap in 2, 2, and 4 patterns for the noun, adjective, and verb classes, respectively. This shows that the lower precision of our models stems from patterns that represent both antonymous and synonymous pairs.",
"cite_spans": [
{
"start": 219,
"end": 239,
"text": "Nguyen et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 926,
"end": 940,
"text": "(Zeiler, 2012)",
"ref_id": "BIBREF27"
},
{
"start": 985,
"end": 1016,
"text": "(Theano Development Team, 2016)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1224,
"end": 1231,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.2"
},
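The node representation underlying these settings (100-dimensional dLCE lemma embeddings concatenated with 10-dimensional embeddings for the POS tag, dependency label, and distance label) can be sketched as follows. The embedding tables and vocabularies here are random illustrative stand-ins, not the trained dLCE vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensionalities from the paper: 100 for lemmas (dLCE), 10 each for
# POS tags, dependency labels, and distance labels.
DIMS = {"lemma": 100, "pos": 10, "dep": 10, "dist": 10}

# Toy vocabularies covering the example pattern; in the paper the lemma
# table is initialized with dLCE vectors, the other tables randomly.
vocab = {
    "lemma": ["X", "Y", "village", "provide", "with", "service"],
    "pos": ["JJ", "NN", "VBN", "IN", "NNS"],
    "dep": ["amod", "nsubj", "ROOT", "prep", "pobj"],
    "dist": ["0", "1", "2", "3"],
}
tables = {k: rng.normal(size=(len(v), DIMS[k])) for k, v in vocab.items()}
index = {k: {w: i for i, w in enumerate(v)} for k, v in vocab.items()}

def node_vector(node):
    """Concatenate the four feature embeddings of one pattern node."""
    feats = dict(zip(("lemma", "pos", "dep", "dist"), node.split("/")))
    return np.concatenate([tables[k][index[k][v]] for k, v in feats.items()])

# The example pattern between X = old and Y = new (cf. Figure 1).
pattern = ["X/JJ/amod/2", "village/NN/nsubj/1", "provide/VBN/ROOT/0",
           "with/IN/prep/1", "service/NNS/pobj/2", "Y/JJ/amod/3"]
sequence = np.stack([node_vector(n) for n in pattern])
print(sequence.shape)  # (6, 130): one 130-dim vector per node, fed into the LSTM
```

Each 130-dimensional node vector is consumed node by node by the LSTM that encodes the full pattern.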
{
"text": "In our models, the novel distance feature is successfully integrated along the syntactic path to represent lexico-syntactic patterns. The intuition behind the distance feature exploits properties of trees in graph theory: the degree of relationship between a parent node and its child nodes (distance = 1) differs from the degree of relationship between an ancestor node and its descendant nodes (distance > 1). Hence, we use the distance feature to effectively capture these relationships. In order to evaluate the effect of our novel distance feature, we compare the distance feature to the direction feature proposed by Shwartz et al. (2016) . In their approach, the authors combined lemma, POS, dependency, and direction features for the task of hypernym detection. The direction feature represents the direction of the dependency label between two nodes in a path from X to Y.",
"cite_spans": [
{
"start": 665,
"end": 686,
"text": "Shwartz et al. (2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of the Distance Feature",
"sec_num": "5.4"
},
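Reading a node's distance label as its number of edges from the path's top node (parent-child = 1, ancestor-descendant > 1), the labels of the example pattern can be reproduced from a parent map. The parent map below is a hypothetical reconstruction of the parse in Figure 1, not the output of an actual parser:

```python
def path_with_distances(parent, x, y):
    """Label every node on the simple path from x to y with its edge
    distance from the lowest common ancestor (parent-child = 1)."""
    def chain(node):  # the node followed by all of its ancestors
        out = [node]
        while node in parent:
            node = parent[node]
            out.append(node)
        return out

    up, down = chain(x), chain(y)
    lca = next(n for n in down if n in set(up))
    left = up[:up.index(lca) + 1]          # [x, ..., lca]
    right = down[:down.index(lca)][::-1]   # [child of lca, ..., y]
    labelled = [(n, len(left) - 1 - i) for i, n in enumerate(left)]
    labelled += [(n, i + 1) for i, n in enumerate(right)]
    return labelled

# Hypothetical parent map reconstructing the parse of "My old village has
# been provided with the new services" (ROOT = provide, cf. Figure 1).
parent = {"old": "village", "village": "provide",
          "with": "provide", "service": "with", "new": "service"}
print(path_with_distances(parent, "old", "new"))
# [('old', 2), ('village', 1), ('provide', 0), ('with', 1), ('service', 2), ('new', 3)]
```

The recovered labels 2-1-0-1-2-3 match the distances in the example pattern X/JJ/amod/2 ... Y/JJ/amod/3.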
{
"text": "For evaluation, we make use of the same dataset and patterns as in Section 5.3, and replace the distance feature by the direction feature. The results are shown in Table 4. The distance feature enhances the performance of our proposed models more effectively than the direction feature does, across all word classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of the Distance Feature",
"sec_num": "5.4"
},
{
"text": "Our methods rely on the word embeddings of the dLCE model, the state-of-the-art word embeddings for antonym-synonym distinction. Yet, the word embeddings of the dLCE model, i.e., supervised word embeddings, represent information collected from lexical resources. In order to evaluate the effect of these word embeddings on the performance of our models, we replace them by the pre-trained GloVe word embeddings 9 (http://www-nlp.stanford.edu/projects/glove/) with 100 dimensions, and compare the effects of the GloVe word embeddings and the dLCE word embeddings on the performance of the two proposed models. Table 5 illustrates the performance of our two models on all word classes. The table shows that the dLCE word embeddings outperform the pre-trained GloVe word embeddings by around .01 F 1 for both the pattern-based AntSynNET model and the combined AntSynNET model regarding adjective and verb pairs. Regarding noun pairs, the improvements of the dLCE word embeddings over the pre-trained GloVe word embeddings amount to around .01 and .04 F 1 for the pattern-based model and the combined model, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 559,
"end": 566,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Effect of Word Embeddings",
"sec_num": "5.5"
},
{
"text": "In this paper, we presented a novel pattern-based neural method AntSynNET to distinguish antonyms from synonyms. We hypothesized that antonymous word pairs co-occur with each other in lexico-syntactic patterns within a sentence more often than synonymous word pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The patterns were derived from the simple paths between semantically related words in a syntactic parse tree. In addition to lexical and syntactic information, we suggested a novel path distance feature. The AntSynNET model consists of two approaches to classify antonyms and synonyms. In the first approach, we used a recurrent neural network with long short-term memory units to encode the patterns as vector representations; in the second approach, we made use of the distribution and encoded patterns of the target pairs to generate combined vector representations. The resulting vectors of patterns in both approaches were fed into the logistic regression layer for classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
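The two approaches summarized above can be sketched end to end. The mean-pooling of the encoded patterns and the exact concatenation order are simplifying assumptions, and the vectors here are random placeholders rather than trained LSTM outputs:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def combined_representation(vec_x, vec_y, encoded_patterns):
    """Concatenate the pair's word embeddings with the mean of its
    LSTM-encoded pattern vectors (mean-pooling is a simplification)."""
    return np.concatenate([vec_x, np.mean(encoded_patterns, axis=0), vec_y])

def predict_antonym(rep, w, b):
    """Logistic-regression output layer: P(antonym | candidate pair)."""
    return sigmoid(rep @ w + b)

rng = np.random.default_rng(1)
vec_x, vec_y = rng.normal(size=100), rng.normal(size=100)  # pair embeddings
encoded = rng.normal(size=(3, 130))  # placeholder for 3 encoded patterns
rep = combined_representation(vec_x, vec_y, encoded)       # 100+130+100 dims
w, b = rng.normal(size=rep.shape[0]) * 0.01, 0.0           # untrained weights
p = predict_antonym(rep, w, b)
print(rep.shape, 0.0 < p < 1.0)  # (330,) True
```

The pattern-based variant uses only the pooled pattern vector as input to the logistic regression layer; the combined variant additionally concatenates the pair's distributional vectors, as above.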
{
"text": "Our proposed models significantly outperformed two baselines relying on previous work, mainly in terms of recall. Moreover, we demonstrated that the distance feature outperformed a previously suggested direction feature, and that the dLCE word embeddings outperformed the pre-trained GloVe embeddings. Last but not least, our two proposed models rely only on corpus data, such that the models are easily applicable to other languages and relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "http://homes.cs.washington.edu/~roysch/papers/sp_embeddings/sp_embeddings.html 3 The 500-dimensional embeddings outperformed the 300-dimensional embeddings for our data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.wordnik.com 5 https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2 6 https://spacy.io",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/nguyenkh/AntSynDistinction 8 t-test, *p < 0.05, **p < 0.1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Michael Roth for helping us to compute the results of the R&SiW model on our dataset. The research was supported by the Ministry of Education and Training of the Socialist Republic of Vietnam (Scholarship 977/QD-BGDDT; Kim-Anh Nguyen), the DFG Collaborative Research Centre SFB 732 (Kim-Anh Nguyen, Ngoc Thang Vu), and the DFG Heisenberg Fellowship SCHU-2580/1 (Sabine Schulte im Walde).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Entailment above the word level in distributional semantics",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Ngoc-Quynh",
"middle": [],
"last": "Do",
"suffix": ""
},
{
"first": "Chung-chieh",
"middle": [],
"last": "Shan",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "23--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni, Raffaella Bernardi, Ngoc-Quynh Do, and Chung chieh Shan. 2012. Entailment above the word level in distributional semantics. In Proceed- ings of the 13th Conference of the European Chap- ter of the Association for Computational Linguistics (EACL), pages 23-32, Avignon, France.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Contexts of antonymous adjectives",
"authors": [
{
"first": "Walter",
"middle": [
"G."
],
"last": "Charles",
"suffix": ""
},
{
"first": "George",
"middle": [
"A."
],
"last": "Miller",
"suffix": ""
}
],
"year": 1989,
"venue": "Applied Psychology",
"volume": "10",
"issue": "",
"pages": "357--375",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Walter G. Charles and George A. Miller. 1989. Con- texts of antonymous adjectives. Applied Psychol- ogy, 10:357-375.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Structure of Associations in Language and Thought",
"authors": [
{
"first": "James",
"middle": [],
"last": "Deese",
"suffix": ""
}
],
"year": 1965,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Deese. 1965. The Structure of Associations in Language and Thought. The John Hopkins Press, Baltimore, MD.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Co-occurrence and antonymy",
"authors": [
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1995,
"venue": "International Journal of Lexicography",
"volume": "8",
"issue": "",
"pages": "281--303",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum. 1995. Co-occurrence and antonymy. International Journal of Lexicography, 8:281-303.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Papers in Linguistics 1934-51. Longmans",
"authors": [
{
"first": "John",
"middle": [
"R."
],
"last": "Firth",
"suffix": ""
}
],
"year": 1957,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John R. Firth. 1957. Papers in Linguistics 1934-51. Longmans, London, UK.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Distributional structure. Word",
"authors": [
{
"first": "Zellig",
"middle": [
"S."
],
"last": "Harris",
"suffix": ""
}
],
"year": 1954,
"venue": "",
"volume": "10",
"issue": "",
"pages": "146--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig S. Harris. 1954. Distributional structure. Word, 10(23):146-162.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic acquisition of hyponyms from large text corpora",
"authors": [
{
"first": "Marti",
"middle": [],
"last": "Hearst",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 14th International Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "539--545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marti Hearst. 1992. Automatic acquisition of hy- ponyms from large text corpora. In In Proceed- ings of the 14th International Conference on Com- putational Linguistics (COLING), pages 539-545, Nantes, France.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Cooccurrences of antonymous adjectives and their contexts",
"authors": [
{
"first": "John",
"middle": [
"S"
],
"last": "Justeson",
"suffix": ""
},
{
"first": "Slava",
"middle": [
"M."
],
"last": "Katz",
"suffix": ""
}
],
"year": 1991,
"venue": "Computational Linguistics",
"volume": "17",
"issue": "",
"pages": "1--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John S. Justeson and Slava M. Katz. 1991. Co- occurrences of antonymous adjectives and their con- texts. Computational Linguistics, 17:1-19.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Identifying synonyms among distributionally similar words",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Shaojun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Lijuan",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI)",
"volume": "",
"issue": "",
"pages": "1492--1493",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin, Shaojun Zhao, Lijuan Qin, and Ming Zhou. 2003. Identifying synonyms among distri- butionally similar words. In Proceedings of the 18th International Joint Conference on Artificial Intelli- gence (IJCAI), pages 1492-1493, Acapulco, Mex- ico.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Linguistic regularities in continuous space word representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL)",
"volume": "",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic regularities in continuous space word representations. In Proceedings of the Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies (NAACL), pages 746-751, At- lanta, Georgia.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Semantic networks of English",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1991,
"venue": "Cognition",
"volume": "41",
"issue": "",
"pages": "197--229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller and Christiane Fellbaum. 1991. Se- mantic networks of English. Cognition, 41:197- 229.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "WordNet: A lexical database for English",
"authors": [
{
"first": "George",
"middle": [
"A."
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39-41.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Integrating distributional lexical contrast into word embeddings for antonymsynonym distinction",
"authors": [
{
"first": "Kim",
"middle": [
"Anh"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte im Walde",
"suffix": ""
},
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Vu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "454--459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim Anh Nguyen, Sabine Schulte im Walde, and Ngoc Thang Vu. 2016. Integrating distributional lexical contrast into word embeddings for antonym- synonym distinction. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (ACL), pages 454-459, Berlin, Germany.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Word embedding-based antonym detection using thesauri and distributional information",
"authors": [
{
"first": "Masataka",
"middle": [],
"last": "Ono",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Yutaka",
"middle": [],
"last": "Sasaki",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL)",
"volume": "",
"issue": "",
"pages": "984--989",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masataka Ono, Makoto Miwa, and Yutaka Sasaki. 2015. Word embedding-based antonym detec- tion using thesauri and distributional information. In Proceedings of the Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies (NAACL), pages 984-989, Denver, Colorado.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vec- tors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Combining word patterns and discourse markers for paradigmatic relation classification",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "524--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Roth and Sabine Schulte im Walde. 2014. Combining word patterns and discourse markers for paradigmatic relation classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL), pages 524-530, Baltimore, MD.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Chasing hypernyms in vector spaces with entropy",
"authors": [
{
"first": "Enrico",
"middle": [],
"last": "Santus",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "38--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Enrico Santus, Alessandro Lenci, Qin Lu, and Sabine Schulte im Walde. 2014. Chasing hypernyms in vector spaces with entropy. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 38-42, Gothenburg, Sweden.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Uncovering distributional differences between synonyms and antonyms in a word space model",
"authors": [
{
"first": "Silke",
"middle": [],
"last": "Scheible",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
},
{
"first": "Sylvia",
"middle": [],
"last": "Springorum",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 6th International Joint Conference on Natural Language Processing (IJCNLP)",
"volume": "",
"issue": "",
"pages": "489--497",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Silke Scheible, Sabine Schulte im Walde, and Sylvia Springorum. 2013. Uncovering distributional dif- ferences between synonyms and antonyms in a word space model. In Proceedings of the 6th Interna- tional Joint Conference on Natural Language Pro- cessing (IJCNLP), pages 489-497, Nagoya, Japan.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Pattern-based distinction of paradigmatic relations for German nouns, verbs, adjectives",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
},
{
"first": "Maximilian",
"middle": [],
"last": "K\u00f6per",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 25th International Conference of the German Society for Computational Linguistics and Language Technology (GSCL)",
"volume": "",
"issue": "",
"pages": "189--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Schulte im Walde and Maximilian K\u00f6per. 2013. Pattern-based distinction of paradigmatic relations for german nouns, verbs, adjectives. In Proceed- ings of the 25th International Conference of the Ger- man Society for Computational Linguistics and Lan- guage Technology (GSCL), pages 189-198, Darm- stadt, Germany.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Symmetric pattern based word embeddings for improved word similarity prediction",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 19th Conference on Computational Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "258--267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Schwartz, Roi Reichart, and Ari Rappoport. 2015. Symmetric pattern based word embeddings for im- proved word similarity prediction. In Proceedings of the 19th Conference on Computational Language Learning (CoNLL), pages 258-267, Beijing, China.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Improving hypernymy detection with an integrated path-based and distributional method",
"authors": [
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "2389--2398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vered Shwartz, Yoav Goldberg, and Ido Dagan. 2016. Improving hypernymy detection with an integrated path-based and distributional method. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 2389- 2398, Berlin, Germany.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Theano: A Python framework for fast computation of mathematical expressions",
"authors": [],
"year": 2016,
"venue": "Theano Development Team",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical ex- pressions. arXiv e-prints, abs/1605.02688.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "From frequency to meaning: Vector space models of semantics",
"authors": [
{
"first": "Peter",
"middle": [
"D."
],
"last": "Turney",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Artificial Intelligence Research",
"volume": "37",
"issue": "",
"pages": "141--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of se- mantics. Journal of Artificial Intelligence Research, 37:141-188.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Combining recurrent and convolutional neural networks for relation classification",
"authors": [
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Vu",
"suffix": ""
},
{
"first": "Heike",
"middle": [],
"last": "Adel",
"suffix": ""
},
{
"first": "Pankaj",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL)",
"volume": "",
"issue": "",
"pages": "534--539",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ngoc Thang Vu, Heike Adel, Pankaj Gupta, and Hin- rich Sch\u00fctze. 2016. Combining recurrent and con- volutional neural networks for relation classifica- tion. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies (NAACL), pages 534-539.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Polarity inducing latent semantic analysis",
"authors": [
{
"first": "Wen-tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
},
{
"first": "John",
"middle": [
"C."
],
"last": "Platt",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP)",
"volume": "",
"issue": "",
"pages": "1212--1222",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wen-tau Yih, Geoffrey Zweig, and John C. Platt. 2012. Polarity inducing latent semantic analysis. In Pro- ceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning (EMNLP), pages 1212-1222, Jeju Island, Korea.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "ADADELTA: an adaptive learning rate method",
"authors": [
{
"first": "Matthew",
"middle": [
"D."
],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Vector representation methods: Yih et al. (2012) introduced a new vector representation where antonyms lie on opposite sides of a sphere.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "The example pattern between X = old and Y = new in Figure 1 is represented as follows: X/JJ/amod/2 -village/NN/nsubj/1 --provide/VBN/ROOT/0 --with/IN/prep/1 --service/NNS/pobj/2 --Y/JJ/amod/3.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Illustration of the syntactic tree for the sentence \"My old village has been provided with the new services\". Red lines indicate the path from the word old to the word new.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"html": null,
"type_str": "table",
"text": "Our dataset.",
"content": "<table><tr><td colspan=\"4\">Word Class Train Test Validation</td></tr><tr><td>Adjective</td><td>135</td><td>131</td><td>141</td></tr><tr><td>Verb</td><td>364</td><td>332</td><td>396</td></tr><tr><td>Noun</td><td>110</td><td>132</td><td>105</td></tr></table>",
"num": null
},
"TABREF2": {
"html": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>Model</td><td colspan=\"3\">Adjective</td><td colspan=\"3\">Verb</td><td colspan=\"3\">Noun</td></tr><tr><td></td><td>P</td><td>R</td><td>F 1</td><td>P</td><td>R</td><td>F 1</td><td>P</td><td>R</td><td>F 1</td></tr><tr><td>SP baseline</td><td>0.730</td><td>0.706</td><td>0.718</td><td>0.560</td><td>0.609</td><td>0.584</td><td>0.625</td><td>0.393</td><td>0.482</td></tr><tr><td>R&amp;SiW baseline</td><td>0.717</td><td>0.717</td><td>0.717</td><td>0.789</td><td>0.787</td><td>0.788</td><td>0.833</td><td>0.831</td><td>0.832</td></tr><tr><td>Pattern-based AntSynNET</td><td>0.764</td><td>0.788</td><td>0.776 *</td><td>0.741</td><td>0.833</td><td>0.784</td><td>0.804</td><td>0.851</td><td>0.827</td></tr><tr><td>Combined AntSynNET</td><td>0.763</td><td>0.807</td><td>0.784 *</td><td>0.743</td><td>0.815</td><td>0.777</td><td>0.816</td><td>0.898</td><td>0.855 **</td></tr></table>",
"num": null
},
"TABREF3": {
"html": null,
"type_str": "table",
"text": "Performance of the AntSynNET models in comparison to the baseline models.",
"content": "<table><tr><td>Feature</td><td>Model</td><td colspan=\"3\">Adjective</td><td colspan=\"3\">Verb</td><td colspan=\"3\">Noun</td></tr><tr><td></td><td></td><td>P</td><td>R</td><td>F 1</td><td>P</td><td>R</td><td>F 1</td><td>P</td><td>R</td><td>F 1</td></tr><tr><td>Direction</td><td>Pattern-based</td><td>0.752</td><td>0.755</td><td>0.753</td><td>0.734</td><td>0.819</td><td>0.774</td><td>0.800</td><td>0.825</td><td>0.813</td></tr><tr><td></td><td>Combined</td><td>0.754</td><td>0.784</td><td>0.769</td><td>0.739</td><td>0.793</td><td>0.765</td><td>0.829</td><td>0.810</td><td>0.819</td></tr><tr><td>Distance</td><td>Pattern-based</td><td>0.764</td><td>0.788</td><td>0.776</td><td>0.741</td><td>0.833</td><td>0.784</td><td>0.804</td><td>0.851</td><td>0.827</td></tr><tr><td></td><td>Combined</td><td>0.763</td><td>0.807</td><td>0.784 **</td><td>0.743</td><td>0.815</td><td>0.777</td><td>0.816</td><td>0.898</td><td>0.855 **</td></tr></table>",
"num": null
},
"TABREF4": {
"html": null,
"type_str": "table",
"text": "Comparing the novel distance feature with Shwartz et al.'s direction feature, across word classes.",
"content": "<table/>",
"num": null
},
"TABREF5": {
"html": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>Model</td><td>Word Embeddings</td><td colspan=\"3\">Adjective</td><td colspan=\"3\">Verb</td><td colspan=\"3\">Noun</td></tr><tr><td></td><td></td><td>P</td><td>R</td><td>F 1</td><td>P</td><td>R</td><td>F 1</td><td>P</td><td>R</td><td>F 1</td></tr><tr><td>Pattern-based Model</td><td>GloVe</td><td>0.763</td><td>0.770</td><td>0.767</td><td>0.705</td><td>0.852</td><td>0.772</td><td>0.789</td><td>0.849</td><td>0.818</td></tr><tr><td></td><td>dLCE</td><td>0.764</td><td>0.788</td><td>0.776</td><td>0.741</td><td>0.833</td><td>0.784</td><td>0.804</td><td>0.851</td><td>0.827</td></tr><tr><td>Combined Model</td><td>GloVe</td><td>0.750</td><td>0.798</td><td>0.773</td><td>0.717</td><td>0.826</td><td>0.768</td><td>0.807</td><td>0.827</td><td>0.817</td></tr><tr><td></td><td>dLCE</td><td>0.763</td><td>0.807</td><td>0.784</td><td>0.743</td><td>0.815</td><td>0.777</td><td>0.816</td><td>0.898</td><td>0.855</td></tr></table>",
"num": null
},
"TABREF6": {
"html": null,
"type_str": "table",
"text": "Comparing pre-trained GloVe and dLCE word embeddings.",
"content": "<table/>",
"num": null
}
}
}
}