{
"paper_id": "S17-1002",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:29:15.300258Z"
},
"title": "Learning Antonyms with Paraphrases and a Morphology-Aware Neural Network",
"authors": [
{
"first": "Sneha",
"middle": [],
"last": "Rajana",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {
"country": "USA"
}
},
"email": "srajana@seas.upenn.edu"
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {
"country": "USA"
}
},
"email": ""
},
{
"first": "Marianna",
"middle": [],
"last": "Apidianaki",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {
"country": "USA"
}
},
"email": ""
},
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bar-Ilan University",
"location": {
"country": "Israel"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recognizing and distinguishing antonyms from other types of semantic relations is an essential part of language understanding systems. In this paper, we present a novel method for deriving antonym pairs using paraphrase pairs containing negation markers. We further propose a neural network model, AntNET, that integrates morphological features indicative of antonymy into a path-based relation detection algorithm. We demonstrate that our model outperforms state-of-the-art models in distinguishing antonyms from other semantic relations and is capable of efficiently handling multi-word expressions.",
"pdf_parse": {
"paper_id": "S17-1002",
"_pdf_hash": "",
"abstract": [
{
"text": "Recognizing and distinguishing antonyms from other types of semantic relations is an essential part of language understanding systems. In this paper, we present a novel method for deriving antonym pairs using paraphrase pairs containing negation markers. We further propose a neural network model, AntNET, that integrates morphological features indicative of antonymy into a path-based relation detection algorithm. We demonstrate that our model outperforms state-of-the-art models in distinguishing antonyms from other semantic relations and is capable of efficiently handling multi-word expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Identifying antonymy and expressions with contrasting meanings is valuable for NLP systems which go beyond recognizing semantic relatedness and require the identification of specific semantic relations. While manually created semantic taxonomies, like WordNet (Fellbaum, 1998), define antonymy relations between some word pairs that native speakers consider antonyms, they have limited coverage. Further, as each term of an antonymous pair can have many semantically close terms, the contrasting word pairs far outnumber those that are commonly considered antonym pairs, and they remain unrecorded. Therefore, automated methods have been proposed to determine, for a given term-pair (x, y), whether x and y are antonyms of each other, based on their occurrences in a large corpus. Charles and Miller (1989) put forward the co-occurrence hypothesis that antonyms occur together in a sentence more often than chance. However, non-antonymous semantically related words",
"cite_spans": [
{
"start": 250,
"end": 266,
"text": "(Fellbaum, 1998)",
"ref_id": null
},
{
"start": 771,
"end": 796,
"text": "Charles and Miller (1989)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Paraphrase Pair | Antonym Pair: not sufficient/insufficient | sufficient/insufficient; insignificant/negligible | significant/negligible; dishonest/lying | honest/lying; unusual/pretty strange | usual/pretty strange. Table 1: Examples of antonyms derived from PPDB paraphrases. The antonym pairs in column 2 were derived from the corresponding paraphrase pairs in column 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 186,
"end": 193,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Paraphrase Pair",
"sec_num": null
},
{
"text": "such as hypernyms, holonyms, meronyms, and near-synonyms also tend to occur together more often than chance. Thus, separating antonyms from pairs linked by other relationships has proven to be difficult. Approaches to antonym detection have exploited distributional vector representations relying on the distributional hypothesis of semantic similarity (Harris, 1954; Firth, 1957) that words co-occurring in similar contexts tend to be semantically close. Two main information sources are used to recognize semantic relations: path-based and distributional. Path-based methods consider the joint occurrences of the two terms in a given sentence and use the dependency paths that connect the terms as features (Hearst, 1992; Roth and Schulte im Walde, 2014; Schwartz et al., 2015). For distinguishing antonyms from other relations, Lin et al. (2003) proposed to use antonym patterns (such as either X or Y and from X to Y). Distributional methods are based on the disjoint occurrences of each term and have recently become popular using word embeddings (Mikolov et al., 2013; Pennington et al., 2014) which provide a distributional representation for each term. Recently, combined path-based and distributional methods for relation detection have also been proposed (Nguyen et al., 2017). They showed that a good path representation can provide substantial complementary information to the distributional signal for distinguishing between different semantic relations.",
"cite_spans": [
{
"start": 353,
"end": 367,
"text": "(Harris, 1954;",
"ref_id": "BIBREF7"
},
{
"start": 368,
"end": 379,
"text": "Firth, 1957",
"ref_id": "BIBREF5"
},
{
"start": 709,
"end": 723,
"text": "(Hearst, 1992;",
"ref_id": "BIBREF8"
},
{
"start": 724,
"end": 756,
"text": "Roth and Schulte im Walde, 2014;",
"ref_id": "BIBREF18"
},
{
"start": 757,
"end": 779,
"text": "Schwartz et al., 2015)",
"ref_id": "BIBREF19"
},
{
"start": 832,
"end": 849,
"text": "Lin et al. (2003)",
"ref_id": "BIBREF12"
},
{
"start": 1054,
"end": 1076,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF13"
},
{
"start": 1077,
"end": 1101,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF17"
},
{
"start": 1267,
"end": 1287,
"text": "Nguyen et al., 2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase Pair",
"sec_num": null
},
{
"text": "While antonymy applies to expressions that represent contrasting meanings, paraphrases are phrases expressing the same meaning, which usually occur in similar textual contexts (Barzilay and McKeown, 2001) or have common translations in other languages (Bannard and Callison-Burch, 2005) . Specifically, if two words or phrases are paraphrases, they are unlikely to be antonyms of each other. Our first approach to antonym detection exploits this fact and uses paraphrases for detecting and generating antonyms (The dementors caught Sirius Black/ Black could not escape the dementors). We start by focusing on phrase pairs that are most salient for deriving antonyms.",
"cite_spans": [
{
"start": 176,
"end": 204,
"text": "(Barzilay and McKeown, 2001)",
"ref_id": "BIBREF1"
},
{
"start": 252,
"end": 286,
"text": "(Bannard and Callison-Burch, 2005)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase Pair",
"sec_num": null
},
{
"text": "Our assumption is that phrases (or words) containing negating words (or prefixes) are more helpful for identifying opposing relationships between term-pairs. For example, from the paraphrase pair (caught/not escape), we can derive the antonym pair (caught/escape) by just removing the negating word 'not'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase Pair",
"sec_num": null
},
{
"text": "Our second method is inspired by the recent success of deep learning methods for relation detection. Shwartz et al. (2016) proposed an integrated path-based and distributional model to improve hypernymy detection between term-pairs, and later extended it to classify multiple semantic relations (Shwartz and Dagan, 2016) (LexNET). Although LexNET was the best performing system in the semantic relation classification task of the CogALex 2016 shared task, the model performed poorly on synonyms and antonyms compared to other relations. The path-based component is weak in recognizing synonyms, which do not tend to co-occur, and the distributional information caused confusion between synonyms and antonyms, since both tend to occur in the same contexts. We propose AntNET, a novel extension of LexNET that integrates information about negating prefixes as a new morphological pattern feature and is able to distinguish antonyms from other semantic relations. In addition, we optimize the vector representations of dependency paths between the given term pair, encoded using a neural network, by replacing the embeddings of words with negating prefixes by the embeddings of the base, non-negated, forms of the words. For example, for the term pair unhappy/joyful, we record the negating prefix of unhappy using a new path feature and replace the word embedding of unhappy with happy in the vector representation of the dependency path between unhappy and sad. The proposed model improves the path embeddings to better distinguish antonyms from other semantic relations and achieves higher performance than prior path-based methods on this task. We used the antonym pairs extracted from the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013; Pavlick et al., 2015b) in the paraphrase-based method as training data for our neural network model.",
"cite_spans": [
{
"start": 90,
"end": 122,
"text": "detection. Shwartz et al. (2016)",
"ref_id": null
},
{
"start": 1714,
"end": 1741,
"text": "(Ganitkevitch et al., 2013;",
"ref_id": "BIBREF6"
},
{
"start": 1742,
"end": 1764,
"text": "Pavlick et al., 2015b)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase Pair",
"sec_num": null
},
{
"text": "The main contributions of this paper are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase Pair",
"sec_num": null
},
{
"text": "\u2022 We present a novel technique of using paraphrases for antonym detection and successfully derive antonym pairs from paraphrases in the PPDB, the largest paraphrase resource currently available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase Pair",
"sec_num": null
},
{
"text": "\u2022 We demonstrate improvements to an integrated path-based and distributional model, showing that our morphology-aware neural network model, AntNET, performs better than state-of-the-art methods for antonym detection. Pattern-based Methods Pattern-based methods for inducing semantic relations between a pair of terms (x, y) consider the lexico-syntactic paths that connect the joint occurrences of x and y in a large corpus. A variety of approaches have been proposed that rely on patterns between terms in a corpus to distinguish antonyms from other relations. Lin et al. (2003) used translation information and lexico-syntactic patterns to extract distributionally similar words, and then filtered out words that appeared with the patterns 'from X to Y' or 'either X or Y' significantly often. The intuition behind this was that if two words X and Y appear in one of these patterns, they are unlikely to represent a synonymous pair. Roth 3 Paraphrase-Based Antonym Derivation",
"cite_spans": [
{
"start": 562,
"end": 579,
"text": "Lin et al. (2003)",
"ref_id": "BIBREF12"
},
{
"start": 935,
"end": 939,
"text": "Roth",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase Pair",
"sec_num": null
},
{
"text": "(x,y) derived from paraphrase | # pairs: (x,y)/(x,\u1ef9): 80,669; (x, paraphrase(y)), (paraphrase(x), y): 81,221; (x, synset(y)), (synset(x), y): 692,231",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase Pair",
"sec_num": null
},
{
"text": "Existing semantic resources like WordNet (Fellbaum, 1998) contain a much smaller set of antonyms compared to other semantic relations (synonyms, hypernyms and meronyms). Our aim is to create a large resource of high quality antonym pairs using paraphrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase Pair",
"sec_num": null
},
{
"text": "The Paraphrase Database (PPDB) contains over 150 million paraphrase rules covering three paraphrase types: lexical (single word), phrasal (multiword), and syntactic restructuring rules, and is the largest collection of paraphrases currently available. In this paper, we focus on lexical and phrasal paraphrases up to two words in length. We examine the relationships between phrase pairs in the PPDB, focusing on phrase pairs that are most salient for deriving antonyms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Paraphrase Database",
"sec_num": "3.1"
},
{
"text": "Selection of Paraphrases We consider all phrase pairs (p1, p2) from PPDB, up to two words in length, such that one of the two phrases either begins with a negating word like not, or contains a negating prefix. 1 We chose these two types of paraphrase pairs since we believe them to be the most indicative of an antonymy relationship between the target words. There are 7,878 unordered phrase pairs of the form (p1, p2) where p1 begins with 'not', and 183,159 phrase pairs of the form (p1, p2) where p1 contains a negating prefix.",
"cite_spans": [
{
"start": 212,
"end": 213,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Antonym Derivation",
"sec_num": "3.2"
},
{
"text": "Paraphrase Transformation For paraphrases containing a negating prefix, we perform morphological analysis to identify and remove the negating prefixes. For a phrase pair like unhappy/sad, an antonymy relation is derived between the base form of the negated word, without the negation prefix, and its paraphrase (happy/sad). We use MORSEL (Lignos, 2010) to perform morphological analysis and identify negation markers. For multi-word phrases with a negating word, the negating word is simply dropped to obtain an antonym pair (e.g. different/not identical \u2192 different/identical). Some examples of PPDB paraphrase pairs and antonym pairs derived from them are shown in Table 1. The derived antonym pairs are further expanded by associating the synonyms (from WordNet) and lexical paraphrases (from PPDB) of each phrase with the other phrase in the derived pair. While expanding each phrase in the derived pair by its paraphrases, we filter out paraphrase pairs with a PPDB score (Pavlick et al., 2015a) of less than 2.5. In the above example, unhappy/sad, we first derive happy/sad as an antonym pair and expand it by considering all synonyms of happy as antonyms of sad (e.g. joyful/sad), and all synonyms of sad as antonyms of happy (e.g. happy/gloomy). Table 2 shows the number of pairs derived at each step using PPDB. In total, we were able to derive around 773K unique pairs from PPDB. This is a much larger dataset than existing resources like WordNet and EVALution as shown in Table 3.",
"cite_spans": [
{
"start": 338,
"end": 352,
"text": "(Lignos, 2010)",
"ref_id": "BIBREF10"
},
{
"start": 978,
"end": 1001,
"text": "(Pavlick et al., 2015a)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 667,
"end": 674,
"text": "Table 1",
"ref_id": null
},
{
"start": 1255,
"end": 1262,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 1485,
"end": 1492,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Antonym Derivation",
"sec_num": "3.2"
},
{
"text": "Analysis We performed a manual evaluation of the quality of the extracted antonyms by randomly selecting 1000 pairs classified as 'antonym' and observed that the dataset contained about 63% antonyms. Errors mostly consisted of phrases and words that do not have an opposing meaning after the removal of the negation pattern. For example, the equivalent pair till/until was derived from the PPDB paraphrase rule not till/until. Other non-antonyms derived from the above methods can be classified into unrelated pairs (background/figure), paraphrases or pairs that have an equivalent meaning (admissible/permissible), words that belong to a category (Africa/Asia), pairs that have an entailment relation (valid/equally valid) and pairs that are related but not with an antonym relationship (twinkle/dark). Table 4 gives some examples of categories of non-antonyms.",
"cite_spans": [],
"ref_spans": [
{
"start": 808,
"end": 815,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Antonym Derivation",
"sec_num": "3.2"
},
{
"text": "Annotation Since the pairs derived from PPDB seemed to contain a variety of relations in addition to antonyms, we crowdsourced the task of labelling a subset of these pairs in order to obtain the true labels. 2 We asked workers to choose between the labels: antonym, synonym (or paraphrase for multi-word expressions), unrelated, other, entailment, and category. We showed each pair to 3 workers, taking the majority label as truth.",
"cite_spans": [
{
"start": 209,
"end": 210,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Antonym Derivation",
"sec_num": "3.2"
},
{
"text": "In this section we describe AntNET, a long short-term memory (LSTM) based, morphology-aware neural network model for antonym detection. We first focus on improving the neural embeddings of the path representation (Section 4.1), and then integrate distributional signals into our network, resulting in a combined method (Section 4.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM-Based Antonym Detection",
"sec_num": "4"
},
{
"text": "Similarly to prior work, we represent each dependency path as a sequence of edges that leads from x to y in the dependency tree. We use the same path-based features proposed by Shwartz et al. (2016) for recognizing hypernym relations: lemma and part-of-speech (POS) tag of the source node, the dependency label, and the edge direction between two subsequent nodes. Additionally, we also add a new feature that indicates whether the source node is negated. Rather than treating an entire dependency path as a single feature, we encode the sequence of edges using a long short-term memory network (Hochreiter and Schmidhuber, 1997). The vectors obtained for the different paths of a given (x, y) pair are pooled, and the resulting vector is used for classification. The overall network structure is depicted in Figure 1. Edge Representation We denote each edge as lemma/pos/dep/dir/neg. We are only interested in checking if x and/or y have negation markers but not the intermediate edges, since negation information for intermediate lemmas is unlikely to contribute to identifying whether there is an antonym relationship between x and y. Hence, in our model, neg is represented in one of three ways: negated if x or y is negated, not-negated if x or y is not negated, and unavailable for the intermediate edges. If the source node is negated, we replace the lemma by the lemma of its base, non-negated, form. For example, if we identified unhappy as a 'negated' word, we replace the lemma embedding of unhappy by the embedding of happy in the path representation. The negation feature will help in separating antonyms from other semantic relations, especially relations like synonymy that are hard to distinguish from antonymy.",
"cite_spans": [
{
"start": 573,
"end": 607,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 788,
"end": 796,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Path-Based Network",
"sec_num": "4.1"
},
{
"text": "The replacement of a negated word's embedding by its base form's embedding is done for a few reasons. First, words and their polar antonyms are more likely to co-occur in sentences compared to words and their negated forms. For example, Neither happy nor sad is probably a more common phrase than Neither happy nor unhappy, so this technique will help our model to identify an opposing relationship between both types of pairs, happy/unhappy and happy/sad. Second, a common practice for creating word embeddings for multi-word expressions (MWEs) is averaging over the embeddings of each word in the expression. However, this is not a good representation for phrases like not identical, since we lose out on the negating information obtained from not. Indicating the presence of not using a negation feature and replacing the embedding of not identical by identical will increase the classifier's probability of identifying not identical/different as paraphrases and identical/different as antonyms. Finally, this method helps us distinguish between terms that are seemingly negated but are not in reality (e.g. invaluable). We encode the sequence of edges using an LSTM network. The vectors obtained for all the paths connecting x and y are pooled and combined, and the resulting vector is used for classification. The vector representation of each edge is the concatenation of its feature vectors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Path-Based Network",
"sec_num": "4.1"
},
{
"text": "v_edge = [v_lemma, v_pos, v_dep, v_dir, v_neg], where v_lemma, v_pos, v_dep, v_dir and v_neg",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Path-Based Network",
"sec_num": "4.1"
},
{
"text": "represent the vector embeddings of the lemma, POS tag, dependency label, dependency direction and negation marker, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Path-Based Network",
"sec_num": "4.1"
},
{
"text": "The representation for a path p composed of a sequence of edges edge_1, edge_2, ..., edge_k is a sequence of edge vectors: p = [edge_1, edge_2, ..., edge_k]. The edge vectors are fed in order to a recurrent neural network (RNN) with LSTM units, resulting in the encoded path vector v_p.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Path Representation",
"sec_num": null
},
{
"text": "Classification Task Given a lexical or phrasal pair (x, y) we induce patterns from a corpus where each pattern represents a lexico-syntactic path connecting x and y. The vector representation for each term pair (x, y) is computed as the weighted average of its path vectors by applying average pooling as follows: v_p(x,y) refers to the vector of the pair (x, y); P(x,y) is the multi-set of paths connecting x and y in the corpus and f_p is the frequency of p in P(x,y). The vector v_p(x,y) is then fed into a neural network that outputs the class distribution c for each class (relation type), and the pair is assigned to the relation with the highest score r:",
"cite_spans": [],
"ref_spans": [
{
"start": 314,
"end": 316,
"text": "y)",
"ref_id": null
},
{
"start": 456,
"end": 468,
"text": "in P (x, y)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Path Representation",
"sec_num": null
},
{
"text": "(1) v_p(x,y) = (\u2211_{p \u2208 P(x,y)} f_p \u00b7 v_p) / (\u2211_{p \u2208 P(x,y)} f_p)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Path Representation",
"sec_num": null
},
{
"text": "(2a) c = softmax(MLP(v_p(x,y))) (2b) r = argmax_i c[i]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Path Representation",
"sec_num": null
},
{
"text": "MLP stands for Multi Layer Perceptron and can be computed with or without a hidden layer (equations 4 and 5 respectively).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Path Representation",
"sec_num": null
},
{
"text": "(3) h = tanh(W_1 \u00b7 v_p(x,y) + b_1) (4) MLP(v_p(x,y)) = W_2 \u00b7 h + b_2 (5) MLP(v_p(x,y)) = W_1 \u00b7 v_p(x,y) + b_1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Path Representation",
"sec_num": null
},
{
"text": "W refers to a matrix of weights that projects information between two layers; b is a layer-specific vector of bias terms and h is the hidden layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Path Representation",
"sec_num": null
},
{
"text": "The path-based supervised model in Section 4.1 classifies each pair (x, y) based on the lexico-syntactic patterns that connect x and y in a corpus. Inspired by the improved performance of Shwartz et al.'s (2016) integrated path-based and distributional method over a simpler path-based algorithm, we integrate distributional features into our path-based network. We create a combined vector representation using both the syntactic path features and the co-occurrence distributional features of x and y for each pair (x, y). The combined vector representation for (x, y), v_c(x,y), is computed by simply concatenating the word embeddings of x (v_x) and y (v_y) to the path-based feature vector v_p(x,y):",
"cite_spans": [
{
"start": 187,
"end": 210,
"text": "Shwartz et al.'s (2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combined Path-Based and Distributional Network",
"sec_num": "4.2"
},
{
"text": "(6) v_c(x,y) = [v_x, v_p(x,y), v_y]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combined Path-Based and Distributional Network",
"sec_num": "4.2"
},
{
"text": "We experiment with the path-based and combined models for antonym identification by performing two types of classification: binary and multiclass classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Train: 5,122; Test: 1,829; Val: 367; Total: 7,318. Table 5: Number of instances present in the train/test/validation splits of the crowdsourced dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 43,
"end": 50,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Neural networks require a large amount of training data. We use the labelled portion of the dataset that we created using PPDB, as described in Section 3. In order to induce paths for the pairs in the dataset, we identify sentences in the corpus that contain the pair and extract all patterns for the given pair. Pairs with an antonym relationship are considered as positive instances in both classification experiments. In the binary classification experiment, we consider all pairs related by other relations (entailment, synonymy, category, unrelated, other) as negative instances. We also perform a variant of the multiclass classification with three classes (antonym, other, unrelated). Due to the skewed nature of the dataset, we combined category, entailment and synonym/paraphrases into one class. For both classification experiments, we perform a random split with 70% train, 25% test, and 5% validation sets. Table 5 displays the number of relations in our dataset. Wikipedia 3 was used as the underlying corpus for all methods, and we perform model selection on the validation set to tune the hyper-parameters of each method. We apply grid search for a range of values and pick the ones that yield the highest F 1 score on the validation set. The best hyper-parameters are reported in the appendix.",
"cite_spans": [],
"ref_spans": [
{
"start": 917,
"end": 924,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "5.1"
},
{
"text": "Majority Baseline The majority baseline is achieved by labelling all the instances with the most frequent class occurring in the dataset, i.e. FALSE (binary) or UNRELATED (multiclass). Table 7 : Comparing the novel negation marking feature with the distance feature proposed by Nguyen et al. (2017).",
"cite_spans": [],
"ref_spans": [
{
"start": 183,
"end": 190,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.2"
},
{
"text": "Distributed Baseline The method proposed by Schwartz et al. (2015) uses symmetric patterns (SPs) for generating word embeddings. The authors automatically acquired symmetric patterns (defined as a sequence of 3-5 tokens consisting of exactly 2 wildcards and 1-3 words) from a large plain-text corpus, and generated vectors where each co-ordinate represented the co-occurrence in symmetric patterns of the represented word with another word of the vocabulary. For antonym representation, the authors relied on the patterns suggested by (Lin et al., 2003) to construct word embeddings containing an antonym parameter that can be turned on in order to represent antonyms as dissimilar, and that can be turned off to represent antonyms as similar. To evaluate the SP method on our data, we used the pre-trained SP embeddings 4 with 500 dimensions. We use the SVM classifier with RBF kernel for the classification of word pairs.",
"cite_spans": [
{
"start": 44,
"end": 66,
"text": "Schwartz et al. (2015)",
"ref_id": "BIBREF19"
},
{
"start": 535,
"end": 553,
"text": "(Lin et al., 2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "AntNET is an extension of the path-based and combined models proposed by Shwartz and Dagan (2016) for classifying multiple semantic relations, so we use their models as additional baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Path-based and Combined Baseline Since",
"sec_num": null
},
{
"text": "Because their model used a different dataset that contained very few antonym instances, we replicated the baseline (SD) with the dataset and corpus information as in Section 5.1 rather than comparing to the reported results. Table 6 displays the performance scores of AntNET and the baselines in terms of precision, recall and F 1. Our combined model significantly 5 outperforms all baselines in both binary and multiclass classifications. Both path-based and combined models of AntNET achieve a much better performance in comparison to the majority class and SP baselines.",
"cite_spans": [],
"ref_spans": [
{
"start": 227,
"end": 234,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Path-based and Combined Baseline Since",
"sec_num": null
},
{
"text": "Comparing the path-based methods, the AntNET model achieves a higher precision compared to the path-based SD baseline for binary classification, and outperforms the SD model in precision, recall and F 1 in the multiclass classification experiment. The low precision of the SD model stems from its inability to distinguish between antonyms and synonyms, and between related and unrelated pairs which are common in our dataset, causing many false positive pairs such as difficult/harsh, bad/cunning, finish/far which were classified as antonyms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.3"
},
{
"text": "Comparing the combined models, the AntNET model outperforms the SD model in precision, recall and F 1 , achieving state-of-the-art results for antonym detection. In all the experiments, the performance of the model in the binary classification task was better than in the multiclass classification. Multiclass classification seems to be inherently harder for all methods, due to the large number of relations and the smaller number of instances for each relation. We also observed that as we increased the size of the training dataset used in our experiments, the results improved for both path-based and combined models, confirming the need for large-scale datasets that will benefit training neural models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.3"
},
{
"text": "Effect of the Negation-marking Feature In our models, the novel negation marking feature is successfully integrated along the syntactic path to represent the paths between x and y. In order to evaluate the effect of our novel negation-marking feature for antonym detection, we compare this feature to the distance feature proposed by Nguyen et al. (2017). In their approach, they integrate the distance between related words in a lexico-syntactic path as a new pattern feature, along with lemma, POS and dependency for the task of distinguishing antonyms and synonyms. We re-implemented this model by making use of the same information regarding dataset and patterns as in Section 5.1 and then replacing the direction feature in the SD models by the distance feature.",
"cite_spans": [
{
"start": 334,
"end": 354,
"text": "Nguyen et al. (2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.3"
},
{
"text": "The results are shown in Table 7 and indicate that the negation marking feature and the replacement of the embeddings of negated words by the ones of their base forms enhance the performance of our models more effectively than the distance feature does, across both binary and multiclass classifications. Although, the distance feature has previously been shown to perform well for the task of distinguishing antonyms from synonyms, this feature is not very effective in the multiclass setting. Figure 2 displays the confusion matrices for the binary and multiclass experiments of the best performing AntNET model. The confusion matrix shows that pairs were mostly assigned to the correct relation more than to any other class.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 7",
"ref_id": null
},
{
"start": 495,
"end": 503,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.3"
},
{
"text": "We analyzed the false positives from both the binary and multiclass experiments. We sampled about 20% false positive pairs and identified the following common errors. The majority of the misclassification errors stem from antonym-like or near-antonym relations: these are relations that could be considered as antonymy but were annotated by crowd-workers as other relations because they contain polysemous terms, for which the relation holds in a specific sense. For example: north/south and polite/sassy were labelled as category and other respectively. Other errors stem from confusing antonyms and unrelated pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "False Positives",
"sec_num": null
},
{
"text": "We again sampled about 20% false positive pairs from both the binary and multiclass experiments and analyzed the major types of errors. Most of these pairs had only few cooccurrences in the corpus often due to infrequent terms (e.g. cisc/risc which define computer architectures). While our model effectively handled negative prefixes, it failed to handle negative suffixes causing incorrect classification of pairs like spiritless/spirited. A possible future work is to simply extend this model to handle negative suffixes as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "False Negatives",
"sec_num": null
},
{
"text": "In this paper, we presented an original technique for deriving antonyms using paraphrases from PPDB. We also proposed a novel morphologyaware neural network model, AntNET, which improves antonymy prediction for path-based and combined models. In addition to lexical and syntactic information, we suggested to include a novel morphological negation-marking feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Our models outperform the baselines in two relation classification tasks. We also demonstrated that the negation marking feature outperforms previously suggested path-based features for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Since our proposed techniques for antonymy detection are corpus based, they can be applied to different languages and relations. The paraphrasebased method can be applied to other languages by extracting the paraphrases for these languages from the PPDB and using a morphological analysis tool (e.g. Morfette for French (Chrupala et al., 2008) ) or by looking up the negation prefixes in a grammar book for languages that do not dispose of such a tool. The LSTM-based model could also be used in other languages since the method is corpus based, but we would need to create a training set for new languages. This would not however be too difficult; the training set used by the model is not that big (the one used here was around 6000 pairs) and could be easily labelled through crowdsourcing.",
"cite_spans": [
{
"start": 320,
"end": 343,
"text": "(Chrupala et al., 2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We release our code and the large-scale dataset derived from PPDB, annotated with semantic relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Negating prefixes includede, un, in, anti, il, non, dis",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "5884 pairs were randomly chosen and were annotated on www.crowdflower.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We used the English Wikipedia dump from May 2015 as the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://homes.cs.washington.edu/ roysch/papers/sp_embeddings/sp_ embeddings.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We used paired t-test. *p < 0.1, **p < 0.05",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This material is based in part on research sponsored by DARPA under grant number FA8750-13-2-0017 (the DEFT program). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA and the U.S. Government. This work has also been supported by the French National Research Agency under project ANR-16-CE33-0013 and partially supported by an Intel ICRI-CI grant, the Israel Science Foundation grant 880/12, and the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1).We would like to thank our anonymous reviewers for their thoughtful and helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "For deriving antonyms using PPDB, we used the XXXL size of PPDB version 2.0 found in http://paraphrase.org/.To compute the metrics in Tables 6 and 7 , We used scikit-learn with the \"averaged setup\", which computes the metrics for each relation and reports their average weighted by support (the number of true instances for each relation). Note that it can result in a F 1 score that is not the harmonic mean of precision and recall.During preprocessing we handled removal of punctuation. Since our dataset only contains short phrases, we removed any stop words occurring at the beginning of a sentence (Example: a man \u2192 man) and we also removed plurals. The best hyperparameters for all models mentioned in this paper are shown in ",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 148,
"text": "Tables 6 and 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Supplemental Material",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Paraphrasing with Bilingual Parallel Corpora",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Bannard",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics (ACL'05)",
"volume": "",
"issue": "",
"pages": "597--604",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Bannard and Chris Callison-Burch. 2005. Para- phrasing with Bilingual Parallel Corpora. In Pro- ceedings of the 43rd Annual Meeting on Association for Computational Linguistics (ACL'05). Strouds- burg, PA, pages 597-604.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Extracting Paraphrases from a Parallel Corpus",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"R"
],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual Meeting on Association for Computational Linguistics (ACL'01)",
"volume": "",
"issue": "",
"pages": "50--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Kathleen R. McKeown. 2001. Ex- tracting Paraphrases from a Parallel Corpus. In Pro- ceedings of the 39th Annual Meeting on Association for Computational Linguistics (ACL'01). Toulouse, France, pages 50-57.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Contexts of antonymous adjectives",
"authors": [
{
"first": "G",
"middle": [],
"last": "Walter",
"suffix": ""
},
{
"first": "George",
"middle": [
"A"
],
"last": "Charles",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1989,
"venue": "Applied Psychology",
"volume": "10",
"issue": "",
"pages": "357--375",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Walter G. Charles and George A. Miller. 1989. Con- texts of antonymous adjectives. Applied Psychology 10:357-375.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning Morphology with Morfette",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupala",
"suffix": ""
},
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)",
"volume": "",
"issue": "",
"pages": "2362--2367",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Chrupala, Georgiana Dinu, and Josef van Genabith. 2008. Learning Morphology with Mor- fette. In Proceedings of the Sixth International Conference on Language Resources and Evalua- tion (LREC'08). Marrakech, Morocco, pages 2362- 2367.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "WordNet: an electronic lexical database",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet: an elec- tronic lexical database. MIT Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A synopsis of linguistic theory, 1930-1955",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Firth",
"suffix": ""
}
],
"year": 1957,
"venue": "Studies in Linguistic Analysis",
"volume": "",
"issue": "",
"pages": "1--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. R. Firth. 1957. A synopsis of linguistic theory, 1930- 1955. In Studies in Linguistic Analysis, Basil Black- well, Oxford, United Kingdom, pages 1-32.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "PPDB: The Paraphrase Database",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL/HLT). Atlanta, Georgia",
"volume": "",
"issue": "",
"pages": "758--764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The Paraphrase Database. In Proceedings of the 2013 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies (NAACL/HLT). Atlanta, Geor- gia, pages 758-764.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Distributional structure",
"authors": [
{
"first": "S",
"middle": [],
"last": "Zellig",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1954,
"venue": "Word",
"volume": "10",
"issue": "23",
"pages": "146--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig S. Harris. 1954. Distributional structure. Word 10(23):146-162.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatic acquisition of hyponyms from large text corpora",
"authors": [
{
"first": "Marti",
"middle": [],
"last": "Hearst",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 14th International Conference on Computational Linguistics (COLING'92)",
"volume": "",
"issue": "",
"pages": "539--545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marti Hearst. 1992. Automatic acquisition of hy- ponyms from large text corpora. In Proceedings of the 14th International Conference on Compu- tational Linguistics (COLING'92). Nantes, France, pages 539-545.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "Jurgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and Jurgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735-1780.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning from Unseen Data",
"authors": [
{
"first": "Constantine",
"middle": [],
"last": "Lignos",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Morpho Challenge",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Constantine Lignos. 2010. Learning from Unseen Data. In Proceedings of the Morpho Challenge 2010",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "DIRT -Discovery of Inference Rules from Text",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'01)",
"volume": "",
"issue": "",
"pages": "323--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin and Patrick Pantel. 2001. DIRT -Discov- ery of Inference Rules from Text. In Proceedings of the Seventh ACM SIGKDD International Con- ference on Knowledge Discovery and Data Mining (KDD'01). San Francisco, California, pages 323- 328.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Identifying synonyms among distributionally similar words",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Shaojun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Lijuan",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI '03)",
"volume": "",
"issue": "",
"pages": "1492--1493",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin, Shaojun Zhao, Lijuan Qin, and Ming Zhou. 2003. Identifying synonyms among distribu- tionally similar words. In Proceedings of the Eigh- teenth International Joint Conference on Artificial Intelligence (IJCAI '03). Acapulco, Mexico, pages 1492-1493.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Distributed Representations of Words and Phrases and their Compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems (NIPS'13)",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed Represen- tations of Words and Phrases and their Composition- ality. In Proceedings of the 26th International Con- ference on Neural Information Processing Systems (NIPS'13). Lake Tahoe, Nevada, pages 3111-3119.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Distinguishing antonyms and synonyms in a pattern-based neural network",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Kim Anh Nguyen",
"suffix": ""
},
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Schulte Im Walde",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL'17",
"volume": "",
"issue": "",
"pages": "76--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim Anh Nguyen, Sabine Schulte im Walde, and Ngoc Thang Vu. 2017. Distinguishing antonyms and synonyms in a pattern-based neural network. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics (EACL'17). Valencia, Spain, pages 76-85.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Adding Semantics to Data-Driven Paraphrasing",
"authors": [
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
},
{
"first": "Malvina",
"middle": [],
"last": "Nissim",
"suffix": ""
},
{
"first": "Charley",
"middle": [],
"last": "Beller",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2015,
"venue": "The 53rd Annual Meeting of the Association for Computational Linguistics (ACL'15)",
"volume": "",
"issue": "",
"pages": "1512--1522",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellie Pavlick, Johan Bos, Malvina Nissim, Charley Beller, Benjamin Van Durme, and Chris Callison- Burch. 2015a. Adding Semantics to Data-Driven Paraphrasing. In The 53rd Annual Meeting of the Association for Computational Linguistics (ACL'15). Beijing, China, pages 1512-1522.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification",
"authors": [
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Pushpendre",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Juri",
"middle": [],
"last": "Ganitkevich",
"suffix": ""
},
{
"first": "Chris Callison-Burch Ben",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL'15)",
"volume": "",
"issue": "",
"pages": "425--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevich, and Chris Callison-Burch Ben Van Durme. 2015b. PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Lin- guistics (ACL'15). Beijing, China, pages 425-430.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "GloVe: Global Vectors for Word Representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP'14)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP'14). Doha, Qatar, pages 1532- 1543.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Combining Word Patterns and Discourse Markers for Paradigmatic Relation Classification",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL'14)",
"volume": "",
"issue": "",
"pages": "524--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Roth and Sabine Schulte im Walde. 2014. Combining Word Patterns and Discourse Markers for Paradigmatic Relation Classification. In Pro- ceedings of the 52nd Annual Meeting of the Associ- ation for Computational Linguistics (ACL'14). Bal- timore, MD, pages 524-530.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Symmetric Pattern Based Word Embeddings for Improved Word Similarity Prediction",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning (CoNLL'15)",
"volume": "",
"issue": "",
"pages": "258--267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Schwartz, Roi Reichart, and Ari Rappoport. 2015. Symmetric Pattern Based Word Embeddings for Im- proved Word Similarity Prediction. In Proceed- ings of the Nineteenth Conference on Computational Natural Language Learning (CoNLL'15). Beijing, China, pages 258-267.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "CogALex-V Shared Task: LexNET -Integrated Path-based and Distributional Method for the Identification of Semantic Relations",
"authors": [
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon (CogALex-V)",
"volume": "",
"issue": "",
"pages": "80--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vered Shwartz and Ido Dagan. 2016. CogALex-V Shared Task: LexNET -Integrated Path-based and Distributional Method for the Identification of Se- mantic Relations. In Proceedings of the 5th Work- shop on Cognitive Aspects of the Lexicon (CogALex- V). Osaka, Japan, pages 80-85.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Improving Hypernymy Detection with an Integrated Path-based and Distributional Method",
"authors": [
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL'16)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vered Shwartz, Yoav Goldberg, and Ido Dagan. 2016. Improving Hypernymy Detection with an Integrated Path-based and Distributional Method. In Pro- ceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (ACL'16).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Illustration of the AntNET model. Each pair is represented by several paths and each path is a sequence of edges. An edge consists of five features: lemma, POS, dependency label, dependency direction, and negation marker.",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Confusion matrices for the combined AntNET model for binary (left) and multiclass (right) classifications. Rows indicate gold labels and columns indicate predictions. The matrix is normalized along rows, so that the predictions for each (true) class sum to 100%.",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"num": null,
"content": "<table><tr><td>: Number of unique antonym pairs derived</td></tr><tr><td>from PPDB at each step. Paraphrases and synsets</td></tr><tr><td>were obtained from PPDB and WordNet respec-</td></tr><tr><td>tively.</td></tr><tr><td>details) and showed that paraphrases in this re-</td></tr><tr><td>source represent a variety of relations other than</td></tr><tr><td>equivalence, including contradictory pairs like no-</td></tr><tr><td>body/someone and close/open.</td></tr></table>",
"type_str": "table",
"html": null,
"text": ""
},
"TABREF3": {
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "Number of unique antonym pairs derived from different sources. The number of pairs obtained from PPDB far outnumbers the antonym pairs present in EVALution and WordNet."
},
"TABREF5": {
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "Examples of different types of non-antonyms derived from PPDB."
},
"TABREF6": {
"num": null,
"content": "<table><tr><td/><td/><td/><td>Binary</td><td/><td/><td>Multiclass</td></tr><tr><td/><td/><td>P</td><td>R</td><td>F 1</td><td>P</td><td>R</td><td>F 1</td></tr><tr><td colspan=\"2\">Majority baseline</td><td colspan=\"3\">0.304 0.551 0.392</td><td colspan=\"2\">0.222 0.472 0.303</td></tr><tr><td>SP baseline</td><td/><td colspan=\"3\">0.661 0.568 0.436</td><td colspan=\"2\">0.583 0.488 0.344</td></tr><tr><td colspan=\"2\">Path-based SD baseline</td><td colspan=\"3\">0.723 0.724 0.722</td><td colspan=\"2\">0.636 0.675 0.651</td></tr><tr><td colspan=\"2\">Path-based AntNET</td><td colspan=\"3\">0.732 0.722 0.713</td><td colspan=\"2\">0.652 0.687 0.661**</td></tr><tr><td colspan=\"2\">Combined SD baseline</td><td colspan=\"3\">0.790 0.788 0.788</td><td colspan=\"2\">0.744 0.750 0.738</td></tr><tr><td colspan=\"2\">Combined AntNET</td><td colspan=\"3\">0.803 0.802 0.802*</td><td colspan=\"2\">0.746 0.757 0.746*</td></tr><tr><td>Feature</td><td>Model</td><td/><td colspan=\"2\">Binary</td><td colspan=\"2\">Multiclass</td></tr><tr><td/><td/><td>P</td><td>R</td><td>F 1</td><td>P</td><td>R</td><td>F 1</td></tr><tr><td colspan=\"7\">Distance Path-based 0.727 0.727 0.724 0.665 0.692 0.664</td></tr><tr><td colspan=\"7\">Combined 0.789 0.788 0.788 0.732 0.743 0.734</td></tr><tr><td colspan=\"7\">Negation Path-based 0.732 0.722 0.713 0.652 0.687 0.661</td></tr><tr><td colspan=\"7\">Combined 0.803 0.802 0.802 0.746 0.757 0.746</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Performance of the AntNET models in comparison to the baseline models."
}
}
}
}