{
"paper_id": "S19-1002",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:48:12.264382Z"
},
"title": "Word Usage Similarity Estimation with Sentence Representations and Automatic Substitutes",
"authors": [
{
"first": "Aina",
"middle": [
"Gar\u00ed"
],
"last": "Soler",
"suffix": "",
"affiliation": {
"laboratory": "LIMSI",
"institution": "Universit\u00e9 Paris-Saclay",
"location": {
"postCode": "F-91405",
"settlement": "Orsay",
"country": "France"
}
},
"email": ""
},
{
"first": "Marianna",
"middle": [],
"last": "Apidianaki",
"suffix": "",
"affiliation": {
"laboratory": "LIMSI",
"institution": "Universit\u00e9 Paris-Saclay",
"location": {
"postCode": "F-91405",
"settlement": "Orsay",
"country": "France"
}
},
"email": "marianna@limsi.fr"
},
{
"first": "Alexandre",
"middle": [],
"last": "Allauzen",
"suffix": "",
"affiliation": {
"laboratory": "LIMSI",
"institution": "Universit\u00e9 Paris-Saclay",
"location": {
"postCode": "F-91405",
"settlement": "Orsay",
"country": "France"
}
},
"email": "allauzen@limsi.fr"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Usage similarity estimation addresses the semantic proximity of word instances in different contexts. We apply contextualized (ELMo and BERT) word and sentence embeddings to this task, and propose supervised models that leverage these representations for prediction. Our models are further assisted by lexical substitute annotations automatically assigned to word instances by context2vec, a neural model that relies on a bidirectional LSTM. We perform an extensive comparison of existing word and sentence representations on benchmark datasets addressing both graded and binary similarity. The best performing models outperform previous methods in both settings.",
"pdf_parse": {
"paper_id": "S19-1002",
"_pdf_hash": "",
"abstract": [
{
"text": "Usage similarity estimation addresses the semantic proximity of word instances in different contexts. We apply contextualized (ELMo and BERT) word and sentence embeddings to this task, and propose supervised models that leverage these representations for prediction. Our models are further assisted by lexical substitute annotations automatically assigned to word instances by context2vec, a neural model that relies on a bidirectional LSTM. We perform an extensive comparison of existing word and sentence representations on benchmark datasets addressing both graded and binary similarity. The best performing models outperform previous methods in both settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Traditional word embeddings, like Word2Vec and GloVe, merge different meanings of a word in a single vector representation (Mikolov et al., 2013; Pennington et al., 2014) . These pre-trained embeddings are fixed, and stay the same independently of the context of use. Current contextualized sense representations, like ELMo and BERT, go to the other extreme and model meaning as word usage (Peters et al., 2018; Devlin et al., 2018) . They provide a dynamic representation of word meaning adapted to every new context of use.",
"cite_spans": [
{
"start": 123,
"end": 145,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF21"
},
{
"start": 146,
"end": 170,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF24"
},
{
"start": 390,
"end": 411,
"text": "(Peters et al., 2018;",
"ref_id": "BIBREF25"
},
{
"start": 412,
"end": 432,
"text": "Devlin et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we perform an extensive comparison of existing static and dynamic embeddingbased meaning representation methods on the usage similarity (Usim) task, which involves estimating the semantic proximity of word instances in different contexts (Erk et al., 2009) . Usim differs from a classical Semantic Textual Similarity task (Agirre et al., 2016) by the focus on a particular word in the sentence. We evaluate on this task word and context representations obtained using pre-trained uncontextualized word Figure 1: We use contextualized word representations built from the whole sentence or smaller windows around the target word for usage similarity estimation, combined with automatic substitute annotations.",
"cite_spans": [
{
"start": 252,
"end": 270,
"text": "(Erk et al., 2009)",
"ref_id": "BIBREF5"
},
{
"start": 336,
"end": 357,
"text": "(Agirre et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "embeddings (GloVe) (Pennington et al., 2014) , with and without dimensionality reduction (SIF) (Arora et al., 2017) ; context representations obtained from a bidirectional LSTM (context2vec) (Melamud et al., 2016) ; contextualized word embeddings derived from a LSTM bidirectional language model (ELMo) (Peters et al., 2018) and generated by a Transformer (BERT) (Devlin et al., 2018) ; doc2vec (Le and Mikolov, 2014) and Universal Sentence Encoder representations (Cer et al., 2018) . All these embedding-based methods provide direct assessments of usage similarity. The best representations are used as features in supervised models for Usim prediction, trained on similarity judgments. We combine direct Usim assessments, made by the embedding-based methods, with a substitutebased Usim approach. Building up on previous work that used manually selected in-context substitutes as a proxy for Usim (Erk et al., 2013; Mc-Carthy et al., 2016) , we propose to automatize the annotation collection step in order to scale up the method and make it operational on unrestricted text. We exploit annotations assigned to words in context by the context2vec lexical substitution model, which relies on word and context representations learned by a bidirectional LSTM from a large corpus (Melamud et al., 2016) .",
"cite_spans": [
{
"start": 19,
"end": 44,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF24"
},
{
"start": 95,
"end": 115,
"text": "(Arora et al., 2017)",
"ref_id": "BIBREF1"
},
{
"start": 191,
"end": 213,
"text": "(Melamud et al., 2016)",
"ref_id": "BIBREF19"
},
{
"start": 303,
"end": 324,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF25"
},
{
"start": 363,
"end": 384,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 395,
"end": 417,
"text": "(Le and Mikolov, 2014)",
"ref_id": "BIBREF15"
},
{
"start": 465,
"end": 483,
"text": "(Cer et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 900,
"end": 918,
"text": "(Erk et al., 2013;",
"ref_id": "BIBREF6"
},
{
"start": 919,
"end": 942,
"text": "Mc-Carthy et al., 2016)",
"ref_id": null
},
{
"start": 1279,
"end": 1301,
"text": "(Melamud et al., 2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main contributions of this paper can be summarized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 we provide a direct comparison of a wide range of word and sentence representation methods on the Usage Similarity (Usim) task and show that current contextualized representations can successfully predict Usim;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 we propose to automatize, and scale up, previous substitute-based Usim prediction methods;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 we propose supervised models for Usim prediction which integrate embedding and lexical substitution features;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 we propose a methodology for collecting new training data for supervised Usim prediction from datasets annotated for related tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We test our models on benchmark datasets containing gold graded and binary word Usim judgments (Erk et al., 2013; Pilehvar and Camacho-Collados, 2019) . From the compared embeddingbased approaches, the BERT model gives best results on both types of data, providing a straightforward way for word usage similarity calculation. Our supervised model performs on par with BERT on the graded and binary Usim tasks, when using embedding-based representations and clean lexical substitutes.",
"cite_spans": [
{
"start": 95,
"end": 113,
"text": "(Erk et al., 2013;",
"ref_id": "BIBREF6"
},
{
"start": 114,
"end": 150,
"text": "Pilehvar and Camacho-Collados, 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Usage similarity is a means for representing word meaning which involves assessing in-context semantic similarity, rather than mapping to word senses from external inventories (Erk et al., 2009 (Erk et al., , 2013 . This methodology followed from the gradual shift from word sense disambiguation models that would select the best sense in context from a dictionary, to models that reason about meaning by solely relying on distributional similarity (Erk and Pad\u00f3, 2008; Mitchell and Lapata, 2008) , or allow multiple sense interpretations (Jurgens, 2014) . In Erk et al. (2009) , the idea is to model meaning in context in a way that captures different degrees of similarity to a word sense, or between word instances.",
"cite_spans": [
{
"start": 176,
"end": 193,
"text": "(Erk et al., 2009",
"ref_id": "BIBREF5"
},
{
"start": 194,
"end": 213,
"text": "(Erk et al., , 2013",
"ref_id": "BIBREF6"
},
{
"start": 449,
"end": 469,
"text": "(Erk and Pad\u00f3, 2008;",
"ref_id": "BIBREF7"
},
{
"start": 470,
"end": 496,
"text": "Mitchell and Lapata, 2008)",
"ref_id": "BIBREF22"
},
{
"start": 539,
"end": 554,
"text": "(Jurgens, 2014)",
"ref_id": "BIBREF11"
},
{
"start": 560,
"end": 577,
"text": "Erk et al. (2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Due to its high reliance on context, Usim can be viewed as a semantic textual similarity (STS) (Agirre et al., 2016) task with a focus on a specific word instance. This connection motivated us to apply methods initially proposed for sentence similarity to Usim prediction. More precisely, we build sentence representations using different types of word and sentence embeddings, ranging from the classical word-averaging approach with traditional word embeddings (Pennington et al., 2014) , to more recent contextualized word representations (Peters et al., 2018; Devlin et al., 2018) . We explore the contribution of each separate method for Usim prediction, and use the best performing ones as features in supervised models. These are trained on sentence pairs labelled with Usim judgments (Erk et al., 2009) to predict the similarity of new word instances.",
"cite_spans": [
{
"start": 95,
"end": 116,
"text": "(Agirre et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 462,
"end": 487,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF24"
},
{
"start": 541,
"end": 562,
"text": "(Peters et al., 2018;",
"ref_id": "BIBREF25"
},
{
"start": 563,
"end": 583,
"text": "Devlin et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 791,
"end": 809,
"text": "(Erk et al., 2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Previous attempts to automatic Usim prediction involved obtaining vectors encoding a distribution of topics for every target word in context (Lui et al., 2012) . In this work, Usim was approximated by the cosine similarity of the resulting topic vectors. We show how contextualized representations, and the supervised model that uses them as features, outperform topic-based methods on the graded Usim task.",
"cite_spans": [
{
"start": 141,
"end": 159,
"text": "(Lui et al., 2012)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We combine the embedding-based direct Usim assessment methods with substitute-based representations obtained using an unsupervised lexical substitution model. McCarthy et al. (2016) showed it is possible to model usage similarity using manual substitute annotations for words in context. In this setting, the set of substitutes proposed for a word instance describe its specific meaning, while similarity of substitute annotations for different instances points to their semantic proximity. 1 We follow up on this work and propose a way to use substitutes for Usim prediction on unrestricted text, bypassing the need for manual annotations. Our method relies on substitute annotations proposed by the context2vec model (Melamud et al., 2016) , which uses word and context representations learned by a bidirectional LSTM from a large corpus (UkWac) Baroni et al. (2009) .",
"cite_spans": [
{
"start": 159,
"end": 181,
"text": "McCarthy et al. (2016)",
"ref_id": "BIBREF17"
},
{
"start": 491,
"end": 492,
"text": "1",
"ref_id": null
},
{
"start": 719,
"end": 741,
"text": "(Melamud et al., 2016)",
"ref_id": "BIBREF19"
},
{
"start": 848,
"end": 868,
"text": "Baroni et al. (2009)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The local papers took photographs of the footprint. (Erk et al., 2013) for the nouns paper (Usim score = 4.34) and coach.n (Usim score = 1.5), with the substitutes assigned by the annotators (GOLD). For comparison, we give the substitutes selected for these instances by the automatic substitution method (context2vec) used in our experiments from two different pools of substitutes (AUTO-LSCNC and PPDB). More details on the automatic substitution configurations are given in Section 4.2.",
"cite_spans": [
{
"start": 52,
"end": 70,
"text": "(Erk et al., 2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Substitutes",
"sec_num": null
},
{
"text": "3 Data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Substitutes",
"sec_num": null
},
{
"text": "We use the training and test datasets of the SemEval-2007 Lexical Substitution (LexSub) task (McCarthy and Navigli, 2007) , which contain instances of target words in sentential context handlabelled with meaning-preserving substitutes. A subset of the LexSub data (10 instances x 56 lemmas) has additionally been annotated with graded pairwise Usim judgments (Erk et al., 2013) . Each sentence pair received a rating (on a scale of 1-5) by multiple annotators, and the average judgment for each pair was retained. McCarthy et al. (2016) derive two additional scores from Usim annotations that denote how easy it is to partition a lemma's usages into sets describing distinct senses: Uiaa, the inter-annotator agreement for a given lemma, taken as the average pairwise Spearman's \u03c1 correlation between ranked judgments of the annotators; and Umid, the proportion of midrange judgments over all instances for a lemma and all annotators.",
"cite_spans": [
{
"start": 93,
"end": 121,
"text": "(McCarthy and Navigli, 2007)",
"ref_id": "BIBREF18"
},
{
"start": 359,
"end": 377,
"text": "(Erk et al., 2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The LexSub and Usim Datasets",
"sec_num": "3.1"
},
{
"text": "In our experiments, we use 2,466 sentence pairs from the Usim data for training, development and testing of different automatic Usim prediction methods. Our models rely on substitutes automatically assigned to words in context using context2vec (Melamud et al., 2016) , and on various word and sentence embedding representa-tions. We also train a model using the gold substitutes, to test how well our models perform when substitute quality is high. Performance of the different models is evaluated by measuring how well they approximate the Usim scores assigned by annotators. Table 1 shows examples of sentence pairs from the Usim dataset (Erk et al., 2013) with the GOLD substitutes and Usim scores assigned by the annotators. The Usim score is high for similar instances, and decreases for instances that describe different meanings. The semantic proximity of two instances is also reflected in the similarity of their substitutes sets. For comparison, we also give in the Table the substitutes selected for these instances by the automatic context2vec substitution method used in our experiments (more details in Section 4.2).",
"cite_spans": [
{
"start": 245,
"end": 267,
"text": "(Melamud et al., 2016)",
"ref_id": "BIBREF19"
},
{
"start": 641,
"end": 659,
"text": "(Erk et al., 2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 578,
"end": 585,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "The LexSub and Usim Datasets",
"sec_num": "3.1"
},
{
"text": "Given the small size of the Usim dataset, we extract additional training data for our models from the Concepts in Context (CoInCo) corpus (Kremer et al., 2014) , a subset of the MASC corpus (Ide et al., 2008) . CoInCo contains manually selected substitutes for all content words in a sentence, but provides no usage similarity scores that could be used for training. We construct our supplementary training data as follows: we gather all instances of a target word in the corpus with at least four substitutes, and keep pairs with (1) no overlap in substitutes, and (2) minimum 75% substitute overlap. 2 We view the first set of pairs as examples of completely different usages of a word (DIFF), and the second set as examples of identical usages (SAME). The two sets are unbalanced in terms of number of instance pairs (19,060 vs. 2,556) . We balance them by keeping in DIFF the 2,556 pairs with the highest number of substitutes.",
"cite_spans": [
{
"start": 138,
"end": 159,
"text": "(Kremer et al., 2014)",
"ref_id": "BIBREF13"
},
{
"start": 190,
"end": 208,
"text": "(Ide et al., 2008)",
"ref_id": "BIBREF10"
},
{
"start": 602,
"end": 603,
"text": "2",
"ref_id": null
},
{
"start": 820,
"end": 838,
"text": "(19,060 vs. 2,556)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Concepts in Context Corpus",
"sec_num": "3.2"
},
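The DIFF/SAME pairing criterion described above can be sketched as follows. This is a minimal illustration, not the authors' code; in particular, the denominator used for the overlap ratio (the smaller substitute set) is an assumption, since the paper does not specify it.

```python
def substitute_overlap(subs_a, subs_b):
    """Proportion of shared substitutes, relative to the smaller set (assumed)."""
    a, b = set(subs_a), set(subs_b)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

def label_pair(subs_a, subs_b, min_subs=4, same_threshold=0.75):
    """Label an instance pair following the CoInCo pairing criteria:
    both instances need at least `min_subs` substitutes; no overlap -> DIFF,
    overlap of at least 75% -> SAME; anything in between is discarded (None)."""
    if len(set(subs_a)) < min_subs or len(set(subs_b)) < min_subs:
        return None
    ov = substitute_overlap(subs_a, subs_b)
    if ov == 0.0:
        return "DIFF"
    if ov >= same_threshold:
        return "SAME"
    return None
```

Balancing the two classes (keeping in DIFF only the pairs with the most substitutes) would then be a simple sort over the retained DIFF pairs.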
{
"text": "We also annotate the data with substitutes using context2vec (Melamud et al., 2016) , as described in Section 4.2. We apply an additional filtering to the sentence pairs extracted from CoInCo, discarding instances of words that are not in the con-text2vec vocabulary and have no embeddings. We are left with 2,513 pairs in each class (5,026 in total). We use 80% of these pairs (4,020) together with the Usim data to train our supervised Usim models described in Section 4.3. 3",
"cite_spans": [
{
"start": 61,
"end": 83,
"text": "(Melamud et al., 2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Concepts in Context Corpus",
"sec_num": "3.2"
},
{
"text": "The third dataset we use in our experiments is the recently released Word-in-Context (WiC) dataset (Pilehvar and Camacho-Collados, 2019), version 0.1. WiC provides pairs of contextualized target word instances describing the same or different meaning, framing in-context sense identification as a binary classification task. For example, a sentence pair for the noun stream is: ['Stream of consciousness' -'Two streams of development run through American history']. A system is expected to be able to identify that stream does not have the same meaning in the two sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Word-in-Context dataset",
"sec_num": "3.3"
},
{
"text": "WiC sentences were extracted from example usages in WordNet (Fellbaum, 1998), VerbNet (Schuler, 2006) , and Wiktionary. Instance pairs were automatically labeled as positive (T) or negative (F) (corresponding to the same/different sense) using information in the lexicographic resources, such as presence in the same or different synsets. Each word is represented by at most three instances in WiC, and repeated sentences are excluded. It is important to note that meanings represented in the WiC dataset are coarser-grained than WordNet senses. This was ensured by excluding WordNet synsets describing highly sim-ilar meanings (sister senses, and senses belonging to the same supersense). The human-level performance upper-bound on this binary task, as measured on two 100-sentence samples, is 80.5%. Inter-annotator agreement is also high, at 79%. The dataset comes with an official train/dev/test split containing 7,618, 702 and 1,366 sentence pairs, respectively. 4",
"cite_spans": [
{
"start": 86,
"end": 101,
"text": "(Schuler, 2006)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Word-in-Context dataset",
"sec_num": "3.3"
},
{
"text": "We experiment with two ways of predicting usage similarity: an unsupervised approach which relies on the cosine similarity of different kinds of word and sentence representations, and provides direct Usim assessments; and supervised models that combine embedding similarity with features based on substitute overlap. We present the direct Usim prediction methods in Section 4.1. In Section 4.2, we describe how substitute-based features were extracted, and in Section 4.3, we introduce the supervised Usim models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4"
},
{
"text": "In the unsupervised Usim prediction setting, we apply different types of pre-trained word and sentence embeddings as follows: we compute an embedding for every sentence in the Usim dataset, and calculate the pairwise cosine similarity between the sentences available for a target word. Then, for every embedding type, we measure the correlation between sentence similarities and gold usage similarity judgments in the Usim dataset, using Spearman's \u03c1 correlation coefficient. We experiment with the following embedding types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Usage Similarity Prediction",
"sec_num": "4.1"
},
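The evaluation loop just described can be sketched as follows. Function names are hypothetical, and Spearman's \u03c1 is computed as a Pearson correlation over ranks without tie handling, which suffices for illustration.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def spearman_rho(x, y):
    """Spearman's rho as the Pearson correlation of rank vectors (ties ignored)."""
    def ranks(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(len(v))
        return r
    return float(np.corrcoef(ranks(np.asarray(x)), ranks(np.asarray(y)))[0, 1])

def direct_usim_eval(embedding_pairs, gold_scores):
    """Correlate pairwise cosine similarities of sentence representations
    with gold Usim judgments for the same sentence pairs."""
    predicted = [cosine(a, b) for a, b in embedding_pairs]
    return spearman_rho(predicted, gold_scores)
```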
{
"text": "GloVe embeddings are uncontextualized word representations which merge all senses of a word in one vector (Pennington et al., 2014) . We use 300-dimensional GloVe embeddings pre-trained on Common Crawl (840B tokens). 5 The representation of a sentence is obtained by averaging the GloVe embeddings of the words in the sentence. SIF (Smooth Inverse Frequency) embeddings are sentence representations built by applying dimensionality reduction to a weighted average of uncontextualized embeddings of words in a sentence (Arora et al., 2017) . We use SIF in combination with GloVe vectors.",
"cite_spans": [
{
"start": 106,
"end": 131,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF24"
},
{
"start": 217,
"end": 218,
"text": "5",
"ref_id": null
},
{
"start": 518,
"end": 538,
"text": "(Arora et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Usage Similarity Prediction",
"sec_num": "4.1"
},
{
"text": "Context2vec embeddings (Melamud et al., 2016 ). The context2vec model learns embeddings for words and their sentential contexts simultaneously. The resulting representations reflect: a) the similarity between potential fillers of a sentence with a blank slot, and b) the similarity of contexts that can be filled with the same word. We use a context2vec model pre-trained on the UkWac corpus (Baroni et al., 2009) 6 to compute embeddings for sentences with a blank at the target word's position.",
"cite_spans": [
{
"start": 23,
"end": 44,
"text": "(Melamud et al., 2016",
"ref_id": "BIBREF19"
},
{
"start": 392,
"end": 415,
"text": "(Baroni et al., 2009) 6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Usage Similarity Prediction",
"sec_num": "4.1"
},
{
"text": "ELMo (Embeddings from Language Models) representations are contextualized word embeddings derived from the internal states of an LSTM bidirectional language model (biLM) (Peters et al., 2018) . In our experiments, we use a pre-trained 512-dimensional biLM. 7 Typically, the best linear combination of the layer representations for a word is learned for each end task in a supervised manner. Here, we use out-of-the-box embeddings (without tuning) and experiment with the top layer, and with the average of the three hidden layers. We represent a sentence in two ways: by the contextualized ELMo embedding obtained for the target word, and by the average of ELMo embeddings for all words in a sentence.",
"cite_spans": [
{
"start": 170,
"end": 191,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Usage Similarity Prediction",
"sec_num": "4.1"
},
{
"text": "BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2018) . BERT representations are generated by a 12-layer bidirectional Transformer encoder that jointly conditions on both left and right context in all layers. 8 BERT can be fine-tuned to specific end tasks, or its contextualized word representations can be used directly in applications, similar to ELMo. We try different layer combinations and create sentence representations, in the same way as for ELMo: using either the BERT embedding of the target word, or the average of the BERT embeddings for all words in a sentence.",
"cite_spans": [
{
"start": 63,
"end": 84,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Usage Similarity Prediction",
"sec_num": "4.1"
},
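The two sentence-representation strategies used for ELMo and BERT (target-word vector vs. average over all words, after combining the selected layers) can be sketched with plain arrays standing in for the model outputs; the function name and array layout are assumptions for illustration only.

```python
import numpy as np

def sentence_representation(layer_outputs, target_idx=None, layers=(-1,)):
    """layer_outputs: array of shape (n_layers, n_tokens, dim) holding the
    per-layer contextualized vectors of one sentence (stand-in for ELMo/BERT
    output). The selected layers are averaged; the sentence is then
    represented either by the target token's vector (if target_idx is given)
    or by the mean over all tokens."""
    combined = np.mean(layer_outputs[list(layers)], axis=0)  # (n_tokens, dim)
    if target_idx is not None:
        return combined[target_idx]
    return combined.mean(axis=0)
```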
{
"text": "Universal Sentence Encoder (USE) makes use of a Deep Averaging Network (DAN) encoder trained to create sentence representations by means of multi-task learning (Cer et al., 2018) . USE has been shown to improve performance on different NLP tasks using transfer learning. 9",
"cite_spans": [
{
"start": 160,
"end": 178,
"text": "(Cer et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Usage Similarity Prediction",
"sec_num": "4.1"
},
{
"text": "doc2vec is an extension of word2vec to the sentence, paragraph or document level (Le and Mikolov, 2014) . One of its forms, dbow (distributed bag of words), is based on the skip-gram model, where it adds a new feature vector representing a document. We use a dbow model trained on English Wikipedia released by Lau and Baldwin (2016). 10 We test the above models with representations built from the whole sentence, and using a smaller context window (cw) around the target word. Sentences in the WiC dataset are quite short (7.9 \u00b1 3.9 words), but the length of sentences in the Usim and CoInCo datasets varies a lot (27.4 \u00b1 13.2 and 18.8 \u00b1 10.2, respectively). We want to check whether information surrounding the target word in the sentence is more relevant, and sufficient for Usim estimation. We focus on the words in a context window of \u00b1 2, 3, 4 or 5 words at each side of a target word. Then, we collect their word embeddings to be averaged (for GloVe, ELMo and BERT), or derive an embedding from this specific window instead of the whole sentence (for USE).",
"cite_spans": [
{
"start": 81,
"end": 103,
"text": "(Le and Mikolov, 2014)",
"ref_id": "BIBREF15"
},
{
"start": 335,
"end": 337,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Usage Similarity Prediction",
"sec_num": "4.1"
},
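The context-window extraction described above can be sketched as follows (a simple illustration; the selected tokens would then be embedded and averaged, or fed to USE in place of the full sentence):

```python
def context_window(tokens, target_idx, cw):
    """Tokens within +/- cw positions of the target (target included),
    clipped at sentence boundaries."""
    start = max(0, target_idx - cw)
    end = min(len(tokens), target_idx + cw + 1)
    return tokens[start:end]
```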
{
"text": "We approximate Usim by measuring the cosine similarity of the resulting context representations. We compare the performance of these direct assessment methods on the Usim dataset and report the results in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Usage Similarity Prediction",
"sec_num": "4.1"
},
{
"text": "Following up on McCarthy et al.'s (2016) sense clusterability work, we also experiment with a substitute-based approach for Usim prediction. McCarthy et al. showed that manually selected substitutes for word instances in context can be used as a proxy for Usim. Here, we propose an approach to obtain these annotations automatically that can be applied to the whole vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Substitute-based Feature Extraction",
"sec_num": "4.2"
},
{
"text": "We generate rankings of candidate substitutes for words in context using the context2vec method (Melamud et al., 2016) . The original method selects and ranks substitutes from the whole vocabulary. To facilitate comparison and evaluation, we use the following pools of candidates: (a) all substitutes that were proposed for a word in the LexSub and CoInCo annotations (we call this substitute pool AUTO-LSCNC); (b) the paraphrases of the word in the Paraphrase Database (PPDB) XXL package (Ganitkevitch et al., 2013; Pavlick et al., 2015 ) (AUTO-PPDB). 11 In the WiC experiments, where no substitute annotations are available, we only use PPDB paraphrases (AUTO-PPDB). We obtain a context2vec embedding for a sentence by replacing the target word with a blank. AUTO-LSCNC substitutes are high-quality since they were extracted from the manual LexSub and CoInCo annotations. They are semantically similar to the target, and con-text2vec just needs to rank them according to how well they fit the new context. This is done by measuring the cosine similarity between each substitute's context2vec word embedding and the context embedding obtained for the sentence.",
"cite_spans": [
{
"start": 96,
"end": 118,
"text": "(Melamud et al., 2016)",
"ref_id": "BIBREF19"
},
{
"start": 489,
"end": 516,
"text": "(Ganitkevitch et al., 2013;",
"ref_id": "BIBREF9"
},
{
"start": 517,
"end": 537,
"text": "Pavlick et al., 2015",
"ref_id": "BIBREF23"
},
{
"start": 553,
"end": 555,
"text": "11",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic LexSub",
"sec_num": null
},
{
"text": "The AUTO-PPDB pool contains paraphrases from PPDB XXL, which were automatically extracted from parallel corpora (Ganitkevitch et al., 2013) . Hence, this pool contains noisy paraphrases that should be ranked lower. To this end, we use in this setting the original context2vec scoring formula which also accounts for the similarity between the target word and the substitute:",
"cite_spans": [
{
"start": 112,
"end": 139,
"text": "(Ganitkevitch et al., 2013)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic LexSub",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c2v score = cos(s, t) + 1 2 \u00d7 cos(s, C) + 1 2",
"eq_num": "(1)"
}
],
"section": "Automatic LexSub",
"sec_num": null
},
{
"text": "In formula (1), s and t are the word embeddings of a substitute and the target word, and C is the context2vec vector of the context. Following this procedure, context2vec produces a ranking of candidate substitutes for each target word instance in the Usim, CoInCo and WiC datasets, according to their fit in context. Every candidate is assigned a score, with substitutes that are a good fit in a specific context being higher-ranked than others. For every new target word instance, context2vec ranks all candidate substitutes available for the target in each pool. Consequently, the automatic annotations produced for different instances of the target include the same set of substitutes, but in different order. This does not allow for the use of measures based on substitute overlap, which were shown to be useful for Usim prediction in McCarthy et al. (2016) . In order to use this type of measures, we propose ways to filter the automatically generated rankings, and keep for each instance only substitutes that are a good fit in context.",
"cite_spans": [
{
"start": 840,
"end": 862,
"text": "McCarthy et al. (2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic LexSub",
"sec_num": null
},
{
"text": "11 http://paraphrase.org/ Substitute Filtering We test different filters to discard low quality substitutes from the annotations proposed by context2vec for each instance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic LexSub",
"sec_num": null
},
{
"text": "\u2022 PPDB 2.0 score: Given a ranking R of n substitutes R = [s 1 , s 2 , ..., s n ] proposed by context2vec, we form pairs of substitutes in adjacent positions {s i \u2194 s i+1 }, and check whether they exist as paraphrase pairs in PPDB. We expect substitutes that are paraphrases of each other to be similarly ranked.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic LexSub",
"sec_num": null
},
{
"text": "If s i and s i+1 are not paraphrases in PPDB, we keep all substitutes up to s i and use this as a cut-off point, discarding substitutes present from position s i+1 onwards in the ranking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic LexSub",
"sec_num": null
},
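{
"text": "A minimal sketch of this cut-off rule, assuming the PPDB paraphrase pairs are available as a set of unordered pairs (the function name ppdb_filter and the data layout are our choices):

```python
def ppdb_filter(ranking, ppdb_pairs):
    # ranking: substitutes ordered best-first by context2vec
    # ppdb_pairs: set of frozensets, each a known paraphrase pair
    # Keep substitutes up to the first adjacent pair that is not
    # recorded as a paraphrase pair in PPDB.
    for i in range(len(ranking) - 1):
        if frozenset((ranking[i], ranking[i + 1])) not in ppdb_pairs:
            return ranking[:i + 1]
    return ranking
```

",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic LexSub",
"sec_num": null
},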
{
"text": "\u2022 GloVe word embeddings: We measure the cosine similarity (cosSim) between GloVe embeddings of adjacent substitutes {s i \u2194 s i+1 } in the ranking R obtained for a new instance. We first compare the similarity of the first pair of substitutes (cosSim(s 1 , s 2 )) to a lower bound similarity threshold T. If cosSim(s 1 , s 2 ) exceeds T, we assume that s 1 and s 2 have the same meaning, and use cosSim(s 1 , s 2 ) as a reference similarity value, S, for this instance. The middle point between the two values, M = (T + S)/2, is then used as a threshold to determine whether there is a shift in meaning in subsequent pairs. If cosSim(s i , s i+1 ) < M , for i > 1, then only the higher ranked substitute (s i ) is retained and all subsequent substitutes in the ranking are discarded. The intuition behind this calculation is that if cosSim is much lower than the reference S (even if it exceeds T ), substitutes possibly have different senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic LexSub",
"sec_num": null
},
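{
"text": "The thresholding scheme can be sketched as follows (a simplified illustration; the function name glove_filter and the behaviour when the very first pair does not exceed T are our assumptions):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def glove_filter(ranking, vectors, T=0.2):
    # ranking: substitutes ordered best-first; vectors: word -> GloVe vector
    S = cosine(vectors[ranking[0]], vectors[ranking[1]])
    if S <= T:
        return ranking[:1]  # assumed fallback: keep only the top substitute
    M = (T + S) / 2         # threshold for detecting a meaning shift
    for i in range(1, len(ranking) - 1):
        if cosine(vectors[ranking[i]], vectors[ranking[i + 1]]) < M:
            return ranking[:i + 1]
    return ranking
```

",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic LexSub",
"sec_num": null
},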
{
"text": "\u2022 Context2vec score: This filter uses the score assigned by context2vec to each substitute, reflecting how good a fit it is in each context. context2vec scores vary a lot across instances, it is thus not straightforward to choose a threshold. We instead refer to the scores assigned to adjacent pairs of substitutes in the ranking produced for each instance, R = [s 1 , s 2 , ..., s n ]. We view the pair with the biggest difference in scores as the cut-off point, considering it reflects a degradation in substitute fit. We retain only substitutes up to this point.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic LexSub",
"sec_num": null
},
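{
"text": "This largest-gap heuristic can be sketched as follows (the function name score_gap_filter is ours):

```python
def score_gap_filter(ranking, scores):
    # ranking: substitutes ordered best-first by context2vec
    # scores: the corresponding (decreasing) context2vec scores
    gaps = [scores[i] - scores[i + 1] for i in range(len(scores) - 1)]
    cut = gaps.index(max(gaps))  # position of the largest drop
    return ranking[:cut + 1]
```

",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic LexSub",
"sec_num": null
},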
{
"text": "\u2022 Highest-ranked X substitutes. We also test two simple baselines, which consist in keep-ing the 5 and 10 highest-ranked substitutes for each instance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic LexSub",
"sec_num": null
},
{
"text": "We test the efficiency of each filter on the portion of the LexSub dataset (McCarthy and Navigli, 2007 ) that was not annotated for Usim. We compare the substitutes retained for each instance after filtering to its gold LexSub susbtitutes using the F1-score, and the proportion of false positives out of all positives. Filtering results are reported in Appendix A. The best filters were GloVe word embeddings (T = 0.2) for AUTO-LSCNC, and the PPDB filter for AUTO-PPDB.",
"cite_spans": [
{
"start": 75,
"end": 102,
"text": "(McCarthy and Navigli, 2007",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic LexSub",
"sec_num": null
},
{
"text": "Feature Extraction After annotating the Usim sentences with context2vec and filtering, we extract, for each sentence pair (S 1 , S 2 ), a set of features related to the amount of substitute overlap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic LexSub",
"sec_num": null
},
{
"text": "\u2022 Common substitutes. The proportion of shared substitutes between two sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic LexSub",
"sec_num": null
},
{
"text": "\u2022 GAP score. The average of the Generalized Average Precision (GAP) score (Kishida, 2005) taken in both directions (GAP (S 1 , S 2 ) and GAP (S 2 , S 1 )). GAP is a measure that compares two rankings considering not only the order of the ranked elements but also their weights. It ranges from 0 to 1, where 0 means that rankings are completely different and 1 indicates perfect agreement. We use the frequency in the manual Usim annotations (i.e. the number of annotators who proposed each substitute) as the weight for gold substitutes, and the context2vec score for automatic substitutes. We use the GAP implementation from Melamud et al. (2015) .",
"cite_spans": [
{
"start": 74,
"end": 89,
"text": "(Kishida, 2005)",
"ref_id": "BIBREF12"
},
{
"start": 626,
"end": 647,
"text": "Melamud et al. (2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic LexSub",
"sec_num": null
},
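{
"text": "For reference, a simplified sketch of the GAP computation in one direction, following our reading of Kishida (2005) rather than the exact Melamud et al. (2015) implementation used in the paper:

```python
def gap(system_ranking, gold_weights):
    # system_ranking: candidate substitutes ordered best-first
    # gold_weights: dict mapping gold substitutes to positive weights
    # (here, the number of annotators who proposed each substitute)
    def cumulative_averages(weights):
        out, total = [], 0.0
        for i, w in enumerate(weights, 1):
            total += w
            out.append(total / i)
        return out

    sys_w = [gold_weights.get(c, 0.0) for c in system_ranking]
    num = sum(p for w, p in zip(sys_w, cumulative_averages(sys_w)) if w > 0)
    # the denominator is the same sum over the ideal (gold) ranking
    ideal = sorted(gold_weights.values(), reverse=True)
    den = sum(p for w, p in zip(ideal, cumulative_averages(ideal)) if w > 0)
    return num / den
```

A ranking that orders the gold substitutes by decreasing weight obtains a GAP of 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic LexSub",
"sec_num": null
},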
{
"text": "\u2022 Substitute cosine similarity. We form substitute pairs (S 1 \u2194 S 2 ) and calculate the average of their GloVe cosine similarities. This feature shows the semantic similarity of substitutes, even when overlap is low.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic LexSub",
"sec_num": null
},
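{
"text": "The overlap and similarity features can be sketched as follows (normalising the shared-substitute proportion by the union of the two sets is our assumption, since the exact denominator is not specified):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def substitute_features(subs1, subs2, vectors):
    # subs1, subs2: substitute sets of the two sentences
    # assumed normalisation: shared substitutes over the union
    overlap = len(subs1 & subs2) / len(subs1 | subs2)
    # average pairwise GloVe cosine across the two substitute sets
    sims = [cosine(vectors[a], vectors[b]) for a in subs1 for b in subs2]
    return overlap, sum(sims) / len(sims)
```

",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic LexSub",
"sec_num": null
},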
{
"text": "We train linear regression models to predict Usim scores for word instances in different contexts using as features the cosine similarity of the different representations in Section 4.1, and the substitutebased features in 4.2. For training, we use the Usim dataset on its own (cf. Section 3.1), and combined with the additional training examples extracted from CoInCo (cf. Section 3.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Usim Prediction",
"sec_num": "4.3"
},
{
"text": "To be able to evaluate the performance of our models separately for each of the 56 target words in the Usim dataset, we train a separate model for each word in a leave-one-out setting. Each time, we use 2,196 pairs for training, 225 for development and 45 for testing. 12 Each model is evaluated on the sentences corresponding to the left out target word. We report results of these experiments in Section 5. The performance of the model with context2vec substitutes from the two substitute pools is compared to that of the model with gold substitute annotations. We replicate the experiments by adding CoInCo data to the Usim training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Usim Prediction",
"sec_num": "4.3"
},
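{
"text": "The leave-one-out protocol can be sketched with a plain least-squares solver (a self-contained illustration on synthetic features, not the actual feature set; tie handling in the rank correlation is omitted):

```python
import numpy as np

def spearman(a, b):
    # Spearman correlation as Pearson on ranks (no tie handling)
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return float(np.corrcoef(ra, rb)[0, 1])

def leave_one_out(features, scores, lemmas):
    # one linear-regression model per held-out target lemma:
    # train on all other lemmas, evaluate on the held-out one
    results = {}
    for lemma in sorted(set(lemmas)):
        test = np.array([l == lemma for l in lemmas])
        X = np.hstack([features[~test], np.ones((np.sum(~test), 1))])
        w, *_ = np.linalg.lstsq(X, scores[~test], rcond=None)
        X_test = np.hstack([features[test], np.ones((np.sum(test), 1))])
        results[lemma] = spearman(X_test @ w, scores[test])
    return results
```

",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Usim Prediction",
"sec_num": "4.3"
},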
{
"text": "To test the contribution of each feature, we perform an ablation study on the 225 Usim sentence pairs of the development set, which cover the full spectrum of Usim scores (from 1 to 5). We report results of the feature ablation in Appendix C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Usim Prediction",
"sec_num": "4.3"
},
{
"text": "We also build a model for the binary Usim task on the WiC dataset (Pilehvar and Camacho-Collados, 2019), using the official train/dev/test split. We train a logistic regression classifier on the training set, and use the development set to select the best among several feature combinations. We report results of the best performing models on the WiC test set in Section 5. For instances in WiC where no PPDB substitutes are available (133 out of 1,366 in the test set) we back off to a model that only relies on the embedding features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Usim Prediction",
"sec_num": "4.3"
},
{
"text": "Direct Usim Prediction Correlation results between Usim judgments and the cosine similarity of the embedding representations described in Section 4.1 are found in Table 2 . Detailed results for all context window combinations are given in Appendix B. We observe that target word BERT embeddings give best performance in this task. Selecting a context window around (or including) the target word does not always help, on the contrary it can harm the models. Context2vec sentence representations are the next best performing representation, after BERT, but their correlation is much lower. The simple GloVe-based SIF approach for sentence representation, which consists in applying dimensionality reduction to a weighted average of GloVe vectors of the words in a sentence, is much superior to the simple average of GloVe vectors and even better than doc2vec sentence representations, obtaining a correlation comparable to Table 2 : Spearman \u03c1 correlation of different sentence and word embeddings on the Usim dataset using different context window sizes (cw). For BERT and ELMo, top refers to the top layer, and av refers to the average of layers (3 for ELMo, and the last 4 for BERT).",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 170,
"text": "Table 2",
"ref_id": null
},
{
"start": 922,
"end": 929,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "Graded Usim To evaluate the performance of our supervised models, we measure the correlation of the predictions with human similarity judgments on the Usim dataset using Spearman's \u03c1. Results reported in Table 3 are the average of the correlations obtained for each target word with gold and automatic substitutes (from the two substitute pools), and for each type of features, substitutebased and embedding-based (cosine similarities from BERT and context2vec). We also report results with the additional CoInCo training data. Unsurprisingly, the best results are obtained by the methods that use the gold substitutes. This is consistent with previous analyses by Erk et al. (2009) who found overlap in manually-proposed substitutes to correlate with Usim judgments. The lower performance of features that rely on automatically selected substitutes (AUTO-LSCNC and AUTO-PPDB) demonstrates the impact of substitute quality on the contribution of this type of features. The addition of CoInCo data does not seem to help the models, as results are slightly lower than in the only Usim setting. This can be due to the fact that CoInCo data contains only extreme cases of similarity (SAME/DIFF) and no in-termediate ratings. The slight improvement in the combined settings over embedding-based models is not significant in AUTO-LSCNC substitutes, but it is for gold substitutes (p < 0.001). 13 For comparison to the topic-modelling approach of Lui et al. (2012) , we evaluate on the 34 lemmas used in their experiments. They report a correlation calculated over all instances. With the exception of the substitute-only setting with PPDB candidates, all of our Usim models get higher correlation than their model (\u03c1 = 0.202), with \u03c1 = 0.512 for the combination of AUTO-LSCNC substitutes and embeddings. The average of the per target word correlation in Lui et al. (2012) (\u03c1 = 0.388) is still lower than that of our AUTO-LSCNC model in the combined setting (\u03c1 = 0.500).",
"cite_spans": [
{
"start": 665,
"end": 682,
"text": "Erk et al. (2009)",
"ref_id": "BIBREF5"
},
{
"start": 1387,
"end": 1389,
"text": "13",
"ref_id": null
},
{
"start": 1440,
"end": 1457,
"text": "Lui et al. (2012)",
"ref_id": "BIBREF16"
},
{
"start": 1848,
"end": 1865,
"text": "Lui et al. (2012)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 204,
"end": 211,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "that of USE.",
"sec_num": null
},
{
"text": "We evaluate the predictions of our binary classifiers by measuring accuracy on the test portion of the WiC dataset. Results for the best configurations for each training set are reported in Table 4 . Experiments on the development set showed that target word BERT representations and USE sentence embeddings are the best-suited for WiC. Therefore, 'embedding-based features' here refers to these two representations. Results on the development set can be found in Appendix D. All configurations obtain higher accuracy than the previous best reported result on this dataset (59.4) (Pilehvar and Camacho-Collados, 2019), obtained using DeConf vectors, which are multi-prototype embeddings based on WordNet knowledge (Pilehvar and Collier, 2016). Similar to the graded Usim experiments, adding substitute-based features to embedding features slightly improves the accuracy of the model. Also, combining the Co-InCo and WiC data for training does not have a clear impact on results, even in this binary classification setting.",
"cite_spans": [],
"ref_spans": [
{
"start": 190,
"end": 197,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Binary Usim",
"sec_num": null
},
{
"text": "Results reported for Usim are the average correlation for each target word, but the strength of the correlation varies greatly for different words for all models and settings. For example, in the case of direct Usim prediction with embeddings using BERT target, Spearman's \u03c1 ranges from 0.805 (for the verb fire) to -0.111 (for the verb suffer). This variation in performance is not surprising, since annotators themselves found some lemmas harder to annotate than others, as reflected in the Usim inter-annotator agreement measure (Uiaa) (McCarthy et al., 2016) . We find that BERT target word embeddings results correlate with Uiaa per target word (\u03c1 = 0.59, p < 0.05), showing that the performance of this model depends to a certain extent on the ease of annotation for each lemma. Uiaa also correlates with the standard deviation of average Usim scores by target word (\u03c1 = 0.66, p < 0.001). Indeed, average Usim values for the word suffer do not exhibit high variance as they only range from 3.6 to 4.9. Within a smaller range of scores, a strong correlation is harder to obtain. The negative correlation between Uiaa and Umid (\u22120.46, p < 0.001) also suggests that words with higher disagreement tend to exhibit a higher proportion of mid-range judgments. We believe that this analysis highlights the difference between usage similarity across target words and encourages a by-lemma approach where the specificities of each lemma are taken into account.",
"cite_spans": [
{
"start": 539,
"end": 562,
"text": "(McCarthy et al., 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We applied a wide range of existing word and context representations to graded and binary usage similarity prediction. We also proposed novel supervised models which use as features the best performing embedding representations, and make high quality predictions especially in the binary setting, outperforming previous approaches. The supervised models include features based on in-context lexical substitutes. We show that automatic substitutions constitute an alternative to manual annotation when combined with the embedding-based features. Nevertheless, if there is no specific reason for using substitutes for measuring Usim, BERT offers a much more straightforward solution to the Usim prediction problem. In future work, we plan to use automatic Usim predictions for estimating word sense partitionability. We believe such knowledge can be useful to determine the appropriate meaning representation for each lemma.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Tables 5 and 6 contain results obtained using the different substitute filters described in Section 4.2. We measure the quality of the substitutes retained in the automatic ranking produced by context2vec after filtering against gold substitute annotations in LexSub data. Here, we only use the portion of LexSub data that does not contain Usim judgments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Filtering experiments",
"sec_num": null
},
{
"text": "We measure filtered substitute quality against the gold standard using the F1-score, and the proportion of false positives (FP) over all positives (TP+FP). Table 5 shows results for annotations assigned by context2vec using the the Lex-Sub/CoInCo pool of substitutes (AUTO-LSCNC). Table 6 shows results for context2vec annotations with the PPDB pool of substitutes (AUTO-PPDB). Table 7 : Correlations of sentence and word embeddings on the Usim dataset using different context window sizes (cw). For BERT and ELMo, top refers to the top layer, and av refers to the average of layers (3 for ELMo, and the last 4 for BERT). concat 4 refers to the concatenation of the last 4 layers of BERT.",
"cite_spans": [],
"ref_spans": [
{
"start": 156,
"end": 163,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 281,
"end": 288,
"text": "Table 6",
"ref_id": "TABREF8"
},
{
"start": 378,
"end": 385,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Filtering experiments",
"sec_num": null
},
{
"text": "Results of feature ablation experiments on the Usim development sets are given in Table 9 . Table 10 : Accuracy of different features and combinations on the WiC development set. On this dataset, the two best types of embeddings, that were chosen for the Embedding-based and Combined configurations, were BERT (target word, average of the last 4 layers) and USE. Both Only-substitutes and Combined use features of automatic substitutes from the PPDB pool, and back off to the Embedding-based model when there were no paraphrases available for the target word in the PPDB.",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 89,
"text": "Table 9",
"ref_id": "TABREF11"
},
{
"start": 92,
"end": 100,
"text": "Table 10",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "C Feature Ablation on Usim",
"sec_num": null
},
{
"text": "McCarthy et al. use the substitute annotations as features for predicting Usim, clustering instances and estimating the partitionability of words into senses. This offers a way to distinguish between lemmas with distinct senses and others with fuzzy semantics, which would be more challenging in annotation tasks and automatic processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Full overlap is rare since annotators propose somewhat different sets of substitutes, even for instances with the same meaning. Full overlap is observed for only 437 of all considered CoInCo pairs (0.3%).3 We will make the dataset available at https:// github.com/ainagari. 20% of the extracted examples were kept aside for development and testing purposes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The test portion of WiC had not been released at the time of submission. We contacted the authors and ran the evaluation on the official test set, to be able to compare to results reported in their paper (Pilehvar and Camacho-Collados, 2019).5 https://nlp.stanford.edu/projects/ glove/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://u.cs.biu.ac.il/\u02dcnlp/resources/ downloads/context2vec/ 7 https://allennlp.org/elmo 8 This is an important difference with the ELMo architecture which concatenates a left-to-right and right-to-left model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://tfhub.dev/google/ universal-sentence-encoder/210 https://github.com/jhlau/doc2vec",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "With the exception of 4 lemmas which had 36 pairs, and one which had 44.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "As determined by paired t-tests, after verifying the normality of the differences with the Shapiro-Wilk test",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their helpful feedback on this work. We would also like to thank Jose Camacho-Collados for his help with the WiC experiments.The work has been supported by the French National Research Agency under project ANR-16-CE33-0013.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "8"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Carmen",
"middle": [],
"last": "Banea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "497--511",
"other_ids": {
"DOI": [
"10.18653/v1/S16-1081"
]
},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In Proceedings of the 10th International Workshop on Semantic Evalua- tion (SemEval-2016), pages 497-511, San Diego, California. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Simple but Tough-to-Beat Baseline for Sentence Embeddings",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Yingyu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Learning Representations (ICLR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A Simple but Tough-to-Beat Baseline for Sentence Embeddings. In International Conference on Learn- ing Representations (ICLR), Toulon, France.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The WaCky wide web: a collection of very large linguistically processed web-crawled corpora",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Bernardini",
"suffix": ""
},
{
"first": "Adriano",
"middle": [],
"last": "Ferraresi",
"suffix": ""
},
{
"first": "Eros",
"middle": [],
"last": "Zanchetta",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Language Resources and Evaluation",
"volume": "43",
"issue": "3",
"pages": "209--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: a collection of very large linguistically processed web-crawled corpora. Journal of Language Re- sources and Evaluation, 43(3):209-226.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Sheng-Yi",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Limtiaco",
"suffix": ""
},
{
"first": "Rhomni",
"middle": [],
"last": "St",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Guajardo-Cespedes",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Tar",
"suffix": ""
},
{
"first": "Ray",
"middle": [],
"last": "Strope",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kurzweil",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "169--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal Sentence Encoder for English. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing: System Demonstrations, pages 169-174, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Investigations on word senses and word usages",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Gaylord",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Erk, Diana McCarthy, and Nicholas Gaylord. 2009. Investigations on word senses and word us- ages. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th Interna- tional Joint Conference on Natural Language Pro- cessing of the AFNLP, pages 10-18, Suntec, Singa- pore. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Measuring word meaning in context",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Gaylord",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "39",
"issue": "",
"pages": "511--554",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Erk, Diana McCarthy, and Nicholas Gaylord. 2013. Measuring word meaning in context. Com- putational Linguistics, 39:511-554.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A structured vector space model for word meaning in context",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "897--906",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Erk and Sebastian Pad\u00f3. 2008. A structured vector space model for word meaning in context. In Proceedings of the 2008 Conference on Empiri- cal Methods in Natural Language Processing, pages 897-906, Honolulu, Hawaii. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "WordNet: An Electronic Lexical Database. Language, Speech, and Communication",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Elec- tronic Lexical Database. Language, Speech, and Communication. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "PPDB: The Paraphrase Database",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "758--764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The Paraphrase Database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 758-764, Atlanta, Georgia. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "MASC: the Manually Annotated Sub-Corpus of American English",
"authors": [
{
"first": "Nancy",
"middle": [],
"last": "Ide",
"suffix": ""
},
{
"first": "Collin",
"middle": [],
"last": "Baker",
"suffix": ""
},
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Fillmore",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Passonneau",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nancy Ide, Collin Baker, Christiane Fellbaum, Charles Fillmore, and Rebecca Passonneau. 2008. MASC: the Manually Annotated Sub-Corpus of American English. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), Marrakech, Morocco. European Lan- guage Resources Association (ELRA).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An analysis of ambiguity in word sense annotations",
"authors": [
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014)",
"volume": "",
"issue": "",
"pages": "3006--3012",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Jurgens. 2014. An analysis of ambiguity in word sense annotations. In Proceedings of the Ninth In- ternational Conference on Language Resources and Evaluation (LREC-2014), pages 3006-3012, Reyk- javik, Iceland. European Language Resources Asso- ciation (ELRA).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Property of average precision and its generalization: An examination of evaluation indicator for information retrieval experiments",
"authors": [
{
"first": "Kazuaki",
"middle": [],
"last": "Kishida",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazuaki Kishida. 2005. Property of average precision and its generalization: An examination of evalua- tion indicator for information retrieval experiments. Technical Report NII-2005-014E, National Institute of Informatics Tokyo, Japan.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "What substitutes tell us -analysis of an \"all-words\" lexical substitution corpus",
"authors": [
{
"first": "Gerhard",
"middle": [],
"last": "Kremer",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Thater",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "540--549",
"other_ids": {
"DOI": [
"10.3115/v1/E14-1057"
]
},
"num": null,
"urls": [],
"raw_text": "Gerhard Kremer, Katrin Erk, Sebastian Pad\u00f3, and Ste- fan Thater. 2014. What substitutes tell us -analy- sis of an \"all-words\" lexical substitution corpus. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Lin- guistics, pages 540-549, Gothenburg, Sweden. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "An empirical evaluation of doc2vec with practical insights into document embedding generation",
"authors": [
{
"first": "Jey Han",
"middle": [],
"last": "Lau",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "78--86",
"other_ids": {
"DOI": [
"10.18653/v1/W16-1609"
]
},
"num": null,
"urls": [],
"raw_text": "Jey Han Lau and Timothy Baldwin. 2016. An empiri- cal evaluation of doc2vec with practical insights into document embedding generation. In Proceedings of the 1st Workshop on Representation Learning for NLP, pages 78-86, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 31st International conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1188--1196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed repre- sentations of sentences and documents. In Proceed- ings of the 31st International conference on Machine Learning, pages 1188-1196, Beijing, China.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Unsupervised estimation of word usage similarity",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Lui",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Australasian Language Technology Association Workshop",
"volume": "",
"issue": "",
"pages": "33--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Lui, Timothy Baldwin, and Diana McCarthy. 2012. Unsupervised estimation of word usage simi- larity. In Proceedings of the Australasian Language Technology Association Workshop 2012, pages 33- 41, Dunedin, New Zealand.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Word sense clustering and clusterability",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Marianna",
"middle": [],
"last": "Apidianaki",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
}
],
"year": 2016,
"venue": "Computational Linguistics",
"volume": "42",
"issue": "2",
"pages": "245--275",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00247"
]
},
"num": null,
"urls": [],
"raw_text": "Diana McCarthy, Marianna Apidianaki, and Katrin Erk. 2016. Word sense clustering and clusterability. Computational Linguistics, 42(2):245-275.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semeval-2007 task 10: English lexical substitution task",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)",
"volume": "",
"issue": "",
"pages": "48--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana McCarthy and Roberto Navigli. 2007. Semeval- 2007 task 10: English lexical substitution task. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 48-53, Prague, Czech Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "context2vec: Learning Generic Context Embedding with Bidirectional LSTM",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Melamud",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "51--61",
"other_ids": {
"DOI": [
"10.18653/v1/K16-1006"
]
},
"num": null,
"urls": [],
"raw_text": "Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning Generic Context Em- bedding with Bidirectional LSTM. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 51-61, Berlin, Germany. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A simple word embedding model for lexical substitution",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Melamud",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Melamud, Omer Levy, and Ido Dagan. 2015. A simple word embedding model for lexical substitu- tion. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 1-7, Denver, Colorado.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. In Proceedings of the Inter- national Conference on Learning Representations, Scottsdale, Arizona.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Vector-based models of semantic composition",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "236--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL-08: HLT, pages 236-244, Columbus, Ohio. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "PPDB 2.0: Better paraphrase ranking, finegrained entailment relations, word embeddings, and style classification",
"authors": [
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Pushpendre",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "425--430",
"other_ids": {
"DOI": [
"10.3115/v1/P15-2070"
]
},
"num": null,
"urls": [],
"raw_text": "Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better paraphrase ranking, fine- grained entailment relations, word embeddings, and style classification . In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics and the 7th International Joint Confer- ence on Natural Language Processing (Volume 2: Short Papers), pages 425-430, Beijing, China. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "WiC: 10, 000 Example Pairs for Evaluating Context-Sensitive Representations",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
}
],
"year": 2019,
"venue": "Accepted at the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Taher Pilehvar and Jos\u00e9 Camacho- Collados. 2019. WiC: 10, 000 Example Pairs for Evaluating Context-Sensitive Representations. Ac- cepted at the 2019 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "De-conflated semantic representations",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1680--1690",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1174"
]
},
"num": null,
"urls": [],
"raw_text": "Mohammad Taher Pilehvar and Nigel Collier. 2016. De-conflated semantic representations. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1680-1690, Austin, Texas. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "VerbNet: A Broad-Coverage, Comprehensive Verb Lexicon",
"authors": [
{
"first": "Karin Kipper",
"middle": [],
"last": "Schuler",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karin Kipper Schuler. 2006. VerbNet: A Broad- Coverage, Comprehensive Verb Lexicon. Ph.D. the- sis, University of Pennsylvania.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table/>",
"text": "Eleven CIRA members [have been [convicted of criminal charges and others are] awaiting trial]. So what started out as a perfectly lovely s as [a public wrestling match between g baby girl], with baby girl loving very s So what started out as a perfectly lovely stroll ended [up as [a public wrestling match between grandma and] baby girl], with baby girl loving very second of it . grass clippings can be brought out to the landfill at anytime for no *charge* and may not be placed in city cans . the tag consists of a tiny chip , [about the [size of a match head that serves ] as a ] portable database . this is at least 26 weeks by the [week in [which the approved match with the child ] is made ]. grass clippings can be brought out to the [landfill at [anytime for no charge and may not ] be placed in city cans ]",
"num": null,
"type_str": "table",
"html": null
},
"TABREF2": {
"content": "<table/>",
"text": "Example pairs of highly similar and dissimilar usages from the Usim dataset",
"num": null,
"type_str": "table",
"html": null
},
"TABREF5": {
"content": "<table><tr><td>Training set</td><td>Features</td></tr></table>",
"text": "Graded Usim results: Spearman's \u03c1 correlation results between supervised model predictions and graded annotations on the Usim test set. The first column reports results obtained using gold substitute annotations for each target word instance. The last two columns give results with automatic substitutes selected among all substitutes proposed for the word in the LexSub and CoInCo datasets (AUTO-LSCNC), or paraphrases in the PPDB XXL package (AUTO-PPDB). The Embedding-based configuration uses cosine similarities from BERT and context2vec.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF6": {
"content": "<table/>",
"text": "Binary Usim results: Accuracy of models on the WiC test set. The Embedding-based configuration includes cosine similarities of BERT target and USE. The Combined setting uses, in addition, substitute overlap features (AUTO-PPDB).",
"num": null,
"type_str": "table",
"html": null
},
"TABREF7": {
"content": "<table><tr><td colspan=\"3\">: Results of different substitute filtering strate-</td></tr><tr><td colspan=\"3\">gies applied to annotations assigned by context2vec</td></tr><tr><td colspan=\"3\">when using the LexSub/CoInCo pool of substitutes</td></tr><tr><td>(AUTO-LSCNC).</td><td/><td/></tr><tr><td>Filter</td><td>F1</td><td>F P/(T P + F P )</td></tr><tr><td>Highest 10</td><td>0.245</td><td>0.838</td></tr><tr><td>Highest 5</td><td>0.290</td><td>0.766</td></tr><tr><td>PPDB</td><td>0.268</td><td>0.731</td></tr><tr><td colspan=\"2\">GloVe (T = 0.1) 0.266</td><td>0.778</td></tr><tr><td colspan=\"2\">GloVe (T = 0.2) 0.268</td><td>0.769</td></tr><tr><td colspan=\"2\">GloVe (T = 0.3) 0.266</td><td>0.750</td></tr><tr><td>c2v score</td><td>0.250</td><td>0.675</td></tr><tr><td>No filter</td><td>0.142</td><td>0.920</td></tr></table>",
"text": "",
"num": null,
"type_str": "table",
"html": null
},
"TABREF8": {
"content": "<table><tr><td>B Direct Usage Similarity Estimation</td></tr><tr><td>Correlations between gold Usim scores for all</td></tr><tr><td>words and cosine similarities of different embed-</td></tr></table>",
"text": "Results of different substitute filtering strategies applied to annotations assigned by context2vec when using the PPDB pool of substitutes (AUTO-PPDB).",
"num": null,
"type_str": "table",
"html": null
},
"TABREF9": {
"content": "<table><tr><td>Context</td><td colspan=\"2\">Embeddings Correlation</td></tr><tr><td/><td>ELMo top</td><td>0.289</td></tr><tr><td>cw=2</td><td>ELMo av BERT av 4</td><td>0.280 0.344</td></tr><tr><td/><td>GloVe</td><td>0.140</td></tr><tr><td/><td>ELMo top</td><td>0.282</td></tr><tr><td>cw=3</td><td>ELMo av BERT av 4</td><td>0.279 0.339</td></tr><tr><td/><td>GloVe</td><td>0.163</td></tr><tr><td/><td>ELMo top</td><td>0.270</td></tr><tr><td>cw=4</td><td>ELMo av BERT av 4</td><td>0.263 0.311</td></tr><tr><td/><td>GloVe</td><td>0.160</td></tr><tr><td/><td>ELMo top</td><td>0.266</td></tr><tr><td>cw=5</td><td>ELMo av BERT av 4</td><td>0.263 0.309</td></tr><tr><td/><td>GloVe</td><td>0.162</td></tr><tr><td/><td>ELMo av</td><td>0.284</td></tr><tr><td/><td>ELMo top</td><td>0.278</td></tr><tr><td>cw=2 (incl. target)</td><td>BERT av 4</td><td>0.416</td></tr><tr><td/><td>GloVe</td><td>0.159</td></tr><tr><td/><td>USE</td><td>0.146</td></tr><tr><td/><td>ELMo av</td><td>0.280</td></tr><tr><td/><td>ELMo top</td><td>0.273</td></tr><tr><td>cw=3 (incl. target)</td><td>BERT av 4</td><td>0.395</td></tr><tr><td/><td>GloVe</td><td>0.180</td></tr><tr><td/><td>USE</td><td>0.184</td></tr><tr><td/><td>ELMo av</td><td>0.267</td></tr><tr><td/><td>ELMo top</td><td>0.265</td></tr><tr><td>cw=4 (incl. target)</td><td>BERT av 4</td><td>0.365</td></tr><tr><td/><td>GloVe</td><td>0.176</td></tr><tr><td/><td>USE</td><td>0.191</td></tr><tr><td/><td>ELMo av</td><td>0.266</td></tr><tr><td/><td>ELMo top</td><td>0.263</td></tr><tr><td>cw=5 (incl. target)</td><td>BERT av 4</td><td>0.359</td></tr><tr><td/><td>GloVe</td><td>0.175</td></tr><tr><td/><td>USE</td><td>0.221</td></tr></table>",
"text": "shows the accuracy of different configurations on the WiC development set.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF10": {
"content": "<table><tr><td>Ablation</td><td colspan=\"3\">Gold AUTO-LSCNC AUTO-PPDB</td></tr><tr><td>None</td><td>0.729</td><td>0.538</td><td>0.524</td></tr><tr><td colspan=\"2\">Sub. similarity 0.701</td><td>0.537</td><td>0.524</td></tr><tr><td>Common sub.</td><td>0.722</td><td>0.538</td><td>0.524</td></tr><tr><td>GAP</td><td>0.730</td><td>0.537</td><td>0.523</td></tr><tr><td>c2v</td><td>0.730</td><td>0.539</td><td>0.523</td></tr><tr><td colspan=\"2\">Bert av 4 target 0.700</td><td>0.348</td><td>0.283</td></tr></table>",
"text": "Correlations of different sentence and word embeddings on the Usim dataset using different context window sizes (cw).",
"num": null,
"type_str": "table",
"html": null
},
"TABREF11": {
"content": "<table><tr><td>Training set</td><td>Features</td><td>Accuracy</td></tr><tr><td/><td>BERT av 4 last target word</td><td>65.24</td></tr><tr><td/><td>c2v</td><td>57.69</td></tr><tr><td/><td>ELMo top cw=2</td><td>61.11</td></tr><tr><td>WiC</td><td>USE SIF</td><td>63.68 60.97</td></tr><tr><td/><td>Only substitutes</td><td>55.41</td></tr><tr><td/><td>BERT av 4 target word &amp; USE</td><td>67.95</td></tr><tr><td/><td>Combined</td><td>66.81</td></tr><tr><td/><td>BERT av 4 target word</td><td>64.96</td></tr><tr><td/><td>c2v</td><td>58.12</td></tr><tr><td/><td>ELMo top cw=2</td><td>61.11</td></tr><tr><td>WiC + CoInCo</td><td>USE SIF</td><td>63.53 59.97</td></tr><tr><td/><td>Only substitutes</td><td>56.13</td></tr><tr><td/><td>BERT av 4 target word &amp; USE</td><td>68.66</td></tr><tr><td/><td>Combined</td><td>66.81</td></tr></table>",
"text": "Results of feature ablation experiments for systems trained and tested on the Usim dataset with gold substitutes (Gold) as well as automatic substitutes from different pools, Lexsub/CoInCo (AUTO-LSCNC) and PPDB (AUTO-PPDB). Rows indicate the feature that is removed each time. Numbers correspond to the average Spearman \u03c1 correlation on the development set across target words.",
"num": null,
"type_str": "table",
"html": null
}
}
}
}