{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:20:34.252944Z"
},
"title": "UiO-UvA at SemEval-2020 Task 1: Contextualised Embeddings for Lexical Semantic Change Detection",
"authors": [
{
"first": "Andrey",
"middle": [],
"last": "Kutuzov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oslo",
"location": {
"country": "Norway"
}
},
"email": ""
},
{
"first": "Mario",
"middle": [],
"last": "Giulianelli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Amsterdam",
"location": {
"country": "Netherlands"
}
},
"email": "m.giulianelli@uva.nl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We apply contextualised word embeddings to lexical semantic change detection in the SemEval-2020 Shared Task 1. This paper focuses on Subtask 2, ranking words by the degree of their semantic drift over time. We analyse the performance of two contextualising architectures (BERT and ELMo) and three change detection algorithms. We find that the most effective algorithms rely on the cosine similarity between averaged token embeddings and the pairwise distances between token embeddings. They outperform strong baselines by a large margin (in the post-evaluation phase, we have the best Subtask 2 submission for SemEval-2020 Task 1), but interestingly, the choice of a particular algorithm depends on the distribution of gold scores in the test set.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We apply contextualised word embeddings to lexical semantic change detection in the SemEval-2020 Shared Task 1. This paper focuses on Subtask 2, ranking words by the degree of their semantic drift over time. We analyse the performance of two contextualising architectures (BERT and ELMo) and three change detection algorithms. We find that the most effective algorithms rely on the cosine similarity between averaged token embeddings and the pairwise distances between token embeddings. They outperform strong baselines by a large margin (in the post-evaluation phase, we have the best Subtask 2 submission for SemEval-2020 Task 1), but interestingly, the choice of a particular algorithm depends on the distribution of gold scores in the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Words change their meaning over time in all natural languages. The emergence of large representative historical corpora and powerful data-driven semantic models has allowed researchers to track meaning change. SemEval-2020 Shared Task 1 challenges its participants to classify a list of target words as stable or changed (Subtask 1) and/or to rank these words by the degree of their semantic change (Subtask 2) (Schlechtweg et al., 2020) . The task is multilingual: it includes four lists of target words, for English, German, Latin, and Swedish respectively (several dozen words each). Each word list is accompanied by two historical corpora of varying size, consisting of texts created in two different time periods. The shared task organisers additionally provided frequency-based and distributional baseline methods.",
"cite_spans": [
{
"start": 413,
"end": 439,
"text": "(Schlechtweg et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We participated in Subtask 2 1 as the UiO-UvA team and investigated the potential of contextualised embeddings for the detection of lexical semantic change. Our systems are based on the ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) language models, and employ 3 different algorithms to compare contextualised embeddings diachronically. Our evaluation phase submission to the shared task ranked 9th out of 34 participating teams, while our post-evaluation phase submission is the best among those published on the shared task website 2 (although some knowledge of the test set statistics was needed, see below). In this paper, we extensively evaluate all combinations of architectures, training corpora and change detection algorithms, using 5 test sets in 4 languages.",
"cite_spans": [
{
"start": 187,
"end": 208,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 218,
"end": 239,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our main findings are twofold: 1) In 3 out of 5 test sets, ELMo consistently outperforms BERT, while being much faster in training and inference; 2) Cosine similarity of averaged contextualised embeddings and average pairwise distance between contextualised embeddings are the two best-performing change detection algorithms, but different test sets show a strong preference for either the former or the latter. This preference correlates strongly with the distribution of gold scores in a test set. While it may indicate a bias in the available test sets, this finding remains unexplained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our implementations of all the evaluated algorithms are available at https://github.com/akutuzov/semeval2020, and the ELMo models we trained can be downloaded from the NLPL vector repository at http://vectors.nlpl.eu/repository/ (Fares et al., 2017) .",
"cite_spans": [
{
"start": 230,
"end": 250,
"text": "(Fares et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Lexical semantic change detection (LSCD) is the task of determining whether and/or to what extent the meaning of a set of target words has changed over time, with the help of time-annotated corpora (Kutuzov et al., 2018; Tahmasebi et al., 2018) . LSCD is often addressed using distributional semantic models: time-sensitive word representations (so-called 'diachronic word embeddings') are learned and then compared between different time periods. This is a fast-developing NLP sub-field with a number of influential papers ((Gulordava and Baroni, 2011; Kim et al., 2014; Kulkarni et al., 2015; Hamilton et al., 2016; Bamler and Mandt, 2017; Del Tredici et al., 2019; Dubossarsky et al., 2019) , and many others).",
"cite_spans": [
{
"start": 191,
"end": 213,
"text": "(Kutuzov et al., 2018;",
"ref_id": "BIBREF14"
},
{
"start": 214,
"end": 237,
"text": "Tahmasebi et al., 2018)",
"ref_id": "BIBREF22"
},
{
"start": 518,
"end": 546,
"text": "(Gulordava and Baroni, 2011;",
"ref_id": "BIBREF8"
},
{
"start": 547,
"end": 564,
"text": "Kim et al., 2014;",
"ref_id": "BIBREF11"
},
{
"start": 565,
"end": 587,
"text": "Kulkarni et al., 2015;",
"ref_id": "BIBREF12"
},
{
"start": 588,
"end": 610,
"text": "Hamilton et al., 2016;",
"ref_id": "BIBREF9"
},
{
"start": 611,
"end": 634,
"text": "Bamler and Mandt, 2017;",
"ref_id": "BIBREF0"
},
{
"start": 635,
"end": 660,
"text": "Del Tredici et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 661,
"end": 686,
"text": "Dubossarsky et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Most previous work used some variation of 'static' word embeddings, where each occurrence of a word form is assigned the same vector representation independently of its context. Recent contextualised architectures overcome this limitation by taking sentential context into account when inferring word token representations. However, the application of such architectures to diachronic semantic change detection has so far been limited (Hu et al., 2019; Giulianelli et al., 2020; Martinc et al., 2020) . While all these studies use BERT as their contextualising architecture, we extend our analysis to ELMo and perform a systematic evaluation of various semantic change detection approaches for both language models.",
"cite_spans": [
{
"start": 447,
"end": 464,
"text": "(Hu et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 465,
"end": 490,
"text": "Giulianelli et al., 2020;",
"ref_id": "BIBREF7"
},
{
"start": 491,
"end": 512,
"text": "Martinc et al., 2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "ELMo (Peters et al., 2018) was arguably the first contextualised word embedding architecture to attract the wide attention of the NLP community. The network architecture consists of a two-layer bidirectional LSTM on top of a convolutional layer. BERT (Devlin et al., 2019) is a Transformer with self-attention, trained on masked language modelling and next sentence prediction. While Transformer architectures have been shown to outperform recurrent ones in many NLP tasks, ELMo allows faster training and inference than BERT, making it more convenient to experiment with different training corpora and hyperparameters.",
"cite_spans": [
{
"start": 5,
"end": 26,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 247,
"end": 268,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Our approach relies on contextualised representations of word occurrences (often referred to as contextualised embeddings). Given two time periods t_1, t_2, two corpora C_1, C_2, and a set of target words, we use a neural language model to obtain contextualised embeddings of each occurrence of the target words in C_1 and C_2 and use them to compute a continuous change score. This score indicates the degree of semantic change undergone by a word between t_1 and t_2, and the target words are ranked by its value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System overview",
"sec_num": "3"
},
{
"text": "More precisely, given a target word w and its sentential context",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System overview",
"sec_num": "3"
},
{
"text": "s = (v_1, ..., v_i, ..., v_m) with w = v_i,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System overview",
"sec_num": "3"
},
{
"text": "we extract the activations of a language model's hidden layers for sentence position i. The N_w contextualised embeddings collected for w can be represented as the usage matrix U_w = (w_1, . . . , w_{N_w}). The time-specific usage matrices U_w^{t_1}, U_w^{t_2} for time periods t_1 and t_2 are used as input to all the tested metrics of semantic change. We use three change detection algorithms:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System overview",
"sec_num": "3"
},
{
"text": "1. Inverted cosine similarity over word prototypes (PRT) Given two usage matrices U_w^{t_1}, U_w^{t_2}, the degree of change of w is calculated as the inverted similarity 3 between the average token embeddings ('prototypes') of all occurrences of w in the two time periods:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System overview",
"sec_num": "3"
},
{
"text": "PRT(U_w^{t_1}, U_w^{t_2}) = 1 / d( \u2211_{x_i \u2208 U_w^{t_1}} x_i / N_w^{t_1} , \u2211_{x_j \u2208 U_w^{t_2}} x_j / N_w^{t_2} ) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System overview",
"sec_num": "3"
},
{
"text": "where N_w^{t_1} and N_w^{t_2} are the numbers of occurrences of w in time periods t_1 and t_2, and d is a similarity metric, for which we use cosine similarity. This method is similar to the standard LSCD workflow with static embeddings produced by Procrustes-aligned time-specific distributional models (Hamilton et al., 2016) , with the only additional step of averaging token embeddings to create a single vector. Since we want the algorithm to produce higher scores for the words which changed more, the inverted value of the cosine similarity is used as the prediction.",
"cite_spans": [
{
"start": 300,
"end": 323,
"text": "(Hamilton et al., 2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System overview",
"sec_num": "3"
},
{
"text": "2. Average pairwise cosine distance between token embeddings (APD) Here, the degree of change of w is measured as the average distance between any two embeddings from different time periods (Giulianelli et al., 2020) :",
"cite_spans": [
{
"start": 190,
"end": 216,
"text": "(Giulianelli et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System overview",
"sec_num": "3"
},
{
"text": "APD(U_w^{t_1}, U_w^{t_2}) = 1 / (N_w^{t_1} \u2022 N_w^{t_2}) \u2211_{x_i \u2208 U_w^{t_1}, x_j \u2208 U_w^{t_2}} d(x_i, x_j) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System overview",
"sec_num": "3"
},
{
"text": "where d is the cosine distance. High APD values indicate stronger semantic change.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System overview",
"sec_num": "3"
},
{
"text": "3. Jensen-Shannon divergence (JSD) This measure relies on the partitioning of embeddings into clusters of similar word usages. We follow Giulianelli et al. (2020) and create a single usage matrix with occurrences from two corpora",
"cite_spans": [
{
"start": 137,
"end": 162,
"text": "Giulianelli et al. (2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System overview",
"sec_num": "3"
},
{
"text": "[U_w^{t_1}; U_w^{t_2}]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System overview",
"sec_num": "3"
},
{
"text": ". We then standardise it and cluster its entries using Affinity Propagation (Frey and Dueck, 2007) , which automatically selects the number of clusters for each word (Martinc et al., 2020) . Finally, we define probability distributions u_w^{t_1}, u_w^{t_2} based on the normalised counts of word occurrences from each cluster (Hu et al., 2019) and compute a JSD score (Lin, 1991) :",
"cite_spans": [
{
"start": 76,
"end": 98,
"text": "(Frey and Dueck, 2007)",
"ref_id": "BIBREF6"
},
{
"start": 164,
"end": 186,
"text": "(Martinc et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 321,
"end": 338,
"text": "(Hu et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 363,
"end": 374,
"text": "(Lin, 1991)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System overview",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "JSD(u_w^{t_1}, u_w^{t_2}) = H( 1/2 (u_w^{t_1} + u_w^{t_2}) ) \u2212 1/2 ( H(u_w^{t_1}) + H(u_w^{t_2}) )",
"eq_num": "(3)"
}
],
"section": "System overview",
"sec_num": "3"
},
{
"text": "JSD scores measure the amount of change in the proportions of word usage clusters across time periods. A high JSD score indicates a high degree of lexical semantic change.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System overview",
"sec_num": "3"
},
{
"text": "For each of the 4 languages of the shared task, we train 4 ELMo model variants: 1) Pre-trained, an ELMo model trained on the respective Wikipedia corpus (English, German, Latin or Swedish) 4 ; 2) Fine-tuned, the same as Pre-trained but further fine-tuned on the union of the two test corpora; 3) Trained on test, trained only on the union of the two test corpora; 4) Incremental, two models: the first is trained on the first test corpus, and the second is the same model further trained on the second test corpus. The ELMo models are trained for 3 epochs (except the English and Latin Trained on test and Incremental models, for which we use 5 epochs, due to the small test corpora sizes), with an LSTM dimensionality of 2048, batch size 192 and 4096 negative samples per batch. All the other hyperparameters are left at their default values. 5 For BERT, we use the base version, with 12 layers and 768 hidden dimensions. 6 For English, German and Swedish, we employ language-specific models: bert-base-uncased, bert-base-german-cased, and af-ai-center/bert-base-swedish-uncased. For Latin, we resort to bert-base-multilingual-cased, since there is no Latin-specific BERT available yet. Given the limited size of the test corpora (on the order of 10^8 word tokens at most), we do not train BERT from scratch and only test the Pre-trained and Fine-tuned BERT variants. The fine-tuning is done with BERT's standard objective for 2 epochs (English was trained for 5 epochs). We configure BERT's WordPiece tokeniser to never split any occurrences of the target words (some target words are split by default into character sequences) and we add unknown target words to BERT's vocabulary. We perform this step both before fine-tuning and before the extraction of contextualised representations.",
"cite_spans": [
{
"start": 189,
"end": 190,
"text": "4",
"ref_id": null
},
{
"start": 836,
"end": 837,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "At inference time, we use all ELMo and BERT variants to produce contextualised representations of all the occurrences of each target word in the test corpora. For the Incremental variant, the representations for the occurrences in each of the two test corpora are produced using the respective model trained on this corpus. The resulting embeddings are of size 12 \u00d7 768 and 3 \u00d7 512 for BERT and ELMo, respectively. We employ three strategies to reduce their dimensionality to that of a single layer: 1) using only the top layer, 2) averaging all layers, 3) averaging the last four layers (only for BERT embeddings, as this aggregation method was shown to work on par with the all-layers alternative in (Devlin et al., 2019) ). Finally, to predict the strength of semantic change of each target word between the two test corpora, we feed the word's contextualised embeddings into the three algorithms of semantic change estimation described in Section 3. We then compute the Spearman correlation of the estimated change scores with the gold answers. This is the evaluation metric of Subtask 2, and we use it throughout our experiments.",
"cite_spans": [
{
"start": 702,
"end": 723,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "Our submission. In our 'official' shared task submission in the evaluation phase, we used top-layer ELMo embeddings with the cosine similarity change detection algorithm for all languages. The English and German ELMo models were trained on the respective Wikipedia corpora. For Swedish and Latin, pre-trained ELMo models were not available, so we trained our own models on the union of the test corpora. This combination of architectures and algorithms was chosen based on our preliminary experiments with the available human-annotated semantic change datasets for English (Gulordava and Baroni, 2011) , German (Schlechtweg et al., 2018) and Russian (Fomin et al., 2019) . The resulting Spearman correlations were 0.136 for English, 0.695 for German, 0.370 for Latin, and 0.278 for Swedish. With an average Codalab score of 0.37, this submission ranked 9th out of 34 teams in the evaluation phase.",
"cite_spans": [
{
"start": 567,
"end": 595,
"text": "(Gulordava and Baroni, 2011)",
"ref_id": "BIBREF8"
},
{
"start": 605,
"end": 631,
"text": "(Schlechtweg et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 636,
"end": 664,
"text": "Russian (Fomin et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We were aware that the submitted setup was likely sub-optimal as it did not include the Fine-tuned model variant. After the official submission deadline (in the post-evaluation phase), we finished training and fine-tuning all of our language models. Their systematic evaluation is the main contribution of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Current results. The average scores of all the tested configurations across 4 languages are given in Table 1 . This table includes both the results of the configurations we used in the evaluation phase and the results of the configurations we tested after the submission deadline (fine-tuned models). We compare our scores to the organisers' baselines (FD and CNT+CI+CD, as provided by Schlechtweg et al. (2020) ) and to the classical approach of calculating cosine distance between static CBOW word embeddings (Mikolov et al., 2013) . The CBOW models were used in two different flavours: 1) 'incremental', where the C_2 model was initialised with the C_1 weights (Kim et al., 2014) , and 2) 'Procrustes', where the two models were trained independently on C_1 and C_2, and then aligned using the Orthogonal Procrustes transformation (Hamilton et al., 2016) . Note that while we did not try different hyperparameter combinations for the static embeddings (varying vector dimensionalities, learning rates, vocabulary sizes, etc.), we did not do so for the contextualised embeddings either. All the training hyperparameters for both ELMo and BERT were fixed to their default values (see Section 4); we varied only the training corpora and the layers from which embeddings were extracted. Thus, we believe the static and contextualised embedding-based approaches were compared fairly. Table 1 shows that no method achieves statistically significant correlation on all 4 languages, which attests both to the difficulty of the task and to the diversity of the test sets. CBOW Procrustes is a surprisingly strong approach, consistently outperforming the organisers' baselines. Only PRT and APD obtain higher average scores, with fine-tuned ELMo models performing better than fine-tuned BERT.",
"cite_spans": [
{
"start": 386,
"end": 411,
"text": "Schlechtweg et al. (2020)",
"ref_id": "BIBREF20"
},
{
"start": 508,
"end": 530,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF17"
},
{
"start": 660,
"end": 678,
"text": "(Kim et al., 2014)",
"ref_id": "BIBREF11"
},
{
"start": 831,
"end": 854,
"text": "(Hamilton et al., 2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 101,
"end": 108,
"text": "Table 1",
"ref_id": null
},
{
"start": 1388,
"end": 1395,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Judging only from the average correlation scores, contextualised embeddings do not seem to outshine their static counterparts, especially considering that both ELMo and BERT are more computationally demanding than CBOW. However, a closer analysis of the per-language results shows that the contextualised approaches in fact outperform the CBOW Procrustes baseline by a large margin on each of the shared task test sets. Table 2 features the scores obtained by our best-performing methods (PRT and APD with top-layer embeddings from fine-tuned ELMo and BERT) on the individual languages of the shared task. We also report performance on the GEMS ('GEometrical Models of Natural Language Semantics workshop') test set (Gulordava and Baroni, 2011) to enable a comparison with previous work (Giulianelli et al., 2020) . The discrepancy between the averaged and the per-language results can be explained by properties of the test sets: APD works best on the English and Swedish sets, while PRT yields the best scores for German and Latin.",
"cite_spans": [
{
"start": 711,
"end": 739,
"text": "(Gulordava and Baroni, 2011)",
"ref_id": "BIBREF8"
},
{
"start": 782,
"end": 808,
"text": "(Giulianelli et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 415,
"end": 422,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Although consistency across languages (3 out of 4) is an important benefit of the CBOW Procrustes approach, with the right choice of APD or PRT, contextualised embeddings can improve Spearman correlation coefficients by up to 50%. Table 1 : Spearman correlation coefficients for Subtask 2 averaged over four languages. The number of asterisks denotes the number of languages for which the correlation was statistically significant (p < 0.05). This is not a language-specific property: the English GEMS test",
"cite_spans": [],
"ref_spans": [
{
"start": 295,
"end": 302,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "set does not behave like the English test set from the shared task. In fact, one can observe 3 groups of test sets with regard to their statistical properties and the method they favour: group 1 (Latin and German) exhibits rather uniform gold score distributions and prefers PRT; group 2 (English and Swedish) is characterised by more skewed gold score distributions and prefers APD; group 3 (GEMS) is in between, with no clear preference. Interestingly, the method which produces a more uniform predicted score distribution (APD) works better for the test sets with skewed gold distributions, and the method which produces a more skewed predicted score distribution (PRT) works better for the uniformly distributed test sets (as can be seen in the Appendix). Furthermore, there is a perfect negative correlation (\u03c1 = \u22121) between the median gold score of a test set and the performance of the APD algorithm with fine-tuned ELMo models on this test set. The same correlation for the PRT performance is not significant but strictly negative. We currently do not have a plausible explanation for this behaviour. Table 2 also supports the previous observation that ELMo models perform better than BERT in the LSCD task. The only test set for which this is not the case is Latin, 7 while on GEMS, ELMo and BERT are on par. One possible explanation is that our ELMo models were pre-trained on lemmatised Wikipedia corpora and thus better fit the test corpora, which were provided in lemmatised form by the organisers. The BERT models were pre-trained on raw corpora, and fine-tuning them on lemmatised data proves less successful. This is of course not an advantage of the ELMo architecture per se; however, easy and fast training from scratch on the respective Wikipedia corpora for each shared task language was possible only because of the much lower computational requirements of ELMo compared to BERT. 8",
"cite_spans": [],
"ref_spans": [
{
"start": 1111,
"end": 1118,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In the post-evaluation phase of the shared task, we submitted predictions obtained with the optimal system configurations: fine-tuned ELMo + APD for English and Swedish, fine-tuned ELMo + PRT for German, and fine-tuned BERT + PRT for Latin. This submission reached an average Spearman correlation of 0.618 and, at the time of writing, is the best Subtask 2 submission for SemEval-2020 Task 1 (among those publicly available on the shared task website). Of course, this optimal choice of configurations was possible only because we already knew the test data. Still, it is useful for understanding the real abilities of contextualised embedding-based approaches and the peculiarities of different models and test sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Our experiments for the SemEval-2020 Shared Task 1 (Subtask 2) show that using contextualised embeddings to rank words by the degree of their semantic change yields strong correlation with human judgements, outperforming approaches based on static embeddings. Models pre-trained on large external corpora and fine-tuned on the historical test corpora produce the highest correlation results, with ELMo slightly but consistently outperforming BERT as a contextualiser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The inverted cosine similarity between averaged contextualised embeddings and the average pairwise cosine distance between contextualised embeddings turned out to be the best semantic change detection algorithms. An interesting finding is that the former algorithm favours test sets with a uniform gold score distribution, while the latter works best on test sets where the gold score distribution is skewed towards low values. This distinction is not related to the language of the test set, or to any other linguistic or statistical property of the test sets we looked at. While slightly different human annotation protocols may be at play here, we believe this dependency between the gold score distribution and the performance of semantic change detection systems deserves further investigation in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We also intend to explore possibilities for improving our best-performing methods (PRT and APD), especially with regard to removing outlier embeddings before calculating the semantic change score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The grouping differences can be quantified with respect to the median gold score (after unit normalisation). Figure 2 shows the dependency of the PRT and APD performance on the median score of the gold test set. The dots are the performance values of the PRT or APD algorithms on different test sets. The English and Swedish test sets are in the left part of the plot, with median gold scores of 0.200 and 0.203 respectively. German, GEMS and Latin are on the right, with 0.266, 0.267 and 0.364 respectively. There is a perfect negative Spearman correlation between the median gold scores of these 5 test sets and the performance of the APD semantic change detection algorithm on each of them (with fine-tuned ELMo embeddings). Figure 2 : Performance of the PRT and APD algorithms depending on the median gold score.",
"cite_spans": [],
"ref_spans": [
{
"start": 108,
"end": 116,
"text": "Figure 2",
"ref_id": null
},
{
"start": 728,
"end": 736,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We also tried to use cosine distance (1 \u2212 d) instead of inverted cosine similarity, but the results were marginally worse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The Wikipedia corpora were lemmatised using UDPipe (Straka and Strakov\u00e1, 2017) prior to training. 5 To train and fine-tune ELMo models, we use the code from https://github.com/ltgoslo/simple_elmo_training, which is essentially the reference ELMo implementation updated to recent TensorFlow versions. 6 We rely on Hugging Face's implementation of BERT (available at https://github.com/huggingface/transformers, version 2.5.0), and follow their model naming conventions: https://huggingface.co/models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The Latin test corpora are very peculiar: 1) homonyms in them are followed by '#' and a sense identifier, which is not the case for Latin Wikipedia; 2) the sizes of C_1 and C_2 are very imbalanced, with the latter being 4 times larger than the former.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that (Martinc et al., 2020) report a Spearman correlation of 0.510 on the GEMS dataset using fine-tuned BERT embeddings with Affinity Propagation and JSD. However, we were unable to reproduce these results, even when using the published code.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "In the bottom part of Figure 1 , we show how different the 5 test sets are in terms of their gold score distributions. In some test sets, the gold scores are skewed to the left, while others have a more uniform distribution. The top part of Figure 1 shows the distributions of the predicted scores produced by the APD and PRT algorithms (with fine-tuned ELMo embeddings). PRT tends to squeeze the majority of predictions near the lower boundary (no semantic change), with a low median score. In contrast, APD distributes its predictions much more uniformly, with a higher median score. Counter-intuitively, skewed gold distributions favour uniform predictions and vice versa.",
"cite_spans": [],
"ref_spans": [
{
"start": 22,
"end": 30,
"text": "Figure 1",
"ref_id": null
},
{
"start": 252,
"end": 260,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appendix A. Score distributions",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Dynamic word embeddings",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Bamler",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Mandt",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "380--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Bamler and Stephan Mandt. 2017. Dynamic word embeddings. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 380-389. JMLR.org.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Short-term meaning shift: A distributional exploration",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Del Tredici",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
},
{
"first": "Gemma",
"middle": [],
"last": "Boleda",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT 2019 (Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Del Tredici, Raquel Fern\u00e1ndez, and Gemma Boleda. 2019. Short-term meaning shift: A distributional exploration. In Proceedings of NAACL-HLT 2019 (Annual Conference of the North American Chapter of the Association for Computational Linguistics).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Time-Out: Temporal Referencing for Robust Modeling of Lexical Semantic Change",
"authors": [
{
"first": "Haim",
"middle": [],
"last": "Dubossarsky",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Hengchen",
"suffix": ""
},
{
"first": "Nina",
"middle": [],
"last": "Tahmasebi",
"suffix": ""
},
{
"first": "Dominik",
"middle": [],
"last": "Schlechtweg",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haim Dubossarsky, Simon Hengchen, Nina Tahmasebi, and Dominik Schlechtweg. 2019. Time-Out: Temporal Referencing for Robust Modeling of Lexical Semantic Change. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Word vectors, reuse, and replicability: Towards a community repository of large-text resources",
"authors": [
{
"first": "Murhaf",
"middle": [],
"last": "Fares",
"suffix": ""
},
{
"first": "Andrey",
"middle": [],
"last": "Kutuzov",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 21st Nordic Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "271--276",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Murhaf Fares, Andrey Kutuzov, Stephan Oepen, and Erik Velldal. 2017. Word vectors, reuse, and replicability: Towards a community repository of large-text resources. In Proceedings of the 21st Nordic Conference on Computational Linguistics, pages 271-276. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Tracing cultural diachronic semantic shifts in Russian using word embeddings: test sets and baselines. Komp'yuternaya Lingvistika i Intellektual'nye Tekhnologii: Dialog conference",
"authors": [
{
"first": "Vadim",
"middle": [],
"last": "Fomin",
"suffix": ""
},
{
"first": "Daria",
"middle": [],
"last": "Bakshandaeva",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Rodina",
"suffix": ""
},
{
"first": "Andrey",
"middle": [],
"last": "Kutuzov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "203--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vadim Fomin, Daria Bakshandaeva, Julia Rodina, and Andrey Kutuzov. 2019. Tracing cultural diachronic seman- tic shifts in Russian using word embeddings: test sets and baselines. Komp'yuternaya Lingvistika i Intellek- tual'nye Tekhnologii: Dialog conference, pages 203-218.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Clustering by passing messages between data points",
"authors": [
{
"first": "Brendan",
"middle": [],
"last": "Frey",
"suffix": ""
},
{
"first": "Delbert",
"middle": [],
"last": "Dueck",
"suffix": ""
}
],
"year": 2007,
"venue": "Science",
"volume": "315",
"issue": "5814",
"pages": "972--976",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brendan Frey and Delbert Dueck. 2007. Clustering by passing messages between data points. Science, 315(5814):972-976.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Analysing Lexical Semantic Change with Contextualised Word Representations",
"authors": [
{
"first": "Mario",
"middle": [],
"last": "Giulianelli",
"suffix": ""
},
{
"first": "Marco",
"middle": [
"Del"
],
"last": "Tredici",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mario Giulianelli, Marco Del Tredici, and Raquel Fern\u00e1ndez. 2020. Analysing Lexical Semantic Change with Contextualised Word Representations. In Proceedings of the 58th Annual Meeting of the Association for Com- putational Linguistics. Association for Computational Linguistics. Forthcoming.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A distributional similarity approach to the detection of semantic change in the Google Books Ngram corpus",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Gulordava",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics",
"volume": "",
"issue": "",
"pages": "67--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Gulordava and Marco Baroni. 2011. A distributional similarity approach to the detection of semantic change in the Google Books Ngram corpus. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, pages 67-71, Edinburgh, UK, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Diachronic word embeddings reveal statistical laws of semantic change",
"authors": [
{
"first": "William",
"middle": [],
"last": "Hamilton",
"suffix": ""
},
{
"first": "Jure",
"middle": [],
"last": "Leskovec",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1489--1501",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Hamilton, Jure Leskovec, and Dan Jurafsky. 2016. Diachronic word embeddings reveal statistical laws of semantic change. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1489-1501.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Diachronic sense modeling with deep contextualized word embeddings: An ecological view",
"authors": [
{
"first": "Renfen",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Shen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shichen",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3899--3908",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Renfen Hu, Shen Li, and Shichen Liang. 2019. Diachronic sense modeling with deep contextualized word embed- dings: An ecological view. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3899-3908, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Temporal analysis of language through neural language models",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yi-I",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Hanaki",
"suffix": ""
},
{
"first": "Darshan",
"middle": [],
"last": "Hegde",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science",
"volume": "",
"issue": "",
"pages": "61--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim, Yi-I Chiu, Kentaro Hanaki, Darshan Hegde, and Slav Petrov. 2014. Temporal analysis of language through neural language models. In Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science, pages 61-65.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Statistically significant detection of linguistic change",
"authors": [
{
"first": "Vivek",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Perozzi",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Skiena",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 24th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "625--635",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2015. Statistically significant detection of linguistic change. In Proceedings of the 24th International Conference on World Wide Web, pages 625-635. International World Wide Web Conferences Steering Committee.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Temporal dynamics of semantic relations in word embeddings: an application to predicting armed conflict participants",
"authors": [
{
"first": "Andrey",
"middle": [],
"last": "Kutuzov",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1824--1829",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrey Kutuzov, Erik Velldal, and Lilja \u00d8vrelid. 2017. Temporal dynamics of semantic relations in word em- beddings: an application to predicting armed conflict participants. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1824-1829, Copenhagen, Denmark, September. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Diachronic word embeddings and semantic shifts: a survey",
"authors": [
{
"first": "Andrey",
"middle": [],
"last": "Kutuzov",
"suffix": ""
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": ""
},
{
"first": "Terrence",
"middle": [],
"last": "Szymanski",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1384--1397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrey Kutuzov, Lilja \u00d8vrelid, Terrence Szymanski, and Erik Velldal. 2018. Diachronic word embeddings and semantic shifts: a survey. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1384-1397. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Divergence measures based on the Shannon entropy",
"authors": [
{
"first": "Jianhua",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1991,
"venue": "IEEE Transactions on Information theory",
"volume": "37",
"issue": "1",
"pages": "145--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianhua Lin. 1991. Divergence measures based on the Shannon entropy. IEEE Transactions on Information theory, 37(1):145-151.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Capturing evolution in word usage: Just add more clusters?",
"authors": [
{
"first": "Matej",
"middle": [],
"last": "Martinc",
"suffix": ""
},
{
"first": "Syrielle",
"middle": [],
"last": "Montariol",
"suffix": ""
},
{
"first": "Elaine",
"middle": [],
"last": "Zosa",
"suffix": ""
},
{
"first": "Lidia",
"middle": [],
"last": "Pivovarova",
"suffix": ""
}
],
"year": 2020,
"venue": "Companion Proceedings of the International World Wide Web Conference",
"volume": "",
"issue": "",
"pages": "20--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matej Martinc, Syrielle Montariol, Elaine Zosa, and Lidia Pivovarova. 2020. Capturing evolution in word usage: Just add more clusters? In Companion Proceedings of the International World Wide Web Conference, pages 20-24.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "26",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems 26, pages 3111-3119.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettle- moyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Diachronic usage relatedness (DURel): A framework for the annotation of lexical semantic change",
"authors": [
{
"first": "Dominik",
"middle": [],
"last": "Schlechtweg",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
},
{
"first": "Stefanie",
"middle": [],
"last": "Eckmann",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "169--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominik Schlechtweg, Sabine Schulte im Walde, and Stefanie Eckmann. 2018. Diachronic usage relatedness (DURel): A framework for the annotation of lexical semantic change. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 169-174.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection",
"authors": [
{
"first": "Dominik",
"middle": [],
"last": "Schlechtweg",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Mcgillivray",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Hengchen",
"suffix": ""
},
{
"first": "Haim",
"middle": [],
"last": "Dubossarsky",
"suffix": ""
},
{
"first": "Nina",
"middle": [],
"last": "Tahmasebi",
"suffix": ""
}
],
"year": 2020,
"venue": "To appear in Proceedings of the 14th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominik Schlechtweg, Barbara McGillivray, Simon Hengchen, Haim Dubossarsky, and Nina Tahmasebi. 2020. SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection. In To appear in Proceedings of the 14th International Workshop on Semantic Evaluation, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Tokenizing, pos tagging, lemmatizing and parsing UD 2.0 with UDPipe",
"authors": [
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Strakov\u00e1",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "88--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milan Straka and Jana Strakov\u00e1. 2017. Tokenizing, pos tagging, lemmatizing and parsing UD 2.0 with UDPipe. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 88-99. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Survey of computational approaches to lexical semantic change",
"authors": [
{
"first": "Nina",
"middle": [],
"last": "Tahmasebi",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Borin",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Jatowt",
"suffix": ""
}
],
"year": 2018,
"venue": "Preprint at ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nina Tahmasebi, Lars Borin, and Adam Jatowt. 2018. Survey of computational approaches to lexical semantic change. In Preprint at ArXiv 2018.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"content": "<table/>",
"html": null,
"num": null,
"text": "Spearman correlation per test set for our best methods (post-evaluation phase). \u2020 marks statistical significance (p < 0.05).",
"type_str": "table"
}
}
}
}