| { |
| "paper_id": "P19-1044", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:24:21.661852Z" |
| }, |
| "title": "Time-Out: Temporal Referencing for Robust Modeling of Lexical Semantic Change", |
| "authors": [ |
| { |
| "first": "Haim", |
| "middle": [], |
| "last": "Dubossarsky", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Language Technology Lab", |
| "institution": "University of Stuttgart", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Hengchen", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Language Technology Lab", |
| "institution": "University of Stuttgart", |
| "location": {} |
| }, |
| "email": "simon.hengchen@helsinki.fi" |
| }, |
| { |
| "first": "Nina", |
| "middle": [], |
| "last": "Tahmasebi", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Language Technology Lab", |
| "institution": "University of Stuttgart", |
| "location": {} |
| }, |
| "email": "nina.tahmasebi@gu.se" |
| }, |
| { |
| "first": "Dominik", |
| "middle": [], |
| "last": "Schlechtweg", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Language Technology Lab", |
| "institution": "University of Stuttgart", |
| "location": {} |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "State-of-the-art models of lexical semantic change detection suffer from noise stemming from vector space alignment. We have empirically tested the Temporal Referencing method for lexical semantic change and show that, by avoiding alignment, it is less affected by this noise. We show that, trained on a diachronic corpus, the skip-gram with negative sampling architecture with temporal referencing outperforms alignment models on a synthetic task as well as a manual testset. We introduce a principled way to simulate lexical semantic change and systematically control for possible biases.", |
| "pdf_parse": { |
| "paper_id": "P19-1044", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "State-of-the-art models of lexical semantic change detection suffer from noise stemming from vector space alignment. We have empirically tested the Temporal Referencing method for lexical semantic change and show that, by avoiding alignment, it is less affected by this noise. We show that, trained on a diachronic corpus, the skip-gram with negative sampling architecture with temporal referencing outperforms alignment models on a synthetic task as well as a manual testset. We introduce a principled way to simulate lexical semantic change and systematically control for possible biases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "These past years have seen the rise of computational methods to detect, track, qualify, and quantify how a word's sense -or senses -change over time. These tasks are critical challenges that are relevant to a range of NLP fields, including the study of historical semantic change. The successful outcome of semantic change detection is relevant to any diachronic textual analysis, including machine translation or normalization of historical texts (Tjong Kim Sang et al., 2017) , the detection of cultural semantic shifts (Kutuzov et al., 2017) or applications in digital humanities (Tahmasebi and Risse, 2017a). However, currently, the best-performing models (Hamilton et al., 2016b; Kulkarni et al., 2015; Schlechtweg et al., 2019) require a complex alignment procedure and have been shown to suffer from biases (Dubossarsky et al., 2017) . This exposes them to various sources of noise influencing their predictions; a fact which has long gone unnoticed because of the lack of standard evaluation procedures in the field.", |
| "cite_spans": [ |
| { |
| "start": 448, |
| "end": 477, |
| "text": "(Tjong Kim Sang et al., 2017)", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 522, |
| "end": 544, |
| "text": "(Kutuzov et al., 2017)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 660, |
| "end": 684, |
| "text": "(Hamilton et al., 2016b;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 685, |
| "end": 707, |
| "text": "Kulkarni et al., 2015;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 708, |
| "end": 733, |
| "text": "Schlechtweg et al., 2019)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 814, |
| "end": 840, |
| "text": "(Dubossarsky et al., 2017)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We examine the modeling approach of Temporal Referencing (TR) which avoids post hoc align-ment and is applicable to any vector space learning technique. We show that it (i) is less affected by noise and (ii) clearly outperforms state-of-the-art alignment models on a synthetic change detection task. The task is based on data from a synchronic corpus into which we artificially inject lexical semantic change (LSC) in a controlled and semantically principled way. We further evaluate the models on a manual testset of diachronic LSC and examine their properties.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we focus on skip-gram with negative sampling (SGNS) models (Mikolov et al., 2013) and PPMI (Levy et al., 2015) and make use of TR to share context information across time periods, while learning individual embeddings for a target word in each time period. We evaluate models in two ways: on the one hand, through the comparison of model performance between semantically changing and stable words. This is achieved through the synthetic introduction (and removal) of polysemy, mimicking Sch\u00fctze (1998) ; Kulkarni et al. (2015) ; Rosenfeld and Erk (2018) . We differ from previous work by creating those changes in a more structured way, and for many time points. The second type of evaluation put forward is a study built on a smaller number of words manually classified as changed or stable.", |
| "cite_spans": [ |
| { |
| "start": 74, |
| "end": 96, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 106, |
| "end": 125, |
| "text": "(Levy et al., 2015)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 501, |
| "end": 515, |
| "text": "Sch\u00fctze (1998)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 518, |
| "end": 540, |
| "text": "Kulkarni et al. (2015)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 543, |
| "end": 567, |
| "text": "Rosenfeld and Erk (2018)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our contributions are the following:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Noise Reduction: We avoid post hoc alignment by TR and show that it outperforms other models and is robust to noise. \u2022 LSC Simulation: We propose a systematic and principled method of injecting semantic change in a controlled fashion. \u2022 Evaluation: We evaluate (i) by testing for noise reduction in a control condition, (ii) on large and controlled artificial data and (iii) on a manually annotated LSC testset. \u2022 Framework: The above comprises a frame-work to test any model of semantic change for their levels of noise and sensitivity in detecting simulated semantic change.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Models of LSC Detection Computational approaches to semantic change detection can be divided in different families: count-based semantic spaces (Sagi et al., 2009; Gulordava and Baroni, 2011) and more recently based on neural embeddings (Kim et al., 2014; Basile et al., 2016; Kulkarni et al., 2015; Hamilton et al., 2016b) ; graphbased models (Tahmasebi and Risse, 2017a; Mitra et al., 2014 Mitra et al., , 2015 ; and finally topic-based (Lau et al., 2012; Wang et al., 2015; Frermann and Lapata, 2016; Hengchen, 2017; Perrone et al., 2019) . Recently, we have seen dynamic embeddings with the main aim to circumvent alignment, and share data across time points, thus reducing data volume requirements. Using different base embeddings, SGNS (Bamler and Mandt, 2017) , PPMI (Yao et al., 2018) , and Bernoulli embeddings (Rudolph and Blei, 2018), the results show that sharing data is beneficial regardless of the method. 1 Temporal Referencing has been applied first in the field of term extraction Ferrari et al. (2017) and recently been tested for diachronic LSC detection (Schlechtweg et al., 2019) .", |
| "cite_spans": [ |
| { |
| "start": 144, |
| "end": 163, |
| "text": "(Sagi et al., 2009;", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 164, |
| "end": 191, |
| "text": "Gulordava and Baroni, 2011)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 237, |
| "end": 255, |
| "text": "(Kim et al., 2014;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 256, |
| "end": 276, |
| "text": "Basile et al., 2016;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 277, |
| "end": 299, |
| "text": "Kulkarni et al., 2015;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 300, |
| "end": 323, |
| "text": "Hamilton et al., 2016b)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 344, |
| "end": 372, |
| "text": "(Tahmasebi and Risse, 2017a;", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 373, |
| "end": 391, |
| "text": "Mitra et al., 2014", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 392, |
| "end": 412, |
| "text": "Mitra et al., , 2015", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 439, |
| "end": 457, |
| "text": "(Lau et al., 2012;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 458, |
| "end": 476, |
| "text": "Wang et al., 2015;", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 477, |
| "end": 503, |
| "text": "Frermann and Lapata, 2016;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 504, |
| "end": 519, |
| "text": "Hengchen, 2017;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 520, |
| "end": 541, |
| "text": "Perrone et al., 2019)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 742, |
| "end": 766, |
| "text": "(Bamler and Mandt, 2017)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 774, |
| "end": 792, |
| "text": "(Yao et al., 2018)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 999, |
| "end": 1020, |
| "text": "Ferrari et al. (2017)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 1075, |
| "end": 1101, |
| "text": "(Schlechtweg et al., 2019)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Evaluation Due to a lack of proper evaluation methods and datasets, all papers above have performed different, non-comparable evaluations. Previous evaluation procedures mainly tackle a few words: case studies of individual words (Wijaya and Yeniterzi, 2011; Jatowt and Duh, 2014; Hamilton et al., 2016a) , or a comparison between a few changing and semantically stable words (Lau et al., 2012; Schlechtweg et al., 2017) . Other works focus on the post hoc evaluation of their respective models (Kulkarni et al., 2015; Eger and Mehler, 2016) . Importantly, Dubossarsky et al. (2017) proposed to use a control condition to mitigate the absent of validated evaluation methods and datasets.", |
| "cite_spans": [ |
| { |
| "start": 230, |
| "end": 258, |
| "text": "(Wijaya and Yeniterzi, 2011;", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 259, |
| "end": 280, |
| "text": "Jatowt and Duh, 2014;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 281, |
| "end": 304, |
| "text": "Hamilton et al., 2016a)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 376, |
| "end": 394, |
| "text": "(Lau et al., 2012;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 395, |
| "end": 420, |
| "text": "Schlechtweg et al., 2017)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 495, |
| "end": 518, |
| "text": "(Kulkarni et al., 2015;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 519, |
| "end": 541, |
| "text": "Eger and Mehler, 2016)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 557, |
| "end": 582, |
| "text": "Dubossarsky et al. (2017)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Evaluating empirical results often demands comparing these under a control condition in order to maintain that these are indeed valid and are not the result of unwanted confounding factors. A control condition directly follows from a specific research hypothesis, and therefore must resemble the original condition in any aspect, except the variable of interest that is being hypothesized about. For example, Dubossarsky et al. (2017) attested that a shuffled diachronic corpus is a proper control condition to test models for semantic change, under the hypothesis that such models indeed capture semantic change and not something else. They concluded that any degree of semantic change that is reported by a model on the shuffled corpus may only be related to noise, instead of a true semantic change. Similarly, we propose to test the noise levels associated with different semantic change models using a shuffled historical corpus, and evaluate their true degree of semantic change by comparing their results to the original historical corpus. Importantly, there are many ways to create control conditions, and the synthetic lexical semantic change proposed in Section 4 contains another type of control condition, that is based on artificially induced semantic change.", |
| "cite_spans": [ |
| { |
| "start": 409, |
| "end": 434, |
| "text": "Dubossarsky et al. (2017)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Control Condition", |
| "sec_num": null |
| }, |
| { |
| "text": "Embeddings A common method in LSC detection is to learn low-dimensional semantic vector spaces (embeddings) for specific time periods and then align spaces for consecutive time periods with an orthogonal mapping which minimizes the distances between the time-specific vectors for all words (Hamilton et al., 2016b) . Given two consecutive time periods a, b, and corresponding text corpora C a , C b , we learn two vector spaces A, B. Orthogonal Procrustes analysis can then be applied to find the optimal mapping matrix W * such that the sum of squared Euclidean distances between B's mapping BW and A is minimized:", |
| "cite_spans": [ |
| { |
| "start": 290, |
| "end": 314, |
| "text": "(Hamilton et al., 2016b)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "W * = arg min W BW \u2212 A 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The optimal solution for this problem is given by an application of Singular Value Decomposition (Artetxe et al., 2017) . 2 The degree of LSC of a word w is then measured with the cosine distance (Salton and McGill, 1983 ) between w's vectors in A and BW * (B's mapping). This approach has been found to outperform other LSC detection methods in various studies (Hamilton et al., 2016b; Kulkarni et al., 2015) . It has the advantage of not assuming that words keep the same meaning over time. A presumable downside of this approach is expected noise from the alignment, i.e., it may not be possible to align all words to each other that have similar meanings, because the spaces were learned independently.", |
| "cite_spans": [ |
| { |
| "start": 97, |
| "end": 119, |
| "text": "(Artetxe et al., 2017)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 122, |
| "end": 123, |
| "text": "2", |
| "ref_id": null |
| }, |
| { |
| "start": 196, |
| "end": 220, |
| "text": "(Salton and McGill, 1983", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 362, |
| "end": 386, |
| "text": "(Hamilton et al., 2016b;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 387, |
| "end": 409, |
| "text": "Kulkarni et al., 2015)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "PPMI Another method to learn time-specific semantic vector space representations A, B is to store count-based co-occurrence information for each word in a high-dimensional sparse matrix and then apply Positive Pointwise Mutual Information (PPMI) weighting (Levy et al., 2015) . In such a matrix each column stores the co-occurrence statistics with a specific context word. This has the advantage that A and B can be aligned straightforwardly, because many context words occur as columns in both A and B and can hence be mapped onto each other. Mapping A and B to a common coordinate axis then corresponds to intersecting their columns (Hamilton et al., 2016b) . This has the advantage of avoiding the complex alignment procedure for embeddings, but also loses their performance advantages (Baroni et al., 2014; Levy et al., 2015) .", |
| "cite_spans": [ |
| { |
| "start": 256, |
| "end": 275, |
| "text": "(Levy et al., 2015)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 635, |
| "end": 659, |
| "text": "(Hamilton et al., 2016b)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 789, |
| "end": 810, |
| "text": "(Baroni et al., 2014;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 811, |
| "end": 829, |
| "text": "Levy et al., 2015)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Temporal Referencing Temporal Referencing (TR) is an alternative to learning individual word representations for different time periods, which avoids alignment using a procedure radically simpler than proposed for dynamic embeddings. TR is potentially applicable to every vector space learning method. We treat all time-specific corpora C a , C b , ..., C n as one corpus C and learn word representations on the full corpus. However, we first replace each target word w \u2208 C t with a time-specific token w t . 3 This temporal referencing of w is only performed when it is a target word, when the word is considered a context word, it remains unchanged. Following this procedure, we learn one single space that contains a vector for each target-time pair w t , which may be compared directly without the need for alignment. Besides the considerable advantages of avoiding alignment and being applicable to count-based and embedding methods, it presumably lowers data requirements (because context words are collapsed, and thus shared, across corpora). Accordingly, we assume TR to produce smoother change values. As various other models, TR relies on the assumption that the semantics of the context words stays relatively stable over time.", |
| "cite_spans": [ |
| { |
| "start": 509, |
| "end": 510, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We aim to simulate semantic change under controlled settings, while keeping the corpus as natural as possible. 4 We call this procedure sense injection. We increase the semantic material of a recipient word w r in subsequent subcorpora by injecting contexts from a donor word w d . The context of the recipient word (illustrated as Sense 1 in Figure 1 ) stays as it is in the corpus. The first subcorpus contains only contexts from the recipient w r and all the contexts of the donor w d are removed. In the next time period we add 25% of the contexts of w d , with donor word replaced by the recipient word. In each subsequent corpus, an additional 25% of the donor word are injected until the last time periods contain equal amounts of contexts from the donor and recipient. As a result, seen from the recipient w r , the last time periods have double the amount of contexts as in the first time period |w", |
| "cite_spans": [ |
| { |
| "start": 111, |
| "end": 112, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 343, |
| "end": 351, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Synthetic Lexical Semantic Change", |
| "sec_num": "4" |
| }, |
| { |
| "text": "r (t n ) + w d (t n )| = 2 * |w r (t 1 )|.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Synthetic Lexical Semantic Change", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Note that due to the polysemous nature of words (each is usually associated with more than one sense), we preferred to add the donor words' contexts instead of simply replacing the existing contexts of the recipient words with the contexts of the donor words. This is because the former involves a single source of synthetic lexical semantic change, while the latter involves two sources (the removal of contexts associated with different senses of a recipient word, as well as the added contexts associated with the senses of a donor word). As a result, this procedure yields less noisy examples of synthetic lexical semantic change.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Synthetic Lexical Semantic Change", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We differ between cases where recipient and donor are related (e.g. maker \u2192 creator, Fig. 1a ) and unrelated (e.g. shoulders \u2192 horde, Fig. 1b) , following e.g., Pilehvar and Navigli (2013) . This procedure is aimed to give us insight into how much novel semantic material is needed for our methods to detect semantic change. Our hypothesis is that cases where the donor word is unrelated to the recipient word should be simpler to detect compared to those that are in close relation. It is linguistically motivated to choose semantically related words to simulate sense change; those are the most difficult cases of sense change, and a likely procedure of semantic change introducing polysemy (Blank, 1997) .", |
| "cite_spans": [ |
| { |
| "start": 161, |
| "end": 188, |
| "text": "Pilehvar and Navigli (2013)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 693, |
| "end": 706, |
| "text": "(Blank, 1997)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 85, |
| "end": 92, |
| "text": "Fig. 1a", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 134, |
| "end": 142, |
| "text": "Fig. 1b)", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Synthetic Lexical Semantic Change", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Finally, to simulate the same increase in frequency, we repeat the sense injection for a set of control words. In this case recipient and donor word are the same w r = w d . This creates the same increased frequency of the recipient word |w r (t n )| = 2 * |w r (t 1 )| as the above, but without any added semantic information because the control word keeps its original contexts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Synthetic Lexical Semantic Change", |
| "sec_num": "4" |
| }, |
| { |
| "text": "For Experiment 1 (Sec. 6.1) we used COHA (Davies, 2002) , of which we restrict ourselves to decadal bins spanning from 1920 to 1970 so as to have a comparable number of tokens for each time slice. For Experiment 2 (Sec. 6.2) we used COCA (Davies, 2008) , of which we remove the spoken and academic genres in order to maintain a more similar usage context of words. As a control setting, we created shuffled versions of the same corpora with the same periods, and straightforwardly followed Dubossarsky et al. (2017) .", |
| "cite_spans": [ |
| { |
| "start": 41, |
| "end": 55, |
| "text": "(Davies, 2002)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 238, |
| "end": 252, |
| "text": "(Davies, 2008)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 490, |
| "end": 515, |
| "text": "Dubossarsky et al. (2017)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpora", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "For related words, we used the Noun-Noun pairs in SimLex-999 (Hill et al., 2015) as a starting point. However, even semantically unrelated pairs in SimLex were deemed somewhat related by our annotators, and therefore we kept only 10 of those.", |
| "cite_spans": [ |
| { |
| "start": 61, |
| "end": 80, |
| "text": "(Hill et al., 2015)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Synthetic semantic change", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We created the rest of the list of unrelated words as follows: we randomly sampled 300 lowercased nouns 5 from our corpus, which we assembled into 150 pairs. We then asked three annotators to independently go through the list of generated pairs and determine whether they were semantically related or not. All 150 pairs were deemed semantically unrelated by at least 2 annotators. Only 5 pairs had a disagreement but were qualified as border line cases by the disagreeing annotator, and kept. This procedure yielded 356 word pairs in total, of which 196 were related and 160 were not related.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Synthetic semantic change", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We tested two models in our experiments: (i) lowdimensional embeddings learned with SGNS and (ii) high-dimensional sparse PPMI vectors. Each of these were tested with their respective alignment method (AL) and with Temporal Referencing (TR) as described in Section 3, leaving us with four models to compare:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model training", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "SGNS AL SGNS TR PPMI AL PPMI TR", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model training", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "In order to avoid that replaced target words cooccur with other target words in TR we used the implementation of Levy et al. (2015) , allowing us to train SGNS and PPMI on extracted wordcontext pairs instead of the corpus directly. For this, we iterated over corpus C t such that for each token w and for each of its context words c within a symmetric window we extracted the word-context pair: (w t ,c) if w is a target word and (w,c) otherwise.", |
| "cite_spans": [ |
| { |
| "start": 113, |
| "end": 131, |
| "text": "Levy et al. (2015)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model training", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "In this way, we guarantee a target word is never replaced and treated as context of any other word. For TR, SGNS and PPMI were then trained on these extracted pairs. For AL, we extracted only regular word-context pairs (w,c) and trained SGNS and PPMI on these. LSC is measured for all four models via cosine distance. 6 (See Appendix A for preprocessing and hyper-parameter details.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model training", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "To test our methods we performed three main experiments, comparing the performances of TR to the existing state-of-the-art diachronic model alignment. In the first experiment, we compare the models' performance under control conditions that address complementary (potential) weaknesses. The second experiment tests different synthetic change types and assesses whether better models improve detection of lexical semantic change, in a controlled setting. Finally, we test our methods on a manually created testset on a genuine corpus, and manually inspect the results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In this experiment, we trained each model on two corpora, one genuine diachronic corpus with natural semantic change, and one shuffled where the diachronic change is distributed equally across all time periods (see Sec. 5.1). We study the average change of cosine distance as a proxy for semantic change. Following Dubossarsky et al. 2017we consider the average cosine distance (acd) trained on the genuine corpus to correspond to true semantic change + noise. In contrast, the average cosine distance on the shuffled corpus corresponds to pure noise. Therefore, the difference between the two equals to true signal, or in other words, true lexical semantic change.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 1: Model comparison", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Importantly, we are interested in investigating, and hopefully mitigating, possible sources of the noise that might be found in some of the models. Specifically, we hypothesize that the alignment procedure adds considerable noise to the acd, and plan to test how TR can alleviate some of that noise. Moreover, TR is assumed to contribute not only by circumventing the alignment, but also by producing more stable context vectors due to the increased amount of data on which they are trained. 7 Therefore, we first tease-apart these factors using the following comparisons between the different models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 1: Model comparison", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "1. For all models, we consider the difference in average cosine distance between genuine and shuffled conditions (acd genuine \u2212 acd shuf f led ) as being inversely proportional to the amount of noise that the original model unknowingly captures. Hence, the larger the difference, the less noisier (and better) the model is. We consider this to be an approximation of the true semantic change.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 1: Model comparison", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "2. Focusing on the differences between the two PPMI models allows us to test the independent contribution of TR in providing more accurate context vectors because the intersection of the PPMI vectors are inherently aligned. 3. Focusing on the SGNS models conflates the potential benefits from more accurate context vectors with the disadvantage of Procrustes alignment (which is necessary for SGNS AL but not for SGNS TR ). 4. The difference between the last two would allow us to evaluate the independent contribution of these two sources on the (presumably) less noisy SGNS TR model scores.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 1: Model comparison", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Results (experiment 1) We start analyzing the true semantic change for each of the models (PPMI AL to PPMI TR and SGNS AL to SGNS TR ) over the corpus. In Figure 2 , we can see that temporal referencing introduces less noise throughout the 5 decadal comparisons. For both PPMI and SGNS, the true semantic change increases for the TR models compared to the aligned. Importantly, Table 1 shows that for the PPMI models, Temporal Referencing has a much smaller improvement over the aligned model (.005) compared to the SGNS models (.026) (all reported differences are statistically significant, t-test p < .01). Temporal Referencing influences the PPMI models only by creating more stable context vectors. In contrast, for the SGNS models the introduction of Temporal Referencing circumvents the use of alignment in addition to creating more stable context vectors. Therefore, the results support our hypothesis that TR has two complementing factors that improve prior models; firstly, it avoids the need for alignment altogether (and the noise that usually comes with it), and secondly, it produces more stable context vectors due to the increased volume of data when using the full corpus. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 155, |
| "end": 163, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 378, |
| "end": 385, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment 1: Model comparison", |
| "sec_num": "6.1" |
| }, |
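The decadal comparisons above reduce to computing cosine distances between a word's temporally referenced vectors. A minimal sketch, assuming a hypothetical lookup `vectors` from temporally referenced tokens (e.g. "computer1920", following the paper's footnote convention) to numpy arrays:

```python
import numpy as np

def cosine_distance(u, v):
    # 1 - cosine similarity between two vectors
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def decadal_change_scores(vectors, word, decades):
    # vectors: hypothetical mapping from temporally referenced tokens
    # (e.g. "computer1920") to numpy arrays; returns the cosine distance
    # between each pair of consecutive decades for the given word.
    return [cosine_distance(vectors[f"{word}{a}"], vectors[f"{word}{b}"])
            for a, b in zip(decades, decades[1:])]
```

Because all temporally referenced vectors live in one shared space, no Procrustes mapping is needed before taking these distances.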
| { |
| "text": "We further analyzed the nature of the progression of the cumulative semantic change that words exhibit over time. Under the assumption that words change their meaning in a systematic way, it follows that words' semantic change would increase over the years. Therefore, an ecologically valid model of semantic change should show that the words change more as the time interval for comparison increases, for the vocabulary as a whole.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Smoothness of Temporal Referencing", |
| "sec_num": null |
| }, |
| { |
| "text": "In contrast, if a model captures stochastic fluctuations in the words' vectors instead of true semantic change, then such a shift in the distribution will be less prominent. We plot the distribution of the words' cosine distances with increasing time intervals (relative to 1920) for both SGNS models in Figure 3 . Both models show a gradual transition from left (smaller change scores) to right (larger change scores). This corroborates our basic assumption that words change more as the time interval for comparison increases. Crucially, Temporal Referencing shows a more constant cumulative progression of cosine distances over time in contrast to alignment where decadal cosine distance distributions seem to be more volatile. We follow Bamler and Mandt (2017) in interpreting these results as attesting for the relatively high noise factor in the SGNS AL over the SGNS TR .", |
| "cite_spans": [ |
| { |
| "start": 741, |
| "end": 764, |
| "text": "Bamler and Mandt (2017)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 304, |
| "end": 312, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Smoothness of Temporal Referencing", |
| "sec_num": null |
| }, |
| { |
| "text": "Overall, the different analyses converge to the same conclusion: Temporal Referencing is a better model for capturing a word's semantic information from diachronic text because it introduces less noise. Next, we will investigate if a less noisy model is also better at detecting semantic change. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Smoothness of Temporal Referencing", |
| "sec_num": null |
| }, |
| { |
| "text": "This experiment aims to see how well our methods can find different synthetic change types. In order to minimize natural semantic change in the dataset, we made use of the synchronic dataset COCA which we randomly shuffled, and simulated a diachronic corpus for which we have 7 time-bins. We randomly assigned a seventh of COCA to each of our artificial time periods, labeled t 1 to t 7 . Sentences in which either word of the synthetic semantic change pairs (see Sec. 4) or their corresponding control words appeared were held out. These sentences were subsequently added back to COCA according to the procedure outlined in Section 4, which enabled us to control for the fixed ratio incremental steps between the recipient and donor words (i.e., changes to the injection ratio were made only for t 2 -t 3 , t 3 -t 4 , t 4t 5 , and t 5 -t 6 , while t 1 -t 2 and t 6 -t 7 had no such changes).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 2: Synthetic semantic change", |
| "sec_num": "6.2" |
| }, |
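The shuffling step above can be sketched as follows; this is a minimal illustration (function name, seeding, and near-equal bin sizes are assumptions, not the paper's exact implementation):

```python
import random

def split_into_bins(sentences, n_bins=7, seed=0):
    # Randomly shuffle a synchronic corpus and split it into n_bins
    # artificial "time periods" t1..t7 of (near-)equal size.
    rng = random.Random(seed)
    shuffled = sentences[:]
    rng.shuffle(shuffled)
    k, r = divmod(len(shuffled), n_bins)
    bins, start = [], 0
    for i in range(n_bins):
        end = start + k + (1 if i < r else 0)
        bins.append(shuffled[start:end])
        start = end
    return bins
```

Because the corpus is synchronic, any residual differences between bins reflect sampling noise rather than genuine diachronic change, which is what makes the injected senses the only controlled source of change.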
| { |
| "text": "All four models were trained on the 7 synthetic time-bins exactly as in Experiment 1. The target words were the 356 words with synthetic lexical semantic change and their 356 control words that were matched with the same frequency increase but otherwise are considered semantically stable. For each target word, the cosine distances between two consecutive synthetic time-bins were computed, resulting in 6 change scores per word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 2: Synthetic semantic change", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "We analyze the peak distribution of the individual words. We defined the peak position of each word as its vector argmax (the position in which it shows the maximum cosine distance). In order to evaluate the models' ability to truly detect semantic change, we formulate a na\u00efve binary classification task based on the words' peak positions. For each word, if the peak is in position 2-5, we classify it as changed, and otherwise as stable and measure accuracy and F1-score.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 2: Synthetic semantic change", |
| "sec_num": "6.2" |
| }, |
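The peak-based classification rule described above is simple enough to state directly in code; a sketch (the function name is ours, the 1-based peak positions and the 2-5 "changed" range follow the text):

```python
def classify_by_peak(change_scores, changed_range=(2, 5)):
    # change_scores: the 6 consecutive-bin cosine distances for one word.
    # The peak position is the 1-based argmax; peaks inside changed_range
    # are classified as "changed", peaks at the edges (1 or 6) as "stable".
    peak = max(range(len(change_scores)), key=change_scores.__getitem__) + 1
    lo, hi = changed_range
    return "changed" if lo <= peak <= hi else "stable"
```

A word whose largest distance falls between the first and last steps is thus predicted to have changed at some injection step, while edge peaks are treated as artifacts of training noise or frequency effects.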
| { |
| "text": "Results Figure 4 shows the acd of the four models for the change and stable words separately, according to the different sense injection ratios. The two plots differ markedly. For the semantic change words (upper plot), all four models show a noticeable peak when the new sense was first injected (step 2), followed by a steady decrease in acd until step 6. In contrast, the stable words only show the steady decrease starting from step 1, without any noticeable peaks. This decrease probably stems from the target words' increased frequency that can lead to more accurate word embeddings (Hellrich and Hahn, 2016). Because peaks in acd are interpreted as points were semantic change was the most profound, the results support the models' ability to detect synthetic semantic changes. Although the majority of peaks for the semantic change words fall in step 2, as expected by the acd analysis above, words had their peaks in other step positions as well (see Appendix B). 8 Table 2 reports accuracy and F-scores for the four models in the binary classification task. As clearly seen, all four models perform better than chance even under these very rudimentary conditions (finding the argmax of a vector of length 6). Crucially, SGNS TR outperforms the rest of the models, and especially SGNS AL that shows the worst performance. These results corroborate our hypothesis from Experiment 1 that noise is negatively influencing task performance. By alleviating the noise factor that exists in SGNS AL (due to alignment), SGNS TR is able to show substantial gains in this binary classification task. Table 2 : Accuracy (averaged, and split into individual classes) and F1-scores for semantic change detection. For stable words (control words), peaks at 1 and 6 steps are correct. For change words, peaks at steps 2-5 are correct. We see that all methods find unrelated change better than related change, and that SGNS TR outperforms the other methods. 
Discussion Table 2 shows that SGNS TR gains its performance advantage over SGNS AL mainly from a better classification of the stable words (0.37 vs. 0.57). In order to understand this better, we inspect their mean cosine distance curves only for stable words in Figure 5 . SGNS TR 's curve clearly declines, while SGNS AL 's curve declines much less and is more volatile. We attribute the decline of both curves to the diminishing noise that comes from the continuous increase in frequency of the control words (Dubossarsky et al., 2017) . It seems that this diminishing frequency noise is counteracted by the alignment noise, yielding a flatter curve for SGNS AL . The latter increases SGNS AL 's chance to have peaks in one of the center injection steps producing false positives in our classification task. However, this property may also have a positive influence on SGNS AL in related LSC detection tasks (Schlechtweg et al., 2019) . ", |
| "cite_spans": [ |
| { |
| "start": 2461, |
| "end": 2487, |
| "text": "(Dubossarsky et al., 2017)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 2860, |
| "end": 2886, |
| "text": "(Schlechtweg et al., 2019)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 8, |
| "end": 16, |
| "text": "Figure 4", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 975, |
| "end": 982, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 1598, |
| "end": 1605, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 1961, |
| "end": 1968, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 2212, |
| "end": 2220, |
| "text": "Figure 5", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment 2: Synthetic semantic change", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "So far, the results have been based on either a large random sample to show general tendencies for the language in the corpus as a whole, or synthetically injected semantic change. In this part, we test the behavior of our methods on a small, manually created testset for semantic change. We use the Word Sense Change Testset (Tahmasebi and Risse, 2017b ) that consists of words and the different associated change events, for the time span 1785 -2010. In this experiment, we ignore the sense changes and consider only words as changed or stable, and restrict our change words to those that have change events between 1920 and 1970. 9 In total we have 13 changed and 19 stable words (excluding words with a total frequency \u2264 100). In Table 3 we see acd of each model on the changed and stable words. We find that for all methods, SGNS AL , SGNS TR , PPMI AL and PPMI TR , the acd for the changed words is statistically significantly higher (p values \u2264 0.01) than for the stable words which nicely corresponds to intuition; words with true semantic change should have vectors that differ more than words without change. The mean difference between the stable and the changed words, that gives us some notion of how well the two different classes are separated, is highest for SGNS TR . Because of the limited size of the testset, the results are indicative rather than conclusive and we continue with a manual analysis of the nearest neighboring words.", |
| "cite_spans": [ |
| { |
| "start": 326, |
| "end": 353, |
| "text": "(Tahmasebi and Risse, 2017b", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 633, |
| "end": 634, |
| "text": "9", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 734, |
| "end": 741, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment 3: WSC testset", |
| "sec_num": "6.3" |
| }, |
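The significance test on the changed-vs-stable acd scores can be sketched with a two-sample t statistic. The exact test variant used in the paper is not specified beyond "t-test", so the unequal-variance (Welch) form below is an assumption, implemented with only the standard library:

```python
from statistics import mean, variance

def welch_t(a, b):
    # Welch's two-sample t statistic (unequal variances and sizes), e.g.
    # for comparing acd scores of the changed words against the stable
    # words; compare against the t distribution to obtain a p value.
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / (variance(a) / na + variance(b) / nb) ** 0.5
```

A large positive t here corresponds to the paper's finding: changed words have systematically higher acd than stable words.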
| { |
| "text": "We carry out a qualitative evaluation for the closest neighbors for computer (see Figure 6 ), a word we expect to have changed after the invention of the digital computer in the 1940s, for the SGNS aligned version and SGNS with Temporal Referencing. SGNS AL has only a few words in common in 1950-1970, and while the digital computer is showing here, there are few overlapping words. The time periods 1920-1940 have no common words. In comparison, the SGNS TR show clear patterns. We see a clear break between 1940 and 1950, without any overlapping word, and a pattern between 1950-1970; the closest words are the other computer 1940-1970 . 10 This is exactly the pattern that we expected to see using the sense injection; stable senses can be distinguished from changing senses by their relationship to the other temporally referenced vectors.", |
| "cite_spans": [ |
| { |
| "start": 629, |
| "end": 643, |
| "text": "1940-1970 . 10", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 82, |
| "end": 90, |
| "text": "Figure 6", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment 3: WSC testset", |
| "sec_num": "6.3" |
| }, |
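The nearest-neighbor inspection above amounts to a cosine-similarity ranking over the shared TR space. A minimal sketch, assuming the same hypothetical token-to-vector dictionary (with temporally referenced tokens such as "computer1950"):

```python
import numpy as np

def nearest_neighbors(vectors, query_token, k=5):
    # vectors: hypothetical dict from temporally referenced tokens
    # (e.g. "computer1950") to numpy arrays. Returns the k tokens with
    # the highest cosine similarity to the query token's vector.
    q = vectors[query_token]
    qn = q / np.linalg.norm(q)
    sims = {tok: float(np.dot(qn, v / np.linalg.norm(v)))
            for tok, v in vectors.items() if tok != query_token}
    return sorted(sims, key=sims.get, reverse=True)[:k]
```

Because every decade's vector for a word is a distinct token in the same space, querying "computer1950" can surface "computer1960" among its neighbors, which is exactly the cross-decade pattern discussed above.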
| { |
| "text": "Next, we study a word for which we expect no sense change, namely ship (see Appendix E). The SGNS AL show a fairly low acd, but still there , and over time we see that the 'self-similarity' decreases. For almost all decades, the most similar words are ship from the decade before and after. The lower words also help describe the meaning of ship, as a boat and later also as a spaceship. The pattern of stability is much more clear for SGNS TR than SGNS AL and holds for most other stable words as well. For the word tape, that has a change in dominant sense (or an addition of another strong sense) with the addition of the music tape to adhesive tape, we see the same patterns as for ship, but the bottom words contain ribbon, paper, adhesive for 1920-1940 and recorder, recording, stereo in 1950-1970. 11 For both the real change in Table 3 and the synthetic change in Table 4 , we find that SGNS TR is best at differentiating between the stable and the change classes for both datasets (50% for WSC and 26% for synthetic change).", |
| "cite_spans": [ |
| { |
| "start": 749, |
| "end": 807, |
| "text": "1920-1940 and recorder, recording, stereo in 1950-1970. 11", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 836, |
| "end": 843, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 872, |
| "end": 879, |
| "text": "Table 4", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment 3: WSC testset", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "In this paper, we have empirically tested the temporal referencing method for lexical semantic change. We train one vector space model over the whole corpus, and thus share information of the context words while training individual vectors for each target word and time period. We compare two commonly used models, namely PPMI and SGNS because of their properties; the PPMI model is count-based and does not require alignment across time, while the SGNS model has shown state-of-the-art results in previous work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and future work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We find that the SGNS model trained with Temporal Referencing contains significantly less noise than the standard SGNS for which an alignment is necessary. In comparison, for the PPMI model where no alignment is needed, Temporal Refer-encing also significantly reduced the noise level, but to a lesser extent.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and future work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Next we evaluated whether the noise reduction carries over performance on a synthetic lexical semantic change detection task. We simulated change in a controlled and semantically principled way, using sense injection and showed that words with semantically related and unrelated semantic change can be differentiated from control (stable) words that are not sense injected, but increase in frequency in the same way as the changed words. SGNS with Temporal Referencing outperforms the other methods in correctly classifying the words to the two classes (change vs. stable).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and future work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Finally, we evaluated on a small, handcrafted set of change and stable words and found that SGNS with Temporal Referencing gives the largest separation between words that undergo semantic change and those that stay stable over time. In particular, we observe a similar behavior between this smaller testset and the synthetic sense injection, supporting our sense injection method as a good proxy for isolating and studying lexical semantic change.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and future work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Our results support the following conclusion; trained on a diachronic corpus, SGNS with Temporal Referencing will capture more true semantic change. In the future, we plan to evaluate Temporal Referencing against the related dynamic embedding models on an annotated empirical lexical change dataset with multiple languages. We also plan on testing how well Temporal Referencing deals with corpora that are too small for alignment-based methods, hopefully opening new avenues of quantitative research. Figure 9 we see the closest neighbors for ship, a word we expect to be stable, for the SGNS aligned version (upper) and SGNS with temporal referencing (lower). ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 501, |
| "end": 509, |
| "text": "Figure 9", |
| "ref_id": "FIGREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Conclusions and future work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "For an extensive survey of computational approaches to lexical semantic change, we refer the readers toTahmasebi et al. (2018), and toKutuzov et al. (2018) for a specialized focus on diachronic word embeddings.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "W is constrained to be orthogonal. A and B are first length-normalized and mean-centered and their rows are reduced to the intersection of the vocabulary of Ca and C b for finding the mapping.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In our case, t is a decade. E.g., in the corpus for 1920 we replace each occurrence of computer with the string computer1920.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
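The token replacement described in this footnote (each occurrence of a target word replaced with word plus decade, context words left untouched) can be sketched as a one-line preprocessing step; the function name is ours:

```python
def temporal_reference(tokens, targets, decade):
    # Replace each occurrence of a target word with the word plus its
    # decade label (e.g. "computer" -> "computer1920"), leaving context
    # words unchanged, so one model can be trained over the full corpus.
    return [f"{tok}{decade}" if tok in targets else tok for tok in tokens]
```

Applied to every sentence of each decade's subcorpus before training, this yields per-decade target vectors in a single shared space.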
| { |
| "text": "Hence, the target words' frequencies were not matched, but rather stayed natural.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The filtering was carried out on the basis of the output of NLTK(Bird et al., 2009)'s pos tag() function.6 Find a full implementation of the pipeline at https:// github.com/Garrafao/TemporalReferencing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We differ between stable vectors that do not change despite the randomness involved in training between multiple runs, and accurate vectors give a good representation of meaning. Note that when we use the term stable word we mean stable in meaning over time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We also ran experiments with moving the time point when the first change was injected and the results mimic those presented here.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "As an example, the word car is considered stable since its change event occurred before 1920.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The closest words in 1920-1940 have high cosine distances and are thus not very related. Still, for each computertime, the other vectors of computer are among the neighbors, meaning that despite sparsity and little overlap in context, some structure is found.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Find all nearest neighbour lists at https://github. com/Garrafao/TemporalReferencing/tree/ master/data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The authors would like to thank Dr.Barbara McGillivray for her encouragement, and the anonymous reviewers for their helpful comments and suggestions. This work has been funded in parts by the University of Helsinki (research visit grant C1/2019 from the Faculty of Arts, to SH), by the project Towards Computational Lexical Semantic Change Detection supported by a project grant (2019-2022; dnr 2018-01184, to NT), the Centre for Digital Humanities at University of Gothenburg, the Konrad Adenauer Foundation and the CRETA center funded by the German Ministry for Education and Research (BMBF), and the Blavatnik Postdoctoral Fellowship.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| }, |
| { |
| "text": "A Pre-processing and Hyperparameter DetailsWe lower-cased all tokens in the corpora before extracting word-context pairs. For pair extraction we chose a window size of 5 for both, AL and TR. Corpus tokens were skipped as word or context if they did not have a minimum frequency of 100 in the full corpus used (i.e., 1920-1970 for COHA and full COCA) or contained non-alphabetic characters (except hyphens). We tuned model parameters on the most recent time bin of COHA (2000-2009) based on word similarity task scores (Hill et al., 2015; Finkelstein et al., 2001 ) reaching near state-of-the-art results (Levy et al., 2015) . The parameters for SGNS were dim = 300 (vector dimensionality), cds = 0.75 (context distribution smoothing), k = 5 (number of negative samples) and ep = 1 (number of training epochs). PPMI was smoothed and shifted Levy et al. (2015) . The parameters were cds = 0.75 and k = 5 (shifting parameter).", |
| "cite_spans": [ |
| { |
| "start": 309, |
| "end": 325, |
| "text": "(i.e., 1920-1970", |
| "ref_id": null |
| }, |
| { |
| "start": 518, |
| "end": 537, |
| "text": "(Hill et al., 2015;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 538, |
| "end": 562, |
| "text": "Finkelstein et al., 2001", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 604, |
| "end": 623, |
| "text": "(Levy et al., 2015)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 840, |
| "end": 858, |
| "text": "Levy et al. (2015)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "annex", |
| "sec_num": null |
| }, |
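For concreteness, the SGNS hyperparameters listed in Appendix A can be collected as keyword arguments in the style of gensim's Word2Vec. The parameter names below are assumptions (the paper links its implementation but does not show this mapping):

```python
# Appendix A hyperparameters in the style of gensim's Word2Vec keyword
# arguments; the exact parameter names are assumptions, not taken from
# the paper's implementation.
SGNS_PARAMS = dict(
    sg=1,              # skip-gram with negative sampling
    vector_size=300,   # dim, vector dimensionality
    ns_exponent=0.75,  # cds, context distribution smoothing
    negative=5,        # k, number of negative samples
    epochs=1,          # ep, number of training epochs
    window=5,          # context window size (Appendix A)
    min_count=100,     # minimum frequency threshold (Appendix A)
)
```

With Temporal Referencing, the same single configuration is trained once on the full preprocessed corpus, rather than once per time bin.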
| { |
| "text": "In Figure 7 we present the peak distributions of the four models for the 712 target words (356 changed and 356 stable), color coded according to the true classification (change/stable). The peaks represent the models' predictions with respect to where the maximal cosine distance is found for each word, which we later use in a naive and rudimentary binary classification task. As can be seen from the different distributions, all models frequently find peaks in position 2 (corresponding to the event of the first sense injection). However, they are still very much different in their overall peak distributions which influence their sensitivity in detecting synthetically semantic changed words (Table 2) .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 11, |
| "text": "Figure 7", |
| "ref_id": null |
| }, |
| { |
| "start": 697, |
| "end": 706, |
| "text": "(Table 2)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "B Peak distribution analysis", |
| "sec_num": null |
| }, |
| { |
| "text": "In Table 5 we list the words that have undergone semantic change, as well as the change year(s) and a description of the change. In Table 6 we list words that do not have changed meanings.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 10, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 132, |
| "end": 139, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "C WSC TestSet", |
| "sec_num": null |
| }, |
| { |
| "text": "In Figure 8 we see the closest neighbors for computer, a word we expect to have changed after the invention of the digital computer in the 1940s, for the SGNS aligned version (upper) and SGNS with temporal referencing (lower). ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 11, |
| "text": "Figure 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "D Closest Neighbors for Computer", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Learning bilingual word embeddings with (almost) no bilingual data", |
| "authors": [ |
| { |
| "first": "Mikel", |
| "middle": [], |
| "last": "Artetxe", |
| "suffix": "" |
| }, |
| { |
| "first": "Gorka", |
| "middle": [], |
| "last": "Labaka", |
| "suffix": "" |
| }, |
| { |
| "first": "Eneko", |
| "middle": [], |
| "last": "Agirre", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "451--462", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 451-462.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Dynamic word embeddings", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Bamler", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Mandt", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 34th International Conference on Machine Learning", |
| "volume": "70", |
| "issue": "", |
| "pages": "380--389", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert Bamler and Stephan Mandt. 2017. Dynamic word embeddings. In Proceedings of the 34th In- ternational Conference on Machine Learning, vol- ume 70, pages 380-389.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| }, |
| { |
| "first": "Georgiana", |
| "middle": [], |
| "last": "Dinu", |
| "suffix": "" |
| }, |
| { |
| "first": "Germ\u00e1n", |
| "middle": [], |
| "last": "Kruszewski", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "238--247", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco Baroni, Georgiana Dinu, and Germ\u00e1n Kruszewski. 2014. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. 52nd Annual Meeting of the Association for Computational Lin- guistics, ACL 2014 -Proceedings of the Conference, 1:238-247.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Diachronic analysis of the Italian language exploiting Google Ngram", |
| "authors": [ |
| { |
| "first": "Pierpaolo", |
| "middle": [], |
| "last": "Basile", |
| "suffix": "" |
| }, |
| { |
| "first": "Annalina", |
| "middle": [], |
| "last": "Caputo", |
| "suffix": "" |
| }, |
| { |
| "first": "Roberta", |
| "middle": [], |
| "last": "Luisi", |
| "suffix": "" |
| }, |
| { |
| "first": "Giovanni", |
| "middle": [], |
| "last": "Semeraro", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Third Italian Conference on computational Linguistics CLiC-it", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pierpaolo Basile, Annalina Caputo, Roberta Luisi, and Giovanni Semeraro. 2016. Diachronic analysis of the Italian language exploiting Google Ngram. In Third Italian Conference on computational Linguis- tics CLiC-it 2016.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Natural language processing with Python: analyzing text with the natural language toolkit", |
| "authors": [ |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Bird", |
| "suffix": "" |
| }, |
| { |
| "first": "Ewan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Loper", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyz- ing text with the natural language toolkit. O'Reilly Media, Inc.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Prinzipien des lexikalischen Bedeutungswandels am Beispiel der romanischen Sprachen. Niemeyer", |
| "authors": [ |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Blank", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andreas Blank. 1997. Prinzipien des lexikalischen Bedeutungswandels am Beispiel der romanischen Sprachen. Niemeyer, T\u00fcbingen.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "The Corpus of Historical American English (COHA): 400 million words", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Davies", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "1810--2009", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Davies. 2002. The Corpus of Historical Amer- ican English (COHA): 400 million words, 1810- 2009. Brigham Young University.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "The corpus of contemporary American English (COCA): 400+ million words, 1990-present", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Davies", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Davies. 2008. The corpus of contemporary American English (COCA): 400+ million words, 1990-present. Brigham Young University.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Outta control: Laws of semantic change and inherent biases in word representation models", |
| "authors": [ |
| { |
| "first": "Haim", |
| "middle": [], |
| "last": "Dubossarsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Daphna", |
| "middle": [], |
| "last": "Weinshall", |
| "suffix": "" |
| }, |
| { |
| "first": "Eitan", |
| "middle": [], |
| "last": "Grossman", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "1136--1145", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Haim Dubossarsky, Daphna Weinshall, and Eitan Grossman. 2017. Outta control: Laws of semantic change and inherent biases in word representation models. In EMNLP 2017, pages 1136-1145. ACL.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "On the linearity of semantic change: Investigating meaning variation via dynamic graph models", |
| "authors": [ |
| { |
| "first": "Steffen", |
| "middle": [], |
| "last": "Eger", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Mehler", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "ACL 2016", |
| "volume": "", |
| "issue": "", |
| "pages": "52--58", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P16-2009" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Steffen Eger and Alexander Mehler. 2016. On the lin- earity of semantic change: Investigating meaning variation via dynamic graph models. In ACL 2016, pages 52-58. ACL.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Detecting domain-specific ambiguities: an NLP approach based on wikipedia crawling and word embeddings", |
| "authors": [ |
| { |
| "first": "Alessio", |
| "middle": [], |
| "last": "Ferrari", |
| "suffix": "" |
| }, |
| { |
| "first": "Beatrice", |
| "middle": [], |
| "last": "Donati", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefania", |
| "middle": [], |
| "last": "Gnesi", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "2017 IEEE 25th International Requirements Engineering Conference Workshops (REW)", |
| "volume": "", |
| "issue": "", |
| "pages": "393--399", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alessio Ferrari, Beatrice Donati, and Stefania Gnesi. 2017. Detecting domain-specific ambiguities: an NLP approach based on wikipedia crawling and word embeddings. In 2017 IEEE 25th International Requirements Engineering Conference Workshops (REW), pages 393-399. IEEE.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Placing search in context: The concept revisited", |
| "authors": [ |
| { |
| "first": "Lev", |
| "middle": [], |
| "last": "Finkelstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Evgeniy", |
| "middle": [], |
| "last": "Gabrilovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Yossi", |
| "middle": [], |
| "last": "Matias", |
| "suffix": "" |
| }, |
| { |
| "first": "Ehud", |
| "middle": [], |
| "last": "Rivlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Zach", |
| "middle": [], |
| "last": "Solan", |
| "suffix": "" |
| }, |
| { |
| "first": "Gadi", |
| "middle": [], |
| "last": "Wolfman", |
| "suffix": "" |
| }, |
| { |
| "first": "Eytan", |
| "middle": [], |
| "last": "Ruppin", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the 10th International Conference on World Wide Web, WWW '01", |
| "volume": "", |
| "issue": "", |
| "pages": "406--414", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The con- cept revisited. In Proceedings of the 10th Interna- tional Conference on World Wide Web, WWW '01, pages 406-414, New York, NY, USA. ACM.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "A Bayesian model of diachronic meaning change", |
| "authors": [ |
| { |
| "first": "Lea", |
| "middle": [], |
| "last": "Frermann", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "TACL", |
| "volume": "4", |
| "issue": "", |
| "pages": "31--45", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lea Frermann and Mirella Lapata. 2016. A Bayesian model of diachronic meaning change. TACL, 4:31- 45.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "A distributional similarity approach to the detection of semantic change in the Google Books Ngram corpus", |
| "authors": [ |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Gulordava", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "GEMS 2011", |
| "volume": "", |
| "issue": "", |
| "pages": "67--71", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kristina Gulordava and Marco Baroni. 2011. A distri- butional similarity approach to the detection of se- mantic change in the Google Books Ngram corpus. In GEMS 2011, pages 67-71. ACL.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Inducing domain-specific sentiment lexicons from unlabeled corpora", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [ |
| "L" |
| ], |
| "last": "Hamilton", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Jure", |
| "middle": [], |
| "last": "Leskovec", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "EMNLP 2016", |
| "volume": "", |
| "issue": "", |
| "pages": "595--605", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William L. Hamilton, Kevin Clark, Jure Leskovec, and Dan Jurafsky. 2016a. Inducing domain-specific sen- timent lexicons from unlabeled corpora. In EMNLP 2016, pages 595-605. ACL.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Diachronic word embeddings reveal statistical laws of semantic change", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [ |
| "L" |
| ], |
| "last": "Hamilton", |
| "suffix": "" |
| }, |
| { |
| "first": "Jure", |
| "middle": [], |
| "last": "Leskovec", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "ACL 2016", |
| "volume": "", |
| "issue": "", |
| "pages": "1489--1501", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016b. Diachronic word embeddings reveal statisti- cal laws of semantic change. In ACL 2016, pages 1489-1501. ACL.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Bad company-neighborhoods in neural embedding spaces considered harmful", |
| "authors": [ |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Hellrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Udo", |
| "middle": [], |
| "last": "Hahn", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "2785--2796", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johannes Hellrich and Udo Hahn. 2016. Bad com- pany-neighborhoods in neural embedding spaces considered harmful. In Proceedings of COLING 2016, the 26th International Conference on Compu- tational Linguistics: Technical Papers, pages 2785- 2796.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "When Does it Mean?: Detecting Semantic Change in Historical Texts", |
| "authors": [ |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Hengchen", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Simon Hengchen. 2017. When Does it Mean?: De- tecting Semantic Change in Historical Texts. Ph.D. thesis, Universit\u00e9 libre de Bruxelles.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation", |
| "authors": [ |
| { |
| "first": "Felix", |
| "middle": [], |
| "last": "Hill", |
| "suffix": "" |
| }, |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Computational Linguistics", |
| "volume": "41", |
| "issue": "4", |
| "pages": "665--695", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with (gen- uine) similarity estimation. Computational Linguis- tics, 41(4):665-695.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "A framework for analyzing semantic change of words across time", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Jatowt", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Duh", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of Joint Conference on Digital Libraries, JCDL '14", |
| "volume": "", |
| "issue": "", |
| "pages": "229--238", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam Jatowt and Kevin Duh. 2014. A framework for analyzing semantic change of words across time. In Proceedings of Joint Conference on Digital Li- braries, JCDL '14, pages 229-238.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Temporal analysis of language through neural language models", |
| "authors": [ |
| { |
| "first": "Yoon", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Yi-I", |
| "middle": [], |
| "last": "Chiu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kentaro", |
| "middle": [], |
| "last": "Hanaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Darshan", |
| "middle": [], |
| "last": "Hegde", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science LACSS 2014", |
| "volume": "", |
| "issue": "", |
| "pages": "61--65", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/v1/W14-2517" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoon Kim, Yi-I Chiu, Kentaro Hanaki, Darshan Hegde, and Slav Petrov. 2014. Temporal analysis of lan- guage through neural language models. In Proceed- ings of the ACL 2014 Workshop on Language Tech- nologies and Computational Social Science LACSS 2014, pages 61-65. ACL.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Statistically significant detection of linguistic change", |
| "authors": [ |
| { |
| "first": "Vivek", |
| "middle": [], |
| "last": "Kulkarni", |
| "suffix": "" |
| }, |
| { |
| "first": "Rami", |
| "middle": [], |
| "last": "Al-Rfou", |
| "suffix": "" |
| }, |
| { |
| "first": "Bryan", |
| "middle": [], |
| "last": "Perozzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Skiena", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 24th International Conference on World Wide Web, WWW '15", |
| "volume": "", |
| "issue": "", |
| "pages": "625--635", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/2736277.2741627" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2015. Statistically significant de- tection of linguistic change. In Proceedings of the 24th International Conference on World Wide Web, WWW '15, pages 625-635.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Diachronic word embeddings and semantic shifts: A survey", |
| "authors": [ |
| { |
| "first": "Andrey", |
| "middle": [], |
| "last": "Kutuzov", |
| "suffix": "" |
| }, |
| { |
| "first": "Lilja", |
| "middle": [], |
| "last": "\u00d8vrelid", |
| "suffix": "" |
| }, |
| { |
| "first": "Terrence", |
| "middle": [], |
| "last": "Szymanski", |
| "suffix": "" |
| }, |
| { |
| "first": "Erik", |
| "middle": [], |
| "last": "Velldal", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of COLING 2018", |
| "volume": "", |
| "issue": "", |
| "pages": "1384--1397", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrey Kutuzov, Lilja \u00d8vrelid, Terrence Szymanski, and Erik Velldal. 2018. Diachronic word embed- dings and semantic shifts: A survey. In Proceedings of COLING 2018, pages 1384-1397, Santa Fe. ACL.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Tracing armed conflicts with diachronic word embedding models", |
| "authors": [ |
| { |
| "first": "Andrey", |
| "middle": [], |
| "last": "Kutuzov", |
| "suffix": "" |
| }, |
| { |
| "first": "Erik", |
| "middle": [], |
| "last": "Velldal", |
| "suffix": "" |
| }, |
| { |
| "first": "Lilja", |
| "middle": [], |
| "last": "\u00d8vrelid", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the Events and Stories in the News Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "31--36", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrey Kutuzov, Erik Velldal, and Lilja \u00d8vrelid. 2017. Tracing armed conflicts with diachronic word em- bedding models. In Proceedings of the Events and Stories in the News Workshop, pages 31-36, Van- couver, Canada. ACL.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Word sense induction for novel sense detection", |
| "authors": [ |
| { |
| "first": "Jey", |
| "middle": [ |
| "Han" |
| ], |
| "last": "Lau", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Cook", |
| "suffix": "" |
| }, |
| { |
| "first": "Diana", |
| "middle": [], |
| "last": "McCarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Newman", |
| "suffix": "" |
| }, |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Baldwin", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "EACL 2012", |
| "volume": "", |
| "issue": "", |
| "pages": "591--601", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jey Han Lau, Paul Cook, Diana McCarthy, David New- man, and Timothy Baldwin. 2012. Word sense in- duction for novel sense detection. In EACL 2012, pages 591-601.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Improving distributional similarity with lessons learned from word embeddings", |
| "authors": [ |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Ido", |
| "middle": [], |
| "last": "Dagan", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "3", |
| "issue": "", |
| "pages": "211--225", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Im- proving distributional similarity with lessons learned from word embeddings. Transactions of the Associ- ation for Computational Linguistics, 3:211-225.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed represen- tations of words and phrases and their composition- ality. In Proceedings of NIPS.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "An automatic approach to identify word sense changes in text media across timescales", |
| "authors": [ |
| { |
| "first": "Sunny", |
| "middle": [], |
| "last": "Mitra", |
| "suffix": "" |
| }, |
| { |
| "first": "Ritwik", |
| "middle": [], |
| "last": "Mitra", |
| "suffix": "" |
| }, |
| { |
| "first": "Suman", |
| "middle": [ |
| "Kalyan" |
| ], |
| "last": "Maity", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Riedl", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Biemann", |
| "suffix": "" |
| }, |
| { |
| "first": "Pawan", |
| "middle": [], |
| "last": "Goyal", |
| "suffix": "" |
| }, |
| { |
| "first": "Animesh", |
| "middle": [], |
| "last": "Mukherjee", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Natural Language Engineering", |
| "volume": "21", |
| "issue": "5", |
| "pages": "773--798", |
| "other_ids": { |
| "DOI": [ |
| "10.1017/S135132491500011X" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sunny Mitra, Ritwik Mitra, Suman Kalyan Maity, Martin Riedl, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. 2015. An automatic ap- proach to identify word sense changes in text media across timescales. Natural Language Engineering, 21(5):773-798.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "That's sick dude!: Automatic identification of word sense change across different timescales", |
| "authors": [ |
| { |
| "first": "Sunny", |
| "middle": [], |
| "last": "Mitra", |
| "suffix": "" |
| }, |
| { |
| "first": "Ritwik", |
| "middle": [], |
| "last": "Mitra", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Riedl", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Biemann", |
| "suffix": "" |
| }, |
| { |
| "first": "Animesh", |
| "middle": [], |
| "last": "Mukherjee", |
| "suffix": "" |
| }, |
| { |
| "first": "Pawan", |
| "middle": [], |
| "last": "Goyal", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "ACL 2014", |
| "volume": "", |
| "issue": "", |
| "pages": "1020--1029", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sunny Mitra, Ritwik Mitra, Martin Riedl, Chris Bie- mann, Animesh Mukherjee, and Pawan Goyal. 2014. That's sick dude!: Automatic identification of word sense change across different timescales. In ACL 2014, pages 1020-1029.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "GASC: Genre-aware semantic change for Ancient Greek", |
| "authors": [ |
| { |
| "first": "Valerio", |
| "middle": [], |
| "last": "Perrone", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Palma", |
| "suffix": "" |
| }, |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Hengchen", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Vatri", |
| "suffix": "" |
| }, |
| { |
| "first": "Jim", |
| "middle": [ |
| "Q" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "Barbara", |
| "middle": [], |
| "last": "McGillivray", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Valerio Perrone, Marco Palma, Simon Hengchen, Alessandro Vatri, Jim Q. Smith, and Barbara McGillivray. 2019. GASC: Genre-aware semantic change for Ancient Greek. CoRR, abs/1903.05587.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Paving the way to a large-scale pseudosenseannotated dataset", |
| "authors": [ |
| { |
| "first": "Mohammad", |
| "middle": [ |
| "Taher" |
| ], |
| "last": "Pilehvar", |
| "suffix": "" |
| }, |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "1100--1109", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mohammad Taher Pilehvar and Roberto Navigli. 2013. Paving the way to a large-scale pseudosense- annotated dataset. In Proceedings of the 2013 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 1100-1109.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Deep neural models of semantic shift", |
| "authors": [ |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Rosenfeld", |
| "suffix": "" |
| }, |
| { |
| "first": "Katrin", |
| "middle": [], |
| "last": "Erk", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "474--484", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alex Rosenfeld and Katrin Erk. 2018. Deep neural models of semantic shift. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 474-484.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Dynamic embeddings for language evolution", |
| "authors": [ |
| { |
| "first": "Maja", |
| "middle": [ |
| "R" |
| ], |
| "last": "Rudolph", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [ |
| "M" |
| ], |
| "last": "Blei", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "WWW 2018", |
| "volume": "", |
| "issue": "", |
| "pages": "1003--1011", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/3178876.3185999" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Maja R. Rudolph and David M. Blei. 2018. Dynamic embeddings for language evolution. In WWW 2018, pages 1003-1011, Lyon. ACM.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Semantic density analysis: Comparing word meaning across time and phonetic space", |
| "authors": [ |
| { |
| "first": "Eyal", |
| "middle": [], |
| "last": "Sagi", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Kaufmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Brady", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "GEMS 2009", |
| "volume": "", |
| "issue": "", |
| "pages": "104--111", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eyal Sagi, Stefan Kaufmann, and Brady Clark. 2009. Semantic density analysis: Comparing word mean- ing across time and phonetic space. In GEMS 2009, pages 104-111. ACL.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Introduction to modern information retrieval", |
| "authors": [ |
| { |
| "first": "Gerard", |
| "middle": [], |
| "last": "Salton", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "J" |
| ], |
| "last": "McGill", |
| "suffix": "" |
| } |
| ], |
| "year": 1983, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gerard Salton and Michael J McGill. 1983. Introduc- tion to modern information retrieval. McGraw -Hill Book Company, New York.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "German in flux: Detecting metaphoric change via word entropy", |
| "authors": [ |
| { |
| "first": "Dominik", |
| "middle": [], |
| "last": "Schlechtweg", |
| "suffix": "" |
| }, |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Eckmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Enrico", |
| "middle": [], |
| "last": "Santus", |
| "suffix": "" |
| }, |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Schulte im Walde", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Hole", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 21st Conference on Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "354--367", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dominik Schlechtweg, Sabine Eckmann, Enrico San- tus, Sabine Schulte im Walde, and Daniel Hole. 2017. German in flux: Detecting metaphoric change via word entropy. In Proceedings of the 21st Confer- ence on Computational Natural Language Learning, pages 354-367, Vancouver, Canada.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "A Wind of Change: Detecting and Evaluating Lexical Semantic Change across Times and Domains", |
| "authors": [ |
| { |
| "first": "Dominik", |
| "middle": [], |
| "last": "Schlechtweg", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "H\u00e4tty", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Del Tredici", |
| "suffix": "" |
| }, |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Schulte im Walde", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dominik Schlechtweg, Anna H\u00e4tty, Marco del Tredici, and Sabine Schulte im Walde. 2019. A Wind of Change: Detecting and Evaluating Lexical Seman- tic Change across Times and Domains. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), Florence, Italy. ACL.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Automatic word sense discrimination", |
| "authors": [ |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Computational linguistics", |
| "volume": "24", |
| "issue": "1", |
| "pages": "97--123", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hinrich Sch\u00fctze. 1998. Automatic word sense discrim- ination. Computational linguistics, 24(1):97-123.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Survey of computational approaches to lexical semantic change", |
| "authors": [ |
| { |
| "first": "Nina", |
| "middle": [], |
| "last": "Tahmasebi", |
| "suffix": "" |
| }, |
| { |
| "first": "Lars", |
| "middle": [], |
| "last": "Borin", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Jatowt", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nina Tahmasebi, Lars Borin, and Adam Jatowt. 2018. Survey of computational approaches to lexical se- mantic change. CoRR, abs/1811.06278.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "On the Uses of Word Sense Change for Research in the Digital Humanities", |
| "authors": [ |
| { |
| "first": "Nina", |
| "middle": [], |
| "last": "Tahmasebi", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Risse", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Research and Advanced Technology for Digital Libraries", |
| "volume": "", |
| "issue": "", |
| "pages": "246--257", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nina Tahmasebi and Thomas Risse. 2017a. On the Uses of Word Sense Change for Research in the Digital Humanities. In Research and Advanced Technology for Digital Libraries, pages 246-257. Springer International Publishing.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Word sense change testset", |
| "authors": [ |
| { |
| "first": "Nina", |
| "middle": [], |
| "last": "Tahmasebi", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Risse", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.5281/zenodo.495572" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nina Tahmasebi and Thomas Risse. 2017b. Word sense change testset, 10.5281/zenodo.495572.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "The clin27 shared task: Translating historical text to contemporary language for improving automatic linguistic annotation", |
| "authors": [ |
| { |
| "first": "Erik", |
| "middle": [ |
| "Tjong", |
| "Kim" |
| ], |
| "last": "Sang", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcel", |
| "middle": [], |
| "last": "Bollman", |
| "suffix": "" |
| }, |
| { |
| "first": "Remko", |
| "middle": [], |
| "last": "Boschker", |
| "suffix": "" |
| }, |
| { |
| "first": "Francisco", |
| "middle": [], |
| "last": "Casacuberta", |
| "suffix": "" |
| }, |
| { |
| "first": "FM", |
| "middle": [], |
| "last": "Dietz", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefanie", |
| "middle": [], |
| "last": "Dipper", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Domingo", |
| "suffix": "" |
| }, |
| { |
| "first": "Rob", |
| "middle": [], |
| "last": "van der Goot", |
| "suffix": "" |
| }, |
| { |
| "first": "JM", |
| "middle": [], |
| "last": "van Koppen", |
| "suffix": "" |
| }, |
| { |
| "first": "Nikola", |
| "middle": [], |
| "last": "Ljube\u0161i\u0107", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Computational Linguistics in the Netherlands Journal", |
| "volume": "7", |
| "issue": "", |
| "pages": "53--64", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Erik Tjong Kim Sang, Marcel Bollman, Remko Boschker, Francisco Casacuberta, FM Dietz, Ste- fanie Dipper, Miguel Domingo, Rob van der Goot, JM van Koppen, Nikola Ljube\u0161i\u0107, et al. 2017. The clin27 shared task: Translating historical text to con- temporary language for improving automatic lin- guistic annotation. Computational Linguistics in the Netherlands Journal, 7:53-64.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "A sense-topic model for word sense induction with unsupervised data enrichment", |
| "authors": [ |
| { |
| "first": "Jing", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Bansal", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Gimpel", |
| "suffix": "" |
| }, |
| { |
| "first": "Brian", |
| "middle": [ |
| "D" |
| ], |
| "last": "Ziebart", |
| "suffix": "" |
| }, |
| { |
| "first": "Clement", |
| "middle": [ |
| "T" |
| ], |
| "last": "Yu", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "TACL", |
| "volume": "3", |
| "issue": "", |
| "pages": "59--71", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jing Wang, Mohit Bansal, Kevin Gimpel, Brian D Ziebart, and T Yu Clement. 2015. A sense-topic model for word sense induction with unsupervised data enrichment. TACL, 3:59-71.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "Understanding semantic change of words over centuries", |
| "authors": [ |
| { |
| "first": "Derry", |
| "middle": [ |
| "Tanti" |
| ], |
| "last": "Wijaya", |
| "suffix": "" |
| }, |
| { |
| "first": "Reyyan", |
| "middle": [], |
| "last": "Yeniterzi", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "DETECT '11", |
| "volume": "", |
| "issue": "", |
| "pages": "35--40", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/2064448.2064475" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Derry Tanti Wijaya and Reyyan Yeniterzi. 2011. Un- derstanding semantic change of words over cen- turies. In DETECT '11, pages 35-40. ACM.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Dynamic word embeddings for evolving semantic discovery", |
| "authors": [ |
| { |
| "first": "Zijun", |
| "middle": [], |
| "last": "Yao", |
| "suffix": "" |
| }, |
| { |
| "first": "Yifan", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Weicong", |
| "middle": [], |
| "last": "Ding", |
| "suffix": "" |
| }, |
| { |
| "first": "Nikhil", |
| "middle": [], |
| "last": "Rao", |
| "suffix": "" |
| }, |
| { |
| "first": "Hui", |
| "middle": [], |
| "last": "Xiong", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, WSDM '18", |
| "volume": "", |
| "issue": "", |
| "pages": "673--681", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/3159652.3159703" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zijun Yao, Yifan Sun, Weicong Ding, Nikhil Rao, and Hui Xiong. 2018. Dynamic word embeddings for evolving semantic discovery. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, WSDM '18, pages 673-681. ACM.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "num": null, |
| "text": "Increase in semantic material for a word by means of sense injection. I.: new injected sense is related to the existing sense. II.: new injected sense is unrelated.", |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "num": null, |
| "text": "Comparison of aligned embedding spaces and temporal referencing using both the genuine and the shuffled corpora. A high difference in cosine distance indicates less noise captured by the model.", |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "num": null, |
| "text": "Smoothed histograms of word distances for the two SGNS models. For the TR model, we see a more constant cumulative shift which is reflected by the overlap between the distributions as well as by differences in their means (dashed vertical lines).", |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "num": null, |
| "text": "acd at different sense injection steps for the four models. Steps without sense injection are shaded.", |
| "type_str": "figure" |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "num": null, |
| "text": "Mean cosine distance curves for SGNS TR and SGNS AL .", |
| "type_str": "figure" |
| }, |
| "FIGREF5": { |
| "uris": null, |
| "num": null, |
| "text": "Nearest neighbors for computer. Upper part SGNS AL , lower part SGNS TR . A larger rendering of this figure is available in Appendix D.", |
| "type_str": "figure" |
| }, |
| "FIGREF6": { |
| "uris": null, |
| "num": null, |
| "text": "Nearest neighbors for computer. Upper part SGNS AL , lower part SGNS TR .", |
| "type_str": "figure" |
| }, |
| "FIGREF7": { |
| "uris": null, |
| "num": null, |
| "text": "Nearest neighbors for ship. Upper part SGNS AL , lower part SGNS TR .", |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "type_str": "table", |
| "num": null, |
| "text": "Difference in average cosine distance between genuine and shuffled conditions (true semantic change) for each method, collapsed over the 5 time bins (1920-1970) in COHA.", |
| "content": "<table><tr><td/><td>Align</td><td>TR</td><td>\u2206</td></tr><tr><td>SGNS</td><td>0.033</td><td>0.059</td><td>0.026</td></tr><tr><td>PPMI</td><td>0.028</td><td>0.033</td><td>0.005</td></tr></table>", |
| "html": null |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "num": null, |
| "text": "acd for WSC testset. Var \u2208 (0.0 \u2212 0.01). CH = changed word, ST = stable word, DIFF = difference between ACD for change and stable in percent.", |
| "content": "<table><tr><td/><td colspan=\"2\">SGNS</td><td colspan=\"2\">PPMI</td></tr><tr><td/><td colspan=\"2\">Align TR</td><td colspan=\"2\">Align TR</td></tr><tr><td>CH</td><td>0.47</td><td colspan=\"2\">0.31 0.86</td><td>0.86</td></tr><tr><td>ST</td><td>0.34</td><td colspan=\"2\">0.21 0.71</td><td>0.73</td></tr><tr><td colspan=\"2\">DIFF 38%</td><td colspan=\"2\">50% 20%</td><td>17%</td></tr></table>", |
| "html": null |
| }, |
| "TABREF3": { |
| "type_str": "table", |
| "num": null, |
| "text": "acd for synthetic change. Var \u2208 (0.0 \u2212 0.01).", |
| "content": "<table><tr><td/><td colspan=\"2\">SGNS</td><td colspan=\"2\">PPMI</td></tr><tr><td/><td colspan=\"2\">Align TR</td><td colspan=\"2\">Align TR</td></tr><tr><td>CH</td><td>0.46</td><td colspan=\"2\">0.33 0.86</td><td>0.87</td></tr><tr><td>ST</td><td>0.37</td><td colspan=\"2\">0.26 0.83</td><td>0.83</td></tr><tr><td colspan=\"2\">DIFF 24%</td><td colspan=\"2\">26% 4%</td><td>4%</td></tr></table>", |
| "html": null |
| }, |
| "TABREF4": { |
| "type_str": "table", |
| "num": null, |
| "text": "Changed words from WSC Testset", |
| "content": "<table><tr><td>Word</td><td colspan=\"2\">Change year Description</td></tr><tr><td colspan=\"2\">aeroplane 1919-1920</td><td>First use as weapon of war and commercial flights</td></tr><tr><td>cinema</td><td>1900</td><td>movie theatre</td></tr><tr><td colspan=\"2\">computer 1940</td><td>digital computer</td></tr><tr><td>cool</td><td>1964</td><td>a way of being</td></tr><tr><td>flight</td><td>1918</td><td>after WWI commercial aviation grows rapidly</td></tr><tr><td>gay</td><td>1985</td><td>recommended for use instead of homosexual</td></tr><tr><td>memory</td><td>1960</td><td>digital memory</td></tr><tr><td>mouse</td><td>1965</td><td>the computer mouse was introduced</td></tr><tr><td>record</td><td>1920</td><td>electrical music records</td></tr><tr><td>rock</td><td>1950-1960</td><td>birth of rock music</td></tr><tr><td>tank</td><td>1917</td><td>first tank in battle</td></tr><tr><td>tape</td><td>1960</td><td>common household use of the magnetic tape</td></tr></table>", |
| "html": null |
| }, |
| "TABREF5": { |
| "type_str": "table", |
| "num": null, |
| "text": "Stable words from WSC Testset", |
| "content": "<table><tr><td>automobile</td><td>music</td></tr><tr><td>bank</td><td>newspaper</td></tr><tr><td>camera</td><td>paper</td></tr><tr><td>car</td><td>phone</td></tr><tr><td>deer</td><td>ship</td></tr><tr><td>export</td><td>symptom</td></tr><tr><td>founder</td><td>telephone</td></tr><tr><td>horse</td><td>train</td></tr><tr><td>mail</td><td>travel</td></tr><tr><td>mirror</td><td/></tr></table>", |
| "html": null |
| } |
| } |
| } |
| } |