| { |
| "paper_id": "Y11-1028", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:39:17.365090Z" |
| }, |
| "title": "Automatic identification of words with novel but infrequent senses \u22c6", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Cook", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Toronto", |
| "location": {} |
| }, |
| "email": "pcook@cs.toronto.edu" |
| }, |
| { |
| "first": "Graeme", |
| "middle": [], |
| "last": "Hirst", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Toronto", |
| "location": {} |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We propose a statistical method for identifying words that have a novel sense in one corpus compared to another based on differences in their lexico-syntactic contexts in those corpora. In contrast to previous work on identifying semantic change, we focus specifically on infrequent word senses. Given the challenges of evaluation for this task, we further propose a novel evaluation method based on synthetic examples of semantic change that allows us to simulate differing degrees of sense change. Our proposed method is able to identify rather subtle simulated sense changes, and outperforms both a random baseline and a previously-proposed approach.", |
| "pdf_parse": { |
| "paper_id": "Y11-1028", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We propose a statistical method for identifying words that have a novel sense in one corpus compared to another based on differences in their lexico-syntactic contexts in those corpora. In contrast to previous work on identifying semantic change, we focus specifically on infrequent word senses. Given the challenges of evaluation for this task, we further propose a novel evaluation method based on synthetic examples of semantic change that allows us to simulate differing degrees of sense change. Our proposed method is able to identify rather subtle simulated sense changes, and outperforms both a random baseline and a previously-proposed approach.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The methods we propose could form the basis for a lexicographical tool to aid in the identification of new word senses that are particularly difficult to find due to their relatively low frequency.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Evaluating approaches to identifying diachronic semantic change is difficult; indeed, most previous studies have relied on rather small datasets (e.g., Sagi et al., 2009; Cook and Stevenson, 2010) or human judges' intuitions about changes in meaning, which might not accurately reflect changes in meaning observed in corpora (e.g., Gulordava and Baroni, 2011) . Therefore, taking inspiration from evaluation approaches in word sense disambiguation that make use of artificiallyambiguous words (e.g., Sch\u00fctze, 1992; Gale et al., 1992) , we propose evaluation methods that use synthetic examples of semantic change. Crucially, this enables us to to carefully control for the frequency of senses; this allows us to assess how rare a word sense may be, and yet still be identified by our method.", |
| "cite_spans": [ |
| { |
| "start": 152, |
| "end": 170, |
| "text": "Sagi et al., 2009;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 171, |
| "end": 196, |
| "text": "Cook and Stevenson, 2010)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 332, |
| "end": 359, |
| "text": "Gulordava and Baroni, 2011)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 500, |
| "end": 514, |
| "text": "Sch\u00fctze, 1992;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 515, |
| "end": 533, |
| "text": "Gale et al., 1992)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "One further consideration is that methods for identifying semantic change must be applicable to relatively small corpora. Although there is a move towards using ever-larger corpora in computational linguistics, many historical corpora and corpora for specific time periods are rather small. For example, one year of text from the New York Times Annotated Corpus (Sandhaus, 2008) consists of roughly 50 million words, which is small compared to contemporary corpora which often contain billions of words. In this study, we therefore focus on relatively small corpora.", |
| "cite_spans": [ |
| { |
| "start": 362, |
| "end": 378, |
| "text": "(Sandhaus, 2008)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We discuss some related work in Section 2, and then present our model for identifying words with differing senses in Section 3. In Sections 4 and 5 we empirically evaluate our model on synthetic examples of semantic change created from Senseval data and near synonyms. We then offer some concluding remarks in Section 6. Sagi et al. (2009) and Cook and Stevenson (2010) focus on identifying specific types of diachronic change-widening and narrowing, and amelioration and pejoration, respectively-and exploit properties of these phenomena in their methods for identifying them. Gulordava and Baroni (2011) consider the identification of diachronic changes in meaning from an n-gram database, but in contrast to Sagi et al. and Cook and Stevenson, do not focus on specific types of semantic change. Others have studied differences in meaning between dialects and domains, instead of over time. Peirsman et al. (2010) consider the identification of lectal markers-words typical of one dialect versus another, either because of their marked frequency or sense-in Belgian and Netherlandic Dutch. McCarthy et al. (2007) consider the identification of predominant word senses in corpora, focusing on differences between domains. This method can be applied to not only identify the words that differ in predominant sense in two corpora, but also the specific predominant senses of those words. Nevertheless, none of these studies has specifically considered the identification of words with novel infrequent senses, the focus of this study.", |
| "cite_spans": [ |
| { |
| "start": 321, |
| "end": 339, |
| "text": "Sagi et al. (2009)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 344, |
| "end": 369, |
| "text": "Cook and Stevenson (2010)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 578, |
| "end": 605, |
| "text": "Gulordava and Baroni (2011)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 893, |
| "end": 915, |
| "text": "Peirsman et al. (2010)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 1060, |
| "end": 1114, |
| "text": "Belgian and Netherlandic Dutch. McCarthy et al. (2007)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Given the challenges of evaluating methods for identifying semantic change-namely a lack of suitable resources-we propose the use of synthetic examples of semantic change for evaluation. Gaustad (2001) showed that evaluations using pseudowords (artificially-ambiguous words) can over-estimate the accuracy of a word sense disambiguation system on real data. Gaustad suggests this is because word senses are typically related (i.e., words are polysemous) whereas pseudowords are usually created from words with distinct senses. Nakov and Hearst (2003) and Otrusina and Smrz (2010) propose methods for constructing more-realistic pseudowords by taking into account information about lexical categories, and lexical or distributional information, respectively. In this work we propose a new use for pseudowords-evaluating methods for identifying semantic change-and attempt to address concerns about the use of pseudowords, such as those raised by Gaustad, by creating our pseudowords (discussed in Sections 4 and 5) from real word senses and words with related senses.", |
| "cite_spans": [ |
| { |
| "start": 187, |
| "end": 201, |
| "text": "Gaustad (2001)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 527, |
| "end": 550, |
| "text": "Nakov and Hearst (2003)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 555, |
| "end": 579, |
| "text": "Otrusina and Smrz (2010)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The input to our method is two corpora which represent different text varieties, e.g., different time periods. The output is a set of words-in this study, either nouns or verbs-that are hypothesized by the method to have different senses in one of the corpora compared to the other.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We consider a statistical model similar to one that Peirsman et al. (2010) used for automatically identifying lectal markers. This approach assumes that usages of different senses of a word will occur in different contexts, and that the aggregated contexts of a word in two corpora will differ if the senses of that word differ in those corpora. The model is based on a distributional representation of meaning that draws on work on automatically clustering similar words (Lin, 1998) that has been incorporated into tools used by lexicographers to identify word senses (Kilgarriff and Tugwell, 2002) . Specifically, this method measures the similarity of two lexico-syntactic representations of the aggregated contexts of a target word; these two representations would typically come from different corpora representing, for example, different time periods. The lexico-syntactic representations capture the association of a target word with dependency triples, and the similarity between two target word representations is determined with a number of metrics. We propose some variations to Peirsman et al.'s model-specifically a novel association measure (Section 3.1) and similarity metric (Section 3.2)-that are found to improve its performance (discussed in Section 5).", |
| "cite_spans": [ |
| { |
| "start": 52, |
| "end": 74, |
| "text": "Peirsman et al. (2010)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 472, |
| "end": 483, |
| "text": "(Lin, 1998)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 569, |
| "end": 599, |
| "text": "(Kilgarriff and Tugwell, 2002)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We use the following information-theoretic association measure proposed by Lin (1998) :", |
| "cite_spans": [ |
| { |
| "start": 75, |
| "end": 85, |
| "text": "Lin (1998)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Association measures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "I(w 1 , r, w 2 ) = log ||w 1 , r, w 2 || || * , r, * || ||w 1 , r, * || || * , r, w 2 || (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Association measures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "where w 1 , r, and w 2 are a head, dependency relation, and dependent, respectively; || \u2022 || is the frequency of some tuple (e.g., w 1 , r, w 2 ); and * refers to any item (e.g., w 1 , r, * is a tuple with head w 1 , relation r, and any dependent).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Association measures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In some cases I is not an appropriate association measure. For small corpora, counts for many dependency triples will be very low and hence unreliable. To avoid these data sparseness problems we therefore consider a second, and much simpler, association measure. Joanis et al. (2008) found that the frequency of a verb occurring in specific syntactic relations, as well as the frequency with which that verb co-occurs with specific prepositions, are useful features in verb classification. Our conditional probability-based association measure (cprob), captures information similar to that used by Joanis et al., and is calculated as below:", |
| "cite_spans": [ |
| { |
| "start": 263, |
| "end": 283, |
| "text": "Joanis et al. (2008)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Association measures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "cprob(w 1 , r, w 2 ) = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 ||w 1 , r, w 2 || ||w 1 , * , * ||", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Association measures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "if r = preposition or particle", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Association measures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "||w 1 , r, * || ||w 1 , * , * || otherwise", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Association measures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Prepositions and particles both often indicate the meaning of a verb. Because these parts-ofspeech are frequent, and other dependents are ignored, this association measure can be estimated from smaller corpora more accurately than I.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Association measures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We refer to the usages of a given word w in a corpus C as w C . We define the salient tuples T C for a word w C to be the set of all head, dependency relation, dependent tuples t in corpus C having w as head and frequency in C greater than a threshold, which we set to 5. Peirsman et al. (2010) find cosine to slightly outperform several other metrics-including the similarity metric proposed by Lin (1998) -in a cross-varietal synonymy detection task, and use cosine in their experiments on identifying lectal markers; we therefore also consider cosine and define the similarity for usages of tuples in corpora A and B as follows:", |
| "cite_spans": [ |
| { |
| "start": 396, |
| "end": 406, |
| "text": "Lin (1998)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Similarity metrics", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Cosine(w A , w B ) = \u2211 t\u2208T A \u2229T B A A (t) * A B (t) \u2211 t\u2208T A A A (t) 2 * \u2211 t\u2208T B A B (t) 2 (3)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Similarity metrics", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "where A A (t) and A B (t) correspond to the association-either I or cprob-computed for corpora A and B, respectively. Cosine is a symmetrical similarity metric, but an asymmetrical metric may be more appropriate in some cases. For example, one method used by lexicographers to search for neologisms is to compare a corpus of recent texts to a reference corpus representing standard usage (O'Donovan and O'Neil, 2008) . In this case, focusing on salient usages in the corpus of newer texts that are less salient, or unattested, in the reference corpus may be more appropriate for identifying novel senses. We therefore propose the following asymmetrical metric-Newness-in which R is considered to be a reference corpus, and N a corpus of newer texts: 2", |
| "cite_spans": [ |
| { |
| "start": 388, |
| "end": 416, |
| "text": "(O'Donovan and O'Neil, 2008)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Similarity metrics", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "Newness(w N , w R ) = \u2211 t\u2208T N \u2212T R A N (t) + \u2211 t\u2208T N \u2229T R max(A N (t) \u2212 A R (t), 0) \u2211 t\u2208T N A N (t)", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Similarity metrics", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The first part of the numerator", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Similarity metrics", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "(\u2211 t\u2208T N \u2212T R A N (t)) focuses on tuples that are in N but unattested in R. The second part of the numerator (\u2211 t\u2208T N \u2229T R max(A N (t) \u2212 A R (t), 0))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Similarity metrics", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "focuses on tuples that have stronger association in N than R; the max prevents tuples with stronger association in R than N from impacting the score. The denominator ensures the final score is in", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Similarity metrics", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "[0, 1].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Similarity metrics", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Cosine is a similarity metric-words that have similar usages in two corpora will receive high scores. However, Newness is a dissimilarity metric which assigns high scores to words that have novel usages in one corpus compared to a reference corpus. We use Cosine and Newness to produce a ranking of lemmas, and account for this difference between the measures by simply negating Cosine scores to reverse the rankings for Cosine.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Similarity metrics", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "To determine the extent to which the meanings of a word differ between two corpora, we could simply compute inter-corpus similarity for that word directly, using any of the above similarity metrics; e.g., to compute the difference in the meanings of click between corpora A and B using the Newness similarity metric, we could compute Newness(click A , click B ). However, as Peirsman et al. (2010) observe, it may also be important to take into account the extent to which the meanings of a word vary within a corpus. For example, suppose we observe that the computed difference in meaning is large for some word w between corpora A and B. Taking B as a reference corpus, suppose that we also observe that the computed difference in meaning for w between two random samples of usages from B alone is large. In this case we should not necessarily believe that w differs in meaning between A and B because, based on w's distribution in B, we expect its computed meaning to vary. However, if the computed difference in meaning of w between two random samples from B were small, then the observed computed difference for w between A and B might be more indicative of w having different senses in A and B.", |
| "cite_spans": [ |
| { |
| "start": 375, |
| "end": 397, |
| "text": "Peirsman et al. (2010)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inter-corpus and intra-corpus similarity", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "We operationalize the above intuition as follows: we randomly 2-partition the documents of B-the reference corpus-10 times. For a given word w, this allows us to compute 10 intracorpus B similarities. However, corpus size can influence association measures. 3 In order to make comparisons between similar-size corpora, we randomly 2-partition the documents of A once. 4 This allows us to compute a total of 40 inter-corpus similarities for w. We then compute the difference between w's average inter-corpus and average intra-corpus similarity.", |
| "cite_spans": [ |
| { |
| "start": 258, |
| "end": 259, |
| "text": "3", |
| "ref_id": null |
| }, |
| { |
| "start": 368, |
| "end": 369, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inter-corpus and intra-corpus similarity", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "In this section we use Senseval data to create synthetic examples of words which have undergone semantic change, and evaluate our model on these items. Specifically, we simulate words taking on a novel sense. We do so by dividing the usages for a given word into two parts such that one sense-the \"novel\" sense-is present in only one of the parts. We can further vary the frequency of the novel sense to simulate more or less drastic changes in sense. Crucially, using manually senseannotated data allows us to create synthetic examples of semantic change based on real word senses; the resulting synthetic examples are constructed from related word senses, increasing our confidence that they are plausible examples of semantic change.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Synthetic examples of semantic change from Senseval data", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We use the 32 verbs and 20 nouns in the data from the Senseval-3 English lexical sample task (Mihalcea et al., 2004) to create synthetic examples of semantic change. We restrict ourselves to the training portion of this data, which consists of from 26 to 266 manually sense-annotated usages of each word. We refer to the set of senses of each word w as S w . We select the second most frequent sense, s \u2208 S w , ignoring instances which are assigned multiple senses or annotated as unknown. (We select the second most frequent sense because it tends to be moderately frequent, but does not account for the majority of usages.) We then partition the sense-tagged usages of w into three approximately equal-sized parts w A , w B , and w C . In this discussion, w A and w B consist of instances whose corresponding manually-tagged senses are in S w \u2212 {s}. They are used to simulate w not undergoing semantic change; the frequency of any given sense of w is approximately the same in both w A and w B . We refer to this as the \"no change\" condition.", |
| "cite_spans": [ |
| { |
| "start": 93, |
| "end": 116, |
| "text": "(Mihalcea et al., 2004)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and preprocessing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "By contrast, w C consists of usages of w with any sense in S w such that the percentage of usages of s is approximately r. These usages are used to simulate w undergoing semantic change, namely w acquires a novel sense; specifically, sense s is not present in w A , but accounts for roughly r% of the usages in w C . We refer to this as the \"change\" condition.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and preprocessing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The usages of w are divided into w A , w B , and w C in such a way that as much of the Senseval data as possible is used while still maintaining the appropriate ratio of senses in w C and keeping the sizes of w A , w B and w C approximately equal.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and preprocessing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "When a word changes in meaning by taking on a new sense, it typically goes through a period where it also maintains its original senses (Campbell, 2004, Chapter 9) . For example, the predominant meaning of gay has changed from 'merry' to 'homosexual', but the 'merry' sense is still understood. Furthermore, gay has taken on another sense-often considered offensive-meaning roughly 'of poor quality'. We model this aspect of semantic change in our synthetic examples through the choice of r (the percentage of usages of the novel sense in the \"change\" condition). Values of r close to 100 simulate a word that has lost its original senses and taken on an entirely new meaning, while values of r closer to 0 correspond to a novel but relatively infrequent sense.", |
| "cite_spans": [ |
| { |
| "start": 136, |
| "end": 163, |
| "text": "(Campbell, 2004, Chapter 9)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and preprocessing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We tag all sentences in our dataset with their part-of-speech using the TreeTagger (Schmid, 2004) , and then parse the sentences using the MALT dependency parser (Nivre et al., 2007) with the provided pre-trained linear model for English. 5 For each dependency triple w 1 , r, w 2 in the resulting parses we add a corresponding triple w 2 , r-inverse, w 1 . Then for each word w, we extract all dependency triples with head w, and compute the similarity between w A and w B , and w A and w C , using the cprob association measure. 6 In this first set of experiments we compute similarity directly-i.e., not taking into account intra-corpus similarity-due to the rather small number of tokens for each item in these experiments. 7", |
| "cite_spans": [ |
| { |
| "start": 83, |
| "end": 97, |
| "text": "(Schmid, 2004)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 162, |
| "end": 182, |
| "text": "(Nivre et al., 2007)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 531, |
| "end": 532, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and preprocessing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For each value of r in {20, 40, ..., 100}, we randomly repeat the process described in Section 4.1 100 times. For each word w and each trial, we compute the difference between the \"change\" and \"no change\" conditions using each similarity metric, e.g., for Newness we compute Newness(w A , w B ) \u2212 Newness(w A , w C ). If this difference is positive, the method is scored as correct, and as incorrect otherwise. We compute the accuracy over the 100 trials for each word, and then compute the average accuracy over all verbs and nouns separately. In Table 1 we report the average accuracy over all verbs and nouns in our dataset for each similarity metric and value of r.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 548, |
| "end": 555, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental setup and results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "As expected, the accuracies are higher for experimental setups with higher values of r, i.e., words with more-frequent novel senses are easier to identify. For each similarity metric and each value of r, we compare the accuracies to a chance baseline of 50% accuracy using a one-sided one-sample Wilcoxon signed-rank test (a non-parametric alternative to a one-sample t-test). The differences are significant (p < .05) in all cases except those not marked with * in Table 1 . This encouraging result demonstrates that-under the conditions of the present experiment-this method is better able to identify whether a given word has acquired a novel sense than a random baseline, even if that novel sense accounts for only 20% of the usages.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 466, |
| "end": 473, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental setup and results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "It is not clear how to statistically test the difference between Cosine and Newness over all values of r. Because the results using different values of r are based on the same usages of the experimental items, the paired differences obtained for different values of r are not independent. Therefore statistical tests such as the paired t-test and the Wilcoxon signed-rank test are not appropriate. We can, however, test the difference between the similarity metrics for a specific value of r. For the 32 verbs, the accuracy using Cosine is significantly better (p < .05) than that using Newness according to a two-sided paired Wilcoxon signed-rank test, for each value of r. For the 20 nouns, Cosine is significantly better than Newness in 1 case (r = 20).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup and results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Many words have multiple related senses; i.e., many words are polysemous. Therefore, when a word takes on a novel sense, it is often related to its older senses. For example, the relatively recent sense of post referring to online message boards and social media is related to its older senses referring to physical message boards. In this section we use this knowledge of semantic change to construct plausible synthetic examples of lexical semantic change based on near synonyms. By replacing varying numbers of usages of a given word in a corpus with a near synonym, we can simulate different degrees of sense change; i.e., we simulate different degrees of a word acquiring a novel, but related, sense. This corresponds to the view that semantic change is gradual, with words taking on new senses, and their original senses remaining or fading from usage over time. Moreover, these synthetic examples can be viewed as a type of widening, a common type of semantic change in which the use of a word is extended to additional contexts (Campbell, 2004) . In contrast to the experiments in Section 4, here we have access to usages of our synthetic examples of semantic change in a corpus, and are able to calculate association measures such as I that require marginal frequencies; however, this comes at the expense of our synthetic examples of semantic change no longer being based on manually-identified word senses.", |
| "cite_spans": [ |
| { |
| "start": 1036, |
| "end": 1052, |
| "text": "(Campbell, 2004)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Synthetic examples of semantic change from near synonyms", |
| "sec_num": "5" |
| }, |
| { |
| "text": "For these experiments we use the New York Times Annotated Corpus (Sandhaus, 2008) for the year 1990, a sample of approximately 50 million words of non-newswire text from the New York Times. We tag and parse this corpus using the TreeTagger and MALT parser as in Section 4.1.", |
| "cite_spans": [ |
| { |
| "start": 65, |
| "end": 81, |
| "text": "(Sandhaus, 2008)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and experimental items", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We require a set of words to use in the creation of our synthetic examples of semantic change. These should be frequent words-so that we can accurately estimate our association measuresbut not very frequent and highly polysemous words, such as light verbs, e.g., give, take, and make. We select the 1000 verbal lemmas and 1000 nominal lemmas in our corpus with frequency rank 101-1100 amongst verbal and nominal lemmas, respectively. From this set of words we then identify all words w such that the first WordNet (Fellbaum, 1998) synset of w has at least one lemma, near synonym(w), that is not w and has frequency greater than 500; in the case that multiple lemmas satisfy these conditions, we randomly choose one. 8 For example, near synonym(propose) is suggest. Of the 1000 verbs and nouns, 252 verbs and 214 nouns, respectively, satisfy these conditions. These items are used to create synthetic examples of semantic change. The remaining 748 verbs and 786 nouns are used as examples of words that do not undergo semantic change.", |
| "cite_spans": [ |
| { |
| "start": 514, |
| "end": 530, |
| "text": "(Fellbaum, 1998)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and experimental items", |
| "sec_num": "5.1" |
| }, |
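The selection procedure above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the `freq` table and the `first_synset_lemmas` argument (standing in for a WordNet lookup such as NLTK's `wn.synsets(w)[0].lemmas()`) are assumptions, and the example frequencies are invented.

```python
import random

def near_synonym(word, first_synset_lemmas, freq, min_freq=500):
    """Choose a near synonym for `word` from the lemmas of its first
    WordNet synset (given here as a list of strings), following the
    criteria of Section 5.1: the lemma must differ from the word itself
    and have corpus frequency greater than `min_freq`. When several
    lemmas qualify, one is chosen at random; returns None if none do."""
    candidates = [l for l in first_synset_lemmas
                  if l != word and freq.get(l, 0) > min_freq]
    return random.choice(candidates) if candidates else None

# Hypothetical frequencies; in the paper these come from the NYT corpus.
freq = {"suggest": 12000, "propose": 4000, "advise": 300}
print(near_synonym("propose", ["propose", "suggest", "advise"], freq))
# → suggest (the only lemma other than propose with frequency > 500)
```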
| { |
| "text": "We randomly partition the documents in our corpus into two parts, referred to as A and B. For a given word w, we refer to its usages in corpus parts A and B as w A and w B , respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Synthetic example creation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "A synthetic example of semantic change is formed from w A , a random sample of (1 \u2212 r 100 ) \u2022 |w B | usages from w B , and r 100 \u2022 |w B | usages of near synonym(w) from corpus part B. We consider values of r-the proportion of usages in corpus part B that correspond to a novel sense-in {10, 20, 30, 40}. As an example, to form a synthetic example for propose with r = 20, we use all usages of propose in A, a sample of 80% of the usages of propose in B, and a sample of usages of suggest in B of size equal to 20% of the number of usages of propose in B.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Synthetic example creation", |
| "sec_num": "5.2" |
| }, |
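The construction just described can be sketched as follows; this is a minimal illustration under the assumption that usages are stored as lists, and the function name is ours rather than the authors'.

```python
import random

def synthetic_example(usages_A, usages_B, synonym_usages_B, r, seed=0):
    """Form a synthetic example of semantic change for a word w
    (Section 5.2): all of w's usages in corpus part A, plus a mixture
    from part B in which a proportion r/100 of w's usages are replaced
    by usages of near_synonym(w), simulating a novel sense."""
    rng = random.Random(seed)
    n_novel = round(r / 100 * len(usages_B))
    kept = rng.sample(usages_B, len(usages_B) - n_novel)
    novel = rng.sample(synonym_usages_B, n_novel)
    return usages_A, kept + novel

# With r = 20 and 10 usages of propose in part B, the part-B side of the
# example holds 8 propose usages and 2 suggest usages.
```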
| { |
| "text": "In these experiments, we consider both of the association measures (I and cprob) and both of the similarity metrics (Cosine and Newness) discussed in Sections 3.1 and 3.2. We further consider similarity computed directly between two corpora, and taking into account intra-corpus similarity (Section 3.3).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We randomly repeat the process described in Section 5.2 5 times. For each of the 5 trials, we compute the similarity of each synthetic example of semantic change between the corpus parts (using both association measures, both similarity metrics, and both computing inter-corpus similarity directly and taking into account intra-corpus similarity). We similarly compute similarity for Results for experiments in which the novel sense accounts for 10-40% of the usages of a noun are shown. Right: Top-100 % accuracy for identifying synthetic examples of nouns and verbs using the association measure I and Cosine similarity metric in experiments where the novel sense accounts for 10-40% of the usages of a word. The performance of a random baseline is also shown the examples of words that do not change in meaning. We compute the average similarity of each item across the 5 trials. For both types of synthetic semantic change-nouns and verbs-we rank all experimental items-synthetic examples of semantic change, and examples of words that don't undergo semantic change-by average similarity. We then compute interpolated precision-recall curves for the synthetic examples of semantic change.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5.3" |
| }, |
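The evaluation step can be made concrete with a short sketch of interpolated precision-recall over a similarity ranking. This is our illustration, not the authors' code; the boolean-list input encoding is an assumption.

```python
def interpolated_pr(ranked_labels):
    """Interpolated precision-recall points for a ranked candidate list.
    ranked_labels[i] is True when the i-th ranked item is a synthetic
    example of semantic change. Interpolated precision at recall level r
    is the maximum precision observed at recall r or any higher recall."""
    total_pos = sum(ranked_labels)
    points = []  # (recall, precision) measured at each true positive
    tp = 0
    for rank, is_pos in enumerate(ranked_labels, start=1):
        if is_pos:
            tp += 1
            points.append((tp / total_pos, tp / rank))
    interp, best = [], 0.0
    for recall, prec in reversed(points):  # running max from the right
        best = max(best, prec)
        interp.append((recall, best))
    return list(reversed(interp))

# A toy ranking: positives at ranks 1 and 3 of four candidates.
print(interpolated_pr([True, False, True, False]))
# → [(0.5, 1.0), (1.0, 0.6666666666666666)]
```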
| { |
| "text": "The left panel of Figure 1 presents interpolated precision-recall curves for identifying synthetic examples of nouns using the association measure I, Cosine similarity metric, and taking intracorpus similarity into account, an experimental setup similar to one considered by Peirsman et al. (2010) . Results for experiments using r = 10-40 (i.e., the percentage of usages corresponding to the novel sense) are shown. As in the experiments in Section 4.2, stronger instances of semantic change are easier to identify. Results using synthetic examples of verbs (not shown) are similar. Because we expect the candidates returned by our method to be manually examined by a lexicographer, we are particularly interested in whether the top-ranked items are correct. We therefore also consider the accuracy of our method on the top 100 ranked items. Top-100 % accuracy for the same experimental conditions as above, and also for experiments using synthetic examples of verbs, is shown in the right panel of Figure 1 ; a random baseline is also shown. 9 In all cases, the accuracy is significantly better than the random baseline using a one-tailed binomial test (p < .05). For the rest of this study we focus on experiments with r = 10 because we are especially interested in cases where the novel sense is relatively infrequent.", |
| "cite_spans": [ |
| { |
| "start": 275, |
| "end": 297, |
| "text": "Peirsman et al. (2010)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 18, |
| "end": 26, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 1000, |
| "end": 1008, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.4" |
| }, |
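The significance tests reported in this section can be reproduced with an exact one-tailed binomial computation. The sketch below plugs in the figures given for nouns (214 synthetic examples among 1000 items, i.e. a 21.4% random baseline, per Section 5.1 and footnote 9) and a top-100 accuracy of 48; the helper name is ours.

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the one-tailed probability of
    seeing at least k correct items among the top n by chance alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Nouns: 214 synthetic examples among 1000 items gives a 21.4% random
# baseline; 48 correct in the top 100 is far above that chance rate.
p_value = binom_sf(48, 100, 214 / 1000)
print(p_value < 0.05)  # → True
```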
| { |
| "text": "We now consider whether the cprob association measure, and newly-proposed similarity metric Newness, are an improvement over the measures I and Cosine. The left panel of Figure 2 shows results on nouns using both association measures and similarity metrics-taking intra-corpus similarity into account-in experiments with r = 10. The methods using cprob outperform those using I. In terms of top-100 accuracy, cprob and Newness (93%), and cprob and Cosine (77%), are both significantly better than the best method using I (I and Cosine-48%) using a two-tailed binomial test (p \u226a .05 in both cases). The best performance is achieved using cprob in combination with Newness; the top-100 accuracy of this method is significantly better than that of the next best performing method-cprob and Cosine-using a two-tailed binomial test (p \u226a .05). In the case of verbs (not shown) the methods using cprob are again significantly better than those using I; however, in this case the accuracy using cprob and Newness (72%) is not significantly different than that using cprob and Cosine (64%, p > .05).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 170, |
| "end": 178, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "In all the experiments so far in this section, we have taken intra-corpus similarity into account. We now examine the impact of this process on performance. The right panel of Figure 2 shows results using our best performing method-cprob in combination with Newness-for synthetic examples of nouns in experiments with r = 10. (Results using verbs-not shown-are similar.) The results taking intra-corpus similarity into account are much better than those for which similarity is computed directly. The top-100 accuracy using intra-corpus similarity (93%) is significantly better than that calculating similarity directly (47%) using a two-tailed binomial test (p \u226a .05). These results confirm Peirsman et al.'s (2010) observation that it is important to take into account information about intra-corpus variation when identifying differences between corpora.", |
| "cite_spans": [ |
| { |
| "start": 692, |
| "end": 716, |
| "text": "Peirsman et al.'s (2010)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 176, |
| "end": 184, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "We have proposed a method for identifying words that are used in a novel sense in one corpus with respect to another. In contrast to previous work in this area, we focused specifically on infrequent novel senses, the identification of which is a challenge in lexicography due to the vast amount of text being produced nowadays. Our proposed method outperformed random baselines, even when evaluated on rather subtle changes in sense. Furthermore, the combination of a very simple association measure (cprob) and our newly-proposed asymmetrical similarity metric (Newness) outperformed methods using a standard association measure and symmetrical similarity metric.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Given the challenges of evaluation for this task-namely a lack of gold standard data-we further proposed the use of two different types of synthetic examples of semantic change to empirically assess performance on this task. Sense-tagged data was used to construct a small number of synthetic examples of semantic change based on real word senses, while near synonyms were used to build greater numbers of synthetic examples of semantic change.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Although we motivated this work in the context of identifying novel senses, these methods can be applied to any pair of comparable corpora to identifying words with different senses in those corpora. In our ongoing work we are applying our methods to pairs of corpora to manually assess its performance. Given the expense of this manual evaluation, synthetic examples of semantic change remain attractive as a means for determining the strengths and weaknesses of approaches to this task, and for selecting approaches for further manual evaluation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In this case, followingLin (1998), we additionally restrict the salient tuples to those having positive association. 3 I is known to assign high association to low frequency items; a given item will tend to have lower frequency in a smaller corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Here we assume A and B are approximately equal in size. Differences in corpus size could be accounted for through a partitioning scheme that ensures that comparisons are made between equal-size corpus parts. 5 http://maltparser.org/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Because we are using a dataset of usages, as opposed to a corpus, we cannot compute the marginal frequencies required for I here.7 Although the similarity of w A and w B could be viewed as a measure of intra-corpus similarity, here we are using these samples to simulate two corpora in which w has the same set of senses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Although for a given word w we choose near synonym(w) from the same synset, these words will typically differ somewhat in their usage and in the contexts in which they appear.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The baseline is calculated as the number of synthetic examples of semantic change divided by the total number of items.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Historical Linguistics: An Introduction", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Campbell", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Campbell, L. 2004. Historical Linguistics: An Introduction. MIT Press, Cambridge, MA.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Automatically identifying changes in the semantic orientation of words", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Cook", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Stevenson", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC 2010)", |
| "volume": "", |
| "issue": "", |
| "pages": "28--34", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cook, P. and S. Stevenson. 2010. Automatically identifying changes in the semantic orientation of words. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC 2010), pages 28-34, Valletta, Malta.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Work on statistical methods for word sense disambiguation", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Fellbaum", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "A" |
| ], |
| "last": "Cambridge", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [ |
| "A" |
| ], |
| "last": "Gale", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [ |
| "W" |
| ], |
| "last": "Church", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Goldman", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Norvig", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Gale", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Working Notes of the AAAI Fall Symposium on Probabilistic Approaches to Natural Language", |
| "volume": "", |
| "issue": "", |
| "pages": "54--60", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fellbaum, C., editor. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA. Gale, W. A., K. W. Church, and D. Yarowsky. 1992. Work on statistical methods for word sense disam- biguation. In Goldman, R., Norvig, P., Charniak, E., and Gale, B., editors, Working Notes of the AAAI Fall Symposium on Probabilistic Approaches to Natural Language, pages 54-60. AAAI Press, Menlo Park, CA.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Statistical corpus-based word sense disambiguation: Pseudowords vs. real ambiguous words", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Gaustad", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Companion Volume to the Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "61--66", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gaustad, T. 2001. Statistical corpus-based word sense disambiguation: Pseudowords vs. real ambiguous words. In Companion Volume to the Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL 2001) -Proceedings of the Student Research Workshop, pages 61-66, Toulouse, France.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "A distributional similarity approach to the detection of semantic change in the Google Books Ngram corpus", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Gulordava", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics", |
| "volume": "", |
| "issue": "", |
| "pages": "67--71", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gulordava, K. and M. Baroni. 2011. A distributional similarity approach to the detection of semantic change in the Google Books Ngram corpus. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, pages 67-71, Edinburgh, Scotland.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "A general feature space for automatic verb classification", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Joanis", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Stevenson", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "James", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Natural Language Engineering", |
| "volume": "14", |
| "issue": "3", |
| "pages": "337--367", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joanis, E., S. Stevenson, and D. James. 2008. A general feature space for automatic verb classification. Natural Language Engineering, 14(3):337-367.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Sketching words", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Kilgarriff", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Tugwell", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Lexicography and Natural Language Processing: A Festschrift in Honour of B. T. S. Atkins", |
| "volume": "", |
| "issue": "", |
| "pages": "125--137", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kilgarriff, A. and D. Tugwell. 2002. Sketching words. In Corr\u00e9ard, M.-H., editor, Lexicography and Natural Language Processing: A Festschrift in Honour of B. T. S. Atkins, pages 125-137. Euralex, Grenoble, France.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Automatic retrieval and clustering of similar words", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics (ACL/COLING 1998)", |
| "volume": "", |
| "issue": "", |
| "pages": "768--774", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lin, D. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics (ACL/COLING 1998), pages 768-774, Montreal, Canada.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Unsupervised acquisition of predominant word senses", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Mccarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Koeling", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Weeds", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Carroll", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Computational Linguistics", |
| "volume": "33", |
| "issue": "4", |
| "pages": "553--590", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "McCarthy, D., R. Koeling, J. Weeds, and J. Carroll. 2007. Unsupervised acquisition of predominant word senses. Computational Linguistics, 33(4):553-590.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "The Senseval-3 English lexical sample task", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Chklovski", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Kilgarriff", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text", |
| "volume": "", |
| "issue": "", |
| "pages": "25--28", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mihalcea, R., T. Chklovski, and A. Kilgarriff. 2004. The Senseval-3 English lexical sample task. In Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, pages 25-28, Barcelona, Spain.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Category-based pseudowords", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [ |
| "I" |
| ], |
| "last": "Nakov", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "A" |
| ], |
| "last": "Hearst", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Companion Volume of the Proceedings of HLT-NAACL 2003 -Short Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "67--69", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nakov, P. I. and M. A. Hearst. 2003. Category-based pseudowords. In Companion Volume of the Proceed- ings of HLT-NAACL 2003 -Short Papers, pages 67-69, Edmonton, Canada.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "MaltParser: A language-independent system for data-driven dependency parsing", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Nilsson", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Chanev", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Eryigit", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "K\u00fcbler", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Marinov", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Marsi", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Natural Language Engineering", |
| "volume": "13", |
| "issue": "2", |
| "pages": "95--135", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nivre, J., J. Hall, J. Nilsson, A. Chanev, G. Eryigit, S. K\u00fcbler, S. Marinov, and E. Marsi. 2007. MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering, 13(2):95-135.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "A systematic approach to the selection of neologisms for inclusion in a large monolingual dictionary", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "O'donovan", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "O'neil", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 13th Euralex International Congress", |
| "volume": "", |
| "issue": "", |
| "pages": "571--579", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "O'Donovan, R. and M. O'Neil. 2008. A systematic approach to the selection of neologisms for inclusion in a large monolingual dictionary. In Proceedings of the 13th Euralex International Congress, pages 571-579, Barcelona, Spain.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "A new approach to pseudoword generation", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Otrusina", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Smrz", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC 2010)", |
| "volume": "", |
| "issue": "", |
| "pages": "1195--1199", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Otrusina, L. and P. Smrz. 2010. A new approach to pseudoword generation. In Proceedings of the Sev- enth International Conference on Language Resources and Evaluation (LREC 2010), pages 1195-1199, Valletta, Malta.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "The automatic identification of lexical variation between language varieties", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Peirsman", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Geeraerts", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Speelman", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Natural Language Engineering", |
| "volume": "16", |
| "issue": "4", |
| "pages": "469--491", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peirsman, Y., D. Geeraerts, and D. Speelman. 2010. The automatic identification of lexical variation be- tween language varieties. Natural Language Engineering, 16(4):469-491.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Semantic density analysis: Comparing word meaning across time and space", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Sagi", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Kaufmann", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the EACL 2009 Workshop on GEMS: GEometrical Models of Natural Language Semantics", |
| "volume": "", |
| "issue": "", |
| "pages": "104--111", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sagi, E., S. Kaufmann, and B. Clark. 2009. Semantic density analysis: Comparing word meaning across time and space. In Proceedings of the EACL 2009 Workshop on GEMS: GEometrical Models of Natural Language Semantics, pages 104-111, Athens, Greece.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "The New York Times Annotated Corpus. Linguistic Data Consortium", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Sandhaus", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sandhaus, E. 2008. The New York Times Annotated Corpus. Linguistic Data Consortium, Philadelphia, PA.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Probabilistic part-of-speech tagging using decision trees", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Schmid", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the International Conference on New Methods in Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "44--49", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Schmid, H. 2004. Probabilistic part-of-speech tagging using decision trees. In Proceedings of the Interna- tional Conference on New Methods in Language Processing, pages 44-49, Manchester, UK.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Automatic word sense discrimination", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Computational Linguistics", |
| "volume": "24", |
| "issue": "1", |
| "pages": "97--123", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sch\u00fctze, H. 1992. Automatic word sense discrimination. Computational Linguistics, 24(1):97-123.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "text": "Left: Interpolated precision-recall curves for identifying synthetic examples of nouns using the association measure I and Cosine similarity metric.", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "text": "Left: Interpolated precision-recall curves for identifying synthetic examples of nouns with r = 10 using the association measures cprob (cp) and I, and the Cosine (C) and Newness (N) similarity metrics. Right: Interpolated precision-recall curves for identifying synthetic examples of nouns with r = 10 using the cprob association measure and Newness similarity metric, computing similarity based on intra-corpus similarity (+IC) and directly (\u2212IC).", |
| "type_str": "figure", |
| "num": null |
| }, |
| "TABREF0": { |
| "html": null, |
| "num": null, |
| "text": "Average % accuracy over 32 verbs and 20 nouns using the Cosine and Newness similarity metrics for differing values of r-the percentage of usages of the novel sense in the \"change\" condition. Items marked * are significantly better (p < .05) than a random baseline.", |
| "type_str": "table", |
| "content": "<table><tr><td/><td colspan=\"3\">Average % accuracy</td><td/></tr><tr><td/><td>Verbs</td><td/><td colspan=\"2\">Nouns</td></tr><tr><td>r</td><td colspan=\"4\">Cosine Newness Cosine Newness</td></tr><tr><td>100</td><td>85*</td><td>78*</td><td>82*</td><td>79*</td></tr><tr><td>80</td><td>79*</td><td>70*</td><td>81*</td><td>79*</td></tr><tr><td>60</td><td>75*</td><td>66*</td><td>77*</td><td>74*</td></tr><tr><td>40</td><td>67*</td><td>57*</td><td>71*</td><td>67*</td></tr><tr><td>20</td><td>58*</td><td>51</td><td>60*</td><td>54</td></tr></table>" |
| } |
| } |
| } |
| } |