{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:40:09.416204Z"
},
"title": "Assessing Polyseme Sense Similarity through Co-predication Acceptability and Contextualised Embedding Distance",
"authors": [
{
"first": "Janosch",
"middle": [],
"last": "Haber",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Queen Mary University of London",
"location": {}
},
"email": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Queen Mary University of London",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Co-predication is one of the most frequently used linguistic tests to tell apart shifts in polysemic sense from changes in homonymic meaning. It is increasingly coming under criticism as evidence is accumulating that it tends to mis-classify specific cases of polysemic sense alteration as homonymy. In this paper, we collect empirical data to investigate these accusations. We asses how co-predication acceptability relates to explicit ratings of polyseme word sense similarity, and how well either measure can be predicted through the distance between target words' contextualised word embeddings. We find that sense similarity appears to be a major contributor in determining co-predication acceptability, but that co-predication judgements tend to rate less similar sense interpretations as being as unacceptable as homonym pairs, effectively misclassifying these instances. The tested contextualised word embeddings fail to predict word sense similarity consistently, but the similarities between BERT embeddings show a significant correlation with co-predication ratings. We take this finding as evidence that BERT embeddings might be better representations of context than encodings of word meaning.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Co-predication is one of the most frequently used linguistic tests to tell apart shifts in polysemic sense from changes in homonymic meaning. It is increasingly coming under criticism as evidence is accumulating that it tends to mis-classify specific cases of polysemic sense alteration as homonymy. In this paper, we collect empirical data to investigate these accusations. We asses how co-predication acceptability relates to explicit ratings of polyseme word sense similarity, and how well either measure can be predicted through the distance between target words' contextualised word embeddings. We find that sense similarity appears to be a major contributor in determining co-predication acceptability, but that co-predication judgements tend to rate less similar sense interpretations as being as unacceptable as homonym pairs, effectively misclassifying these instances. The tested contextualised word embeddings fail to predict word sense similarity consistently, but the similarities between BERT embeddings show a significant correlation with co-predication ratings. We take this finding as evidence that BERT embeddings might be better representations of context than encodings of word meaning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Polysemy is a form of lexical ambiguity which occupies a unique middle ground between monosemy -word forms with exactly one interpretationand homonymy -word forms associated with two or more completely unrelated interpretations. Unlike monosemes, polysemes can evoke different interpretations, but unlike homonyms, polysemic sense interpretations are thought to be closely related to each other (Lyons, 1977) . It is commonly This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: http: //creativecommons.org/licenses/by/4.0/.",
"cite_spans": [
{
"start": 395,
"end": 408,
"text": "(Lyons, 1977)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "assumed that most words in natural language are in fact polysemous to some degree (Falkum and Vicente, 2015) , and the question whether there in fact are any proper monosemes has been the source of ongoing debate (see for example Jackendoff, 1989; Fodor, 1998) . Homonyms have been a driving factor in developing contextualised language models (e.g. Peters et al., 2018; Devlin et al., 2018; Radford et al., 2019) in order to account for the different unrelated meanings some words can evoke in different contexts:",
"cite_spans": [
{
"start": 82,
"end": 108,
"text": "(Falkum and Vicente, 2015)",
"ref_id": "BIBREF9"
},
{
"start": 230,
"end": 247,
"text": "Jackendoff, 1989;",
"ref_id": "BIBREF15"
},
{
"start": 248,
"end": 260,
"text": "Fodor, 1998)",
"ref_id": "BIBREF11"
},
{
"start": 350,
"end": 370,
"text": "Peters et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 371,
"end": 391,
"text": "Devlin et al., 2018;",
"ref_id": "BIBREF6"
},
{
"start": 392,
"end": 413,
"text": "Radford et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "a. The match burned my fingers. b. The match ended without a winner. Comparing these uses of the word match to the various closely related interpretations of canonical polyseme school illuminates the conceptual difference between the two phenomena of lexical ambiguity: 1 a. The school [building] is on fire. b. The school [rules] has prohibited wearing hats in the classroom. c. I have talked to the school [director, staff] about it already. d. The school [participants] went for a visit to the cathedral.",
"cite_spans": [
{
"start": 270,
"end": 271,
"text": "1",
"ref_id": null
},
{
"start": 323,
"end": 330,
"text": "[rules]",
"ref_id": null
},
{
"start": 408,
"end": 425,
"text": "[director, staff]",
"ref_id": null
},
{
"start": 458,
"end": 472,
"text": "[participants]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although the distinction is clear in theory, distinguishing monosemy, polysemy and homonymy in practice proves exceedingly difficult: At what point are interpretation nuances pronounced enough to speak of two different word senses? Is the coercion of word sense a manifestation of polysemic sense alteration or a context effect on a monosemic word form? Do word senses related through metaphor qualify as polysemes or are their interpretations a form or homonymic ambiguity? Traditionally, co-predication tests are used to provide a linguistic means to answer these questions and attempt a classification of word sense interpretations into one of the three categories. In co-predication tests, two interpretations of a word form are simultaneously invoked by the context. If this renders a felicitous construction (see Example 1), the two interpretations are considered to evoke the same sense or meaning of the word; if the reading is infelicitous (Example 2) they are considered to be derived from two different word meanings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) The newspaper wasn't very interesting, so she folded it and put it away. [content/object] (2) # The match burned my fingers but ended without a winner.",
"cite_spans": [
{
"start": 77,
"end": 93,
"text": "[content/object]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Based on a range of experiments finding that homonyms seem to be processed differently than polysemes (Frazier and Rayner, 1990; Rodd et al., 2002; Klepousniotou et al., 2008 Klepousniotou et al., , 2012 , the prevailing understanding of co-predication is that it is rendered felicitous if the different sense interpretations are activated simultaneously and can be shifted between without additional processing costs. Co-predication is thought to lead to infelicitous sentences if the different activations are not activated automatically, and cognitive effort is involved in updating the assumed meaning of a word. These hypotheses informed a number of linguistic models to define different mental representations of homonymic meaning and polysemic sense, respectively. The Generative Lexicon (Pustejovsky, 1991; Asher and Pustejovsky, 2006; Asher, 2011) for example postulates individual lexicon entries for different interpretations of a homonym, while all sense interpretations of a polyseme are represented by a single under-specified entry and therefore do not require any processing cost for sense switching. Recently, a growing body of work however came to challenge a unified, under-specified representation of polysemic sense (see Klepousniotou, 2002; Pylkk\u00e4nen et al., 2006; Frisson, 2015) . Klepousniotou et al. (2012) for example report that their experiments indicate that the processing of irregular polysemes resembles homonymic meaning alterations more than the sense alterations in regular polysemes, while an ongoing series of co-predication studies (e.g. Antunes and Chaves, 2003; Traxler et al., 2005; Schumacher, 2013; Filip and Sutton, 2017; Zobel, 2017; Sutton and Filip, 2018) show that not all polysemic senses can be co-predicated either, and that the co-predication of some poly-semic interpretations can lead to infelicitous and zeugmatic expressions: 2 a. # The newspaper fired its editor in chief and got wet from the rain. [publisher/publication] b. 
# They took the door off its hinges and walked through it. [object/opening] A recent model of polyseme sense clustering proposed by Ortega-Andr\u00e9s and Vicente (2019) tries to explain why certain polyseme senses lead to infelicitous co-predication by suggesting that polyseme senses might be grouped based on their similarity. According to their grouping, closely related senses are thought to form co-activation packages that remain active for a while, allowing for cost-free sense shifting and therefore felicitous co-predication. Distantly related sense interpretations on the other hand would not co-activate and therefore require cognitive effort to be changed, much like homonynic meaning alterations.",
"cite_spans": [
{
"start": 102,
"end": 128,
"text": "(Frazier and Rayner, 1990;",
"ref_id": "BIBREF12"
},
{
"start": 129,
"end": 147,
"text": "Rodd et al., 2002;",
"ref_id": "BIBREF28"
},
{
"start": 148,
"end": 174,
"text": "Klepousniotou et al., 2008",
"ref_id": "BIBREF18"
},
{
"start": 175,
"end": 203,
"text": "Klepousniotou et al., , 2012",
"ref_id": "BIBREF17"
},
{
"start": 795,
"end": 814,
"text": "(Pustejovsky, 1991;",
"ref_id": null
},
{
"start": 815,
"end": 843,
"text": "Asher and Pustejovsky, 2006;",
"ref_id": "BIBREF3"
},
{
"start": 844,
"end": 856,
"text": "Asher, 2011)",
"ref_id": "BIBREF2"
},
{
"start": 1242,
"end": 1262,
"text": "Klepousniotou, 2002;",
"ref_id": "BIBREF16"
},
{
"start": 1263,
"end": 1286,
"text": "Pylkk\u00e4nen et al., 2006;",
"ref_id": "BIBREF26"
},
{
"start": 1287,
"end": 1301,
"text": "Frisson, 2015)",
"ref_id": "BIBREF13"
},
{
"start": 1304,
"end": 1331,
"text": "Klepousniotou et al. (2012)",
"ref_id": "BIBREF17"
},
{
"start": 1576,
"end": 1601,
"text": "Antunes and Chaves, 2003;",
"ref_id": "BIBREF0"
},
{
"start": 1602,
"end": 1623,
"text": "Traxler et al., 2005;",
"ref_id": "BIBREF31"
},
{
"start": 1624,
"end": 1641,
"text": "Schumacher, 2013;",
"ref_id": "BIBREF29"
},
{
"start": 1642,
"end": 1665,
"text": "Filip and Sutton, 2017;",
"ref_id": "BIBREF10"
},
{
"start": 1666,
"end": 1678,
"text": "Zobel, 2017;",
"ref_id": "BIBREF34"
},
{
"start": 1679,
"end": 1702,
"text": "Sutton and Filip, 2018)",
"ref_id": "BIBREF30"
},
{
"start": 1956,
"end": 1979,
"text": "[publisher/publication]",
"ref_id": null
},
{
"start": 2042,
"end": 2058,
"text": "[object/opening]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The difficulty in assessing this hypothesis is the unavailability of a ready reference of contextualised word sense similarity for polysemes. To mitigate this, we collected human annotated data on a number of different measures of polysemic sense similarity to empirically investigate the correlation between sense similarity ratings and co-predication acceptability judgements. Specifically, we use crowdsourcing to collect i) graded co-predication acceptability judgements, ii) explicit (meta-linguistic) word sense similarity judgements, iii) word class similarity ratings, and iv) determine the similarity in a target word's contextualised embeddings derived from different models. If word sense similarity indeed governs co-activation and therefore co-predication acceptability, we expect similarity judgements to be a strong predictor for acceptability judgements. Conversely, if copredication acceptability is a representative test of the mental processing of lexically ambiguous items, we expect acceptability judgements to be a strong predictor of similarity judgements and reliably tell apart homonyms from polysemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We find that sense similarity appears to be a major contributor in determining co-predication acceptability, but that co-predication judgements tend to rate less similar sense interpretations equally as unacceptable as homonym pairs, effectively misclassifying these instances. We therefore argue that these findings provide both, a) support for a more hierarchical representation of polysemic sense based on sense similarity, and, b) an additional, empirically founded argument against copredication as a prevailing test for distinguishing polysemy and homonymy. Finally, the tested contextualised word embeddings fail to predict word sense similarity consistently, but the similarities between BERT embeddings show a significant correlation with co-predication ratings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to evaluate both, i) the hypothesis that polysemic senses might form groupings based on their similarity, and ii) the prevalence of copredication as a linguistic test for the distinction between homonymy and polysemy, we collect three human annotated measures of word sense similarity together with five word sense similarity proxies derived from computational methods. We investigate how well these different metrics distinguish homonyms from polysemes, and to what degree they can predict one another. In order to achieve a fair comparison of the different measures, we defined a fixed set of target words, sense interpretations and contexts to be used in all experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "Since at least Apresjan (1974), polysemes are generally considered to be either regular or irregular, depending on whether or not their sense patterns are shared with other word forms. Irregular polysemes often demonstrate a metaphorical connection between the different interpretations of their senses that does not carry over for other uses (see Example 3), regular, or systematic polysemes on the other hand exhibit the same interpretation patterns across a number of word forms (Example 4). See Moldovan (2019) With growing evidence that irregular polysemes might be processed differently than their regular counterparts (e.g. Klepousniotou et al., 2012) , we decided to focus on regular polysemic nouns for this study. Regular polysemes can be more clearly distinguished from homonyms, maximising the impact of our findings if metrics fail to classify them correctly. With their canonical division of sense interpretations, they also allow for a clear separation of different sense interpretations, making it easier to generate contexts that unequivocally evoke the different senses. We selected ten of the systematic polysemy types compiled in D\u00f6lling (Forthcoming), with target expressions having between two and four clearly distinct but related senses, and picked one of the most frequently used expressions representing each class from his compilation.",
"cite_spans": [
{
"start": 499,
"end": 514,
"text": "Moldovan (2019)",
"ref_id": "BIBREF22"
},
{
"start": 631,
"end": 658,
"text": "Klepousniotou et al., 2012)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Samples",
"sec_num": "2.1"
},
{
"text": "To create sample contexts invoking the different interpretations, we followed a custom template designed to guarantee that samples could be used individually to collect graded word sense judgements, class ratings and context embedding similarity, but could also be combined into a co-predication structure without invalidating acceptability due to repetitions or temporal or logical mis-matches. Following this template, samples were created such that i) the ambiguous target expression is the subject of the sentence, ii) the context is kept as short as possible, and iii) the context invokes a certain sense as clearly as possible without mentioning that sense explicitly. 3 Besides creating clear sample sentences for our human participants, these guidelines also minimise the impact of syntactic features and compounding context effects for contextualised models, which are shown to significantly impact embbedings (see e.g. Wiedemann et al., 2019) and cloud the accessibility of meaning representations.",
"cite_spans": [
{
"start": 675,
"end": 676,
"text": "3",
"ref_id": null
},
{
"start": 929,
"end": 952,
"text": "Wiedemann et al., 2019)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Samples",
"sec_num": "2.1"
},
{
"text": "Two sample contexts were created for every sense interpretation of the ten polysemes, resulting in a total of 54 sentences. As an example, consider the six sample sentences of polyseme newspaper, generated for its three senses (1) organisation/institution, (2) physical object and (3) information/data: 1a The newspaper fired its editor in chief., 1b The newspaper was sued for defamation. 2a The newspaper lies on the kitchen table., 2b The newspaper got wet from the rain. 3a The newspaper wasn't very interesting., 3b The newspaper is rather satirical today.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Samples",
"sec_num": "2.1"
},
{
"text": "Besides the polyseme samples, we created an ad-ditional two samples sets. The first set is made up of 15 common homonyms, with two sentences invoking their two most dominant senses each. While our focus is on polysemes, comparing ratings for the homonym samples to ratings assigned to polyseme pairs, we will be able to test the different similarity measures' performance in predicting whether an ambiguous target pair is polysemic or homonymic. The second set contains 15 pairs of synonyms meant to be used as quality control and to calibrate the rating scale. All sample sentences were rated to be acceptable by annotators recruited from Amazon Mechanical Turk (AMT) 4 in a validation experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Samples",
"sec_num": "2.1"
},
{
"text": "Traditionally, co-predication acceptability is one of the most frequently used linguistic tests for distinguishing homonyms from polysemes. Acceptability usually is determined through introspection, classifying a sentence invoking two different interpretations of the same word form as either acceptable or not. When assessed through annotator judgements, co-predication acceptability however appears to be a graded measure (Lau et al., 2014) . We therefore decided to collect empirical data on graded annotator judgements, asking participants to rate the acceptability of co-predication structures combining different pairings of target word samples through conjunction reduction (Zwicky and Sadock, 1975) . As an example, the previously shown newspaper contexts 1a and 1b where combined into co-predication sample 1ab for data collection:",
"cite_spans": [
{
"start": 424,
"end": 442,
"text": "(Lau et al., 2014)",
"ref_id": "BIBREF19"
},
{
"start": 681,
"end": 706,
"text": "(Zwicky and Sadock, 1975)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graded Co-predication Acceptability",
"sec_num": "2.2"
},
{
"text": "1ab The newspaper fired its editor in chief and was sued for defamation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graded Co-predication Acceptability",
"sec_num": "2.2"
},
{
"text": "Co-predication samples were generated for all combinations of sense interpretations, resulting in four samples for polysemes with two senses, nine for polysemes with three senses, and 16 for those with four, and a grand total of 75. We manually inspected the co-predication structures for any inconsistencies that might have emerged through the conjunction, and corrected issues with the least invasive measures possible. The samples were then distributed over 15 questionnaires so that no target expression appeared twice in any questionnaire. We added one of the homonym and synonym val-idation samples to each questionnaire, and filled all questionnaires to a total of ten items with copredication structures generated from random sentence pairs to obfuscate the focus on polysemes. Item order was then randomised per questionnaire.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graded Co-predication Acceptability",
"sec_num": "2.2"
},
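The conjunction-reduction step described in this section can be sketched as follows. This is our own illustration, not the authors' generation code: the `copredicate` helper assumes the template's subject-initial samples and simply conjoins the two predicates under a shared subject.

```python
from itertools import combinations

# Newspaper samples from Section 2.1, keyed by sense and variant.
samples = {
    "1a": "The newspaper fired its editor in chief.",
    "1b": "The newspaper was sued for defamation.",
    "2a": "The newspaper lies on the kitchen table.",
    "2b": "The newspaper got wet from the rain.",
    "3a": "The newspaper wasn't very interesting.",
    "3b": "The newspaper is rather satirical today.",
}

def copredicate(s1, s2, subject="The newspaper"):
    """Join two samples by conjunction reduction: keep the shared
    subject once and conjoin the two predicates with 'and'."""
    pred1 = s1[len(subject):].rstrip(".").strip()
    pred2 = s2[len(subject):].rstrip(".").strip()
    return f"{subject} {pred1} and {pred2}."

# One possible enumeration: every unordered pairing of samples that
# crosses two different senses (first character encodes the sense).
pairs = [(a, b) for a, b in combinations(sorted(samples), 2) if a[0] != b[0]]
print(copredicate(samples["1a"], samples["1b"]))
```

Applied to contexts 1a and 1b this reproduces sample 1ab from the text: "The newspaper fired its editor in chief and was sued for defamation."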
{
"text": "We used AMT to collect graded co-predication acceptability judgements by asking workers to rate a given sentence using a slider labelled with \"The sentence is absolutely unacceptable\" on the left hand side and \"The sentence is absolutely acceptable\" on the right. The submitted slider positions were translated to a 100-point acceptability score ranging between 0 and 1, and stored in combination with a worker's unique ID. To improve judgement quality, we required workers to have obtained a US high school degree and reached the \"AMT Master\" qualification. 5 Workers were paid 0.35 USD for every completed questionnaire.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graded Co-predication Acceptability",
"sec_num": "2.2"
},
{
"text": "We collected between 20 and 40 judgements for each item. A total of 43 individual workers contributed to the study, with HITs taking an average of 146 seconds (median of 93). Through filtering out any submissions that rated at least two filler samples higher than 0.66 or the synonym sample lower than 0.33, 6 we excluded a total of 44 judgements. The resulting dataset features an average of 28 judgements per item.",
"cite_spans": [
{
"start": 308,
"end": 309,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graded Co-predication Acceptability",
"sec_num": "2.2"
},
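The exclusion rule described above can be stated compactly. A minimal sketch, with function and parameter names of our own choosing: a submission is kept unless it rated at least two filler samples above 0.66 or the synonym calibration sample below 0.33.

```python
def keep_submission(filler_scores, synonym_score,
                    filler_cutoff=0.66, synonym_cutoff=0.33):
    """Return True if a questionnaire submission passes the quality
    filter described in the text (names and cutoffs handling are our
    own rendering of the rule)."""
    too_high = sum(1 for s in filler_scores if s > filler_cutoff)
    return too_high < 2 and synonym_score >= synonym_cutoff

# Slider positions are already normalised to [0, 1].
assert keep_submission([0.10, 0.20, 0.70], synonym_score=0.80)      # kept: one bad filler
assert not keep_submission([0.70, 0.90, 0.10], synonym_score=0.80)  # two fillers rated too high
assert not keep_submission([0.10, 0.20, 0.30], synonym_score=0.10)  # synonym rated too low
```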
{
"text": "As a first measure of sense similarity, we collected graded annotator judgements explicitly rating the similarity of word sense interpretations as invoked by different pairings of sample sentences. In contrast to co-predication judgements, these pairwise similarity ratings are less influenced by factors like sentence order and compound consistency, but do provide a meta-linguistic signal rather than the more ecological acceptability rating derived from co-predication. Still, if word sense similarity is the driving factor in determining the mental representation of polysemic sense, we should find a strong correlation between these judgements and the previously measured co-predication judgements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graded Word Sense Similarity",
"sec_num": "2.3"
},
{
"text": "We collected word sense similarity judgements using our custom polyseme sample set, this time combining samples into sentence pairs invoking different combinations of sense interpretations instead of joining them into a single co-predication structure. The same method as in the first experiment was used for distributing test items over questionnaires, with the distinction that now homonym, synonym and filler samples were presented as sentence pairs rather than co-predication structures as well. We highlighted target expressions in bold font and asked workers to rate the highlighted expressions using a slider labelled with \"The highlighted words have a completely different meaning\" on the left hand side and \"The highlighted words have completely the same meaning\" on the right. Qualification requirements and payment remained identical.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graded Word Sense Similarity",
"sec_num": "2.3"
},
{
"text": "We collected 20 judgements for each questionnaire. 65 individual workers in total contributed to the study, with HITs taking an average of 133 seconds (median = 90). Applying the same filtering as with the co-predication samples, we removed 9 submissions and retained at least 18 judgements per item.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graded Word Sense Similarity",
"sec_num": "2.3"
},
{
"text": "As a second judgement of word sense similarity, we collected categorical sense class labels. If the determining factor in whether or not word senses can be co-predicated is not specifically their distance, but whether or not both interpretations refer to the same type or class of object, the agreement in assigned sense class should be a good predictor of co-predication acceptability -and valid proxy of word sense similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Class Ratings",
"sec_num": "2.4"
},
{
"text": "To collect sense class labels, AMT Workers were presented with individual sample sentences together with a list of 16 sense class labels. Class labels were derived from the descriptions of the ten polyseme's different interpretations as used in D\u00f6lling (Forthcoming) and included an \"other\" category label. We used the same set of polyseme samples as before, with target expression highlighted like in the second experiment. Designed to validate the other two annotation metrics, we did not include any homonym, synonym or filler items in this experiment. Workers were asked to classify the highlighted target expression by selecting all applicable labels. Submissions were stored in 16dimensional multi-hot vectors indicating the selection of labels together with the worker's ID. We kept the same worker qualification requirements and payment regime as before and collected 15 la-bels for each item, incidentally provided by exactly 15 individual workers, i.e. each individual worker completed all 15 questionnaires. HIT's took an average of 178 seconds (median of 107). Classification results were not filtered, but averaged per item in order to create word sense class vectors. Pairwise sense class similarity was then calculated through the cosine between the different combinations of sense interpretations, i.e. the overlap in their averaged multi-class assignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Class Ratings",
"sec_num": "2.4"
},
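The averaging and cosine computation described above can be sketched as follows (function names are ours; a three-label toy matrix stands in for the 16 labels used in the study):

```python
import numpy as np

def class_vector(label_matrix):
    """Average a (workers x labels) multi-hot label matrix into a
    single word sense class vector for one sample."""
    return np.asarray(label_matrix, dtype=float).mean(axis=0)

def class_similarity(v1, v2):
    """Pairwise sense class similarity: the cosine between two averaged
    class vectors, i.e. the overlap in their multi-class assignments."""
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

# Toy example with 3 labels instead of 16: two workers labelled
# sample A, two labelled sample B.
vec_a = class_vector([[1, 0, 0], [1, 1, 0]])  # -> [1.0, 0.5, 0.0]
vec_b = class_vector([[1, 0, 0], [1, 0, 1]])  # -> [1.0, 0.0, 0.5]
print(round(class_similarity(vec_a, vec_b), 3))  # -> 0.8
```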
{
"text": "The resulting dataset containing all three types of human annotations is publicly available. 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Class Ratings",
"sec_num": "2.4"
},
{
"text": "Because the three previously described measures of word sense similarity are based on costly humanannotated labels, we were also interested in investigating how well sense similarity estimates derived from computational models would correlate with these metrics. Models of polysemy have previously been proposed in distributional semantics (see for example Boleda et al., 2012) , but for the most part, such models found limited application in computational linguistics. With the recent development of context-sensitive models of word embeddings such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) , the field however obtained a new tool to capture polysemic sense alterations, leading to a demonstrated improvement in various NLP systems. While ELMo was developed explicitly to capture a target word's context, BERT is a language model based on the encoder architecture of the Transformer model (Vaswani et al., 2017) , an attention mechanism for learning the contextual relations between words. While BERT's output is usually fed to a downstream model, our aim is to see whether it is able to capture differences in word sense by using its outputs directly.",
"cite_spans": [
{
"start": 357,
"end": 377,
"text": "Boleda et al., 2012)",
"ref_id": "BIBREF4"
},
{
"start": 559,
"end": 580,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF24"
},
{
"start": 590,
"end": 611,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 910,
"end": 932,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embedding Similarity",
"sec_num": "2.5"
},
{
"text": "To obtain ELMo embeddings we used a pretrained model available on TensorFlow Hub 8 and extracted target word vectors from the LSTM's second layer hidden state, which has previously been shown to encode more semantic information than the character-level first layer or the LSTM's first layer (Ethayarajh, 2019; Haber and Poesio, 2020) . For the investigation of BERT's embeddings we used the output of a pretrained cased model as provided by Huggingface 9 with 12 layers, a hidden state size of 768 and 12 attention heads. We i) extracted and averaged sub-word vectors before pooling, ii) extracted the embedding of the [CLS] token, and iii) used the pooled sentence embedding. Lastly, we also determined a primitive contextualised sentence embedding by averaging over the sentence's token embeddings as derived from Word2Vec (Mikolov et al., 2013) pretrained on the Google News Dataset. 10",
"cite_spans": [
{
"start": 291,
"end": 309,
"text": "(Ethayarajh, 2019;",
"ref_id": "BIBREF8"
},
{
"start": 310,
"end": 333,
"text": "Haber and Poesio, 2020)",
"ref_id": "BIBREF14"
},
{
"start": 619,
"end": 624,
"text": "[CLS]",
"ref_id": null
},
{
"start": 825,
"end": 847,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embedding Similarity",
"sec_num": "2.5"
},
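As a rough illustration of step i) and the similarity computation, the sketch below averages a target word's sub-word vectors and compares targets by cosine. All names are ours, and the random matrices are stand-ins for the hidden states that, in the paper's setup, would come from the pretrained ELMo or BERT models (e.g. `outputs.last_hidden_state` of a Huggingface BERT model).

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def target_embedding(token_vectors, target_token_idxs):
    """Average the contextualised vectors of a target word's sub-word
    tokens (step i in the text)."""
    return np.asarray(token_vectors)[target_token_idxs].mean(axis=0)

rng = np.random.RandomState(0)
# Two toy "contexts" of 6 tokens x 8 hidden dimensions each; the target
# word spans tokens 1-2 in both (e.g. a word split into two WordPieces).
hidden_a = rng.randn(6, 8)
hidden_b = rng.randn(6, 8)

sim_same = cosine(target_embedding(hidden_a, [1, 2]),
                  target_embedding(hidden_a, [1, 2]))
sim_diff = cosine(target_embedding(hidden_a, [1, 2]),
                  target_embedding(hidden_b, [1, 2]))
print(round(sim_same, 3))  # identical contexts give similarity 1.0
```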
{
"text": "We report the collected data in four steps: Firstly, we inspect to what degree the different metrics and combinations thereof can predict whether a pair of target sense interpretations is polysemic or homonymic. We then investigate the correlation between the three collected annotation metrics, and model_doc/bert.html report how well the computational measures predict the human annotations. Finally, we move to a more qualitative analysis, investigating in more detail the distribution of ratings over the different sense interpretations of a polyseme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "The top two graphs in Figure 1 show the distribution of human annotations for homonymic (blue) and polysemic (orange) target words based on their explicit word sense similarity ratings or copredication acceptability, respectively. Both annotation measures clearly separate the modes of the distributions, but while co-predication acceptability judgements for the tested polyseme pairs occupy the entire rating scale, explicit word sense similarity ratings only span the upper half (the lowest score is 0.48). Conversely, co-predication acceptability ratings for homonym pairs reach up to 0.67, while the highest-scoring homonym pair only reaches a similarity score of 0.44. This impacts the distribution means, which are closer to each other in the co-predication metric than in the similarity scores. The computational approaches to rating word sense similarities overall return relatively high scores for both, homonym and polyseme pairs, often only occupying the top 20% of the scale. As a result, the means of their distributions are significantly closer, as exemplified by the distributions of BERT word embedding similarity ratings for polyseme and homonym pairs in the third graph of Figure 1 . The primitive Word2Vec sentence embeddings lastly assign a higher mean similarity score to homonym pairs than to polysemes (last graph).",
"cite_spans": [],
"ref_spans": [
{
"start": 22,
"end": 30,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 1191,
"end": 1199,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Predicting Ambiguity Types",
"sec_num": "3.1"
},
{
"text": "Because co-predication acceptability judge- Table 1 : Correlations between the three different metrics of word sense similarity based on annotation judgements, and correlation between computational proxies of word sense similarity as compared to the human judgements. The first set of columns displays pairwise correlation based on Pearson's r, the second set shows the key statistics obtained from their OLS regression, and the third set contains the mean regression scores based on 5-fold cross validation.",
"cite_spans": [],
"ref_spans": [
{
"start": 44,
"end": 51,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Predicting Ambiguity Types",
"sec_num": "3.1"
},
{
"text": "ments show a higher overlap between the distributions of homonym and polyseme ratings than the similarity ratings, we expect similarity to be a stronger predictor in classifying target pairs as either homonyms or polysemes. To validate this intuition, we classified items through a support vector machine (SVM) with linear kernel under five-fold cross-validation. As our dataset is skewed towards polysemy samples, baseline performance is an accuracy of 0.825, achieved by assigning all samples to the polyseme class. Both classification based on similarity ratings and co-predication ratings outperform this baseline, with an accuracy of 0.988 for similarity ratings, and 0.895 for copredication ratings, respectively. Figure 2 shows the optimal decision boundary between homonym samples (blue) and polyseme pairs (orange) calculated for the two annotation metrics. The higher overlap in homonym and polyseme ratings indeed prevents a clear delineation between the two ambiguity types. None of the computational metrics manages to outperform the baseline, and consistently apply max-class labels. Neither combining the two human annotated metrics, nor combining any of the computational metrics improves their respective classification performance over the best individual score.",
"cite_spans": [],
"ref_spans": [
{
"start": 720,
"end": 728,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Predicting Ambiguity Types",
"sec_num": "3.1"
},
{
"text": "In order to establish a measure of correlation between the three human annotation metrics, we consider all six combinations of metrics and i) calculate their Pearson's r, ii) perform an ordinary least squares (OLS) regression, and iii) calculate the mean squared error (MSE) of OLS predictions under five-fold cross validation. The results of these calculations are displayed in Table 1 , and visualised in Figure 3 . We find a moderate but significant correlation between the three human annotation metrics. Similarity judgements and co-predication acceptability judgements show the lowest correlation in the set (Pearson's r of 0.529), while acceptability judgements and categorical class similarity achieve the highest correlation (Pearson's r of 0.563). These results indicate that categorical class boundaries between referent interpretations might have a more direct influence on whether two different senses can felicitously be co-predicated than their graded similarity score. The correlation graphs in Figure 3 again display the coverage of judgements obtained for the three human annotation metrics, indicating that class similarity ratings, like co-predication acceptability, span over the full scale, while similarity judgements only cover the top half. Here however this means that predicting co-predication ratings from similarity scores is more difficult than the inverse, leading to a higher error rate in the prediction of low-similarity items, and an overall higher mean squared error (MSE; 0.014 to 0.04). The same holds for predicting similarity class labels from similarity judgements, which is more difficult than predicting similarity judgements based on class similarity.",
"cite_spans": [],
"ref_spans": [
{
"start": 379,
"end": 386,
"text": "Table 1",
"ref_id": null
},
{
"start": 407,
"end": 415,
"text": "Figure 3",
"ref_id": "FIGREF5"
},
{
"start": 1011,
"end": 1019,
"text": "Figure 3",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Relation Between Different Annotations of Sense Similarity",
"sec_num": "3.2"
},
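The three statistics used here (Pearson's r, an OLS fit, and cross-validated MSE of the OLS predictions) can be reproduced for any pair of metrics as in the sketch below. This is a hypothetical example on synthetic, moderately correlated ratings rather than the collected annotation data; only NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for two annotation metrics rated on [0, 1],
# e.g. similarity vs. co-predication acceptability judgements.
x = rng.uniform(0.0, 1.0, 60)
y = np.clip(0.5 * x + 0.25 + rng.normal(0.0, 0.15, 60), 0.0, 1.0)

# i) Pearson's r between the two metrics.
r = np.corrcoef(x, y)[0, 1]

# ii) Ordinary least squares regression y ~ a*x + b.
a, b = np.polyfit(x, y, 1)

# iii) Mean squared error of OLS predictions under five-fold CV.
folds = np.array_split(rng.permutation(len(x)), 5)
mses = []
for k, test_idx in enumerate(folds):
    train = np.concatenate([f for i, f in enumerate(folds) if i != k])
    ak, bk = np.polyfit(x[train], y[train], 1)
    mses.append(np.mean((ak * x[test_idx] + bk - y[test_idx]) ** 2))

print(f"r={r:.3f}, slope={a:.3f}, cv-MSE={np.mean(mses):.4f}")
```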
{
"text": "The bottom part of Table 1 displays the results of predicting human judgements of polyseme sense similarity based on the different computational proxies. Only seven of the pairwise correlations are significant, and only the correlation between BERT contextualised word embeddings and copredication acceptability ratings approaches a moderate degree (Pearson's r of 0.48). We argue that it was to be expected that the correlation between the similarity of BERT's contextualised embeddings and co-predication acceptability should be higher than between BERT scores and explicit similarity ratings, as BERT does not specifically capture the sense of a target word, but rather the diversity and type of context it appears in. This way it is easier to predict whether a combined context as created by co-predication is natural to occur (and therefore more felicitous) than to directly predict the targets' sense similarity. Other notable significant pairs are ELMo word embeddings and classification similarity (Pearson's r of 0.32), ELMo and similarity ratings (r = 0.3), as well as BERT classification token similarity and co-predication acceptability (r = 0.27), indicating that BERT and ELMo might capture slightly different facets of word sense -but, as indicate above -not in such a way that combining them would improve their performance in predicting the ambiguity type of a target word pair.",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 26,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Relation between Computational Estimates and Human Judgements",
"sec_num": "3.3"
},
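The embedding-based proxies above all reduce to a cosine similarity between a pair of vectors. A minimal sketch, using random vectors as hypothetical stand-ins for contextualised embeddings (the dimensionality 768 matches BERT-base hidden states, but these are not real model outputs):

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(2)

# Hypothetical contextualised embeddings of one target word in two
# different contexts (random stand-ins, not real model outputs).
u = rng.normal(size=768)
v = u + rng.normal(scale=0.5, size=768)  # a perturbed, nearby context

print(round(cosine_similarity(u, v), 3))   # high, but below 1
print(round(cosine_similarity(u, u), 3))   # identical vectors: 1.0
```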
{
"text": "While the correlation between explicit similarity judgements and co-predication acceptability is imperfect, our analysis reveals that judgements are more similar towards the upper end of the rating scale than at the lower end. To investigate this observation in more detail, we here analyse polyseme newspaper, which provides two samples to the low-similarity cluster. As mentioned before, in our experiments we assume that newspaper has three distinct but related sense interpretations: (1) organisation/institution, (2) physical object, and (3) information/data. Figure 4 shows the mean similarity and acceptability ratings for the nine combinations of sense interpretations: The first three bars represent same-sense pairs 11, 22 and 33, the other three groups the different combinations of cross-sense pairs. The figure reveals that the three same-sense pairs receive equally high similarity and acceptability ratings, but while similarity ratings show a gradual decrease in scores assigned to cross-sense pairs, the co-predication acceptability scores are only gradual for more similar crosssense pairs, and drop significantly for less similar ones. These results indicate that similarity ratings appear to be a more nuanced, continuous measure than co-predication acceptability, which can assigns extremely low scores for readings deemed to be infelicitous. A more detailed investigation of the grouping of polyseme senses and its implications for the hypothesis of hierarchical sense representation can be found in Haber and Poesio (2020) .",
"cite_spans": [
{
"start": 1522,
"end": 1545,
"text": "Haber and Poesio (2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 565,
"end": 573,
"text": "Figure 4",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "3.4"
},
{
"text": "The data collected in this study allows for a number of observations about the role of word sense similarity in the processing of homonyms and polysemes. On the one hand, graded co-predication acceptability ratings are shown to be less able to tell apart samples of homonymic and polysemic sense pairs than explicit sense similarity ratings. This supports the growing collection of studies indicating that co-predication might not be as suited a tool to distinguish different types of lexical ambiguity as traditionally assumed. On the other hand, the collected judgements of word sense similarity indicate that polyseme sense pairs mis-classified by co-predication acceptability are overall less similar to each other than other sense pairs, and significantly so than same-sense interpretations. This to some degree vindicates co-predication as a linguistic test, suggesting that rather than distinguishing homonyms form polysemes per se, it might be a coarse indication of the underlying word sense similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "Our results also provide support for recent hypotheses suggesting that polyseme representation in the mental lexicon cannot be fully underspecified. During data collection, annotators rated some polysemic sense interpretations to be significantly less similar to each other than other sense pairs, and even rated some of the polyseme cross-sense co-predication samples as unacceptable. This indicates that the interpretations of polysemic words might be grouped based on their similarity, and only grouped interpretations are available for cost-free sense shifting and felicitous co-predication. Because only a single target word per type of systematic polysemy was tested here, we cannot ascertain whether sense groupings are idiosyncratic or systematic across target words of a certain polysemy type. Data for an analysis of this question can however easily be obtained by repeating our experiments with a larger set of target words. In a similar vain, we also recommend an in-depth analysis of irregular or metaphorical polysemes, which were omitted in this data collection effort.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "Lastly, investigating the suitability of contextualised language models as proxies for human word sense similarity judgements, we find that the tested contextualised embeddings fail to predict word sense similarity consistently, but that the similarities between BERT embeddings show a significant correlation with co-predication acceptability ratings. We take this finding as evidence that BERT might create better encodings of complex contexts than encodings of actual word meaning, as it seems to perform well in determining whether contexts can be felicitously combined without consistently determining the similarity of word senses from these contexts first. We strongly encourage further research into determining the exact lexical semantic information available in BERT encodings in order to shed more light on this issue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "Examples taken from Ortega-Andr\u00e9s and Vicente(2019)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Examples fromCruse (1995)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "As in \"The school is an old building.\" for sense building. See Haber and Poesio (2020) for more details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.mturk.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "According to AMT's website, \"[T]hese Workers have consistently demonstrated a high degree of success in performing a wide range of HITs across a large number of Requesters,\" https://www.mturk.com/worker/help 6 Note that in co-predication the synonymity effect is lost as only one subject noun phrase remains in the conjunction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/dali-ambiguity/ Word-Sense-Dataset-v18 https://tfhub.dev/google/ELMo/3 9 https://huggingface.co/transformers/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The work presented in this paper was supported by the DALI project, ERC Grant 695662. The authors would like to thank Derya \u00c7 okal and Andrea Bruera for their input, and the anonymous reviewers for their feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "On the Licensing Conditions of Co-Predication",
"authors": [
{
"first": "Sandra",
"middle": [],
"last": "Antunes",
"suffix": ""
},
{
"first": "Rui Pedro",
"middle": [],
"last": "Chaves",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2nd International Workshop on Generative Approaches to the Lexicon",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandra Antunes and Rui Pedro Chaves. 2003. On the Licensing Conditions of Co-Predication. In Pro- ceedings of the 2nd International Workshop on Gen- erative Approaches to the Lexicon.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Regular polysemy. Linguistics",
"authors": [
{
"first": "Juri",
"middle": [
"D"
],
"last": "Apresjan",
"suffix": ""
}
],
"year": 1974,
"venue": "",
"volume": "12",
"issue": "",
"pages": "5--32",
"other_ids": {
"DOI": [
"10.1515/ling.1974.12.142.5"
]
},
"num": null,
"urls": [],
"raw_text": "Juri D. Apresjan. 1974. Regular polysemy. Linguistics, 12:5-32.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Lexical Meaning in Context: A Web of Words",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1017/CBO9780511793936"
]
},
"num": null,
"urls": [],
"raw_text": "Nicholas Asher. 2011. Lexical Meaning in Context: A Web of Words. Cambridge University Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A type composition logic for generative lexicon",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Cognitive Science",
"volume": "6",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Asher and James Pustejovsky. 2006. A type composition logic for generative lexicon. Journal of Cognitive Science, 6(1).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Modeling regular polysemy: A study on the semantic classification of catalan adjectives",
"authors": [
{
"first": "Gemma",
"middle": [],
"last": "Boleda",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
},
{
"first": "Toni",
"middle": [],
"last": "Badia",
"suffix": ""
}
],
"year": 2012,
"venue": "Computational Linguistics",
"volume": "38",
"issue": "3",
"pages": "575--616",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gemma Boleda, Sabine Schulte im Walde, and Toni Badia. 2012. Modeling regular polysemy: A study on the semantic classification of catalan adjectives. Computational Linguistics, 38(3):575-616.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Polysemy and related phenomena from a cognitive linguistic viewpoint",
"authors": [
{
"first": "Alan",
"middle": [
"D"
],
"last": "Cruse",
"suffix": ""
}
],
"year": 1995,
"venue": "Computational Lexical Semantics, Studies in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "33--49",
"other_ids": {
"DOI": [
"10.1017/CBO9780511527227.004"
]
},
"num": null,
"urls": [],
"raw_text": "Alan D. Cruse. 1995. Polysemy and related phenom- ena from a cognitive linguistic viewpoint. In Patrick Saint-Dizier and Evelyn Viegas, editors, Computa- tional Lexical Semantics, Studies in Natural Lan- guage Processing, page 33-49. Cambridge Univer- sity Press.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Forthcoming. Systematic Polysemy",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "D\u00f6lling",
"suffix": ""
}
],
"year": null,
"venue": "The Blackwell Companion to Semantics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes D\u00f6lling. Forthcoming. Systematic Polysemy. In Daniel Gutzmann, Lisa Matthewson, C\u00e9cile Meier, Hotze Rullmann, and Thomas Ede Zimmer- mann, editors, The Blackwell Companion to Seman- tics. Wiley.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "How contextual are contextualized word representations? comparing the geometry of bert, elmo, and gpt-2 embeddings",
"authors": [
{
"first": "Kawin",
"middle": [],
"last": "Ethayarajh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/d19-1006"
]
},
"num": null,
"urls": [],
"raw_text": "Kawin Ethayarajh. 2019. How contextual are contextu- alized word representations? comparing the geome- try of bert, elmo, and gpt-2 embeddings. Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Polysemy: Current perspectives and approaches",
"authors": [
{
"first": "Ingrid",
"middle": [],
"last": "Lossius Falkum",
"suffix": ""
},
{
"first": "Augustin",
"middle": [],
"last": "Vicente",
"suffix": ""
}
],
"year": 2015,
"venue": "Lingua",
"volume": "157",
"issue": "",
"pages": "1--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ingrid Lossius Falkum and Augustin Vicente. 2015. Polysemy: Current perspectives and approaches. Lingua, 157:1-16.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Singular count NPs in measure constructions",
"authors": [
{
"first": "Hana",
"middle": [],
"last": "Filip",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Sutton",
"suffix": ""
}
],
"year": 2017,
"venue": "Semantics and Linguistic Theory",
"volume": "27",
"issue": "",
"pages": "340--357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hana Filip and Peter Sutton. 2017. Singular count NPs in measure constructions. In Semantics and Linguis- tic Theory, volume 27, pages 340-357.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Concepts: Where Cognitive Science Went Wrong",
"authors": [
{
"first": "Jerry",
"middle": [
"A"
],
"last": "Fodor",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerry A. Fodor. 1998. Concepts: Where Cognitive Sci- ence Went Wrong. Oxford University Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Taking on semantic commitments: Processing multiple meanings vs. multiple senses",
"authors": [
{
"first": "Lyn",
"middle": [],
"last": "Frazier",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Rayner",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of Memory and Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/0749-596X(90)90071-7"
]
},
"num": null,
"urls": [],
"raw_text": "Lyn Frazier and Keith Rayner. 1990. Taking on seman- tic commitments: Processing multiple meanings vs. multiple senses. Journal of Memory and Language.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "About bound and scary books: The processing of book polysemies",
"authors": [
{
"first": "",
"middle": [],
"last": "Steven Frisson",
"suffix": ""
}
],
"year": 2015,
"venue": "Polysemy: Current Perspectives and Approaches",
"volume": "157",
"issue": "",
"pages": "17--35",
"other_ids": {
"DOI": [
"10.1016/j.lingua.2014.07.017"
]
},
"num": null,
"urls": [],
"raw_text": "Steven Frisson. 2015. About bound and scary books: The processing of book polysemies. Lingua, 157:17 -35. Polysemy: Current Perspectives and Ap- proaches.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Word sense distance in human similarity judgements and contextualised word embeddings",
"authors": [
{
"first": "Janosch",
"middle": [],
"last": "Haber",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Probability and Meaning Conference",
"volume": "",
"issue": "",
"pages": "128--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janosch Haber and Massimo Poesio. 2020. Word sense distance in human similarity judgements and contex- tualised word embeddings. In Proceedings of the Probability and Meaning Conference (PaM 2020), pages 128-145, Gothenburg. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "What is a concept, that a person may grasp it?1. Mind & Language",
"authors": [
{
"first": "Ray",
"middle": [],
"last": "Jackendoff",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "4",
"issue": "",
"pages": "68--102",
"other_ids": {
"DOI": [
"10.1111/j.1468-0017.1989.tb00243.x"
]
},
"num": null,
"urls": [],
"raw_text": "Ray Jackendoff. 1989. What is a concept, that a person may grasp it?1. Mind & Language, 4(1-2):68-102.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The Processing of Lexical Ambiguity: Homonymy and Polysemy in the Mental Lexicon",
"authors": [
{
"first": "Ekaterini",
"middle": [],
"last": "Klepousniotou",
"suffix": ""
}
],
"year": 2002,
"venue": "Brain and Language",
"volume": "81",
"issue": "1-3",
"pages": "205--223",
"other_ids": {
"DOI": [
"10.1006/BRLN.2001.2518"
]
},
"num": null,
"urls": [],
"raw_text": "Ekaterini Klepousniotou. 2002. The Processing of Lex- ical Ambiguity: Homonymy and Polysemy in the Mental Lexicon. Brain and Language, 81(1-3):205- 223.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Not all ambiguous words are created equal: An EEG investigation of homonymy and polysemy",
"authors": [
{
"first": "Ekaterini",
"middle": [],
"last": "Klepousniotou",
"suffix": ""
},
{
"first": "G",
"middle": [
"Bruce"
],
"last": "Pike",
"suffix": ""
},
{
"first": "Karsten",
"middle": [],
"last": "Steinhauer",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Gracco",
"suffix": ""
}
],
"year": 2012,
"venue": "Brain and Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/j.bandl.2012.06.007"
]
},
"num": null,
"urls": [],
"raw_text": "Ekaterini Klepousniotou, G. Bruce Pike, Karsten Stein- hauer, and Vincent Gracco. 2012. Not all ambigu- ous words are created equal: An EEG investigation of homonymy and polysemy. Brain and Language.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Making sense of word senses: The comprehension of polysemy depends on sense overlap",
"authors": [
{
"first": "Ekaterini",
"middle": [],
"last": "Klepousniotou",
"suffix": ""
},
{
"first": "Debra",
"middle": [],
"last": "Titone",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Romero",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1037/a0013012"
]
},
"num": null,
"urls": [],
"raw_text": "Ekaterini Klepousniotou, Debra Titone, and Carolina Romero. 2008. Making sense of word senses: The comprehension of polysemy depends on sense over- lap.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Measuring Gradience in Speakers' Grammaticality Judgements",
"authors": [
{
"first": "Jey Han",
"middle": [],
"last": "Lau",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Shalom",
"middle": [],
"last": "Lappin",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 36th Annual Meeting of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jey Han Lau, Alexander Clark, and Shalom Lappin. 2014. Measuring Gradience in Speakers' Grammat- icality Judgements. Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "1st International Conference on Learning Representations, ICLR 2013 -Workshop Track Proceedings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. In 1st International Con- ference on Learning Representations, ICLR 2013 - Workshop Track Proceedings.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Descriptions and tests for polysemy",
"authors": [
{
"first": "Andrei",
"middle": [],
"last": "Moldovan",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrei Moldovan. 2019. Descriptions and tests for po- lysemy. Axiomathes, pages 1-21.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Polysemy and co-predication",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Ortega-Andr\u00e9s",
"suffix": ""
},
{
"first": "Agust\u00edn",
"middle": [],
"last": "Vicente",
"suffix": ""
}
],
"year": 2019,
"venue": "Glossa: a journal of general linguistics",
"volume": "4",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Ortega-Andr\u00e9s and Agust\u00edn Vicente. 2019. Po- lysemy and co-predication. Glossa: a journal of general linguistics, 4(1).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. CoRR, abs/1802.05365.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The representation of polysemy: Meg evidence",
"authors": [
{
"first": "Liina",
"middle": [],
"last": "Pylkk\u00e4nen",
"suffix": ""
},
{
"first": "Rodolfo",
"middle": [],
"last": "Llin\u00e1s",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"L"
],
"last": "Murphy",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of cognitive neuroscience",
"volume": "18",
"issue": "1",
"pages": "97--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liina Pylkk\u00e4nen, Rodolfo Llin\u00e1s, and Gregory L. Mur- phy. 2006. The representation of polysemy: Meg ev- idence. Journal of cognitive neuroscience, 18(1):97- 109.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI Blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Making sense of semantic ambiguity: Semantic competition in lexical access",
"authors": [
{
"first": "Jennifer",
"middle": [],
"last": "Rodd",
"suffix": ""
},
{
"first": "Gareth",
"middle": [],
"last": "Gaskell",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Marslen-Wilson",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Memory and Language",
"volume": "46",
"issue": "2",
"pages": "245--266",
"other_ids": {
"DOI": [
"10.1006/jmla.2001.2810"
]
},
"num": null,
"urls": [],
"raw_text": "Jennifer Rodd, Gareth Gaskell, and William Marslen- Wilson. 2002. Making sense of semantic ambiguity: Semantic competition in lexical access. Journal of Memory and Language, 46(2):245 -266.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "When combinatorial processing results in reconceptualization: toward a new approach of compositionality",
"authors": [
{
"first": "Petra",
"middle": [],
"last": "Schumacher",
"suffix": ""
}
],
"year": 2013,
"venue": "Frontiers in Psychology",
"volume": "4",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3389/fpsyg.2013.00677"
]
},
"num": null,
"urls": [],
"raw_text": "Petra Schumacher. 2013. When combinatorial process- ing results in reconceptualization: toward a new ap- proach of compositionality. Frontiers in Psychology, 4:677.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Counting Construcions and Coercion: Container, Portion and Measure Interpretations",
"authors": [
{
"first": "Peter",
"middle": [
"R"
],
"last": "Sutton",
"suffix": ""
},
{
"first": "Hana",
"middle": [],
"last": "Filip",
"suffix": ""
}
],
"year": 2018,
"venue": "Oslo Studies in Language",
"volume": "10",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter R. Sutton and Hana Filip. 2018. Counting Con- strucions and Coercion: Container, Portion and Mea- sure Interpretations. Oslo Studies in Language, 10(2).",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Context effects in coercion: Evidence from eye movements",
"authors": [
{
"first": "Matthew",
"middle": [
"J"
],
"last": "Traxler",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "McElree",
"suffix": ""
},
{
"first": "Rihana",
"middle": [
"S"
],
"last": "Williams",
"suffix": ""
},
{
"first": "Martin",
"middle": [
"J"
],
"last": "Pickering",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of Memory and Language",
"volume": "53",
"issue": "1",
"pages": "1--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew J. Traxler, Brian McElree, Rihana S. Williams, and Martin J. Pickering. 2005. Context effects in coercion: Evidence from eye movements. Journal of Memory and Language, 53(1):1-25.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Does BERT make any sense? Interpretable word sense disambiguation with contextualized embeddings",
"authors": [
{
"first": "Gregor",
"middle": [],
"last": "Wiedemann",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Remus",
"suffix": ""
},
{
"first": "Avi",
"middle": [],
"last": "Chawla",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregor Wiedemann, Steffen Remus, Avi Chawla, and Chris Biemann. 2019. Does BERT make any sense? Interpretable word sense disambiguation with contextualized embeddings.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "The sensitivity of natural language to the distinction between class nouns and role nouns",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Zobel",
"suffix": ""
}
],
"year": 2017,
"venue": "Semantics and Linguistic Theory",
"volume": "27",
"issue": "",
"pages": "438--458",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah Zobel. 2017. The sensitivity of natural language to the distinction between class nouns and role nouns. In Semantics and Linguistic Theory, volume 27, pages 438-458.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Ambiguity tests and how to fail them",
"authors": [
{
"first": "Arnold",
"middle": [
"M"
],
"last": "Zwicky",
"suffix": ""
},
{
"first": "Jerrold",
"middle": [
"M"
],
"last": "Sadock",
"suffix": ""
}
],
"year": 1975,
"venue": "Syntax and Semantics",
"volume": "4",
"issue": "",
"pages": "1--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arnold M. Zwicky and Jerrold M. Sadock. 1975. Ambiguity tests and how to fail them. In Syntax and Semantics, volume 4, pages 1-36. Brill.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Distribution of human annotation ratings and computational similarity ratings for homonymic (blue) and polysemic (orange) sentence pairs, together with their means."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "10 https://code.google.com/archive/p/word2vec/"
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Classification of homonym (blue) and polyseme (orange) sample pairs based on pairwise similarity annotations and co-predication acceptability judgements."
},
"FIGREF5": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Correlations between polysemic target word pairs based on the three collected judgements of word sense similarity, together with their best linear fit."
},
"FIGREF7": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Mean similarity ratings (left, ascending hatch) and co-predication acceptability ratings (right, descending hatch) for the nine sense interpretation pairs of polyseme newspaper. The first three bars represent same-sense pairs, the other three groups the different combinations of cross-sense readings, respectively."
}
}
}
}