| { |
| "paper_id": "N16-1020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T14:36:32.590798Z" |
| }, |
| "title": "Black Holes and White Rabbits: Metaphor Identification with Visual Features", |
| "authors": [ |
| { |
| "first": "Ekaterina", |
| "middle": [], |
| "last": "Shutova", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Douwe", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Jean", |
| "middle": [], |
| "last": "Maillard", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "Metaphor is pervasive in our communication, which makes it an important problem for natural language processing (NLP). Numerous approaches to metaphor processing have thus been proposed, all of which relied on linguistic features and textual data to construct their models. Human metaphor comprehension is, however, known to rely on both our linguistic and perceptual experience, and vision can play a particularly important role when metaphorically projecting imagery across domains. In this paper, we present the first metaphor identification method that simultaneously draws knowledge from linguistic and visual data. Our results demonstrate that it outperforms linguistic and visual models in isolation and is competitive with the best-performing metaphor identification methods that rely on hand-crafted knowledge about domains and perception.",
| "pdf_parse": { |
| "paper_id": "N16-1020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "Metaphor is pervasive in our communication, which makes it an important problem for natural language processing (NLP). Numerous approaches to metaphor processing have thus been proposed, all of which relied on linguistic features and textual data to construct their models. Human metaphor comprehension is, however, known to rely on both our linguistic and perceptual experience, and vision can play a particularly important role when metaphorically projecting imagery across domains. In this paper, we present the first metaphor identification method that simultaneously draws knowledge from linguistic and visual data. Our results demonstrate that it outperforms linguistic and visual models in isolation and is competitive with the best-performing metaphor identification methods that rely on hand-crafted knowledge about domains and perception.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "Metaphor lends vividness, sophistication and clarity to our thought and communication. At the same time, it plays a fundamental structural role in our cognition, helping us to organise and project knowledge (Lakoff and Johnson, 1980; Feldman, 2006) . Metaphors arise due to systematic associations between distinct, and seemingly unrelated, concepts. For instance, when we talk about \"the turning wheels of a political regime\", \"rebuilding the campaign machinery\" or \"mending foreign policy\", we view politics and political systems in terms of mechanisms: they can function, break, be mended etc. The existence of this association allows us to transfer knowledge and imagery from the domain of mechanisms (the source domain) to that of political systems (the target domain). According to Lakoff and Johnson (1980) , such metaphorical mappings, or conceptual metaphors, form the basis of metaphorical language.",
| "cite_spans": [ |
| { |
| "start": 207, |
| "end": 233, |
| "text": "(Lakoff and Johnson, 1980;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 234, |
| "end": 248, |
| "text": "Feldman, 2006)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 788, |
| "end": 813, |
| "text": "Lakoff and Johnson (1980)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Metaphor is pervasive in our communication, which makes it important for NLP applications dealing with real-world text. A number of approaches to metaphor processing have thus been proposed, using supervised classification (Gedigian et al., 2006; Mohler et al., 2013; Tsvetkov et al., 2013; Hovy et al., 2013; Dunn, 2013a) , clustering (Shutova et al., 2010; Shutova and Sun, 2013) , vector space models (Shutova et al., 2012; Mohler et al., 2014) , lexical resources (Krishnakumaran and Zhu, 2007; Wilks et al., 2013) and web search with lexicosyntactic patterns (Veale and Hao, 2008; Bollegala and Shutova, 2013) . So far, these and other metaphor processing works relied on textual data to construct their models. Yet, several experiments indicated that perceptual properties of concepts, such as concreteness and imageability, are important features for metaphor identification (Turney et al., 2011; Neuman et al., 2013; Gandy et al., 2013; Strzalkowski et al., 2013; Tsvetkov et al., 2014) . However, all of these methods used manually-annotated linguistic resources to determine these properties (such as the MRC concreteness database (Wilson, 1988) ). To the best of our knowledge, there has not yet been a metaphor processing method that employed information learned from both linguistic and visual data. Ample re-search in cognitive science suggests that human meaning representations are not merely a product of our linguistic exposure, but are also grounded in our perceptual system and sensori-motor experience (Barsalou, 2008; Louwerse, 2011) . Semantic models integrating information from multiple modalities have been shown successful in tasks such as modeling semantic similarity and relatedness (Silberer and Lapata, 2012; Bruni et al., 2014) , lexical entailment (Kiela et al., 2015a) , compositionality (Roller and Schulte im Walde, 2013) and bilingual lexicon induction (Kiela et al., 2015b) . Using visual information is particularly relevant to modelling metaphor, where imagery is ported across domains.",
| "cite_spans": [ |
| { |
| "start": 223, |
| "end": 246, |
| "text": "(Gedigian et al., 2006;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 247, |
| "end": 267, |
| "text": "Mohler et al., 2013;", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 268, |
| "end": 290, |
| "text": "Tsvetkov et al., 2013;", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 291, |
| "end": 309, |
| "text": "Hovy et al., 2013;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 310, |
| "end": 322, |
| "text": "Dunn, 2013a)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 336, |
| "end": 358, |
| "text": "(Shutova et al., 2010;", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 359, |
| "end": 381, |
| "text": "Shutova and Sun, 2013)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 404, |
| "end": 426, |
| "text": "(Shutova et al., 2012;", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 427, |
| "end": 447, |
| "text": "Mohler et al., 2014)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 468, |
| "end": 498, |
| "text": "(Krishnakumaran and Zhu, 2007;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 499, |
| "end": 518, |
| "text": "Wilks et al., 2013)", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 564, |
| "end": 585, |
| "text": "(Veale and Hao, 2008;", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 586, |
| "end": 614, |
| "text": "Bollegala and Shutova, 2013)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 882, |
| "end": 903, |
| "text": "(Turney et al., 2011;", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 904, |
| "end": 924, |
| "text": "Neuman et al., 2013;", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 925, |
| "end": 944, |
| "text": "Gandy et al., 2013;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 945, |
| "end": 971, |
| "text": "Strzalkowski et al., 2013;", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 972, |
| "end": 994, |
| "text": "Tsvetkov et al., 2014)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 1141, |
| "end": 1155, |
| "text": "(Wilson, 1988)", |
| "ref_id": null |
| }, |
| { |
| "start": 1523, |
| "end": 1539, |
| "text": "(Barsalou, 2008;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 1540, |
| "end": 1555, |
| "text": "Louwerse, 2011)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 1712, |
| "end": 1739, |
| "text": "(Silberer and Lapata, 2012;", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 1740, |
| "end": 1759, |
| "text": "Bruni et al., 2014)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 1781, |
| "end": 1802, |
| "text": "(Kiela et al., 2015a)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 1822, |
| "end": 1857, |
| "text": "(Roller and Schulte im Walde, 2013)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 1890, |
| "end": 1911, |
| "text": "(Kiela et al., 2015b)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we present the first metaphor identification method integrating meaning representations learned from linguistic and visual data. We construct our representations using a skip-gram model of Mikolov et al. (2013a) trained on textual data to obtain linguistic embeddings and a deep convolutional neural network (Kiela and Bottou, 2014) trained on image data to obtain visual embeddings. Linguistic word embeddings have been previously successfully used to answer analogy questions (Mikolov et al., 2013b; Levy and Goldberg, 2014) . These works have shown that such representations capture the nuances of word meaning needed to recognise relational similarity (e.g. between pairs \"king : queen\" and \"man : woman\"), quantified by the respective vector offsets (king -queen \u2248 man -woman). In our experiments, we investigate how well these representations can capture information about source and target domains and their interaction in a metaphor. We then enrich these representations with visual information. We first acquire linguistic and visual embeddings for individual words and then extend the methods to learn embeddings for longer phrases. The focus of our experiments is on metaphorical expressions in verb-subject, verb-direct object and adjectival modifier-noun constructions. We thus learn embeddings for verbs, adjectives, nouns, as well as verb-noun and adjective-noun phrases. We then use a set of arithmetic operations on word and phrase embedding vectors to classify phrases as literal or metaphorical. To the best of our knowledge, our approach is also the first one to apply word or phrase embeddings to the task of metaphor identification.", |
| "cite_spans": [ |
| { |
| "start": 204, |
| "end": 226, |
| "text": "Mikolov et al. (2013a)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 323, |
| "end": 347, |
| "text": "(Kiela and Bottou, 2014)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 493, |
| "end": 516, |
| "text": "(Mikolov et al., 2013b;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 517, |
| "end": 541, |
| "text": "Levy and Goldberg, 2014)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Our results demonstrate that the joint model incorporating linguistic and visual representations outperforms the linguistic model in isolation and is competitive with the best-performing metaphor identification methods that rely on hand-crafted information about domains, concreteness and imageability.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A strand of metaphor processing research cast the problem as a classification of linguistic expressions as metaphorical or literal. They experimented with a number of features, including lexical and syntactic information and higher-level features such as semantic roles and domain types. Gedigian et al. (2006) Turney et al. (2011) hypothesized that metaphor is commonly used to describe abstract concepts in terms of more concrete or physical experiences. Thus, Turney and colleagues expected that there would be some discrepancy in the level of concrete-ness of source and target terms in the metaphor. They developed a method to automatically measure concreteness of words and applied it to identify verbal and adjectival metaphors. Neuman et al. (2013) and Gandy et al. (2013) followed in Turney's steps, extending the models by incorporating information about selectional preferences. Heintz et al. (2013) and Strzalkowski et al. (2013) focused on modeling topical structure of text to identify metaphor. Their main hypothesis was that metaphorical language (coming from a different domain) would represent atypical vocabulary within the topical structure of the text. Strzalkowski et al. (2013) acquired a set of topic chains by linking semantically related words in a given text. They then looked for vocabulary outside the topic chain and yet connected to topic chain words via syntactic dependencies and exhibiting high imageability. Heintz et al. (2013) used LDA topic modelling to identify sets of source and target domain vocabulary. In their system, the acquired topics represented source and target domains, and sentences containing vocabulary from both were tagged as metaphorical.", |
| "cite_spans": [ |
| { |
| "start": 288, |
| "end": 310, |
| "text": "Gedigian et al. (2006)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 311, |
| "end": 331, |
| "text": "Turney et al. (2011)", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 736, |
| "end": 756, |
| "text": "Neuman et al. (2013)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 761, |
| "end": 780, |
| "text": "Gandy et al. (2013)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 890, |
| "end": 910, |
| "text": "Heintz et al. (2013)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 915, |
| "end": 941, |
| "text": "Strzalkowski et al. (2013)", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 1174, |
| "end": 1200, |
| "text": "Strzalkowski et al. (2013)", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 1443, |
| "end": 1463, |
| "text": "Heintz et al. (2013)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Other approaches addressed automatic identification of conceptual metaphor. Mason (2004) automatically acquired domain-specific selectional preferences of verbs, and then, by mapping their common nominal arguments in different domains, arrived at the corresponding metaphorical mappings. For example, the verb pour has a strong preference for liquids in the LAB domain and for money in the FINANCE domain, suggesting the mapping MONEY is LIQUID. pointed out that the metaphorical uses of words constitute a large portion of the dependency features extracted for abstract concepts from corpora. For example, the feature vector for politics would contain GAME or MECH-ANISM terms among the frequent features. As a result, distributional clustering of abstract nouns with such features identifies groups of diverse concepts metaphorically associated with the same source domain (or sets of source domains). exploit this property of co-occurrence vectors to identify new metaphorical mappings starting from a set of examples. Shutova and Sun (2013) used hierarchical clustering to derive a network of concepts in which metaphorical associations are learned in an unsupervised way.", |
| "cite_spans": [ |
| { |
| "start": 76, |
| "end": 88, |
| "text": "Mason (2004)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We obtained our linguistic representations using the log-linear skip-gram model of Mikolov et al. (2013a) . Given a corpus of words w and their contexts c, the model learns a set of parameters \u03b8 that maximize the overall corpus probability", |
| "cite_spans": [ |
| { |
| "start": 83, |
| "end": 105, |
| "text": "Mikolov et al. (2013a)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning linguistic representations", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "\\arg\\max_{\\theta} \\prod_{w} \\Big[ \\prod_{c \\in C(w)} p(c|w; \\theta) \\Big],",
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Learning linguistic representations", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "where C(w) is a set of contexts of word w and p(c|w; \u03b8) is a softmax function:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning linguistic representations", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "p(c|w; \\theta) = \\frac{e^{v_c \\cdot v_w}}{\\sum_{c' \\in C} e^{v_{c'} \\cdot v_w}},",
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Learning linguistic representations", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "where v_c and v_w are vector representations of c and w. The parameters we need to set are thus v_{c_i} and v_{w_i} for all words in our word vocabulary V and context vocabulary C, and the set of dimensions i \u2208 1, . . . , d. Given a set D of word-context pairs, embeddings are learned by optimizing the following objective:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning linguistic representations", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "\\arg\\max_{\\theta} \\sum_{(w,c) \\in D} \\log p(c|w) = \\sum_{(w,c) \\in D} \\Big( \\log e^{v_c \\cdot v_w} - \\log \\sum_{c' \\in C} e^{v_{c'} \\cdot v_w} \\Big)",
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Learning linguistic representations", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "We used a recent dump of Wikipedia 1 as our corpus. The text was lemmatized, tagged, and parsed with Stanford CoreNLP (Manning et al., 2014) . Words that appeared fewer than 100 times in their lemmatized form were ignored. The 100-dimensional word and phrase embeddings were learned in two stages: in a first pass, we obtained word-level embeddings (e.g. for white and rabbit) using the standard skip-gram with negative sampling of Eq. 3; we then obtained phrase embeddings (e.g. for white rabbit) through a second pass over the same corpus. In the second pass, the context vectors v_c and v_{c'} of Eq. (3) were set to their values from the first pass, and kept fixed. Verb-noun phrases were extracted by finding nsubj and dobj arcs with VB head and NN dependent; analogously, adjective-noun phrases were extracted by finding amod arcs with NN head and JJ dependent. No frequency cutoff was applied for phrases. All embeddings were trained on the corpus for 3 epochs, using a symmetric window of 5, and 10 negative samples per word-context pair.",
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 140, |
| "text": "(Manning et al., 2014)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning linguistic representations", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Visual embeddings were obtained in a manner similar to Kiela and Bottou (2014) . Using the deep learning framework Caffe (Jia et al., 2014) , we extracted image embeddings from a deep convolutional neural network that was trained on the ImageNet classification task (Russakovsky et al., 2015) . The network (Krizhevsky et al., 2012) consists of 5 convolutional layers, followed by two fully connected rectified linear unit (ReLU) layers that feed into a softmax for classification. The network learns through a multinomial logistic regression objective:", |
| "cite_spans": [ |
| { |
| "start": 55, |
| "end": 78, |
| "text": "Kiela and Bottou (2014)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 121, |
| "end": 139, |
| "text": "(Jia et al., 2014)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 266, |
| "end": 292, |
| "text": "(Russakovsky et al., 2015)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 307, |
| "end": 332, |
| "text": "(Krizhevsky et al., 2012)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning visual representations", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "J(\\theta) = -\\sum_{i=1}^{D} \\sum_{k=1}^{K} \\mathbf{1}\\{y^{(i)} = k\\} \\log \\frac{\\exp(\\theta^{(k)\\top} x^{(i)})}{\\sum_{j=1}^{K} \\exp(\\theta^{(j)\\top} x^{(i)})}",
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Learning visual representations", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "where 1{\u2022} is the indicator function and we train on D examples with K classes. We obtain image embeddings by doing a forward pass with a given image and taking the 4096-dimensional fully connected layer that precedes the softmax (typically called FC7) as the representation of that image. To construct our embeddings, we used up to 10 images for a given word or phrase, which were obtained through Google Images. It has been shown that images from Google yield higher quality representations than comparable resources such as Flickr and are competitive with hand-crafted datasets (Fergus et al., 2005; Bergsma and Goebel, 2011) . We created our final visual representations for words and phrases by taking the average of the extracted image embeddings for a given word or phrase.", |
| "cite_spans": [ |
| { |
| "start": 581, |
| "end": 602, |
| "text": "(Fergus et al., 2005;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 603, |
| "end": 628, |
| "text": "Bergsma and Goebel, 2011)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning visual representations", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "While it is desirable to jointly learn representations from different modalities at the same time, this is often not feasible (or may lead to poor performance) due to data sparsity. Instead, we learn uni-modal representations independently, as described above, and then combine them into multi-modal ones. Previous work in multi-modal semantics (Bruni et al., 2014 ) investigated different ways of combining, or fusing, linguistic and perceptual cues. When calculating similarity, for instance, one can either combine the representations first and subsequently compute similarity scores; or compute similarity scores independently per modality and afterwards combine the scores. In contrast with joint learning (which has also been called early fusion), these two possibilities represent middle and late fusion, respectively (Kiela and Clark, 2015) .", |
| "cite_spans": [ |
| { |
| "start": 345, |
| "end": 364, |
| "text": "(Bruni et al., 2014", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 825, |
| "end": 848, |
| "text": "(Kiela and Clark, 2015)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multimodal fusion strategies", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "We experiment with middle and late fusion strategies. In middle fusion, we L2-normalise and concatenate the vectors for linguistic and visual representations and then compute a metaphoricity score for a phrase based on this joint representation. In late fusion, we first compute the metaphoricity scores based on linguistic and visual representations independently, and then combine them by taking their average.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multimodal fusion strategies", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "We investigate a set of arithmetic operations on the linguistic, visual and multimodal embedding vectors to determine whether the two words in a phrase belong to the same domain or whether a word from one domain is metaphorically used to describe another.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Measuring metaphoricity", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "In our first set of experiments, we compare embeddings learned for individual words in order to determine whether they come from the same domain. This is done by determining similarity between the representations of the two words in a phrase:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word-level embeddings", |
| "sec_num": "3.4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "\\mathrm{sim}(word_1, word_2),",
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Word-level embeddings", |
| "sec_num": "3.4.1" |
| }, |
| { |
"text": "where word_1 is either a verb or an adjective, word_2 is a noun, and similarity is defined as cosine similarity:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word-level embeddings", |
| "sec_num": "3.4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "\\cos(x, y) = \\frac{x \\cdot y}{\\|x\\| \\, \\|y\\|}",
| "eq_num": "(6)" |
| } |
| ], |
| "section": "Word-level embeddings", |
| "sec_num": "3.4.1" |
| }, |
| { |
"text": "We expect the similarity of word representations to be lower for metaphorical expressions (where one word comes from the source domain and one from the target) than for literal ones (where both words come from the target domain). We will further refer to this method as WORDCOS.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word-level embeddings", |
| "sec_num": "3.4.1" |
| }, |
| { |
| "text": "In our second set of experiments, we investigate compositional properties of metaphorical phrases by comparing the embeddings learned for the whole phrase with those of the individual words in the phrase. This allows us to determine which properties the phrase shares with each of the words, providing another criterion for metaphor identification. We expect that the embeddings of literal phrases will be more similar to the embeddings of individual words in the phrase (or a combination thereof) than those of metaphorical phrases. We use the following measures to test this hypothesis:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase-level embeddings", |
| "sec_num": "3.4.2" |
| }, |
| { |
"text": "PHRASCOS1: cos(phrase \u2212 word_1, word_2) (7) PHRASCOS2: cos(phrase \u2212 word_2, word_1) (8) PHRASCOS3: cos(phrase, word_1 + word_2), (9)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase-level embeddings", |
| "sec_num": "3.4.2" |
| }, |
| { |
"text": "where phrase is the phrase embedding vector, and word_1 and word_2 are defined as above.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase-level embeddings", |
| "sec_num": "3.4.2" |
| }, |
| { |
"text": "We use a small development set (a collection of phrases annotated as metaphorical or literal) to determine an optimal classification threshold for each of the above scoring methods. We optimized the threshold by maximizing classification accuracy on the development set. 2 All instances with values above the threshold were considered literal and those with values below the threshold metaphorical. The thresholds were then applied to classify the test instances as literal or metaphorical.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classification", |
| "sec_num": "3.4.3" |
| }, |
| { |
"text": "We evaluate our method using two datasets manually annotated for metaphoricity. The first (MOH) is based on the dataset of Mohammad et al.: sentences (1639 in total) were annotated for metaphoricity by 10 annotators each via the crowdsourcing platform CrowdFlower 3 . Mohammad et al. selected the verbs that were tagged by at least 70% of the annotators as metaphorical or literal to create their dataset. We extracted verb-direct object and verb-subject relations of the annotated verbs from this dataset, discarding the instances with pronominal or clausal subject or object. This resulted in a dataset of 647 verb-noun pairs, 316 of which were metaphorical and 331 literal. The second dataset, from Tsvetkov et al., consists of adjective-noun pairs divided into a training set (TSV-TRAIN) and a manually annotated test set. Metaphorical phrases that depend on wider context for their interpretation (e.g. drowning students) were removed. The training set was annotated by one annotator only, and it is thus likely that the annotations are less reliable than those in the test set. We thus evaluate our methods on Tsvetkov et al.'s test set (TSV-TEST). However, we will also report results on TSV-TRAIN to confirm whether the observed trends hold in a larger, though likely noisier, dataset.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotated datasets", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We selected the above two datasets since they include examples for different senses (both metaphorical and literal) of the same verbs or adjectives. This allows us to test the extent to which our model is able to discriminate between different word senses, as opposed to merely selecting the most frequent class for a given word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotated datasets", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We divided the verb-and adjective-noun datasets into development and test sets. The verb-noun development set contained 80 instances from MOH (40 literal and 40 metaphorical), leaving us with the test set of 567 verb-noun pairs from MOH. We created the adjective-noun development set using 80 adjective-noun pairs (40 literal and 40 metaphorical) from TSV-TRAIN, leaving all of the 222 adjectivenoun pairs in TSV-TEST for evaluation. In a separate experiment, we also applied our methods to the remainder of TSV-TRAIN (1688 adjective-noun pairs) to evaluate our system on a larger adjective dataset.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "We used the development sets to determine an optimal threshold value for each of our scoring methods. The thresholds for verb-noun and adjective-noun phrases were optimized independently using the corresponding development sets. We experimented with the three phrase-level scoring methods on the development sets, and found that PHRASCOS1 consistently outperformed PHRASCOS2 and PHRASCOS3 for both verb-noun and adjective-noun phrases. We thus report results for PHRASCOS1 on our test sets. We first evaluated the performance of WORDCOS and PHRASCOS1 using linguistic and visual representations in isolation, and then evaluated the multimodal models using middle and late fusion strategies. In middle fusion, we concatenated the linguistic and visual vectors, and then applied the WORDCOS and PHRASCOS1 methods to the resulting multimodal vectors. We will refer to these methods as WORDMID and PHRASMID respectively. In late fusion, we used an average of linguistic and visual scores to determine metaphoricity. We experimented with three different scoring methods: (1) WORDLATE, where linguistic and visual WORDCOS scores were combined; (2) PHRASLATE, where linguistic and visual PHRASCOS1 scores were combined; and (3) MIXLATE, where linguistic WORDCOS and visual PHRASCOS1 scores were combined.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We evaluated the performance of our methods on the MOH and TSV-TEST test sets in terms of precision, recall and F-score and the results are presented in Tables 1 and 2 PHRASECOS1 for both verbs and adjectives by 17-19%. This suggests that linguistic word embeddings already successfully capture domain and compositional information necessary for metaphor identification. In contrast, the visual PHRASECOS1 model, when applied in isolation, tends to outperform the visual WORDCOS model. PHRASCOS1 measures to what extent the meaning of the phrase can be composed by simple combination of the representations of individual words. In metaphorical language, however, a meaning transfer takes place and this is no longer the case. Particularly in visual data, where no linguistic conventionality and stylistic effects take place, PHRASCOS1 captures this property. For adjectives this trend was more evident than for verbs. The visual PHRASECOS1 model, even when applied on its own, attains a high F-score of 0.73 on TSV-TEST, suggesting that concreteness and other visual features are highly informative in identification of adjectival metaphors. This effect was present, though not as pronounced, for verbal metaphors, where the vision-only PHRASECOS1 attains an F-score of 0.66.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 153, |
| "end": 167, |
| "text": "Tables 1 and 2", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and discussion", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The multimodal model, integrating linguistic and visual embeddings, outperforms the linguistic models for both verbs and adjectives, clearly demonstrating the utility of visual features across word classes. The late fusion method MIXLATE, which combines the linguistic WORDCOS score and the visual PHRASECOS1, attains an F-score of 0.75 for verbs and 0.79 for adjectives, which makes it bestperforming among our fusion strategies. When the same type of scoring (i.e. either WORDCOS or PHRASCOS1) is used with both linguistic and visual embeddings, middle and late fusion techniques attain comparable levels of performance, with WORD-COS being the leading measure. The reason behind the higher performance of MIXLATE is likely to be the combination of different scoring methods, one of which is more suitable for the linguistic model and the other for the visual one.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and discussion", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The differences between verbs and adjectives with respect to the utility of visual information can be explained by the following two factors. Firstly, previous psycholinguistic research on abstractness and concreteness (Hill et al., 2014) suggests that humans find it easier to judge the level of concreteness of adjectives and nouns than that of verbs. It is thus possible that visual representations capture the concreteness of adjectives and nouns more accurately than that of verbs. Besides concreteness, it is also likely that perceptual properties in general are more important for the semantics of nouns (e.g. objects) and adjectives (their attributes), than for the semantics of verbs (actions), since the latter are grounded in our motor activity and not merely perception. Secondly, following the majority of multimodal semantic models, we used images as our visual data rather than videos. However, some verbs, e.g. stative verbs and verbs for continuous actions, may be better captured in video than images. We thus expect that using video data along with the images as input to the acquisition of visual embeddings is likely to improve metaphor identification performance for verbal metaphors. However, we leave the investigation of this issue for future work.", |
| "cite_spans": [ |
| { |
| "start": 219, |
| "end": 238, |
| "text": "(Hill et al., 2014)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and discussion", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "In an additional experiment, we evaluated our methods on the larger TSV-TRAIN dataset (specifically using its portion that was not employed for development purposes) and the trends observed were the same. MIXLATE attained an F-score of 0.71, outperforming language-only and vision-only models. The performance of all scoring methods on TSV-TRAIN was lower than that on the TSV-TEST. This may be the result of the fact that the labelling of TSV-TRAIN was less consistent than that of TSV-TEST. As TSV-TEST is a set of metaphors annotated by 5 annotators with a high agreement, the evaluation on TSV-TEST is likely to be more reliable (Tsvetkov et al., 2014) .", |
| "cite_spans": [ |
| { |
| "start": 633, |
| "end": 656, |
| "text": "(Tsvetkov et al., 2014)", |
| "ref_id": "BIBREF45" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and discussion", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "It is important to note that, unlike other supervised approaches to metaphor, our methods do not require large training sets to learn the respective thresholds. The results reported here were obtained using only 80 annotated examples for training. This is sufficient since the necessary lexical knowledge and the knowledge about domain, concreteness and visual properties of concepts is already captured in the linguistic and visual embeddings. However, we additionally investigated how stable the thresholds learned by the model are using the TSV-TRAIN dataset. For this purpose, we divided the dataset into 10 portions of approximately 170 examples (balanced for metaphoricity). We then trained the thresholds first on a small set of 170 examples and then increasing the dataset by 170 examples at each round. The thesholds appear to be relatively stable, with a standard deviation of 0.03 for MIXLATE; 0.02 for WORDCOS (linguistic); and 0.05 for PHRASECOS1 (visual). This suggests that our methods do not require a large annotated dataset and training on a small number of examples is sufficient.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and discussion", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Despite the limited need in training data and no reliance on hand-coded lexical resources, the performance of our method favourably compares to that of existing metaphor identification systems (Turney et al., 2011; Neuman et al., 2013; Gandy et al., 2013; Dunn, 2013b; Tsvetkov et al., 2013; Hovy et al., 2013; Hovy et al., 2013; Shutova and Sun, 2013; Strzalkowski et al., 2013; Beigman Klebanov et al., 2015) , that typically use such resources. For instance, Turney et al. (2011) used hand-annotated abstractness scores for words to develop their system, and reported an F-score of 0.68 for verb-noun metaphors and an accuracy of 0.79 for adjectivenoun metaphors (though the latter was only evaluated on a small dataset of 10 adjectives and Turney and colleagues did not report results in terms of F-score, which is likely to be lower). Our use of visual features is in line with Turney's hypothesis concerning the relevance of concreteness features to metaphor processing. However, our results indicate that extracting this information from image data directly is a more suitable way to capture the concreteness itself, as well as capturing other relevant perceptual properties of concepts. The method of Tsvetkov et al. (2014) used both concreteness features (which they extracted from the MRC concreteness database) and hand-coded do-main information for words (which they extracted from WordNet). They report a high F-score of 0.85 for adjective-noun classification on TSV-TEST. The performance of our method on the same dataset is a little lower than that of Tsvetkov et al. However, we do not use any hand-annotated resources and acquire linguistic, domain and perceptual information in the data-driven way. It is thus encouraging that, even though resource-lean, our methods approach the performance level of the methods using hand-annotated features (as in case of Tsvetkov et al. 2014 2015and many others). 
For further comparison with these approaches and their results see a recent review by Shutova (2015) .", |
| "cite_spans": [ |
| { |
| "start": 193, |
| "end": 214, |
| "text": "(Turney et al., 2011;", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 215, |
| "end": 235, |
| "text": "Neuman et al., 2013;", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 236, |
| "end": 255, |
| "text": "Gandy et al., 2013;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 256, |
| "end": 268, |
| "text": "Dunn, 2013b;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 269, |
| "end": 291, |
| "text": "Tsvetkov et al., 2013;", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 292, |
| "end": 310, |
| "text": "Hovy et al., 2013;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 330, |
| "end": 352, |
| "text": "Shutova and Sun, 2013;", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 353, |
| "end": 379, |
| "text": "Strzalkowski et al., 2013;", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 380, |
| "end": 410, |
| "text": "Beigman Klebanov et al., 2015)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 1209, |
| "end": 1231, |
| "text": "Tsvetkov et al. (2014)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 2005, |
| "end": 2019, |
| "text": "Shutova (2015)", |
| "ref_id": "BIBREF41" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and discussion", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We presented the first method that uses visual features for metaphor identification. Our results demonstrate that the multi-modal model combining both linguistic and visual knowledge outperforms language-only models, suggesting the importance of visual information for metaphor processing. Unlike previous metaphor processing approaches, that employed hand-crafted resources to model perceptual properties of concepts, our method learns visual knowledge from images directly, thus reducing the risk of human annotation noise and having a wider coverage and applicability. Since the method relies on automatically acquired lexical knowledge, in the form of linguistic and visual embeddings, and is otherwise resource-independent, it can be applied to unrestricted text in any domain and easily tailored to other metaphor processing tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In the future, it would be interesting to apply multimodal word and phrase embeddings to automatically interpret metaphorical language, e.g. by deriving literal or conventional paraphrases for metaphorical expressions (similarly to the task of Shutova (2010)). Multimodal embeddings are also likely to provide useful information for the models of metaphor translation, as they have already proved successful in bilingual lexicon induction more generally (Kiela et al., 2015b) . Finally, it would be interest-ing to further investigate compositional properties of metaphorical language using multimodal phrase embeddings and to apply the embeddings to automatically generalise metaphorical associations between distinct concepts or domains.", |
| "cite_spans": [ |
| { |
| "start": 454, |
| "end": 475, |
| "text": "(Kiela et al., 2015b)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "https://dumps.wikimedia.org/enwiki/20150805/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We have also experimented with optimizing F-score on the development set and the results exhibited similar trends across methods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "www.crowdflower.com 4 https://www.sketchengine.co.uk/xdocumentation/wiki/Corpora/enTenTen", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We are grateful to the NAACL reviewers for their helpful feedback. Ekaterina Shutova's research is supported by the Leverhulme Trust Early Career Fellowship. Douwe Kiela is supported by EPSRC grant EP/I037512/1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgment", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Grounded cognition. Annual Review of Psychology", |
| "authors": [ |
| { |
| "first": "Lawrence", |
| "middle": [ |
| "W" |
| ], |
| "last": "Barsalou", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "59", |
| "issue": "", |
| "pages": "617--645", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lawrence W. Barsalou. 2008. Grounded cognition. An- nual Review of Psychology, 59(1):617-645.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Supervised word-level metaphor detection: Experiments with concreteness and reweighting of examples", |
| "authors": [ |
| { |
| "first": "Chee Wee", |
| "middle": [], |
| "last": "Beata Beigman Klebanov", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Leong", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Flor", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the Third Workshop on Metaphor in NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "11--20", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Beata Beigman Klebanov, Chee Wee Leong, and Michael Flor. 2015. Supervised word-level metaphor detec- tion: Experiments with concreteness and reweighting of examples. In Proceedings of the Third Workshop on Metaphor in NLP, pages 11-20, Denver, Colorado, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Using visual information to predict lexical preference", |
| "authors": [ |
| { |
| "first": "Shane", |
| "middle": [], |
| "last": "Bergsma", |
| "suffix": "" |
| }, |
| { |
| "first": "Randy", |
| "middle": [], |
| "last": "Goebel", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "RANLP", |
| "volume": "", |
| "issue": "", |
| "pages": "399--405", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shane Bergsma and Randy Goebel. 2011. Using visual information to predict lexical preference. In RANLP, pages 399-405.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Metaphor interpretation using paraphrases extracted from the web", |
| "authors": [ |
| { |
| "first": "Danushka", |
| "middle": [], |
| "last": "Bollegala", |
| "suffix": "" |
| }, |
| { |
| "first": "Ekaterina", |
| "middle": [], |
| "last": "Shutova", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "PLoS ONE", |
| "volume": "8", |
| "issue": "9", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Danushka Bollegala and Ekaterina Shutova. 2013. Meta- phor interpretation using paraphrases extracted from the web. PLoS ONE, 8(9):e74304.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Multimodal distributional semantics", |
| "authors": [ |
| { |
| "first": "Elia", |
| "middle": [], |
| "last": "Bruni", |
| "suffix": "" |
| }, |
| { |
| "first": "Nam-Khanh", |
| "middle": [], |
| "last": "Tran", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "J. Artif. Intell. Res.(JAIR)", |
| "volume": "49", |
| "issue": "", |
| "pages": "1--47", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. J. Artif. Intell. Res.(JAIR), 49:1-47.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Evaluating the premises and results of four metaphor identification systems", |
| "authors": [ |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Dunn", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of CICLing'13", |
| "volume": "", |
| "issue": "", |
| "pages": "471--486", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jonathan Dunn. 2013a. Evaluating the premises and results of four metaphor identification systems. In Proceedings of CICLing'13, pages 471-486, Samos, Greece.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "What metaphor identification systems can tell us about metaphor-in-language", |
| "authors": [ |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Dunn", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the First Workshop on Metaphor in NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "1--10", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jonathan Dunn. 2013b. What metaphor identification systems can tell us about metaphor-in-language. In Proceedings of the First Workshop on Metaphor in NLP, pages 1-10, Atlanta, Georgia.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "From Molecule to Metaphor: A Neural Theory of Language", |
| "authors": [ |
| { |
| "first": "Jerome", |
| "middle": [], |
| "last": "Feldman", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jerome Feldman. 2006. From Molecule to Metaphor: A Neural Theory of Language. The MIT Press.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "WordNet: An Electronic Lexical Database (ISBN: 0-262-06197-X)", |
| "authors": [], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Elec- tronic Lexical Database (ISBN: 0-262-06197-X). MIT Press, first edition.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Learning object categories from google's image search", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Fergus", |
| "suffix": "" |
| }, |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Fei-Fei", |
| "suffix": "" |
| }, |
| { |
| "first": "Pietro", |
| "middle": [], |
| "last": "Perona", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Zisserman", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on", |
| "volume": "2", |
| "issue": "", |
| "pages": "1816--1823", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert Fergus, Li Fei-Fei, Pietro Perona, and Andrew Zisserman. 2005. Learning object categories from google's image search. In Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on, volume 2, pages 1816-1823. IEEE.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Automatic identification of conceptual metaphors with limited knowledge", |
| "authors": [ |
| { |
| "first": "Lisa", |
| "middle": [], |
| "last": "Gandy", |
| "suffix": "" |
| }, |
| { |
| "first": "Nadji", |
| "middle": [], |
| "last": "Allan", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Atallah", |
| "suffix": "" |
| }, |
| { |
| "first": "Ophir", |
| "middle": [], |
| "last": "Frieder", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lisa Gandy, Nadji Allan, Mark Atallah, Ophir Frieder, Newton Howard, Sergey Kanareykin, Moshe Kop- pel, Mark Last, Yair Neuman, and Shlomo Argamon. 2013. Automatic identification of conceptual meta- phors with limited knowledge. In Proceedings of AAAI 2013.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Catching metaphors", |
| "authors": [ |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Gedigian", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Bryant", |
| "suffix": "" |
| }, |
| { |
| "first": "Srini", |
| "middle": [], |
| "last": "Narayanan", |
| "suffix": "" |
| }, |
| { |
| "first": "Branimir", |
| "middle": [], |
| "last": "Ciric", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 3rd Workshop on Scalable Natural Language Understanding", |
| "volume": "", |
| "issue": "", |
| "pages": "41--48", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matt Gedigian, John Bryant, Srini Narayanan, and Bran- imir Ciric. 2006. Catching metaphors. In In Proceed- ings of the 3rd Workshop on Scalable Natural Lan- guage Understanding, pages 41-48, New York.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Automatic extraction of linguistic metaphors with lda topic modeling", |
| "authors": [ |
| { |
| "first": "Ilana", |
| "middle": [], |
| "last": "Heintz", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Gabbard", |
| "suffix": "" |
| }, |
| { |
| "first": "Mahesh", |
| "middle": [], |
| "last": "Srivastava", |
| "suffix": "" |
| }, |
| { |
| "first": "Dave", |
| "middle": [], |
| "last": "Barner", |
| "suffix": "" |
| }, |
| { |
| "first": "Donald", |
| "middle": [], |
| "last": "Black", |
| "suffix": "" |
| }, |
| { |
| "first": "Majorie", |
| "middle": [], |
| "last": "Friedman", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Weischedel", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the First Workshop on Metaphor in NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "58--66", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ilana Heintz, Ryan Gabbard, Mahesh Srivastava, Dave Barner, Donald Black, Majorie Friedman, and Ralph Weischedel. 2013. Automatic extraction of linguistic metaphors with lda topic modeling. In Proceedings of the First Workshop on Metaphor in NLP, pages 58-66, Atlanta, Georgia.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "A quantitative empirical analysis of the abstract/concrete distinction", |
| "authors": [ |
| { |
| "first": "Felix", |
| "middle": [], |
| "last": "Hill", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Bentz", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Cognitive Science", |
| "volume": "38", |
| "issue": "1", |
| "pages": "162--177", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Felix Hill, Anna Korhonen, and Christian Bentz. 2014. A quantitative empirical analysis of the abstract/concrete distinction. Cognitive Science, 38(1):162-177.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Identifying metaphorical word use with tree kernels", |
| "authors": [ |
| { |
| "first": "Dirk", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Shashank", |
| "middle": [], |
| "last": "Shrivastava", |
| "suffix": "" |
| }, |
| { |
| "first": "Sujay", |
| "middle": [], |
| "last": "Kumar Jauhar", |
| "suffix": "" |
| }, |
| { |
| "first": "Mrinmaya", |
| "middle": [], |
| "last": "Sachan", |
| "suffix": "" |
| }, |
| { |
| "first": "Kartik", |
| "middle": [], |
| "last": "Goyal", |
| "suffix": "" |
| }, |
| { |
| "first": "Huying", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Whitney", |
| "middle": [], |
| "last": "Sanders", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the First Workshop on Metaphor in NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "52--57", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dirk Hovy, Shashank Shrivastava, Sujay Kumar Jauhar, Mrinmaya Sachan, Kartik Goyal, Huying Li, Whit- ney Sanders, and Eduard Hovy. 2013. Identifying metaphorical word use with tree kernels. In Proceed- ings of the First Workshop on Metaphor in NLP, pages 52-57, Atlanta, Georgia.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Caffe: Convolutional architecture for fast feature embedding", |
| "authors": [ |
| { |
| "first": "Yangqing", |
| "middle": [], |
| "last": "Jia", |
| "suffix": "" |
| }, |
| { |
| "first": "Evan", |
| "middle": [], |
| "last": "Shelhamer", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Donahue", |
| "suffix": "" |
| }, |
| { |
| "first": "Sergey", |
| "middle": [], |
| "last": "Karayev", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Long", |
| "suffix": "" |
| }, |
| { |
| "first": "Ross", |
| "middle": [], |
| "last": "Girshick", |
| "suffix": "" |
| }, |
| { |
| "first": "Sergio", |
| "middle": [], |
| "last": "Guadarrama", |
| "suffix": "" |
| }, |
| { |
| "first": "Trevor", |
| "middle": [], |
| "last": "Darrell", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1408.5093" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. 2014. Caffe: Convo- lutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Learning Image Embeddings using Convolutional Neural Networks for Improved Multi-Modal Semantics", |
| "authors": [ |
| { |
| "first": "Douwe", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "" |
| }, |
| { |
| "first": "L\u00e9on", |
| "middle": [], |
| "last": "Bottou", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Douwe Kiela and L\u00e9on Bottou. 2014. Learning Image Embeddings using Convolutional Neural Networks for Improved Multi-Modal Semantics. In Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP-14).", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Multi-and crossmodal semantics beyond vision: Grounding in auditory perception", |
| "authors": [ |
| { |
| "first": "Douwe", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2461--2470", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Douwe Kiela and Stephen Clark. 2015. Multi-and cross- modal semantics beyond vision: Grounding in audi- tory perception. In Proceedings of the 2015 Confer- ence on Empirical Methods in Natural Language Pro- cessing, pages 2461-2470, Lisbon, Portugal, Septem- ber. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Exploiting image generality for lexical entailment detection", |
| "authors": [ |
| { |
| "first": "Douwe", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "" |
| }, |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Rimell", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Vuli\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Douwe Kiela, Laura Rimell, Ivan Vuli\u0107, and Stephen Clark. 2015a. Exploiting image generality for lexi- cal entailment detection. In Proceedings of the 53rd", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "2", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Pa- pers), Beijing, China.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Visual bilingual lexicon induction with transferred convnet features", |
| "authors": [ |
| { |
| "first": "Douwe", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Vuli\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Douwe Kiela, Ivan Vuli\u0107, and Stephen Clark. 2015b. Vi- sual bilingual lexicon induction with transferred con- vnet features. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Process- ing, Lisbon, Portugal.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Hunting elusive metaphors using lexical resources", |
| "authors": [ |
| { |
| "first": "Saisuresh", |
| "middle": [], |
| "last": "Krishnakumaran", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaojin", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the Workshop on Computational Approaches to Figurative Language", |
| "volume": "", |
| "issue": "", |
| "pages": "13--20", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Saisuresh Krishnakumaran and Xiaojin Zhu. 2007. Hunting elusive metaphors using lexical resources. In Proceedings of the Workshop on Computational Approaches to Figurative Language, pages 13-20, Rochester, NY.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Imagenet classification with deep convolutional neural networks", |
| "authors": [ |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Krizhevsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [ |
| "E" |
| ], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "1097--1105", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097-1105.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Metaphors We Live By", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Lakoff", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 1980, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George Lakoff and Mark Johnson. 1980. Metaphors We Live By. University of Chicago Press, Chicago.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Linguistic regularities in sparse and explicit word representations", |
| "authors": [ |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "171--180", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Omer Levy and Yoav Goldberg. 2014. Linguistic regu- larities in sparse and explicit word representations. In Proceedings of the Eighteenth Conference on Compu- tational Natural Language Learning, pages 171-180, Ann Arbor, Michigan, June.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Data-driven metaphor recognition and explanation", |
| "authors": [ |
| { |
| "first": "Hongsong", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenny", |
| "middle": [ |
| "Q" |
| ], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Haixun", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "379--390", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hongsong Li, Kenny Q. Zhu, and Haixun Wang. 2013. Data-driven metaphor recognition and explanation. Transactions of the Association for Computational Linguistics, 1:379-390.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Symbol interdependency in symbolic and embodied cognition", |
| "authors": [ |
| { |
| "first": "Max", |
| "middle": [ |
| "M" |
| ], |
| "last": "Louwerse", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Topics in Cognitive Science", |
| "volume": "3", |
| "issue": "2", |
| "pages": "273--302", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Max M Louwerse. 2011. Symbol interdependency in symbolic and embodied cognition. Topics in Cognitive Science, 3(2):273-302.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "The Stanford CoreNLP natural language processing toolkit", |
| "authors": [ |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Mihai", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Bauer", |
| "suffix": "" |
| }, |
| { |
| "first": "Jenny", |
| "middle": [], |
| "last": "Finkel", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [ |
| "J" |
| ], |
| "last": "Bethard", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Mcclosky", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "55--60", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language pro- cessing toolkit. In Proceedings of 52nd Annual Meet- ing of the Association for Computational Linguistics: System Demonstrations, pages 55-60.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Cormet: a computational, corpus-based conventional metaphor extraction system", |
| "authors": [ |
| { |
| "first": "Zachary", |
| "middle": [], |
| "last": "Mason", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Computational Linguistics", |
| "volume": "30", |
| "issue": "1", |
| "pages": "23--44", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zachary Mason. 2004. Cormet: a computational, corpus-based conventional metaphor extraction sys- tem. Computational Linguistics, 30(1):23-44.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Efficient estimation of word representations in vector space", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representa- tions in vector space. In Proceedings of ICLR, Scotts- dale, AZ.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Linguistic regularities in continuous space word representations", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Wen-tau", |
| "middle": [], |
| "last": "Yih", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Zweig", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of NAACL-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "746--751", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In Proceedings of NAACL-HLT, pages 746-751.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Metaphor as a medium for emotion: An empirical study. Language Resources and Evaluation", |
| "authors": [ |
| { |
| "first": "Saif", |
| "middle": [], |
| "last": "Mohammad", |
| "suffix": "" |
| }, |
| { |
| "first": "Ekaterina", |
| "middle": [], |
| "last": "Shutova", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Turney", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Saif Mohammad, Ekaterina Shutova, and Peter Turney. 2016. Metaphor as a medium for emotion: An empiri- cal study. Language Resources and Evaluation, forth- coming.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Semantic signatures for example-based linguistic metaphor detection", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Mohler", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Bracewell", |
| "suffix": "" |
| }, |
| { |
| "first": "Marc", |
| "middle": [], |
| "last": "Tomlinson", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Hinote", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the First Workshop on Metaphor in NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "27--35", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Mohler, David Bracewell, Marc Tomlinson, and David Hinote. 2013. Semantic signatures for example-based linguistic metaphor detection. In Pro- ceedings of the First Workshop on Metaphor in NLP, pages 27-35, Atlanta, Georgia.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "A novel distributional approach to multilingual conceptual metaphor recognition", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Mohler", |
| "suffix": "" |
| }, |
| { |
| "first": "Bryan", |
| "middle": [], |
| "last": "Rink", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Bracewell", |
| "suffix": "" |
| }, |
| { |
| "first": "Marc", |
| "middle": [], |
| "last": "Tomlinson", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Mohler, Bryan Rink, David Bracewell, and Marc Tomlinson. 2014. A novel distributional approach to multilingual conceptual metaphor recognition. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, Dublin, Ireland.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Metaphor identification in large texts corpora", |
| "authors": [ |
| { |
| "first": "Yair", |
| "middle": [], |
| "last": "Neuman", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Assaf", |
| "suffix": "" |
| }, |
| { |
| "first": "Yohai", |
| "middle": [], |
| "last": "Cohen", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Last", |
| "suffix": "" |
| }, |
| { |
| "first": "Shlomo", |
| "middle": [], |
| "last": "Argamon", |
| "suffix": "" |
| }, |
| { |
| "first": "Newton", |
| "middle": [], |
| "last": "Howard", |
| "suffix": "" |
| }, |
| { |
| "first": "Ophir", |
| "middle": [], |
| "last": "Frieder", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "PLoS ONE", |
| "volume": "8", |
| "issue": "4", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yair Neuman, Dan Assaf, Yohai Cohen, Mark Last, Shlomo Argamon, Newton Howard, and Ophir Frieder. 2013. Metaphor identification in large texts corpora. PLoS ONE, 8(4):e62343.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "A multimodal LDA model integrating textual, cognitive and visual modalities", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Roller", |
| "suffix": "" |
| }, |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Schulte Im Walde", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1146--1157", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephen Roller and Sabine Schulte im Walde. 2013. A multimodal lda model integrating textual, cognitive and visual modalities. In Proceedings of the 2013 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1146-1157.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "ImageNet Large Scale Visual Recognition Challenge", |
| "authors": [ |
| { |
| "first": "Olga", |
| "middle": [], |
| "last": "Russakovsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Jia", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| }, |
| { |
| "first": "Hao", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Krause", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanjeev", |
| "middle": [], |
| "last": "Satheesh", |
| "suffix": "" |
| }, |
| { |
| "first": "Sean", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhiheng", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrej", |
| "middle": [], |
| "last": "Karpathy", |
| "suffix": "" |
| }, |
| { |
| "first": "Aditya", |
| "middle": [], |
| "last": "Khosla", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Bernstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [ |
| "C" |
| ], |
| "last": "Berg", |
| "suffix": "" |
| }, |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Fei-Fei", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "International Journal of Computer Vision (IJCV)", |
| "volume": "115", |
| "issue": "3", |
| "pages": "211--252", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexan- der C. Berg, and Li Fei-Fei. 2015. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Unsupervised metaphor identification using hierarchical graph factorization clustering", |
| "authors": [ |
| { |
| "first": "Ekaterina", |
| "middle": [], |
| "last": "Shutova", |
| "suffix": "" |
| }, |
| { |
| "first": "Lin", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of NAACL 2013", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ekaterina Shutova and Lin Sun. 2013. Unsupervised metaphor identification using hierarchical graph fac- torization clustering. In Proceedings of NAACL 2013, Atlanta, GA, USA.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Metaphor identification using verb and noun clustering", |
| "authors": [ |
| { |
| "first": "Ekaterina", |
| "middle": [], |
| "last": "Shutova", |
| "suffix": "" |
| }, |
| { |
| "first": "Lin", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of Coling 2010", |
| "volume": "", |
| "issue": "", |
| "pages": "1002--1010", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ekaterina Shutova, Lin Sun, and Anna Korhonen. 2010. Metaphor identification using verb and noun cluster- ing. In Proceedings of Coling 2010, pages 1002-1010, Beijing, China.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Unsupervised metaphor paraphrasing using a vector space model", |
| "authors": [ |
| { |
| "first": "Ekaterina", |
| "middle": [], |
| "last": "Shutova", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Van De Cruys", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of COLING 2012", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ekaterina Shutova, Tim Van de Cruys, and Anna Korho- nen. 2012. Unsupervised metaphor paraphrasing us- ing a vector space model. In Proceedings of COLING 2012, Mumbai, India.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Automatic metaphor interpretation as a paraphrasing task", |
| "authors": [ |
| { |
| "first": "Ekaterina", |
| "middle": [], |
| "last": "Shutova", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of NAACL 2010", |
| "volume": "", |
| "issue": "", |
| "pages": "1029--1037", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ekaterina Shutova. 2010. Automatic metaphor inter- pretation as a paraphrasing task. In Proceedings of NAACL 2010, pages 1029-1037, Los Angeles, USA.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Design and Evaluation of Metaphor Processing Systems", |
| "authors": [ |
| { |
| "first": "Ekaterina", |
| "middle": [], |
| "last": "Shutova", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Computational Linguistics", |
| "volume": "41", |
| "issue": "4", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ekaterina Shutova. 2015. Design and Evaluation of Metaphor Processing Systems. Computational Lin- guistics, 41(4).", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Grounded models of semantic representation", |
| "authors": [ |
| { |
| "first": "Carina", |
| "middle": [], |
| "last": "Silberer", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "1423--1433", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carina Silberer and Mirella Lapata. 2012. Grounded models of semantic representation. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1423-1433. Asso- ciation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "Robust extraction of metaphor from novel data", |
| "authors": [ |
| { |
| "first": "Tomek", |
| "middle": [], |
| "last": "Strzalkowski", |
| "suffix": "" |
| }, |
| { |
| "first": "George", |
| "middle": [ |
| "Aaron" |
| ], |
| "last": "Broadwell", |
| "suffix": "" |
| }, |
| { |
| "first": "Sarah", |
| "middle": [], |
| "last": "Taylor", |
| "suffix": "" |
| }, |
| { |
| "first": "Laurie", |
| "middle": [], |
| "last": "Feldman", |
| "suffix": "" |
| }, |
| { |
| "first": "Samira", |
| "middle": [], |
| "last": "Shaikh", |
| "suffix": "" |
| }, |
| { |
| "first": "Ting", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Boris", |
| "middle": [], |
| "last": "Yamrom", |
| "suffix": "" |
| }, |
| { |
| "first": "Kit", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Umit", |
| "middle": [], |
| "last": "Boz", |
| "suffix": "" |
| }, |
| { |
| "first": "Ignacio", |
| "middle": [], |
| "last": "Cases", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyle", |
| "middle": [], |
| "last": "Elliot", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the First Workshop on Metaphor in NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "67--76", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomek Strzalkowski, George Aaron Broadwell, Sarah Taylor, Laurie Feldman, Samira Shaikh, Ting Liu, Boris Yamrom, Kit Cho, Umit Boz, Ignacio Cases, and Kyle Elliot. 2013. Robust extraction of metaphor from novel data. In Proceedings of the First Workshop on Metaphor in NLP, pages 67-76, Atlanta, Georgia.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Cross-lingual metaphor detection using common semantic features", |
| "authors": [ |
| { |
| "first": "Yulia", |
| "middle": [], |
| "last": "Tsvetkov", |
| "suffix": "" |
| }, |
| { |
| "first": "Elena", |
| "middle": [], |
| "last": "Mukomel", |
| "suffix": "" |
| }, |
| { |
| "first": "Anatole", |
| "middle": [], |
| "last": "Gershman", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the First Workshop on Metaphor in NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "45--51", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yulia Tsvetkov, Elena Mukomel, and Anatole Gershman. 2013. Cross-lingual metaphor detection using com- mon semantic features. In Proceedings of the First Workshop on Metaphor in NLP, pages 45-51, Atlanta, Georgia.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Metaphor detection with cross-lingual model transfer", |
| "authors": [ |
| { |
| "first": "Yulia", |
| "middle": [], |
| "last": "Tsvetkov", |
| "suffix": "" |
| }, |
| { |
| "first": "Leonid", |
| "middle": [], |
| "last": "Boytsov", |
| "suffix": "" |
| }, |
| { |
| "first": "Anatole", |
| "middle": [], |
| "last": "Gershman", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Nyberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "248--258", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yulia Tsvetkov, Leonid Boytsov, Anatole Gershman, Eric Nyberg, and Chris Dyer. 2014. Metaphor detection with cross-lingual model transfer. In Proceedings of the 52nd Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 248-258, Baltimore, Maryland, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "Literal and metaphorical sense identification through concrete and abstract context", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [ |
| "D" |
| ], |
| "last": "Turney", |
| "suffix": "" |
| }, |
| { |
| "first": "Yair", |
| "middle": [], |
| "last": "Neuman", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Assaf", |
| "suffix": "" |
| }, |
| { |
| "first": "Yohai", |
| "middle": [], |
| "last": "Cohen", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "680--690", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter D. Turney, Yair Neuman, Dan Assaf, and Yohai Cohen. 2011. Literal and metaphorical sense iden- tification through concrete and abstract context. In Proceedings of the Conference on Empirical Meth- ods in Natural Language Processing, EMNLP '11, pages 680-690, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "A fluid knowledge representation for understanding and generating creative metaphors", |
| "authors": [ |
| { |
| "first": "Tony", |
| "middle": [], |
| "last": "Veale", |
| "suffix": "" |
| }, |
| { |
| "first": "Yanfen", |
| "middle": [], |
| "last": "Hao", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of COLING 2008", |
| "volume": "", |
| "issue": "", |
| "pages": "945--952", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tony Veale and Yanfen Hao. 2008. A fluid knowledge representation for understanding and generating cre- ative metaphors. In Proceedings of COLING 2008, pages 945-952, Manchester, UK.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "Automatic metaphor detection using large-scale lexical resources and conventional metaphor extraction", |
| "authors": [ |
| { |
| "first": "Yorick", |
| "middle": [], |
| "last": "Wilks", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Dalton", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Allen", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucian", |
| "middle": [], |
| "last": "Galescu", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the First Workshop on Metaphor in NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "36--44", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yorick Wilks, Adam Dalton, James Allen, and Lucian Galescu. 2013. Automatic metaphor detection using large-scale lexical resources and conventional meta- phor extraction. In Proceedings of the First Workshop on Metaphor in NLP, pages 36-44, Atlanta, Georgia. M.D. Wilson. 1988. The MRC Psycholinguistic Database: Machine Readable Dictionary, Version 2. Behavioural Research Methods, Instruments and Computers, 20:6-11.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "text": "Annotated verb-direct object and verb-subject pairs from MOH", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "text": "Figure 1shows some examples of annotated verbs from Mohammad et al.'s dataset.", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "text": "Annotated adjective-noun pairs from TSV-TEST", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "text": ") or outperform them (as in case of Turney et al. (2011), Neuman et al. (2013), Dunn (2013b), Mohler et al. (2013), Gandy et al. (2013), Strzalkowski et al. (2013), Beigman Klebanov et al.", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF5": { |
| "html": null, |
| "content": "<table><tr><td>TEST) in terms of precision (P ), recall (R) and F-score (F 1)</td></tr></table>", |
| "text": "System performance on Tsvetkov et al. test set (TSV-", |
| "num": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |