{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:42:39.664243Z"
},
"title": "Character aware models with similarity learning for metaphor detection",
"authors": [
{
"first": "Tarun",
"middle": [],
"last": "Kumar",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Yashvardhan",
"middle": [],
"last": "Sharma",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recent work on automatic sequential metaphor detection has involved recurrent neural networks initialized with different pre-trained word embeddings and which are sometimes combined with hand engineered features. To capture lexical and orthographic information automatically, in this paper we propose to add character based word representation. Also, to contrast the difference between literal and contextual meaning, we utilize a similarity network. We explore these components via two different architectures-a BiLSTM model and a Transformer Encoder model similar to BERT to perform metaphor identification. We participate in the Second Shared Task on Metaphor Detection on both the VUA and TOFEL datasets with the above models. The experimental results demonstrate the effectiveness of our method as it outperforms all the systems which participated in the previous shared task.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Recent work on automatic sequential metaphor detection has involved recurrent neural networks initialized with different pre-trained word embeddings and which are sometimes combined with hand engineered features. To capture lexical and orthographic information automatically, in this paper we propose to add character based word representation. Also, to contrast the difference between literal and contextual meaning, we utilize a similarity network. We explore these components via two different architectures-a BiLSTM model and a Transformer Encoder model similar to BERT to perform metaphor identification. We participate in the Second Shared Task on Metaphor Detection on both the VUA and TOFEL datasets with the above models. The experimental results demonstrate the effectiveness of our method as it outperforms all the systems which participated in the previous shared task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Metaphors are an inherent component of natural language and enrich our day-to-day communication both in verbal and written forms. A metaphoric expression involves the use of one domain or concept to explain or represent another concept (Lakoff and Johnson, 1980) . Detecting metaphors is a crucial step in interpreting semantic information and thus building better representations for natural language understanding (Shutova and Teufel, 2010) . This is beneficial for applications which require to infer the literal/metaphorical usage of words such as information extraction, conversational systems and sentiment analysis (Tsvetkov et al., 2014) .",
"cite_spans": [
{
"start": 236,
"end": 262,
"text": "(Lakoff and Johnson, 1980)",
"ref_id": "BIBREF16"
},
{
"start": 416,
"end": 442,
"text": "(Shutova and Teufel, 2010)",
"ref_id": "BIBREF30"
},
{
"start": 622,
"end": 645,
"text": "(Tsvetkov et al., 2014)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The detection of metaphorical usage is not a trivial task. For example, in phrases such as breaking the habit and absorption of knowledge, the words breaking and absorption are used metaphorically to mean to destroy/end and understand/learn respectively. In the phrase, All the world's a stage, the world (abstract) has been portrayed in a more concrete (stage) sense. Thus, computational approaches to metaphor identification need to exploit world knowledge, context and domain understanding (Tsvetkov et al., 2014) .",
"cite_spans": [
{
"start": 493,
"end": 516,
"text": "(Tsvetkov et al., 2014)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A number of approaches to metaphor detection have been proposed in the last decade. Many of them use explicit hand-engineered lexical and syntactic information (Hovy et al., 2013; , higher level features such as concreteness scores (Turney et al., 2011; K\u00f6per and Schulte im Walde, 2017) and WordNet supersenses (Tsvetkov et al., 2014) . The more recent methods have modeled metaphor detection as a sequence labeling task, and hence have used BiLSTM (Graves and Schmidhuber, 2005) in different ways (Wu et al., 2018; Gao et al., 2018; Mao et al., 2019; Bizzoni and Ghanimifard, 2018) .",
"cite_spans": [
{
"start": 160,
"end": 179,
"text": "(Hovy et al., 2013;",
"ref_id": "BIBREF11"
},
{
"start": 232,
"end": 253,
"text": "(Turney et al., 2011;",
"ref_id": "BIBREF35"
},
{
"start": 254,
"end": 287,
"text": "K\u00f6per and Schulte im Walde, 2017)",
"ref_id": "BIBREF15"
},
{
"start": 312,
"end": 335,
"text": "(Tsvetkov et al., 2014)",
"ref_id": "BIBREF34"
},
{
"start": 450,
"end": 480,
"text": "(Graves and Schmidhuber, 2005)",
"ref_id": "BIBREF9"
},
{
"start": 499,
"end": 516,
"text": "(Wu et al., 2018;",
"ref_id": "BIBREF39"
},
{
"start": 517,
"end": 534,
"text": "Gao et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 535,
"end": 552,
"text": "Mao et al., 2019;",
"ref_id": "BIBREF20"
},
{
"start": 553,
"end": 583,
"text": "Bizzoni and Ghanimifard, 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we use concatenation of GloVe (Pennington et al., 2014) and ELMo (Peters et al., 2018) vectors augmented with character level features using CNN and highway network (Kim et al., 2016; Srivastava et al., 2015) . Such a method of combining pre-trained embeddings with character level representations has been previously used in several sequence tagging tasks -part-of-speech (POS) tagging (Ma and Hovy, 2016) and named entity recognition (NER) (Chiu and Nichols, 2016), question answering (Seo et al., 2016) and multitask learning (Sanh et al., 2019) . This inspires us to explore similar setting for metaphor identification as well.",
"cite_spans": [
{
"start": 45,
"end": 70,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF23"
},
{
"start": 80,
"end": 101,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF24"
},
{
"start": 180,
"end": 198,
"text": "(Kim et al., 2016;",
"ref_id": "BIBREF12"
},
{
"start": 199,
"end": 223,
"text": "Srivastava et al., 2015)",
"ref_id": "BIBREF32"
},
{
"start": 402,
"end": 421,
"text": "(Ma and Hovy, 2016)",
"ref_id": "BIBREF18"
},
{
"start": 502,
"end": 520,
"text": "(Seo et al., 2016)",
"ref_id": "BIBREF28"
},
{
"start": 544,
"end": 563,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose two models for metaphor detection 1 with the input prepared as above -a vanilla BiL-STM model and a vanilla Transformer Encoder (Vaswani et al., 2017) model similar to BERT (Devlin et al., 2019) (but without pre-training). To contrast the difference between a word's literal and contextual representation (Mao et al., 2019) con-catenated the two before feeding into the softmax classifier. Instead, we extend the idea of cosine similarity between two words in a phrase of signifying metaphoricity Rei et al., 2017) to similarity between the literal and contextual representations of a word and then feed this result into the classifier.",
"cite_spans": [
{
"start": 139,
"end": 161,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF36"
},
{
"start": 316,
"end": 334,
"text": "(Mao et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 508,
"end": 525,
"text": "Rei et al., 2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Finally, we participate in The Second Shared Task on Metaphor Detection 2 on both the VU Amsterdam Metaphor Corpus (VUA) (Steen et al., 2010) and TOEFL, a subset of ETS Corpus of Non-Native Written English datasets with the above models and a vanilla combination of them. The combination of the models outperforms the winner (Wu et al., 2018 ) of the previous shared task ( .",
"cite_spans": [
{
"start": 121,
"end": 141,
"text": "(Steen et al., 2010)",
"ref_id": "BIBREF33"
},
{
"start": 325,
"end": 341,
"text": "(Wu et al., 2018",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous metaphor detection frameworks include supervised machine learning approaches utilizing explicit hand-engineered features, approaches based on unsupervised learning and representation learning, and deep learning models to detect metaphors in an end-to-end manner. (K\u00f6per and Schulte im Walde, 2017) determine the difference of concreteness scores between the target word and its context and use this to predict the metaphoricity of verbs in the VUA dataset. (Tsvetkov et al., 2014) combine vector space representations with features such as abstractness and imageability and Word-Net Supersenses to model the metaphor detection problem in two syntactic constructions -subjectverb-object (SVO) and adjective-noun (AN). Evaluating their approach on the TroFi dataset (Birke and Sarkar, 2006) , they achieve competitive accuracy. (Hovy et al., 2013) explore differences in compositional behaviour of a word's literal and metaphorical use in certain syntactic settings. Using lexical, WordNet supersense features and PoS tags of sentence tree, they train an SVM using tree-kernel. (Klebanov et al., 2016) use semantic classes of verbs such as orthographic unigram, lemma unigram, distributional clusters etc. to identify metaphors in the VUA dataset.",
"cite_spans": [
{
"start": 272,
"end": 306,
"text": "(K\u00f6per and Schulte im Walde, 2017)",
"ref_id": "BIBREF15"
},
{
"start": 466,
"end": 489,
"text": "(Tsvetkov et al., 2014)",
"ref_id": "BIBREF34"
},
{
"start": 773,
"end": 797,
"text": "(Birke and Sarkar, 2006)",
"ref_id": "BIBREF4"
},
{
"start": 835,
"end": 854,
"text": "(Hovy et al., 2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Some of the methods for metaphor detection utilize unsupervised learning. (Mao et al., 2018) train word embeddings on wikipedia dump and use WordNet compute a best-fit word corresponding to a target word in a sentence. The cosine similarity between these two words indicates the metaphoricity of the target word. compute word embeddings and phrase embeddings on wikipedia dump. They extract visual features from CNNs using images from Google Images. Next, multimodal fusion strategies are explored to determine metaphoricity.",
"cite_spans": [
{
"start": 74,
"end": 92,
"text": "(Mao et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recently, approaches based on deep learning have been proposed. The first in this line is Supervised Similarity network by (Rei et al., 2017) . They capture metaphoric composition by modeling the interaction between source and target domain by a gating function and then using a cosine similarity network to compute metaphoricity. They evaluate their method on adjective-noun, verb-subject and verb-direct object constructions on the MOH (Mohammad et al., 2016) and TSV (Tsvetkov et al., 2014) datasets.",
"cite_spans": [
{
"start": 123,
"end": 141,
"text": "(Rei et al., 2017)",
"ref_id": "BIBREF26"
},
{
"start": 438,
"end": 461,
"text": "(Mohammad et al., 2016)",
"ref_id": "BIBREF22"
},
{
"start": 470,
"end": 493,
"text": "(Tsvetkov et al., 2014)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "More recently, the problem has been modeled as a sequence labeling task, in which at each timestep the word is predicted as literal or metaphoric. (Wu et al., 2018) used word2vec (Mikolov et al., 2013) , PoS tags and word clusters as input features to a CNN and BiLSTM network. They compared inference using softmax and CRF layers, and found softmax to work better. (Bizzoni and Ghanimifard, 2018) propose two models -a BiLSTM with dense layers before and after it and a recursive model for bigram phrase composition using fully-connected neural network. They also added concreteness scores to boost performance. (Gao et al., 2018) fed GloVe and ELMo embeddings into a vanilla BiLSTM followed by softmax. (Mao et al., 2019) proposed models based on MIP (Group, 2007) and SVP (Wilks, 1975 (Wilks, , 1978 linguistic theories and achieved competitive performance on VUA, MOH and TroFi datasets.",
"cite_spans": [
{
"start": 147,
"end": 164,
"text": "(Wu et al., 2018)",
"ref_id": "BIBREF39"
},
{
"start": 179,
"end": 201,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF21"
},
{
"start": 366,
"end": 397,
"text": "(Bizzoni and Ghanimifard, 2018)",
"ref_id": "BIBREF5"
},
{
"start": 613,
"end": 631,
"text": "(Gao et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 705,
"end": 723,
"text": "(Mao et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 753,
"end": 766,
"text": "(Group, 2007)",
"ref_id": "BIBREF10"
},
{
"start": 775,
"end": 787,
"text": "(Wilks, 1975",
"ref_id": "BIBREF37"
},
{
"start": 788,
"end": 802,
"text": "(Wilks, , 1978",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this paper we propose two architectures for metaphor detection based on sequence labeling paradigm -a BiLSTM model and a Transformer Encoder model. Both the models are initialized with rich word representations. First, we describe the word representations, then, the similarity network, and subsequently the models (Figure 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 318,
"end": 327,
"text": "(Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "The first step in building word representations is the concatenation of GloVe (Pennington et al., 2014) and ELMo (Peters et al., 2018) Figure 1 : Proposed model which includes character embeddings and similarity network combination of these two have shown good performance across an array of NLP tasks (Peters et al., 2018) . While these two representations are based on corpus statistics and bidirectional language models respectively and serve as a good starting point as shown by (Gao et al., 2018) and (Mao et al., 2019) , however to learn explicit lexical, syntactic and orthographic information (so as to be more suited for metaphor tasks) we augment these word representations with character level embeddings. We follow (Kim et al., 2016) to compute characterlevel representations by a 1D CNN (see Figure 2 ) followed by a highway network (Srivastava et al., 2015) . Let word at position t be made up of characters [c 1 , . . . , c l ], where each c i \u2208 R d , l is the length of word and d is dimensionality 3 of character embeddings. Let C t \u2208 R d\u00d7l denote the character-level embedding matrix of word t. This matrix is convolved with filter H \u2208 R d\u00d7w of width w, followed by a non-linearity.",
"cite_spans": [
{
"start": 78,
"end": 103,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF23"
},
{
"start": 113,
"end": 134,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF24"
},
{
"start": 302,
"end": 323,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF24"
},
{
"start": 483,
"end": 501,
"text": "(Gao et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 506,
"end": 524,
"text": "(Mao et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 727,
"end": 745,
"text": "(Kim et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 846,
"end": 871,
"text": "(Srivastava et al., 2015)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 135,
"end": 143,
"text": "Figure 1",
"ref_id": null
},
{
"start": 805,
"end": 813,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Word Representations",
"sec_num": "3.1"
},
{
"text": "f t = tanh(C t * H + b), f t \u2208 R l\u2212w+1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Representations",
"sec_num": "3.1"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Representations",
"sec_num": "3.1"
},
{
"text": "Next, we apply max-pooling over the length of f 3 d is chosen less than the |C|, the size of vocabulary of characters to get a output for one filter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Representations",
"sec_num": "3.1"
},
{
"text": "y t = max 1\u2264j\u2264l\u2212w+1 {f t j } (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Representations",
"sec_num": "3.1"
},
{
"text": "Now, we take multiple filters of different widths and concatenate the output of each to get a vector representation of word t. Let h be the number of filters and y 1 , . . . , y h be the outputs, then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Representations",
"sec_num": "3.1"
},
{
"text": "c t = [y t 1 , . . . , y t h ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Representations",
"sec_num": "3.1"
},
{
"text": ". We concatenate GloVe embedding (g t ) with c t and run it through a single layer highway network (Srivastava et al., 2015) .",
"cite_spans": [
{
"start": 99,
"end": 124,
"text": "(Srivastava et al., 2015)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Representations",
"sec_num": "3.1"
},
{
"text": "a t = [g t ; c t ] (3) t = \u03c3(W T a t + b T ) (4) z t = t g(W H a t + b H ) + (1 \u2212 t) a t (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Representations",
"sec_num": "3.1"
},
{
"text": "z t and a t have same dimensionality by construction, W H and W T are square matrices, g is ReLU activation. t is called as transform gate and (1 \u2212 t) as the carry gate. The role of highway network is to select the dimensions which are to be modified and which are to be passed directly to output. Thus, we allow the network to adjust the contribution of GloVe and character-based embeddings for better learning (thus an adjustment between semantic and lexical information). We also concatenated GloVe, ELMo and character embeddings and passed through highway layer, but the former approach performed better with lesser parameters. Our input representation is [z t ; e t ] (where e t is ELMo vector) which is fed to BiL-STM/Transformer. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Representations",
"sec_num": "3.1"
},
{
"text": "We use a single-layer BiLSTM model (Graves and Schmidhuber, 2005) to produce hidden states h t for each position t. These hidden states represent our contextual meaning, the meaning which we will contrast with the input literal meaning. Using hidden states as a candidate for contextual meaning has been done previously (Gao et al., 2018; Mao et al., 2019; Wu et al., 2018) . A simple approach would be to pass h t directly to softmax layer for predictions. But we condition our predictions both on h t and input representation as shown in next sub-section.",
"cite_spans": [
{
"start": 35,
"end": 65,
"text": "(Graves and Schmidhuber, 2005)",
"ref_id": "BIBREF9"
},
{
"start": 320,
"end": 338,
"text": "(Gao et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 339,
"end": 356,
"text": "Mao et al., 2019;",
"ref_id": "BIBREF20"
},
{
"start": 357,
"end": 373,
"text": "Wu et al., 2018)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM model",
"sec_num": "3.2"
},
{
"text": "(Rei et al., 2017) use a weighted cosine similarity network to determine similarity between two word vectors in a phrase . We extend this idea further to calculation of similarity between literal and contextual representations. To perform this computation, we first project the input embeddings to the size of hidden dimension of BiLSTM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Network",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x t = [z t ; e t ] (6) x t = tanh(W z x t )",
"eq_num": "(7)"
}
],
"section": "Similarity Network",
"sec_num": "3.3"
},
{
"text": "This step serves two purposes -first reduces the size to enable calculation, second performs vector space mapping. Since input embeddings are in a different semantic vector space (due to the pretrained vectors), we allow the network to learn a mapping to the more metaphor specific vector space. Next, we element-wise multiplyx t with h t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Network",
"sec_num": "3.3"
},
{
"text": "m t =x t h t (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Network",
"sec_num": "3.3"
},
{
"text": "m t is input to a dense layer as follows,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Network",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "u t = tanh(W u m t )",
"eq_num": "(9)"
}
],
"section": "Similarity Network",
"sec_num": "3.3"
},
{
"text": "If u t has length 1, W u has all weights equal to 1 and linear activation is used instead of tanh, then the above two steps mimic the cosine similarity function. But, to provide better generalization, |u t | > 1 and tanh is used to allow the model to learn custom features for metaphor detection (Rei et al., 2017) . u t is fed to softmax classifier to make predictions.",
"cite_spans": [
{
"start": 296,
"end": 314,
"text": "(Rei et al., 2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Network",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(\u0177 t |u t ) = \u03c3(W y u t + b)",
"eq_num": "(10)"
}
],
"section": "Similarity Network",
"sec_num": "3.3"
},
{
"text": "\u03c3 is the softmax function, W y and b are trainable weights and bias respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Network",
"sec_num": "3.3"
},
{
"text": "The advent of Transformer (Vaswani et al., 2017) and further general language models such as BERT (Devlin et al., 2019) , GPT-2 (Radford et al., 2019) have shown excellent performance across multiple NLP, NLU and NLG tasks. Inspired by this, we explore a vanilla transformer model in this paper which consists of only the encoder stack and is not pre-trained on any corpus. The input to the transformer model is the same as the BiLSTM model. To contrast the literal meaning with the contextual meaning, we employ equations 6,7,8,9,10 except that h t would denote the output of the transformer at position t. (Mao et al., 2019) also explored transformers in their experiments, but they only computed word representations from a pre-trained BERT large model and fed it to BiL-STM, they did not train a transformer model from scratch. Since transformers do not track positional information, positional encodings are added for this purpose, but in our case adding such encoding did not improve performance. Furthermore, our transformer model is composed of only a single transformer block (that is depth=1) with a single head. Such a simple model is able to reach good score on the metaphor detection task.",
"cite_spans": [
{
"start": 26,
"end": 48,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF36"
},
{
"start": 98,
"end": 119,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 122,
"end": 150,
"text": "GPT-2 (Radford et al., 2019)",
"ref_id": null
},
{
"start": 608,
"end": 626,
"text": "(Mao et al., 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer model",
"sec_num": "3.4"
},
{
"text": "We evaluate our models on two metaphor datasets on both ALL-POS and VERB track in the Second Shared Task on Metaphor Detection. Table 1 shows the dataset statistics.",
"cite_spans": [],
"ref_spans": [
{
"start": 128,
"end": 135,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "First is the VU Amsterdam Metaphor Corpus (VUA) (Steen et al., 2010) widely studied dataset for metaphor detection. All the words in this dataset are labeled as either metaphoric or literal according to MIPVU (Steen et al., 2010; Group, 2007) protocol. This dataset was also used in the 2018 Shared Task on Metaphor Detection .",
"cite_spans": [
{
"start": 48,
"end": 68,
"text": "(Steen et al., 2010)",
"ref_id": "BIBREF33"
},
{
"start": 209,
"end": 229,
"text": "(Steen et al., 2010;",
"ref_id": "BIBREF33"
},
{
"start": 230,
"end": 242,
"text": "Group, 2007)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "Second is the TOEFL corpus, a subset of ETS Corpus of Non-Native Written English . This dataset contains the essays written by takers of the TOEFL test having either medium or high English proficiency. The words in this dataset are annotated for argumentation-relevant metaphors. The essays are in response to prompts, for which test-takers were required to argue for or against and in such process the metaphors used to support one's argument were annotated. So, the protocol used (Beigman Klebanov and Flor, 2013) is different from MIPVU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "The first four baselines are evaluated on the VUA test set and the last two on the TOEFL test set. CNN-BiLSTM (Wu et al., 2018) : This model is the winner of the previous shared task . They proposed an ensemble of CNN-BiLSTM network with input features as word2vec, PoS tags and word2vec clusters. BiLSTM (Gao et al., 2018) : This model is a simple BiLSTM with inputs as concatenation of GloVe and ELMo embeddings. BiLSTM-MCHA (Mao et al., 2019) : This model employs BiLSTM followed by a multi-head contextual attention which is inspired by SPV protocol of metaphor identification. They also use GloVe and ELMo as input features. BiLSTM-Concat (Bizzoni and Ghanimifard, 2018) : This model achieved the second position in the previous shared task. They combined a BiL-STM (preceded and followed by dense layers) and a model based on recursive composition of word embedding. Concreteness scores were added to boost performance. CE-BiLSTM : We add a variant of our proposed model without the Transformer model and the similarity network. All other components are kept same. CE denotes character embeddings. Feature-based (Beigman Klebanov et al., 2018) : They use several hand-crafted features and train a logistic regression classifier to predict metaphoricity. This is the only known work on TOEFL dataset to the best of our knowledge.",
"cite_spans": [
{
"start": 110,
"end": 127,
"text": "(Wu et al., 2018)",
"ref_id": "BIBREF39"
},
{
"start": 305,
"end": 323,
"text": "(Gao et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 427,
"end": 445,
"text": "(Mao et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 644,
"end": 675,
"text": "(Bizzoni and Ghanimifard, 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "We note that BiLSTM and BiLSTM-MHCA models above have different experimental settings than ours. They trained and tested their models on different amount of data when compared to the shared task. For a fair comparison, we evaluate (train and test) our method in the same data setting (Table 3) .",
"cite_spans": [],
"ref_spans": [
{
"start": 284,
"end": 293,
"text": "(Table 3)",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "The 300d pre-trained GloVe embeddings are used along with 1024d pre-trained ELMo embeddings. The dimension of character-level embeddings is set to 50. The filters used in CharCNN are [(1, 25) , (2, 50), (3, 75), (4, 100)], where first element of each tuple denotes the width of filter and second element denotes the number of filters used. Inspired by the effectiveness of PoS tags (Wu et al., 2018; Beigman Klebanov et al., 2014) in metaphor detection, we concatenate 30 dimensional PoS embeddings. We found 30d embeddings to work better than one-hot encodings. These embeddings are learned during model training. The uni-directional hidden state size of BiLSTM is set to 300. We apply Dropout (Srivastava et al., 2014) on input to BiLSTM and to the output of BiLSTM. The dimension of u t , the output size of similarity network is set to 50.",
"cite_spans": [
{
"start": 183,
"end": 191,
"text": "[(1, 25)",
"ref_id": null
},
{
"start": 382,
"end": 399,
"text": "(Wu et al., 2018;",
"ref_id": "BIBREF39"
},
{
"start": 400,
"end": 430,
"text": "Beigman Klebanov et al., 2014)",
"ref_id": "BIBREF1"
},
{
"start": 695,
"end": 720,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.3"
},
{
"text": "The hidden state size of Transformer is set to 300 as well. We use a single head and single layer architecture. We also tried multiple heads (8, 16), but the performance dropped a little. The attention due to padded tokens is masked out in the attention matrix during forward pass. The feedforward network which is applied after the selfattention layer consists of two linear transformations with ReLU activation in between (Vaswani et al., 2017) . First transformation projects 300d to 1200d and second transformation projects 1200d back to 300d. Dropout is applied both before and after the feed-forward network. It can be seen that this transformer model is simplified in terms of number of parameters when compared to BERT (Devlin et al., 2019) . Our focus here is on the power of transformer architecture rather than on transformer based huge language models.",
"cite_spans": [
{
"start": 424,
"end": 446,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF36"
},
{
"start": 727,
"end": 748,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.3"
},
{
"text": "We also explore the combination of both the models. Specifically, BiLSTM and Transformer model are combined at the pre-activation stage, that is, the logits of both networks are averaged and then input to the softmax layer for predictions. Both the models are trained in parallel, with their own losses, whereas the F1-score is calculated from the combined prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.3"
},
{
"text": "The objective function used is weighted crossentropy loss as used in (Mao et al., 2019; Wu et al., 2018) . where y n is the gold label,\u0177 n is the predicted score and w yn is set to 1 if y n is literal and 2 otherwise. We use Adam optimizer (Kingma and Ba, 2014) and early stopping on the basis of validation Fscore. Batch-size is set to 4.",
"cite_spans": [
{
"start": 69,
"end": 87,
"text": "(Mao et al., 2019;",
"ref_id": "BIBREF20"
},
{
"start": 88,
"end": 104,
"text": "Wu et al., 2018)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = \u2212 M n=1 w yn y n log(\u0177 n )",
"eq_num": "(11)"
}
],
"section": "Setup",
"sec_num": "4.3"
},
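For the binary literal/metaphor case, Eq. (11) can be sketched as follows. The class weighting (1 for literal, 2 for metaphor) comes from the text; the binary formulation and names are an illustrative simplification, not the authors' code:

```python
import numpy as np

def weighted_ce(y_true, y_pred_prob, weight_metaphor=2.0):
    # y_true: 0 = literal, 1 = metaphor; y_pred_prob: predicted
    # probability of the metaphor class. Per Eq. (11), each term is
    # weighted by w_{y_n}: 1 for literal tokens, 2 for metaphoric ones.
    w = np.where(y_true == 1, weight_metaphor, 1.0)
    p = np.where(y_true == 1, y_pred_prob, 1.0 - y_pred_prob)
    return float(-(w * np.log(p)).sum())
```

Doubling the metaphor weight counteracts the class imbalance, since literal tokens far outnumber metaphoric ones.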
{
"text": "TOEFL dataset contains essays annotated for metaphor and metadata mapping essays to the respective prompts and English proficiency of testtakers. We extract all sentences from all the essays and prepare our dataset considering one sentence as one example (batch-size x means x such examples). In this paper, we do not exploit the metadata of TOEFL corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.3"
},
{
"text": "For both VUA and TOEFL datasets, we have a pre-specified train and test partition, so for hyperparameter tuning we split the train set into train and validation in the ratio of 10:1 randomly. Since the models predict labels for all the words in a sequence, we train a single model and use it for evaluating both ALL-POS and Verb tracks. We report F-score on test set for metaphor class on both datasets and tasks. Section 6 presents an ablation study and explores the performance of different components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.3"
},
{
"text": "We first compare our method against the baseline systems which have the same experimental setting as ours on the VUA test set -CNN-BiLSTM and BiLSTM-Concat. Table 2 reports the results. As shown, our proposed model (comprising of both BiLSTM and Transformer) outperforms the other methods on both the tracks. Specifically, we achieve F-score of 66.6 on VUA All POS and 71.2 for VUA Verb set. Furthermore, we employ ensembling to boost our performance. This strategy mainly improves precision (60.6 to 63.0 for All POS, 62.7 to 66.7 for Verb). For ensembling we run the model 7 times which involves different dropout probabilities, changing the ratio of metaphoric to literal loss weights, increasing/decreasing number of epochs. Thus, we do not modify the number of parameters in any run. At the end, we take a majority vote to produce final predictions. Our best F-score on All POS track is 67.0 and Verb track is 71.7. We observe higher F-scores on Verb track than on All POS track, this might be due to fact that a higher percentage of verbs are annotated as being metaphoric, hence more training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 157,
"end": 164,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
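The majority-vote ensembling over 7 runs can be sketched as below. The tie-breaking rule (ties favor metaphor) is an illustrative choice and cannot occur with an odd number of runs:

```python
import numpy as np

def majority_vote(predictions):
    # predictions: (n_runs, n_tokens) binary labels (1 = metaphor) from
    # independent runs with different dropout rates, loss weights, and
    # epoch counts; the final label is the per-token majority
    predictions = np.asarray(predictions)
    votes = predictions.sum(axis=0)
    return (votes * 2 >= len(predictions)).astype(int)
```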
{
"text": "We now compare our method with the other two baselines on a common experimental setting. We tune our hyperparameters in this setting due to difference in training and validation data. Specifically, since training set is of smaller size, we increase Dropout probabilities, and the dimension of PoS embedding is reduced from 30 to 10. As shown in Table 3 , the single best model achieves a higher F-score than the baselines and the ensemble (with similar setting as above) improves the performance Lastly, we explore the performance of our method on the TOEFL test set (Table 4) . We added an extra baseline which does not include the Transformer model and the similarity network. Also, the CE-BiLSTM-Transformer model here does not include the similarity network. The reason for this is because it degraded performance. The similarity network contrasts the literal meaning with the contextual meaning of the target word which is in line with MIP (Steen et al., 2010) protocol. Since, TOEFL corpus is annotated for argument-specific metaphors and not MIP, we hypothesize that this might be the reason for lower performance. However, VUA is annotated according to MIP, thus similarity component improves performance here, as we show in the ablation section. Table 4 shows that both our baseline (CE-BiLSTM) and baseline + Transformer improve upon the Feature-based model by 8.8 and 9.0 points respectively on All POS track and 8.9 and 9.3 points respectively on Verb track. Similar to VUA, here also Verbs score higher than All POS because of more training instances for verbs.",
"cite_spans": [
{
"start": 945,
"end": 965,
"text": "(Steen et al., 2010)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 345,
"end": 352,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 567,
"end": 576,
"text": "(Table 4)",
"ref_id": "TABREF6"
},
{
"start": 1255,
"end": 1262,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "The scores on TOEFL dataset are lower than the VUA dataset. This is due to the lesser number of training instances in TOEFL dataset. Also, while we have higher recall on VUA, on TOEFL we have higher precision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "This section considers the performance of different components of our method in isolation and combination on the VUA validation set unless otherwise specified. The reason for choosing validation set is because we were not able to evaluate some settings on the test set due to limited time and number of submissions. Wherever we have test set results we report those as well. Impact of Character Embeddings We first note the performances of vanilla BiLSTM and vanilla Transformer models and a simple combination of them in Table 5 . Note that vanilla implementation still includes GloVe and ELMo vectors. We see that BiLSTM performs better than Transformer model and that a combination of them seems to complement each other. Now, we see the impact of adding characterlevel embeddings on both the models. As Table 6 shows, addition of character embeddings improves both the networks. Particularly, Transformer benefits more from this addition as F1-score increases from 69.1 to 70.8. On the test set, our vanilla combination scores 65.2 whereas the combination of models with character embeddings scores 66.1. This helps in asserting the usefulness of characterbased features in learning pro-metaphor features. demonstrate the utility of unigram lemmas and orthographic features in metaphor detection. Our character embeddings computed from CNN combines features at different n-grams of a word and thus helps to learn lexical and orthographic information automatically which aids in improving performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 522,
"end": 529,
"text": "Table 5",
"ref_id": "TABREF8"
},
{
"start": 807,
"end": 814,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "6"
},
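The CNN character encoder discussed above slides filters of several widths over a word's character embeddings and pools the results, so that different n-gram features (unigrams, bigrams, ...) are learned jointly. A minimal NumPy stand-in, with mean-filter windows and max-pooling as illustrative choices rather than the paper's exact convolution:

```python
import numpy as np

def char_ngram_features(char_embs, widths=(2, 3, 4)):
    # char_embs: (word_len, d) character embeddings for one word.
    # For each filter width w, take every w-character window and
    # max-pool over the windows, then concatenate across widths --
    # a sketch of combining features at different n-grams of a word.
    feats = []
    for w in widths:
        windows = [char_embs[i:i + w].mean(axis=0)
                   for i in range(len(char_embs) - w + 1)]
        feats.append(np.max(windows, axis=0))
    return np.concatenate(feats)
```

A learned CNN would replace the window mean with trained filter weights and a nonlinearity, but the pooling-and-concatenation structure is the same.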
{
"text": "We suspect that employing the baseline unigram features (Beigman Klebanov et al., 2014) provided by the organizers instead of learned characterembeddings may be seen as a way to achieve the same goal. But our method is more robust in the sense that, we allow for learning of different ngram features of a word (including unigram itself). Particularly, our method is helpful in cases where the target word has incorrect spelling, because we learn representations instead of using fixed precomputed features. Table 7 depicts the performance after the addition of similarity network. As the similarity network is guided by the MIP protocol, it indeed boosts results for the VUA dataset. We observe that in this case too Transformer benefits more by the inclusion and the benefit (1.9 points) is even more than by adding character embeddings (1.7 points). However, for both the components increments in BiLSTM performance are equal. Also, the combination of both models with similarity network outperforms the combination with character embeddings although by a small margin. The above reasoning indicates towards similarity network as being an important component for detection of MIP guided labeling of metaphors. Table 8 reports the numbers when both character embeddings and similarity network are added to the base models. The results improve from either of the additions which indicate that they complement each other. Our best model so far contains both the base models and the components. This model on the VUA test set, scores 66.5 and the model in the last row of Table 6 scores 66.1.",
"cite_spans": [
{
"start": 56,
"end": 87,
"text": "(Beigman Klebanov et al., 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 507,
"end": 514,
"text": "Table 7",
"ref_id": "TABREF11"
},
{
"start": 1212,
"end": 1219,
"text": "Table 8",
"ref_id": "TABREF12"
},
{
"start": 1570,
"end": 1577,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "6"
},
{
"text": "In all the cases examined till now, Transformer based models have higher precision than the BiL-STM based models, and in 3 out of 4 cases of (Vanilla, CE, SN, CE + SN), the combination has as even better precision than either of the individual models. In terms of F-score, BiLSTM based models score higher than Transformer based ones in 2 cases (Vanilla and CE), equal in CE + SN and lower in SN. Impact of PoS tags Incorporation of PoS tags proves to be beneficial. It improves the F-score of the last model in Table 8 from 73.4 to 73.5. On the test set, it improves the F-score from 66.5 to 66.6 which is in line with (Hovy et al., 2013; Wu et al., 2018) .",
"cite_spans": [
{
"start": 620,
"end": 639,
"text": "(Hovy et al., 2013;",
"ref_id": "BIBREF11"
},
{
"start": 640,
"end": 656,
"text": "Wu et al., 2018)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 512,
"end": 519,
"text": "Table 8",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Impact of Similarity Network",
"sec_num": null
},
{
"text": "We proposed two metaphor detection models, a BiLSTM model based on prior work and a Transformer model based on their success in NLP tasks. We augment these models with two components -Character Embeddings and Similarity network to learn lexical features and contrast literal and contextual meanings respectively. Our experimental results demonstrate the effectiveness of our method as we achieve superior performance than all the previous methods on VUA corpus and TOEFL corpus. Through an ablation study we examine the contribution of different parts of our framework in the task of metaphor detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "In our future work we would explore metaphor detection in a multi-task setting with semantically similar tasks such as Word Sense Disambiguation and Co-reference Resolution. These auxiliary tasks may help to better understand the contextual meaning and reach of a word. For TOEFL dataset, future avenues would include strategies to exploit the metadata, and similarity measures more suitable for argumentation-relevant metaphors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://competitions.codalab.org/ competitions/22188",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Argumentation-relevant metaphors in test-taker essays",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Beata Beigman Klebanov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Flor",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the First Workshop on Metaphor in NLP",
"volume": "",
"issue": "",
"pages": "11--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beata Beigman Klebanov and Michael Flor. 2013. Argumentation-relevant metaphors in test-taker es- says. In Proceedings of the First Workshop on Metaphor in NLP, pages 11-20, Atlanta, Georgia. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Different texts, same metaphors: Unigrams and beyond",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Beata Beigman Klebanov",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Leong",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Flor",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Second Workshop on Metaphor in NLP",
"volume": "",
"issue": "",
"pages": "11--17",
"other_ids": {
"DOI": [
"10.3115/v1/W14-2302"
]
},
"num": null,
"urls": [],
"raw_text": "Beata Beigman Klebanov, Ben Leong, Michael Heil- man, and Michael Flor. 2014. Different texts, same metaphors: Unigrams and beyond. In Proceedings of the Second Workshop on Metaphor in NLP, pages 11-17, Baltimore, MD. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Semantic classifications for detection of verb metaphors",
"authors": [
{
"first": "Chee Wee",
"middle": [],
"last": "Beata Beigman Klebanov",
"suffix": ""
},
{
"first": "E",
"middle": [
"Dario"
],
"last": "Leong",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Gutierrez",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Shutova",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Flor",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "101--106",
"other_ids": {
"DOI": [
"10.18653/v1/P16-2017"
]
},
"num": null,
"urls": [],
"raw_text": "Beata Beigman Klebanov, Chee Wee Leong, E. Dario Gutierrez, Ekaterina Shutova, and Michael Flor. 2016. Semantic classifications for detection of verb metaphors. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), pages 101-106, Berlin, Germany. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A corpus of non-native written English annotated for metaphor",
"authors": [
{
"first": "Chee",
"middle": [],
"last": "Beata Beigman Klebanov",
"suffix": ""
},
{
"first": ";",
"middle": [],
"last": "Wee",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Leong",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Flor",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "86--91",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2014"
]
},
"num": null,
"urls": [],
"raw_text": "Beata Beigman Klebanov, Chee Wee (Ben) Leong, and Michael Flor. 2018. A corpus of non-native written English annotated for metaphor. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Pa- pers), pages 86-91, New Orleans, Louisiana. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A clustering approach for nearly unsupervised recognition of nonliteral language",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Birke",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Sarkar",
"suffix": ""
}
],
"year": 2006,
"venue": "11th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Birke and Anoop Sarkar. 2006. A clustering ap- proach for nearly unsupervised recognition of non- literal language. In 11th Conference of the Euro- pean Chapter of the Association for Computational Linguistics, Trento, Italy. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bigrams and BiLSTMs two neural networks for sequential metaphor detection",
"authors": [
{
"first": "Yuri",
"middle": [],
"last": "Bizzoni",
"suffix": ""
},
{
"first": "Mehdi",
"middle": [],
"last": "Ghanimifard",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Workshop on Figurative Language Processing",
"volume": "",
"issue": "",
"pages": "91--101",
"other_ids": {
"DOI": [
"10.18653/v1/W18-0911"
]
},
"num": null,
"urls": [],
"raw_text": "Yuri Bizzoni and Mehdi Ghanimifard. 2018. Bigrams and BiLSTMs two neural networks for sequential metaphor detection. In Proceedings of the Workshop on Figurative Language Processing, pages 91-101, New Orleans, Louisiana. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics",
"authors": [
{
"first": "P",
"middle": [
"C"
],
"last": "Jason",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nichols",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "4",
"issue": "",
"pages": "357--370",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00104"
]
},
"num": null,
"urls": [],
"raw_text": "Jason P.C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Trans- actions of the Association for Computational Lin- guistics, 4:357-370.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Neural metaphor detection in context",
"authors": [
{
"first": "Ge",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "607--613",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1060"
]
},
"num": null,
"urls": [],
"raw_text": "Ge Gao, Eunsol Choi, Yejin Choi, and Luke Zettle- moyer. 2018. Neural metaphor detection in context. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 607-613, Brussels, Belgium. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Framewise phoneme classification with bidirectional lstm and other neural network architectures",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2005,
"venue": "Neural networks",
"volume": "18",
"issue": "",
"pages": "602--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves and J\u00fcrgen Schmidhuber. 2005. Frame- wise phoneme classification with bidirectional lstm and other neural network architectures. Neural net- works, 18(5-6):602-610.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Mip: A method for identifying metaphorically used words in discourse",
"authors": [
{
"first": "Pragglejaz",
"middle": [],
"last": "Group",
"suffix": ""
}
],
"year": 2007,
"venue": "Metaphor and Symbol",
"volume": "22",
"issue": "1",
"pages": "1--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pragglejaz Group. 2007. Mip: A method for iden- tifying metaphorically used words in discourse. Metaphor and Symbol, 22(1):1-39.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Identifying metaphorical word use with tree kernels",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Shashank",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Sujay",
"middle": [],
"last": "Kumar Jauhar",
"suffix": ""
},
{
"first": "Mrinmaya",
"middle": [],
"last": "Sachan",
"suffix": ""
},
{
"first": "Kartik",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Huying",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Whitney",
"middle": [],
"last": "Sanders",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the First Workshop on Metaphor in NLP",
"volume": "",
"issue": "",
"pages": "52--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dirk Hovy, Shashank Srivastava, Sujay Kumar Jauhar, Mrinmaya Sachan, Kartik Goyal, Huying Li, Whit- ney Sanders, and Eduard Hovy. 2013. Identifying metaphorical word use with tree kernels. In Pro- ceedings of the First Workshop on Metaphor in NLP, pages 52-57.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Character-aware neural language models",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "Alexander M",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim, Yacine Jernite, David Sontag, and Alexan- der M Rush. 2016. Character-aware neural language models. In Thirtieth AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Semantic classifications for detection of verb metaphors",
"authors": [
{
"first": "Chee Wee",
"middle": [],
"last": "Beata Beigman Klebanov",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Leong",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Gutierrez",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Shutova",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Flor",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "101--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beata Beigman Klebanov, Chee Wee Leong, E Dario Gutierrez, Ekaterina Shutova, and Michael Flor. 2016. Semantic classifications for detection of verb metaphors. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), pages 101-106.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Improving verb metaphor detection by propagating abstractness to words, phrases and individual senses",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "K\u00f6per",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications",
"volume": "",
"issue": "",
"pages": "24--30",
"other_ids": {
"DOI": [
"10.18653/v1/W17-1903"
]
},
"num": null,
"urls": [],
"raw_text": "Maximilian K\u00f6per and Sabine Schulte im Walde. 2017. Improving verb metaphor detection by propagating abstractness to words, phrases and individual senses. In Proceedings of the 1st Workshop on Sense, Con- cept and Entity Representations and their Applica- tions, pages 24-30, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Metaphors we live by",
"authors": [
{
"first": "George",
"middle": [],
"last": "Lakoff",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Lakoff and Mark Johnson. 1980. Metaphors we live by. Chicago, IL: University of Chicago.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A report on the 2018 VUA metaphor detection shared task",
"authors": [
{
"first": "Chee",
"middle": [],
"last": "Wee",
"suffix": ""
},
{
"first": ";",
"middle": [],
"last": "Ben",
"suffix": ""
},
{
"first": ")",
"middle": [],
"last": "Leong",
"suffix": ""
},
{
"first": "Beata",
"middle": [
"Beigman"
],
"last": "Klebanov",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Shutova",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Workshop on Figurative Language Processing",
"volume": "",
"issue": "",
"pages": "56--66",
"other_ids": {
"DOI": [
"10.18653/v1/W18-0907"
]
},
"num": null,
"urls": [],
"raw_text": "Chee Wee (Ben) Leong, Beata Beigman Klebanov, and Ekaterina Shutova. 2018. A report on the 2018 VUA metaphor detection shared task. In Proceedings of the Workshop on Figurative Language Processing, pages 56-66, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1064--1074",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1101"
]
},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs- CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1064-1074, Berlin, Ger- many. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Word embedding and WordNet based metaphor identification and interpretation",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Chenghua",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Guerin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1222--1231",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1113"
]
},
"num": null,
"urls": [],
"raw_text": "Rui Mao, Chenghua Lin, and Frank Guerin. 2018. Word embedding and WordNet based metaphor iden- tification and interpretation. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1222-1231, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Endto-end sequential metaphor identification inspired by linguistic theories",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Chenghua",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Guerin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3888--3898",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1378"
]
},
"num": null,
"urls": [],
"raw_text": "Rui Mao, Chenghua Lin, and Frank Guerin. 2019. End- to-end sequential metaphor identification inspired by linguistic theories. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 3888-3898, Florence, Italy. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Metaphor as a medium for emotion: An empirical study",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Shutova",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "23--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad, Ekaterina Shutova, and Peter Turney. 2016. Metaphor as a medium for emotion: An em- pirical study. In Proceedings of the Fifth Joint Con- ference on Lexical and Computational Semantics, pages 23-33.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 1532-1543, Doha, Qatar. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI Blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Grasping the finer point: A supervised similarity network for metaphor detection",
"authors": [
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Luana",
"middle": [],
"last": "Bulat",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Shutova",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1537--1546",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Marek Rei, Luana Bulat, Douwe Kiela, and Ekaterina Shutova. 2017. Grasping the finer point: A supervised similarity network for metaphor detection. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1537-1546, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A hierarchical multi-task approach for learning embeddings from semantic tasks",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "6949--6956",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Thomas Wolf, and Sebastian Ruder. 2019. A hierarchical multi-task approach for learning embeddings from semantic tasks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6949-6956.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Bidirectional attention flow for machine comprehension",
"authors": [
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Aniruddha",
"middle": [],
"last": "Kembhavi",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.01603"
]
},
"num": null,
"urls": [],
"raw_text": "Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Black holes and white rabbits: Metaphor identification with visual features",
"authors": [
{
"first": "Ekaterina",
"middle": [],
"last": "Shutova",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Maillard",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "160--170",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1020"
]
},
"num": null,
"urls": [],
"raw_text": "Ekaterina Shutova, Douwe Kiela, and Jean Maillard. 2016. Black holes and white rabbits: Metaphor identification with visual features. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 160-170, San Diego, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Metaphor corpus annotated for source-target domain mappings",
"authors": [
{
"first": "Ekaterina",
"middle": [],
"last": "Shutova",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": ""
}
],
"year": 2010,
"venue": "LREC",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ekaterina Shutova and Simone Teufel. 2010. Metaphor corpus annotated for source-target domain mappings. In LREC, volume 2. Citeseer.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Dropout: a simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "The Journal of Machine Learning Research",
"volume": "15",
"issue": "",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Training very deep networks",
"authors": [
{
"first": "Rupesh",
"middle": [
"K"
],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Greff",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2377--2385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rupesh K Srivastava, Klaus Greff, and J\u00fcrgen Schmidhuber. 2015. Training very deep networks. In Advances in neural information processing systems, pages 2377-2385.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A method for linguistic metaphor identification: From MIP to MIPVU",
"authors": [
{
"first": "Gerard",
"middle": [],
"last": "Steen",
"suffix": ""
},
{
"first": "Aletta",
"middle": [],
"last": "Dorst",
"suffix": ""
},
{
"first": "J",
"middle": [
"Berenike"
],
"last": "Herrmann",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Kaal",
"suffix": ""
},
{
"first": "Tina",
"middle": [],
"last": "Krennmayr",
"suffix": ""
},
{
"first": "Trijntje",
"middle": [],
"last": "Pasma",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "14",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerard Steen, Aletta Dorst, J Berenike Herrmann, Anna Kaal, Tina Krennmayr, and Trijntje Pasma. 2010. A method for linguistic metaphor identification: From MIP to MIPVU, volume 14. John Benjamins Publishing.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Metaphor detection with cross-lingual model transfer",
"authors": [
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Leonid",
"middle": [],
"last": "Boytsov",
"suffix": ""
},
{
"first": "Anatole",
"middle": [],
"last": "Gershman",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nyberg",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "248--258",
"other_ids": {
"DOI": [
"10.3115/v1/P14-1024"
]
},
"num": null,
"urls": [],
"raw_text": "Yulia Tsvetkov, Leonid Boytsov, Anatole Gershman, Eric Nyberg, and Chris Dyer. 2014. Metaphor detection with cross-lingual model transfer. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 248-258, Baltimore, Maryland. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Literal and metaphorical sense identification through concrete and abstract context",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "Yair",
"middle": [],
"last": "Neuman",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Assaf",
"suffix": ""
},
{
"first": "Yohai",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "680--690",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Turney, Yair Neuman, Dan Assaf, and Yohai Cohen. 2011. Literal and metaphorical sense identification through concrete and abstract context. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 680-690, Edinburgh, Scotland, UK. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A preferential, pattern-seeking, semantics for natural language inference",
"authors": [
{
"first": "Yorick",
"middle": [],
"last": "Wilks",
"suffix": ""
}
],
"year": 1975,
"venue": "Artificial Intelligence",
"volume": "6",
"issue": "",
"pages": "53--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yorick Wilks. 1975. A preferential, pattern-seeking, semantics for natural language inference. Artificial intelligence, 6(1):53-74.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Making preferences more active",
"authors": [
{
"first": "Yorick",
"middle": [],
"last": "Wilks",
"suffix": ""
}
],
"year": 1978,
"venue": "Artificial intelligence",
"volume": "11",
"issue": "3",
"pages": "197--223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yorick Wilks. 1978. Making preferences more active. Artificial intelligence, 11(3):197-223.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Neural metaphor detecting with CNN-LSTM model",
"authors": [
{
"first": "Chuhan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Fangzhao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sixing",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhigang",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Yongfeng",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Workshop on Figurative Language Processing",
"volume": "",
"issue": "",
"pages": "110--114",
"other_ids": {
"DOI": [
"10.18653/v1/W18-0913"
]
},
"num": null,
"urls": [],
"raw_text": "Chuhan Wu, Fangzhao Wu, Yubo Chen, Sixing Wu, Zhigang Yuan, and Yongfeng Huang. 2018. Neural metaphor detecting with CNN-LSTM model. In Proceedings of the Workshop on Figurative Language Processing, pages 110-114, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "CNN for extracting character-level representations",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table><tr><td>car</td><td>drinks</td><td>gasoline</td></tr><tr><td>0</td><td>1</td><td>0</td></tr><tr><td/><td>Dense +</td><td/></tr><tr><td/><td>Softmax</td><td/></tr><tr><td/><td>Dense +</td><td/></tr><tr><td/><td>Tanh</td><td/></tr><tr><td/><td/><td>Similarity network</td></tr><tr><td/><td>X</td><td>Dense + Tanh</td></tr><tr><td/><td>BiLSTM/</td><td/></tr><tr><td/><td colspan=\"2\">Transformer</td></tr><tr><td/><td>Concat</td><td/></tr><tr><td/><td>+</td><td/></tr><tr><td/><td>Dense + ReLU</td><td>Highway network</td></tr><tr><td/><td>Concat</td><td/></tr><tr><td>ELMo</td><td>GloVe</td><td>Char</td></tr><tr><td/><td/><td>Character based</td></tr><tr><td/><td/><td>word representation</td></tr><tr><td>car</td><td>drinks</td><td>gasoline</td></tr></table>",
"html": null,
"type_str": "table",
"text": "embeddings. The",
"num": null
},
"TABREF2": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Dataset Statistics. #T denotes the number of tokens which are annotated. #S denotes the number of sentences. %M denotes the token-level metaphoricity percentage.",
"num": null
},
"TABREF3": {
"content": "<table><tr><td>Model</td><td colspan=\"3\">VUA ALL POS</td><td/><td>VUA VERB</td></tr><tr><td/><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>CNN-BiLSTM</td><td colspan=\"6\">60.8 70.0 65.1 60.0 76.3 67.1</td></tr><tr><td>BiLSTM-Concat</td><td colspan=\"3\">59.5 68.0 63.5</td><td>-</td><td>-</td><td>-</td></tr><tr><td>CE-BiLSTM-Transformer</td><td colspan=\"6\">60.6 73.9 66.6 62.7 82.2 71.2</td></tr><tr><td>CE-BiLSTM-Transformer (Ensemble)</td><td colspan=\"6\">63.0 71.6 67.0 66.7 77.5 71.7</td></tr></table>",
"html": null,
"type_str": "table",
"text": "",
"num": null
},
"TABREF4": {
"content": "<table><tr><td>Model</td><td colspan=\"3\">VUA ALL POS</td><td/><td>VUA VERB</td></tr><tr><td/><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>BiLSTM</td><td colspan=\"6\">71.6 73.6 72.6 68.2 71.3 69.7</td></tr><tr><td>BiLSTM-MHCA</td><td colspan=\"6\">73.0 75.7 74.3 66.3 75.2 70.5</td></tr><tr><td>CE-BiLSTM-Transformer</td><td colspan=\"6\">71.3 78.5 74.7 66.1 76.2 70.8</td></tr><tr><td colspan=\"7\">CE-BiLSTM-Transformer (Ensemble) 75.9 74.1 75.0 68.0 75.1 71.4</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Comparison of our method against the baseline systems on the VUA test set.",
"num": null
},
"TABREF5": {
"content": "<table><tr><td>Model</td><td colspan=\"3\">TOEFL ALL POS</td><td colspan=\"3\">TOEFL VERB</td></tr><tr><td/><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>Feature-based</td><td colspan=\"6\">49.0 58.0 53.0 50.0 64.0 56.0</td></tr><tr><td>CE-BiLSTM</td><td colspan=\"6\">62.7 60.8 61.8 70.0 60.5 64.9</td></tr><tr><td colspan=\"7\">CE-BiLSTM-Transformer 62.3 61.7 62.0 66.9 63.8 65.3</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Comparison of our method against the baseline systems on the VUA test set with different experimental setting.",
"num": null
},
"TABREF6": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Comparison of our method against the baseline systems on the TOEFL test set.",
"num": null
},
"TABREF8": {
"content": "<table><tr><td>Model</td><td colspan=\"3\">VUA Validation</td></tr><tr><td/><td>P</td><td>R</td><td>F1</td></tr><tr><td>CE + BiLSTM</td><td colspan=\"3\">65.7 78.8 71.6</td></tr><tr><td>CE + Transformer</td><td colspan=\"3\">67.3 74.7 70.8</td></tr><tr><td colspan=\"4\">CE + BiLSTM + Transformer 71.3 74.2 72.7</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Performance of vanilla models on VUA validation set.",
"num": null
},
"TABREF9": {
"content": "<table><tr><td>a little more scoring 75.0 on All POS and 71.4 on Verb tracks.</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Addition of Character embeddings to models.",
"num": null
},
"TABREF11": {
"content": "<table><tr><td>Model</td><td colspan=\"3\">VUA Validation</td></tr><tr><td/><td>P</td><td>R</td><td>F1</td></tr><tr><td>CE + SN + BiLSTM</td><td colspan=\"3\">66.7 79.3 72.4</td></tr><tr><td>CE + SN + Transformer</td><td colspan=\"3\">67.7 77.8 72.4</td></tr><tr><td>CE + SN + BiLSTM + Transformer</td><td colspan=\"3\">68.5 79.1 73.4</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Addition of Similarity network to models. (SN is the Similarity network)",
"num": null
},
"TABREF12": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Addition of both SN and CE to the models.",
"num": null
}
}
}
}