{ "paper_id": "P16-1019", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:55:34.624851Z" }, "title": "Idiom Token Classification using Sentential Distributed Semantics", "authors": [ { "first": "Giancarlo", "middle": [ "D" ], "last": "Salton", "suffix": "", "affiliation": {}, "email": "giancarlo.salton@mydit.ie" }, { "first": "Robert", "middle": [ "J" ], "last": "Ross", "suffix": "", "affiliation": {}, "email": "robert.ross@dit.ie" }, { "first": "John", "middle": [ "D" ], "last": "Kelleher", "suffix": "", "affiliation": {}, "email": "john.d.kelleher@dit.ie" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Idiom token classification is the task of deciding for a set of potentially idiomatic phrases whether each occurrence of a phrase is a literal or idiomatic usage of the phrase. In this work we explore the use of Skip-Thought Vectors to create distributed representations that encode features that are predictive with respect to idiom token classification. We show that classifiers using these representations have competitive performance compared with the state of the art in idiom token classification. Importantly, however, our models use only the sentence containing the target phrase as input and are thus less dependent on a potentially inaccurate or incomplete model of discourse context. We further demonstrate the feasibility of using these representations to train a competitive general idiom token classifier.", "pdf_parse": { "paper_id": "P16-1019", "_pdf_hash": "", "abstract": [ { "text": "Idiom token classification is the task of deciding for a set of potentially idiomatic phrases whether each occurrence of a phrase is a literal or idiomatic usage of the phrase. In this work we explore the use of Skip-Thought Vectors to create distributed representations that encode features that are predictive with respect to idiom token classification. 
We show that classifiers using these representations have competitive performance compared with the state of the art in idiom token classification. Importantly, however, our models use only the sentence containing the target phrase as input and are thus less dependent on a potentially inaccurate or incomplete model of discourse context. We further demonstrate the feasibility of using these representations to train a competitive general idiom token classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Idioms are a class of multiword expressions (MWEs) whose meaning cannot be derived from their individual constituents (Sporleder et al., 2010) . Idioms often present idiosyncratic behaviour such as violating selection restrictions or changing the default semantic roles of syntactic categories (Sporleder and Li, 2009) . Consequently, they present many challenges for Natural Language Processing (NLP) systems. For example, in Statistical Machine Translation (SMT) it has been shown that translations of sentences containing idioms receive lower scores than translations of sentences that do not contain idioms (Salton et al., 2014) .", "cite_spans": [ { "start": 118, "end": 142, "text": "(Sporleder et al., 2010)", "ref_id": "BIBREF21" }, { "start": 294, "end": 318, "text": "(Sporleder and Li, 2009)", "ref_id": "BIBREF20" }, { "start": 611, "end": 632, "text": "(Salton et al., 2014)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Idioms are pervasive across almost all languages and text genres and, as a result, broad-coverage NLP systems must explicitly handle idioms (Villavicencio et al., 2005) . A complicating factor, however, is that many idiomatic expressions can be used both literally and figuratively. 
In general, idiomatic usages are more frequent, but for some expressions the literal meaning may be more common (Li and Sporleder, 2010a) . As a result, there are two fundamental tasks in NLP idiom processing: idiom type classification is the task of identifying expressions that have possible idiomatic interpretations, and idiom token classification is the task of distinguishing between idiomatic and literal usages of potentially idiomatic phrases (Fazly et al., 2009) . In this paper we focus on this second task, idiom token classification.", "cite_spans": [ { "start": 139, "end": 167, "text": "(Villavicencio et al., 2005)", "ref_id": "BIBREF24" }, { "start": 393, "end": 418, "text": "(Li and Sporleder, 2010a)", "ref_id": "BIBREF12" }, { "start": 732, "end": 752, "text": "(Fazly et al., 2009)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous work on idiom token classification, such as (Sporleder and Li, 2009) and (Peng et al., 2014) , often frames the problem in terms of modelling the global lexical context. For example, these models try to capture the fact that the idiomatic expression break the ice is likely to have a literal meaning in a context containing words such as cold, frozen or water and an idiomatic meaning in a context containing words such as meet or discuss (Li and Sporleder, 2010a) . Frequently these global lexical models create a different idiom token classifier for each phrase. However, a number of papers on idiom type and token classification have pointed to a range of other features that could be useful for idiom token classification, including local syntactic and lexical patterns (Fazly et al., 2009) and cue words (Li and Sporleder, 2010a) . Yet in most cases these non-global features are specific to a particular phrase. 
So a key challenge is to identify, from a range of candidate features, which are appropriate for idiom token classification of a specific expression.", "cite_spans": [ { "start": 53, "end": 77, "text": "(Sporleder and Li, 2009)", "ref_id": "BIBREF20" }, { "start": 82, "end": 101, "text": "(Peng et al., 2014)", "ref_id": "BIBREF17" }, { "start": 447, "end": 472, "text": "(Li and Sporleder, 2010a)", "ref_id": "BIBREF12" }, { "start": 782, "end": 802, "text": "(Fazly et al., 2009)", "ref_id": "BIBREF6" }, { "start": 817, "end": 842, "text": "(Li and Sporleder, 2010a)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Meanwhile, in recent years there has been an explosion in the use of neural networks for learning distributed representations for language (e.g., Socher et al. (2013) , Kalchbrenner et al. (2014) and Kim (2014) ). These representations are automatically trained from data and can simultaneously encode multiple linguistic features. For example, word embeddings can encode gender distinctions and plural-singular distinctions (Mikolov et al., 2013b) and the representations generated in sequence-to-sequence mappings have been shown to be sensitive to word order (Sutskever et al., 2014) . The recent development of Skip-Thought Vectors (or Sent2Vec) has provided an approach for learning distributed representations of sentences in an unsupervised manner.", "cite_spans": [ { "start": 146, "end": 166, "text": "Socher et al. (2013)", "ref_id": "BIBREF19" }, { "start": 169, "end": 195, "text": "Kalchbrenner et al. 
(2014)", "ref_id": "BIBREF9" }, { "start": 200, "end": 210, "text": "Kim (2014)", "ref_id": "BIBREF10" }, { "start": 426, "end": 449, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF16" }, { "start": 563, "end": 587, "text": "(Sutskever et al., 2014)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we explore whether the representations generated by Sent2Vec encode features that are useful for idiom token classification. This question is particularly interesting because the Sent2Vec-based models only use the sentence containing the phrase as input, whereas the baseline systems use the full paragraph surrounding the sentence. We further investigate the construction of a \"general\" classifier that can predict if a sentence contains literal or idiomatic language (independent of the expression) using just the distributed representation of the sentence. This approach contrasts with previous work that has primarily adopted a \"per expression\" classifier approach and has been based on more elaborate context features, such as discourse and lexical cohesion between a sentence and the larger context. We show that our method needs less contextual information than the state-of-the-art method and achieves competitive results, making it an important contribution to a range of applications that do not have access to a full discourse context. We proceed by reviewing that previous work in more detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One of the earliest works on idiom token classification focused on Japanese idioms (Hashimoto and Kawahara, 2008) . 
This work used a set of features, commonly used in Word Sense Disambiguation (WSD) research, that were defined over the text surrounding a phrase, as well as a number of idiom-specific features, which were in turn used to train an SVM classifier based on a corpus of sentences tagged as either containing an idiomatic usage or a literal usage of a phrase. Their results indicated that the WSD features worked well on idiom token classification but that their idiom-specific features did not help with the task.", "cite_spans": [ { "start": 79, "end": 109, "text": "(Hashimoto and Kawahara, 2008)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "Focusing on idiom token classification in English, Fazly et al. (2009) developed the concept of a canonical form (defined in terms of local syntactic and lexical patterns) and argued that for each idiom there is a distinct canonical form (or small set of forms) that marks idiomatic usages of a phrase. Meanwhile, Sporleder and Li (2009) proposed a model based on how strongly an expression is linked to the overall cohesive structure of the discourse. Strong links result in a literal classification; otherwise, an idiomatic classification is returned. In related work, Li and Sporleder (2010a) experimented with a range of features for idiom token classification models, including global lexical context, discourse cohesion, syntactic structures based on dependency parsing, and local lexical features such as cue words occurring just before or after a phrase. An example of a local lexical feature is when the word between occurs directly after break the ice; here this could mark an idiomatic usage of the phrase: it helped to break the ice between Joe and Olivia. The results of this work indicated that features based on global lexical context and discourse cohesion were the best features to use for idiom token classification. 
The inclusion of syntactic structures in the feature set provided a boost to the performance of the model trained on global lexical context and discourse cohesion. Interestingly, unlike the majority of previous work on idiom token classification, Li and Sporleder (2010a) also investigated building general models that could work across multiple expressions. Again, they found that global lexical context and discourse cohesion were the best features in their experiments.", "cite_spans": [ { "start": 51, "end": 70, "text": "Fazly et al. (2009)", "ref_id": "BIBREF6" }, { "start": 312, "end": 335, "text": "Sporleder and Li (2009)", "ref_id": "BIBREF20" }, { "start": 568, "end": 592, "text": "Li and Sporleder (2010a)", "ref_id": "BIBREF12" }, { "start": 1480, "end": 1504, "text": "Li and Sporleder (2010a)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "Continuing work on this topic, Li and Sporleder (2010b) present research based on the assumption that literal and figurative language are generated by two different Gaussians. The model representation is based on semantic relatedness features similar to those used earlier in (Sporleder and Li, 2009) . A Gaussian Mixture Model was trained using an Expectation Maximization method, with the classification of instances performed by choosing the category that maximises the probability of fitting either of the Gaussian components. 
The results of Li and Sporleder (2010b) confirmed the findings from previous work that figurative language exhibits less cohesion with the surrounding context than literal language.", "cite_spans": [ { "start": 31, "end": 55, "text": "Li and Sporleder (2010b)", "ref_id": "BIBREF13" }, { "start": 276, "end": 300, "text": "(Sporleder and Li, 2009)", "ref_id": "BIBREF20" }, { "start": 531, "end": 555, "text": "Li and Sporleder (2010b)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "More recently, Feldman and Peng (2013) describe an approach to idiom token identification that frames the problem as one of outlier detection. The intuition behind this work is that, because idiomatic usages of phrases have weak cohesion with the surrounding context, they are semantically distant from local topics. As a result, phrases that are semantic outliers with respect to the context are likely to be idioms. Feldman and Peng (2013) explore two different approaches to outlier detection based on principal component analysis (PCA) and linear discriminant analysis (LDA), respectively. Building on this work, Peng et al. (2014) assume that phrases within a given text segment (e.g., a paragraph) that are semantically similar to the main topic of discussion in the segment are likely to be literal usages. They use Latent Dirichlet Allocation (LDA) (Blei et al., 2003) to extract a topic representation, defined as a topic-term document matrix, of each text segment within a corpus. They then train a number of models that classify a phrase in a given text segment as a literal or idiomatic usage by using the topic-term document matrix to project the phrase into a topic space representation and label outliers within the topic space as idiomatic. To the best of our knowledge, Peng et al. 
(2014) is currently the best-performing approach to idiom token classification and we use their models as our baseline 1 .", "cite_spans": [ { "start": 417, "end": 440, "text": "Feldman and Peng (2013)", "ref_id": "BIBREF7" }, { "start": 615, "end": 633, "text": "Peng et al. (2014)", "ref_id": "BIBREF17" }, { "start": 855, "end": 874, "text": "(Blei et al., 2003)", "ref_id": "BIBREF1" }, { "start": 1287, "end": 1305, "text": "Peng et al. (2014)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "While idiom token classification based on long-range contexts, such as that explored in a number of the models outlined in the previous section, generally achieves good performance, an NLP system may not always have access to the surrounding context, or may indeed find it challenging to construct a reliable interpretation of that context. Moreover, the construction of classifiers for each individual idiom case is resource intensive, and, we argue, fails to scale easily to under-resourced languages. In light of this, in our work we are exploring the potential of distributed compositional semantic models to produce reliable estimates of idiom token classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Skip-Thought Vectors", "sec_num": "3" }, { "text": "Skip-Thought Vectors (Sent2Vec) (Kiros et al., 2015) are a recent prominent example of such distributed models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Skip-Thought Vectors", "sec_num": "3" }, { "text": "1 However, it is not possible for us to reproduce their results directly as they \"apply the (modified) Google stop list before extracting the topics\" (Peng et al., 2014, p. 2023) and, to date, we do not have access to the modified list. So in our experiments we compare our results with the results they report on the same data. 
Skip-Thought Vectors are an application of the Encoder/Decoder framework (Sutskever et al., 2014) , a popular architecture for Neural Machine Translation (NMT) (Bahdanau et al., 2015) based on recurrent neural networks (RNNs). The encoder takes an input sentence and maps it into a distributed representation (a vector of real numbers). The decoder is a language model that is conditioned on the distributed representation and, in Sent2Vec, is used to \"predict\" the sentences surrounding the input sentence. Consequently, the Sent2Vec encoder learns (among other things) to encode information about the context of an input sentence without needing explicit access to it. Figure 1 presents the architecture of Sent2Vec.", "cite_spans": [ { "start": 150, "end": 168, "text": "(Peng et al., 2014", "ref_id": "BIBREF17" }, { "start": 169, "end": 192, "text": "(Peng et al., , p. 2023", "ref_id": null }, { "start": 483, "end": 507, "text": "(Sutskever et al., 2014)", "ref_id": "BIBREF22" }, { "start": 541, "end": 564, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 1055, "end": 1063, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Skip-Thought Vectors", "sec_num": "3" }, { "text": "More formally, assume a given tuple (s_{i\u22121}, s_i, s_{i+1}), where s_i is the input sentence, s_{i\u22121} is the sentence preceding s_i and s_{i+1} is the sentence following s_i. Let w^t_i denote the t-th word of s_i and x^t_i denote its word embedding. We follow Kiros et al. (2015) and describe the model in three parts: encoder, decoder and objective function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Skip-Thought Vectors", "sec_num": "3" }, { "text": "Encoder. 
Given the sentence s_i of length N, let w^1_i, . . . , w^N_i denote the words in s_i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Skip-Thought Vectors", "sec_num": "3" }, { "text": "At each timestep t, the encoder (in this case an RNN with Gated Recurrent Units (GRUs) (Cho et al., 2014) ) produces a hidden state h^t_i that represents the sequence w^1_i, . . . , w^t_i. Therefore, h^N_i represents the full sentence. Each h^N_i is produced by iterating the following equations (without the subscript i):", "cite_spans": [ { "start": 86, "end": 104, "text": "(Cho et al., 2014)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Skip-Thought Vectors", "sec_num": "3" }, { "text": "r^t = \u03c3(W^e_r x^t + U^e_r h^{t\u22121}) (1) z^t = \u03c3(W^e_z x^t + U^e_z h^{t\u22121}) (2) h\u0304^t = tanh(W^e x^t + U^e (r^t \u2299 h^{t\u22121})) (3) h^t = (1 \u2212 z^t) \u2299 h^{t\u22121} + z^t \u2299 h\u0304^t (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Skip-Thought Vectors", "sec_num": "3" }, { "text": "where r^t is the reset gate, z^t is the update gate, h\u0304^t is the proposed update state at time t and \u2299 denotes a component-wise product.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Skip-Thought Vectors", "sec_num": "3" }, { "text": "Decoder. The decoder is essentially a neural language model conditioned on the input sentence representation h^N_i. However, two RNNs are used (one for the sentence s_{i\u22121} and the other for the sentence s_{i+1}) with different parameters except the embedding matrix (E), and a new set of matrices (C_r, C_z and C) is introduced to condition the GRU on h^N_i. Let h^t_{i+1} denote the hidden state of the decoder of the sentence s_{i+1} at time t. Figure 1 : The Encoder/Decoder architecture used in Sent2Vec as shown in . The gray circles represent the Encoder unfolded in time, and the red and the green circles represent the Decoders for the previous and the next sentences respectively, also unfolded in time. In this example, the input sentence presented to the Encoder is I could see the cat on the steps. The previous sentence is I got back home and the next sentence is This was strange. Unattached arrows are connected to the encoder output (which is the last gray circle).", "cite_spans": [], "ref_spans": [ { "start": 449, "end": 457, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Skip-Thought Vectors", "sec_num": "3" }, { "text": "Decoding s_{i+1} requires iterating the following equations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Skip-Thought Vectors", "sec_num": "3" }, { "text": "r^t = \u03c3(W^d_r x^t + U^d_r h^{t\u22121} + C_r h^N_i) (5) z^t = \u03c3(W^d_z x^t + U^d_z h^{t\u22121} + C_z h^N_i) (6) h\u0304^t = tanh(W^d x^t + U^d (r^t \u2299 h^{t\u22121}) + C h^N_i) (7) h^t_{i+1} = (1 \u2212 z^t) \u2299 h^{t\u22121} + z^t \u2299 h\u0304^t (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Skip-Thought Vectors", "sec_num": "3" }, { "text": "where r^t is the reset gate, z^t is the update gate, h\u0304^t is the proposed update state at time t and \u2299 denotes a component-wise product. An analogous computation is required to decode s_{i\u22121}. Given h^t_{i+1}, the probability of the word w^t_{i+1} conditioned on the previous w " }, "TABREF3": { "text": "The sizes of the samples for each expression and the split into training and test sets. The numbers in parentheses indicate the number of idiomatic labels within the set.", "num": null, "html": null, "type_str": "table", "content": "" } } } }