{ "paper_id": "J13-2003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:18:11.585777Z" }, "title": "Statistical Metaphor Processing", "authors": [ { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "", "affiliation": {}, "email": "ekaterina.shutova@cl.cam.ac.uk" }, { "first": "Simone", "middle": [], "last": "Teufel", "suffix": "", "affiliation": {}, "email": "simone.teufel@cl.cam.ac.uk" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "", "affiliation": {}, "email": "anna.korhonen@cl.cam.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Metaphor is highly frequent in language, which makes its computational processing indispensable for real-world NLP applications addressing semantic tasks. Previous approaches to metaphor modeling rely on task-specific hand-coded knowledge and operate on a limited domain or a subset of phenomena. We present the first integrated open-domain statistical model of metaphor processing in unrestricted text. Our method first identifies metaphorical expressions in running text and then paraphrases them with their literal paraphrases. Such a text-to-text model of metaphor interpretation is compatible with other NLP applications that can benefit from metaphor resolution. Our approach is minimally supervised, relies on the state-of-the-art parsing and lexical acquisition technologies (distributional clustering and selectional preference induction), and operates with a high accuracy.", "pdf_parse": { "paper_id": "J13-2003", "_pdf_hash": "", "abstract": [ { "text": "Metaphor is highly frequent in language, which makes its computational processing indispensable for real-world NLP applications addressing semantic tasks. Previous approaches to metaphor modeling rely on task-specific hand-coded knowledge and operate on a limited domain or a subset of phenomena. We present the first integrated open-domain statistical model of metaphor processing in unrestricted text. 
Our method first identifies metaphorical expressions in running text and then replaces them with their literal paraphrases. Such a text-to-text model of metaphor interpretation is compatible with other NLP applications that can benefit from metaphor resolution. Our approach is minimally supervised, relies on state-of-the-art parsing and lexical acquisition technologies (distributional clustering and selectional preference induction), and operates with high accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Our production and comprehension of language is a multi-layered computational process. Humans carry out high-level semantic tasks effortlessly, subconsciously drawing on a vast inventory of complex linguistic devices while integrating their background knowledge to reason about reality. An ideal computational model of language understanding would also be capable of performing such high-level semantic tasks. With the rapid advances in statistical natural language processing (NLP) and computational lexical semantics, increasingly complex semantic tasks can now be addressed. Tasks that have received much attention so far include, for example, word sense disambiguation (WSD), supervised and unsupervised lexical classification, selectional preference induction, and semantic role labeling. In this article, we take a step further and show that state-of-the-art statistical NLP and computational lexical semantic techniques can be used to successfully model complex meaning transfers, such as metaphor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Metaphors arise when one concept is viewed in terms of the properties of another. Humans often use metaphor to describe abstract concepts through reference to more concrete or physical experiences. 
Some examples of metaphor include the following.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "(1) How can I kill a process? (Martin 1988) (2) Hillary brushed aside the accusations.", "cite_spans": [ { "start": 30, "end": 43, "text": "(Martin 1988)", "ref_id": "BIBREF59" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "(3) I invested myself fully in this research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "(4) And then my heart with pleasure fills,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "And dances with the daffodils.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "(\"I wandered lonely as a cloud,\" William Wordsworth, 1804)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Metaphorical expressions may take a great variety of forms, ranging from conventional metaphors, which we produce and comprehend every day, for example, those in Examples (1)-(3), to poetic and novel ones, such as Example (4). In metaphorical expressions, seemingly unrelated features of one concept are attributed to another concept. In Example (1), a computational process is viewed as something alive and, therefore, its forced termination is associated with the act of killing. In Example (2) Hillary is not literally cleaning the space by sweeping accusations. Instead, the accusations lose their validity in that situation, in other words Hillary rejects them. The verbs brush aside and reject both entail the resulting disappearance of their object, which is the shared salient property that makes it possible for this analogy to be lexically expressed as a metaphor. 
Characteristic of all areas of human activity (from poetic to ordinary to scientific) and thus of all types of discourse, metaphor becomes an important problem for NLP. As Shutova and Teufel (2010) have shown in an empirical study, the use of conventional metaphor is ubiquitous in natural language text (according to their data, on average every third sentence in general-domain text contains a metaphorical expression). This makes metaphor processing essential for automatic text understanding. For example, an NLP application that is unaware that a \"leaked report\" is a \"disclosed report\" and not, for example, a \"wet report,\" would fail further semantic processing of the piece of discourse in which this phrase appears. A system capable of recognizing and interpreting metaphorical expressions in unrestricted text would become an invaluable component of any real-world NLP application that needs to access semantics (e.g., information retrieval [IR], machine translation [MT], question answering [QA], information extraction [IE], and opinion mining).", "cite_spans": [ { "start": 1047, "end": 1072, "text": "Shutova and Teufel (2010)", "ref_id": null }, { "start": 1827, "end": 1831, "text": "[IR]", "ref_id": null }, { "start": 1854, "end": 1858, "text": "[MT]", "ref_id": null }, { "start": 1880, "end": 1884, "text": "[QA]", "ref_id": null }, { "start": 1910, "end": 1914, "text": "[IE]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "So far, these applications have not used any metaphor processing techniques and thus often fail to interpret metaphorical data correctly. Consider an example from MT. Figure 1 shows metaphor translation from English into Russian by a state-of-the-art statistical MT system (Google Translate). For both sentences the MT system produces literal translations of the metaphorical terms in English, rather than their literal interpretations. 
This results in otherwise grammatical sentences being semantically infelicitous, poorly formed, and barely understandable to a native speaker of Russian. The meaning of stir in Figure 1 (1) and spill in Figure 1 (2) would normally be realized in Russian only via their literal interpretation in the given context (provoke and tell), as shown under CORRECT TRANSLATION in Figure 1 . A metaphor processing component could help to avoid such errors. We conducted a pilot study of the importance of metaphor for MT by running an English-to-Russian MT system (Google Translate) on the sentences from the data set of Shutova (2010) containing single-word verb metaphors. We found that 27 out of 62 sentences (44%) were translated incorrectly due to metaphoricity. Given the high frequency of metaphor in text reported in corpus studies, such an error rate is significant for MT.", "cite_spans": [ { "start": 1048, "end": 1062, "text": "Shutova (2010)", "ref_id": "BIBREF93" } ], "ref_spans": [ { "start": 167, "end": 175, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 613, "end": 621, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 639, "end": 647, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 807, "end": 815, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Examples where metaphor understanding is crucial can also be found in opinion mining, that is, detection of the speaker's attitude to what is said and to the topic. Consider the following sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "(5) a. Government loosened its strangle-hold on business. (Narayanan 1999) b. Government deregulated business. (Narayanan 1999) Both sentences describe the same fact. 
The use of the metaphor loosened strangle-hold in Example (5a) suggests that the speaker opposes government control of the economy, whereas Example (5b) does not imply this. One can infer the speaker's negative attitude from the presence of the negative word strangle-hold. A metaphor processing system would establish the correct meaning of Example (5a) and thus discover the actual fact towards which the speaker has a negative attitude.", "cite_spans": [ { "start": 58, "end": 74, "text": "(Narayanan 1999)", "ref_id": "BIBREF73" }, { "start": 111, "end": 127, "text": "(Narayanan 1999)", "ref_id": "BIBREF73" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Because metaphor understanding requires resolving non-literal meanings via analogical comparisons, the development of a complete and computationally practical account of this phenomenon is a challenging and complex task. Despite the importance of metaphor for NLP systems dealing with semantic interpretation, its automatic processing has received little attention in contemporary NLP, and is far from being a solved problem. The majority of computational approaches to metaphor still exploit ideas articulated two or three decades ago (Wilks 1978; Lakoff and Johnson 1980). They often rely on task-specific hand-coded knowledge (Martin 1990; Fass 1991; Narayanan 1997, 1999; Barnden and Lee 2002; Feldman and Narayanan 2004; Agerri et al. 2007) and reduce the task to reasoning about a limited domain or a subset of phenomena (Gedigian et al. 2006; Krishnakumaran and Zhu 2007). So far there has been no robust statistical system operating on unrestricted text. 
State-of-the-art accurate parsing (Klein and Manning 2003; Briscoe, Carroll, and Watson 2006; Clark and Curran 2007), however, as well as recent work on computational lexical semantics (Schulte im Walde 2006; Mitchell and Lapata 2008; Davidov, Reichart, and Rappoport 2009; Erk and McCarthy 2009; Sun and Korhonen 2009; Abend and Rappoport 2010; \u00d3 S\u00e9aghdha 2010) open up many avenues for the creation of such a system. This is the niche that the present work aims to fill.", "cite_spans": [ { "start": 536, "end": 548, "text": "(Wilks 1978;", "ref_id": "BIBREF100" }, { "start": 549, "end": 573, "text": "Lakoff and Johnson 1980)", "ref_id": "BIBREF52" }, { "start": 630, "end": 643, "text": "(Martin 1990;", "ref_id": "BIBREF60" }, { "start": 644, "end": 654, "text": "Fass 1991;", "ref_id": null }, { "start": 655, "end": 669, "text": "Narayanan 1997", "ref_id": "BIBREF72" }, { "start": 670, "end": 686, "text": "Narayanan , 1999", "ref_id": "BIBREF73" }, { "start": 689, "end": 710, "text": "Barnden and Lee 2002;", "ref_id": "BIBREF5" }, { "start": 711, "end": 738, "text": "Feldman and Narayanan 2004;", "ref_id": "BIBREF27" }, { "start": 739, "end": 758, "text": "Agerri et al. 2007)", "ref_id": "BIBREF1" }, { "start": 840, "end": 862, "text": "(Gedigian et al. 
2006;", "ref_id": "BIBREF30" }, { "start": 863, "end": 891, "text": "Krishnakumaran and Zhu 2007)", "ref_id": "BIBREF48" }, { "start": 1011, "end": 1035, "text": "(Klein and Manning 2003;", "ref_id": "BIBREF43" }, { "start": 1036, "end": 1070, "text": "Briscoe, Carroll, and Watson 2006;", "ref_id": "BIBREF11" }, { "start": 1071, "end": 1093, "text": "Clark and Curran 2007)", "ref_id": "BIBREF18" }, { "start": 1187, "end": 1212, "text": "Mitchell and Lapata 2008;", "ref_id": "BIBREF70" }, { "start": 1213, "end": 1251, "text": "Davidov, Reichart, and Rappoport 2009;", "ref_id": "BIBREF20" }, { "start": 1252, "end": 1274, "text": "Erk and McCarthy 2009;", "ref_id": "BIBREF24" }, { "start": 1275, "end": 1297, "text": "Sun and Korhonen 2009;", "ref_id": "BIBREF94" }, { "start": 1298, "end": 1323, "text": "Abend and Rappoport 2010;", "ref_id": "BIBREF0" }, { "start": 1324, "end": 1340, "text": "\u00d3 S\u00e9aghdha 2010)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Metaphor has traditionally been viewed as an artistic device that lends vividness and distinction to an author's style. This view was first challenged by Lakoff and Johnson (1980) , who claimed that it is a productive phenomenon that operates at the level of mental processes. According to Lakoff and Johnson, metaphor is thus not merely a property of language (i.e., a linguistic phenomenon), but rather a property of thought (i.e., a cognitive phenomenon). 
This view was subsequently adopted and extended by a multitude of approaches (Grady 1997; Narayanan 1997; Fauconnier and Turner 2002; Feldman 2006; Pinker 2007) and the term conceptual metaphor was coined to describe it.", "cite_spans": [ { "start": 154, "end": 179, "text": "Lakoff and Johnson (1980)", "ref_id": "BIBREF52" }, { "start": 536, "end": 548, "text": "(Grady 1997;", "ref_id": "BIBREF34" }, { "start": 549, "end": 564, "text": "Narayanan 1997;", "ref_id": "BIBREF72" }, { "start": 565, "end": 592, "text": "Fauconnier and Turner 2002;", "ref_id": "BIBREF26" }, { "start": 593, "end": 606, "text": "Feldman 2006;", "ref_id": "BIBREF27" }, { "start": 607, "end": 619, "text": "Pinker 2007)", "ref_id": "BIBREF80" } ], "ref_spans": [], "eq_spans": [], "section": "What Is Metaphor?", "sec_num": "1.1" }, { "text": "The view postulates that metaphor is not limited to similarity-based meaning extensions of individual words, but rather involves reconceptualization of a whole area of experience in terms of another. Thus metaphor always involves two concepts or conceptual domains: the target (also called the topic or tenor in the linguistics literature) and the source (also called the vehicle). Consider Examples (6) and (7).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What Is Metaphor?", "sec_num": "1.1" }, { "text": "(6) He shot down all of my arguments. (Lakoff and Johnson 1980) (7) He attacked every weak point in my argument. (Lakoff and Johnson 1980) According to Lakoff and Johnson, a mapping of the concept of argument to that of war is used in both Examples (6) and (7). The argument, which is the target concept, is viewed in terms of a battle (or a war), the source concept. The existence of such a link allows us to talk about arguments using war terminology, thus giving rise to a number of metaphors. 
Conceptual metaphor, or source-target domain mapping, is thus a generalization over a set of individual metaphorical expressions that covers multiple cases in which ways of reasoning about the source domain systematically correspond to ways of reasoning about the target.", "cite_spans": [ { "start": 38, "end": 63, "text": "(Lakoff and Johnson 1980)", "ref_id": "BIBREF52" }, { "start": 113, "end": 138, "text": "(Lakoff and Johnson 1980)", "ref_id": "BIBREF52" } ], "ref_spans": [], "eq_spans": [], "section": "What Is Metaphor?", "sec_num": "1.1" }, { "text": "Conceptual metaphor manifests itself in natural language in the form of linguistic metaphor (or metaphorical expressions) in a variety of ways. The most common types of linguistic metaphor are lexical metaphor (i.e., metaphor at the level of a single word sense, as in the Examples (1)-(4)), multi-word metaphorical expressions (e.g., \"whether we go on pilgrimage with Raleigh or put out to sea with Tennyson\"), or extended metaphor, that spans over longer discourse fragments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What Is Metaphor?", "sec_num": "1.1" }, { "text": "Lexical metaphor is by far the most frequent type. In the presence of a certain conceptual metaphor individual words can be used in entirely novel contexts, which results in the formation of new meanings. Consider the following example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What Is Metaphor?", "sec_num": "1.1" }, { "text": "(8) How can we build a 'Knowledge economy' if research is handcuffed? (Barque and Chaumartin 2009) In this sentence the physical verb handcuff is used with an abstract object research and its meaning adapts accordingly. Metaphor is a productive phenomenon (i.e., its novel examples continue to emerge in language). A large number of metaphorical expressions, however, become conventionalized (e.g., \"I cannot grasp his way of thinking\"). 
Although metaphorical in nature, their meanings are deeply entrenched in everyday use, and are thus cognitively treated as literal terms. Both novel and conventional metaphors are important for text processing, hence our work is concerned with both types. Fixed non-compositional idiomatic expressions (e.g., kick the bucket, rock the boat, put a damper on), however, are left aside, because the mechanisms of their formation are no longer productive in modern language and, as such, they are of little interest for the design of a generalizable computational model of metaphor.", "cite_spans": [ { "start": 70, "end": 98, "text": "(Barque and Chaumartin 2009)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "What Is Metaphor?", "sec_num": "1.1" }, { "text": "Extended metaphor refers to the use of metaphor at the discourse level. A famous example of extended metaphor can be found in William Shakespeare's play As You Like It, where he first compares the world to a stage and then in the following discourse describes its inhabitants as players. Extended metaphor often appears in literature in the form of an allegory or a parable, whereby a whole story from one domain is metaphorically transferred onto another in order to highlight certain attributes of the subject or teach a moral lesson.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What Is Metaphor?", "sec_num": "1.1" }, { "text": "In this article we focus on lexical metaphor and the computational modeling thereof. From an NLP viewpoint, not all metaphorical expressions are equally important. A metaphorical expression is interesting for computational modeling if its metaphorical sense is significantly distinct from its original literal sense and cannot be interpreted directly (e.g., by existing word sense disambiguation techniques using a predefined sense inventory). 
Highly conventionalized metaphors (e.g., the verb impress, whose meaning originally stems from printing) are not of interest for NLP tasks, because their metaphorical senses have long been dominant in language and their original literal senses may no longer be used. A number of conventionalized metaphors, however, require explicit interpretation in order to be understood by a computer (e.g., \"cast doubt,\" \"polish the thesis,\" \"catch a disease\"), as do all novel metaphors. Thus we are concerned with both novel and conventional metaphors, but only consider the cases in which the literal and metaphorical senses of the word are in clear opposition in common use in contemporary language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Modeling of Metaphor", "sec_num": "1.2" }, { "text": "Automatic processing of metaphor can be divided into two subtasks: metaphor identification, or recognition (distinguishing between literal and metaphorical language in text); and metaphor interpretation (identifying the intended literal meaning of a metaphorical expression). An ideal metaphor processing system should address both of these tasks and provide useful information to support semantic interpretation in real-world NLP applications. In order to be directly applicable to other NLP systems, it should satisfy the following criteria:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Modeling of Metaphor", "sec_num": "1.2" }, { "text": "\u2022 Provide a representation of metaphor interpretation that can be easily integrated with other NLP systems: This criterion places constraints on how the metaphor processing task should be defined. The most universally applicable metaphor interpretation would be in the text-to-text form. 
This means that a metaphor processing system would take raw text as input and provide a more literal text as output, in which metaphors are interpreted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Modeling of Metaphor", "sec_num": "1.2" }, { "text": "\u2022 Operate on unrestricted running text: In order to be useful for real-world NLP the system needs to be capable of processing real-world data. Rather than only dealing with individual carefully selected clear-cut examples, the system should be fully implemented and tested on free naturally occurring text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Modeling of Metaphor", "sec_num": "1.2" }, { "text": "\u2022 Be open-domain: The system needs to cover all domains, genres, and topics. Thus it should not rely on any domain-specific information or focus on individual types of instances (e.g., a hand-chosen limited set of source-target domain mappings).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Modeling of Metaphor", "sec_num": "1.2" }, { "text": "\u2022 Be unsupervised or minimally supervised: To be easily adaptable to new domains, the system needs to be unsupervised or minimally supervised. This means it should not use any task-specific (i.e., metaphor-specific) hand-coded knowledge. 
The only acceptable exception might be a multi-purpose general-domain lexicon that is already in existence and does not need to be created in a costly manner, although it would be an advantage if no such resource is required.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Modeling of Metaphor", "sec_num": "1.2" }, { "text": "\u2022 Cover all syntactic constructions: To be robust, the system needs to be able to deal with metaphors represented by all word classes and syntactic constructions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Modeling of Metaphor", "sec_num": "1.2" }, { "text": "In this article, we address both the metaphor identification and interpretation tasks, resulting in the first integrated domain-independent corpus-based computational model of metaphor. The method is designed with the listed criteria in mind. It takes unrestricted text as input and produces textual output. Metaphor identification and interpretation modules, based on the algorithms of Shutova, Sun, and Korhonen (2010) and Shutova (2010), are first evaluated independently, and then combined and evaluated together as an integrated system. All components of the method are in principle applicable to all part-of-speech classes and syntactic constructions. In the current experiments, however, we tested the system only on single-word metaphors expressed by a verb. Verbs are frequent in language and central to conceptual metaphor. Cameron (2003) conducted a corpus study of the use of metaphor in educational discourse for all parts of speech. She found that verbs account for around 50% of the data, the rest being shared by nouns, adjectives, adverbs, copula constructions, and multi-word metaphors. This suggests that verb metaphors provide a reliable testbed for both linguistic and computational experiments. 
Restricting the scope to verbs is a methodological step aimed at testing the main principles of the proposed approach in a well-defined setting. We would, however, expect the presented methods to scale to other parts of speech and to a wide range of syntactic constructions, because they rely on techniques from computational lexical semantics that have been shown to be effective in modeling not only verb meanings, but also those of nouns and adjectives.", "cite_spans": [ { "start": 387, "end": 420, "text": "Shutova, Sun, and Korhonen (2010)", "ref_id": "BIBREF93" }, { "start": 425, "end": 439, "text": "Shutova (2010)", "ref_id": "BIBREF93" }, { "start": 835, "end": 849, "text": "Cameron (2003)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Computational Modeling of Metaphor", "sec_num": "1.2" }, { "text": "As opposed to previous approaches that modeled metaphorical reasoning starting from a hand-crafted description and applying it to explain the data, we aim to design a statistical model that captures regular patterns of metaphoricity in a large corpus and thus generalizes to unseen examples. Compared to labor-intensive manual efforts, this approach is more robust and, being nearly unsupervised, cost-effective. In contrast to previous statistical approaches, which addressed metaphors of a specific topic or did not consider linguistic metaphor at all (e.g., Mason 2004) , the proposed method covers all metaphors in principle, can be applied to unrestricted text, and can be adapted to different domains and genres.", "cite_spans": [ { "start": 561, "end": 572, "text": "Mason 2004)", "ref_id": "BIBREF63" } ], "ref_spans": [], "eq_spans": [], "section": "Computational Modeling of Metaphor", "sec_num": "1.2" }, { "text": "Our first experiment is concerned with the identification of metaphorical expressions in unrestricted text. 
Starting from a small set of metaphorical expressions, the system learns the analogies involved in their production in a minimally supervised way. It generalizes over the exemplified analogies by means of verb and noun clustering (i.e., the identification of groups of similar concepts). This generalization allows it to recognize previously unseen metaphorical expressions in text. Consider the following examples:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Modeling of Metaphor", "sec_num": "1.2" }, { "text": "(9) All of this stirred an uncontrollable excitement in her.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Modeling of Metaphor", "sec_num": "1.2" }, { "text": "(10) Time and time again he would stare at the ground, hand on hip, and then swallow his anger and play tennis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Modeling of Metaphor", "sec_num": "1.2" }, { "text": "Having once seen the metaphor \"stir excitement\" in Example (9) the metaphor identification system successfully concludes that \"swallow anger\" in Example (10) is also used metaphorically. The identified metaphors then need to be interpreted. Ideally, a metaphor interpretation task should be aimed at producing a representation of metaphor understanding that can be directly embedded into other NLP applications that could benefit from metaphor resolution. We define metaphor interpretation as a paraphrasing task and build a system that discovers literal meanings of metaphorical expressions in text and produces their literal paraphrases. For example, for metaphors in Examples (11a) and (12a) the system produces the paraphrases in Examples (11b) and (12b), respectively. The paraphrases for metaphorical expressions are acquired in a data-driven manner from a large corpus. Literal paraphrases are then identified using a selectional preference model. 
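The generalization mechanism just described (seed analogies extended via verb and noun clusters) can be sketched in a few lines. This is a toy illustration under assumed cluster memberships and seed pairs, not the authors' implementation; the real system induces the clusters distributionally from a large corpus.

```python
# Toy sketch of seed-based metaphor identification via clustering.
# NOTE: all clusters and seed pairs below are hypothetical illustrations;
# a real system would induce them from corpus statistics.

# Hypothetical noun clusters (potential target concepts).
NOUN_CLUSTERS = {
    "emotion": {"excitement", "anger", "joy", "fear"},
    "liquid": {"tea", "water", "soup"},
}

# Hypothetical verb clusters (potential source-domain verbs).
VERB_CLUSTERS = {
    "physical_manipulation": {"stir", "swallow", "shake"},
    "speech": {"say", "tell"},
}

# Small annotated seed set of metaphorical verb-object pairs.
SEEDS = {("stir", "excitement")}

def cluster_of(word, clusters):
    """Return the name of the cluster containing `word`, or None."""
    for name, members in clusters.items():
        if word in members:
            return name
    return None

def seed_signatures(seeds):
    """Generalize each seed pair to a (verb-cluster, noun-cluster) analogy."""
    return {
        (cluster_of(v, VERB_CLUSTERS), cluster_of(n, NOUN_CLUSTERS))
        for v, n in seeds
    }

def is_metaphorical(verb, noun, signatures):
    """Flag a verb-object pair whose clusters match a seed analogy."""
    sig = (cluster_of(verb, VERB_CLUSTERS), cluster_of(noun, NOUN_CLUSTERS))
    return None not in sig and sig in signatures

signatures = seed_signatures(SEEDS)
# "swallow anger" is unseen but matches the generalized seed analogy.
print(is_metaphorical("swallow", "anger", signatures))  # True
# "stir tea" is literal: "tea" falls outside the target (emotion) cluster.
print(is_metaphorical("stir", "tea", signatures))       # False
```

The key design point is that the seed pair itself is never matched directly: the system matches at the level of cluster pairs, which is what lets it recognize "swallow anger" after seeing only "stir excitement."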
This article first surveys the relevant theoretical and computational work on metaphor, then describes the design of the identification and paraphrasing modules and their independent evaluation, and concludes with the evaluation of the integrated text-to-text metaphor processing system. The evaluations were carried out with the aid of human subjects. In the case of identification, the subjects were asked to judge whether a system-annotated phrase is a metaphor. In the case of paraphrasing, they had to decide whether the system-produced paraphrase for the metaphorical expression is correct and literal in the given context. In addition, we created a metaphor paraphrasing gold standard by asking human subjects (not previously exposed to system output) to produce their own literal paraphrases for metaphorical verbs. The system paraphrasing was then also evaluated against this gold standard.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Modeling of Metaphor", "sec_num": "1.2" }, { "text": "Theorists of metaphor distinguish between two kinds of metaphorical language: novel (or poetic) metaphors (i.e., those that are imaginative), and conventionalized metaphors (i.e., those that are used as part of ordinary discourse). According to Nunberg (1987), all metaphors emerge as novel, but over time they become part of general usage and their rhetorical effect vanishes, resulting in conventionalized metaphors. Following Orwell (1946), Nunberg calls such metaphors \"dead\" and claims that they are not psychologically distinct from literally used terms. The scheme described by Nunberg demonstrates how metaphorical associations capture patterns governing polysemy, namely, the capacity of a word to have multiple meanings. Over time some of the aspects of the target domain are added to the meaning of a term in the source domain, resulting in a (metaphorical) sense extension of this term. 
Copestake and Briscoe (1995) discuss sense extension mainly on the basis of metonymic examples and model the phenomenon using lexical rules encoding metonymic patterns. They also suggest that similar mechanisms can be used to account for metaphorical processes. According to Copestake and Briscoe, the conceptual mappings encoded in the sense extension rules would define the limits to the possible shifts in meaning.", "cite_spans": [ { "start": 250, "end": 264, "text": "Nunberg (1987)", "ref_id": "BIBREF74" }, { "start": 435, "end": 448, "text": "Orwell (1946)", "ref_id": "BIBREF76" }, { "start": 906, "end": 934, "text": "Copestake and Briscoe (1995)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Metaphor and Polysemy", "sec_num": "2.1" }, { "text": "General-domain lexical resources often include information about metaphorical word senses, although unsystematically and without any accompanying semantic annotation. For example, WordNet (Fellbaum 1998) contains the comprehension sense of grasp, defined as \"get the meaning of something,\" and the reading sense of skim, defined as \"read superficially.\" Many metaphorical senses are absent from the current version of WordNet, however. A number of researchers have advocated the necessity of systematic inclusion and mark-up of metaphorical senses in such general-domain lexical resources (Alonge and Castelli 2003; L\u00f6nneker and Eilts 2004) and claim that this would be beneficial for the computational modeling of metaphor. Metaphor processing systems could then either use this knowledge or be evaluated against it. L\u00f6nneker (2004) mapped the senses from EuroWordNet to the Hamburg Metaphor Database (L\u00f6nneker 2004; Reining and L\u00f6nneker-Rodman 2007) containing examples of metaphorical expressions in German and French. 
Currently no explicit information about metaphor is integrated into WordNet for English, however.", "cite_spans": [ { "start": 190, "end": 205, "text": "(Fellbaum 1998)", "ref_id": "BIBREF28" }, { "start": 601, "end": 627, "text": "(Alonge and Castelli 2003;", "ref_id": "BIBREF2" }, { "start": 628, "end": 652, "text": "L\u00f6nneker and Eilts 2004)", "ref_id": null }, { "start": 830, "end": 845, "text": "L\u00f6nneker (2004)", "ref_id": "BIBREF58" }, { "start": 916, "end": 931, "text": "(L\u00f6nneker 2004;", "ref_id": "BIBREF58" }, { "start": 932, "end": 965, "text": "Reining and L\u00f6nneker-Rodman 2007)", "ref_id": "BIBREF85" } ], "ref_spans": [], "eq_spans": [], "section": "Metaphor and Polysemy", "sec_num": "2.1" }, { "text": "Although consistent inclusion in WordNet is in principle possible for conventional metaphorical senses, it is not viable for novel contextual sense alternations. Because metaphor is a productive phenomenon, all possible cases of contextual meaning alternations it results in cannot be described via simple sense enumeration (Pustejovsky 1995) . Computational metaphor processing therefore cannot be approached using the standard word sense disambiguation paradigm, whereby the contextual use of a word is classified according to an existing sense inventory. 
The metaphor interpretation task is inherently more complex and requires the generation of new and often uncommon meanings of the metaphorical term based on the context.", "cite_spans": [ { "start": 324, "end": 342, "text": "(Pustejovsky 1995)", "ref_id": "BIBREF84" } ], "ref_spans": [], "eq_spans": [], "section": "Metaphor and Polysemy", "sec_num": "2.1" }, { "text": "The following views on metaphor are prominent in linguistics and philosophy: the comparison view (e.g., the Structure-Mapping Theory of Gentner [1983]), the interaction view (Black 1962; Hesse 1966), the selectional restrictions violation view (Wilks 1975, 1978), and conceptual metaphor theory (CMT) (Lakoff and Johnson 1980). All of these approaches share the idea of an interconceptual mapping that underlies the production of metaphorical expressions. Gentner's Structure-Mapping Theory postulates that the ground for metaphor lies in similar properties and relations shared by the two concepts (the target and the source). Tourangeau and Sternberg (1982), however, criticize this view by noting that "everything has some feature or category that it shares with everything else, but we cannot combine just any two things in metaphor" (Tourangeau and Sternberg 1982, page 226). The interaction view focuses on the surprise and novelty that metaphor introduces. Its proponents claim that the source concept (or domain) represents a template for seeing the target concept in an entirely new way. The conceptual metaphor theory of Lakoff and Johnson (1980) takes this idea much further by stating that metaphor operates at the level of thought rather than at the level of language, and that it is based on a set of cognitive mappings between source and target domains. Thus Lakoff and Johnson put the emphasis on the structural aspect of metaphor, rather than on its decorative function in language, which had dominated the preceding theories.
The selectional restrictions violation view of Wilks (1978) concerns the manifestation of metaphor in language. Wilks suggests that metaphor represents a violation of combinatory norms in the linguistic context and that metaphorical expressions can be detected via such violations. Examples (6) and (7) provided a good illustration of CMT. Lakoff and Johnson explain them via the conceptual metaphor ARGUMENT IS WAR, which is systematically reflected in language in a variety of expressions.", "cite_spans": [ { "start": 136, "end": 150, "text": "Gentner [1983]", "ref_id": "BIBREF31" }, { "start": 175, "end": 187, "text": "(Black 1962;", "ref_id": "BIBREF10" }, { "start": 188, "end": 199, "text": "Hesse 1966)", "ref_id": "BIBREF37" }, { "start": 246, "end": 257, "text": "(Wilks 1975", "ref_id": "BIBREF99" }, { "start": 258, "end": 271, "text": "(Wilks , 1978", "ref_id": "BIBREF100" }, { "start": 311, "end": 336, "text": "(Lakoff and Johnson 1980)", "ref_id": "BIBREF52" }, { "start": 639, "end": 670, "text": "Tourangeau and Sternberg (1982)", "ref_id": "BIBREF96" }, { "start": 850, "end": 891, "text": "(Tourangeau and Sternberg 1982, page 226)", "ref_id": null }, { "start": 1144, "end": 1169, "text": "Lakoff and Johnson (1980)", "ref_id": "BIBREF52" }, { "start": 1595, "end": 1607, "text": "Wilks (1978)", "ref_id": "BIBREF100" } ], "ref_spans": [], "eq_spans": [], "section": "Theoretical Views on Metaphor", "sec_num": "2.2" }, { "text": "(13) Your claims are indefensible. (Lakoff and Johnson 1980) (14) I demolished his argument. (Lakoff and Johnson 1980) (15) I've never won an argument with him. (Lakoff and Johnson 1980) (16) You disagree? Okay, shoot! (Lakoff and Johnson 1980) According to CMT, we conceptualize and structure arguments in terms of battle, which systematically influences the way we talk about arguments within our culture.
In other words, the conceptual structure behind battle (i.e., that one can shoot, demolish, devise a strategy, win, and so on) is metaphorically transferred onto the domain of argument.", "cite_spans": [ { "start": 35, "end": 60, "text": "(Lakoff and Johnson 1980)", "ref_id": "BIBREF52" }, { "start": 93, "end": 118, "text": "(Lakoff and Johnson 1980)", "ref_id": "BIBREF52" }, { "start": 161, "end": 186, "text": "(Lakoff and Johnson 1980)", "ref_id": "BIBREF52" }, { "start": 219, "end": 244, "text": "(Lakoff and Johnson 1980)", "ref_id": "BIBREF52" } ], "ref_spans": [], "eq_spans": [], "section": "Conceptual Metaphor Theory. Examples", "sec_num": "2.2.1" }, { "text": "Manifestations of conceptual metaphor are ubiquitous in language and communication. Here are a few other examples of common metaphorical mappings: LIFE IS A JOURNEY (e.g., "He arrived at the end of his life with very little emotional baggage")", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conceptual Metaphor Theory. Examples", "sec_num": "2.2.1" }, { "text": "CMT produced a significant resonance in the fields of philosophy, linguistics, cognitive science, and artificial intelligence, including NLP. It inspired novel research (Martin 1990, 1994; Narayanan 1997, 1999; Barnden and Lee 2002; Feldman and Narayanan 2004; Mason 2004; Martin 2006; Agerri et al. 2007), but was also criticized for the lack of consistency and empirical verification (Murphy 1996; Shalizi 2003; Pinker 2007). The sole evidence with which Lakoff and Johnson (1980) supported their theory was a set of carefully selected examples. Such examples, albeit clearly illustrating the main tenets of the theory, are not representative. They cannot possibly capture the whole spectrum of metaphorical expressions, and thus do not provide evidence that the theory can adequately explain the majority of metaphors in real-world texts.
Aiming to verify the latter, Shutova and Teufel (2010) conducted a corpus-based analysis of conceptual metaphor in the data from the British National Corpus (BNC) (Burnard 2007). In their study, three independent participants annotated both linguistic metaphors and the underlying source-target domain mappings. Their results show that although the annotators reached some overall agreement on the annotation of interconceptual mappings, they experienced a number of difficulties, one of which was the problem of finding the right level of abstraction for the source and target domain categories. The difficulties in category assignment for conceptual metaphor suggest that it is hard to consistently assign explicit labels to source and target domains, even though the interconceptual associations exist in some sense and are intuitive to humans.", "cite_spans": [ { "start": 169, "end": 181, "text": "(Martin 1990", "ref_id": "BIBREF60" }, { "start": 182, "end": 196, "text": "(Martin , 1994", "ref_id": "BIBREF61" }, { "start": 197, "end": 211, "text": "Narayanan 1997", "ref_id": "BIBREF72" }, { "start": 212, "end": 228, "text": "Narayanan , 1999", "ref_id": "BIBREF73" }, { "start": 229, "end": 250, "text": "Barnden and Lee 2002;", "ref_id": "BIBREF5" }, { "start": 251, "end": 278, "text": "Feldman and Narayanan 2004;", "ref_id": "BIBREF27" }, { "start": 279, "end": 290, "text": "Mason 2004;", "ref_id": "BIBREF63" }, { "start": 291, "end": 303, "text": "Martin 2006;", "ref_id": "BIBREF62" }, { "start": 304, "end": 323, "text": "Agerri et al.
2007)", "ref_id": "BIBREF1" }, { "start": 405, "end": 418, "text": "(Murphy 1996;", "ref_id": "BIBREF71" }, { "start": 419, "end": 432, "text": "Shalizi 2003;", "ref_id": "BIBREF91" }, { "start": 433, "end": 445, "text": "Pinker 2007)", "ref_id": "BIBREF80" }, { "start": 477, "end": 502, "text": "Lakoff and Johnson (1980)", "ref_id": "BIBREF52" }, { "start": 1025, "end": 1039, "text": "(Burnard 2007)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Conceptual Metaphor Theory. Examples", "sec_num": "2.2.1" }, { "text": "Lakoff and Johnson do not discuss how metaphors can be recognized in linguistic data. To date, the most influential account of this issue is that of Wilks (1975, 1978). According to Wilks, metaphors represent a violation of selectional restrictions (or preferences) in a given context. Selectional restrictions are the semantic constraints that a predicate places onto its arguments. Consider the following example.", "cite_spans": [ { "start": 149, "end": 160, "text": "Wilks (1975", "ref_id": "BIBREF99" }, { "start": 161, "end": 175, "text": "Wilks ( , 1978", "ref_id": "BIBREF100" } ], "ref_spans": [], "eq_spans": [], "section": "Selectional Restrictions Violation View.", "sec_num": "2.2.2" }, { "text": "(17) a. My aunt always drinks her tea on the terrace. b. My car drinks gasoline. (Wilks 1978) The verb drink normally requires a grammatical subject of type ANIMATE and a grammatical object of type LIQUID, as in Example (17a).
Therefore, drink taking a car as a subject in (17b) is an anomaly, which, according to Wilks, indicates a metaphorical use of drink.", "cite_spans": [ { "start": 81, "end": 93, "text": "(Wilks 1978)", "ref_id": "BIBREF100" } ], "ref_spans": [], "eq_spans": [], "section": "Selectional Restrictions Violation View.", "sec_num": "2.2.2" }, { "text": "Although Wilks's idea inspired a number of computational experiments on metaphor recognition (Fass and Wilks 1983; Fass 1991; Krishnakumaran and Zhu 2007), it is important to note that in practice this approach has a number of limitations. Firstly, there are other kinds of non-literalness or anomaly in language that cause a violation of semantic norms, such as metonymy. Thus the method would overgenerate. Secondly, there are kinds of metaphor that do not represent a violation of selectional restrictions (i.e., the approach may also undergenerate). This would happen, for example, when highly conventionalized metaphorical word senses are more frequent than the original literal senses. Due to their frequency, the selectional preference distributions of such words in real-world data would be skewed towards the metaphorical senses (e.g., capture may select for ideas rather than captives according to the data). As a result, no selectional preference violation can be detected in the use of such verbs. Another case where the method does not apply is copula constructions, such as "All the world's a stage." And finally, the method does not take into account the fact that interpretation (of metaphor as well as of other linguistic phenomena) is always context-dependent.
For example, the phrase \"All men are animals\" uttered by a biology professor or a feminist would have entirely different interpretations, the latter clearly metaphorical, but without any violation of selectional restrictions.", "cite_spans": [ { "start": 93, "end": 114, "text": "(Fass and Wilks 1983;", "ref_id": "BIBREF25" }, { "start": 115, "end": 125, "text": "Fass 1991;", "ref_id": null }, { "start": 126, "end": 154, "text": "Krishnakumaran and Zhu 2007)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Selectional Restrictions Violation View.", "sec_num": "2.2.2" }, { "text": "2.3.1 Automatic Metaphor Recognition. One of the first attempts to automatically identify and interpret metaphorical expressions in text is the approach of Fass (1991) . It originates in the idea of Wilks (1978) and utilizes hand-coded knowledge. Fass developed a system called met*, which is capable of discriminating between literalness, metonymy, metaphor, and anomaly. It does this in three stages. First, literalness is distinguished from non-literalness using selectional preference violation as an indicator. In the case that non-literalness is detected, the respective phrase is tested for being metonymic using hand-coded patterns (such as CONTAINER-FOR-CONTENT). If the system fails to recognize metonymy, it proceeds to search the knowledge base for a relevant analogy in order to discriminate metaphorical relations from anomalous ones. For example, the sentence in Example (17b) would be represented in this framework as (car,drink,gasoline), which does not satisfy the preference (animal,drink,liquid), as car is not a hyponym of animal. met* then searches its knowledge base for a triple containing a hypernym of both the actual argument and the desired argument and finds (thing,use,energy source), which represents the metaphorical interpretation. 
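The preference-violation test and the met*-style search for an explanatory triple can be sketched in a few lines. The toy type hierarchy, preference table, and knowledge base below are invented for illustration; they are not the actual resources of either system:

```python
# Sketch of Wilks-style selectional preference violation detection, with a
# met*-style knowledge-base search to explain the anomaly (toy data).

HYPERNYMS = {  # child -> parent in a toy type hierarchy
    "car": "machine", "machine": "thing",
    "aunt": "animal", "animal": "thing",
    "gasoline": "energy_source", "energy_source": "thing",
    "tea": "liquid", "liquid": "thing",
}

def isa(word, type_):
    """True if word equals type_ or has it as a (transitive) hypernym."""
    while word is not None:
        if word == type_:
            return True
        word = HYPERNYMS.get(word)
    return False

# Selectional preferences: verb -> (subject type, object type)
PREFERENCES = {"drink": ("animal", "liquid")}

# Literal triples used to explain detected anomalies (met*-style)
KNOWLEDGE_BASE = [("thing", "use", "energy_source")]

def analyse(subj, verb, obj):
    subj_t, obj_t = PREFERENCES[verb]
    if isa(subj, subj_t) and isa(obj, obj_t):
        return "literal"
    # Preference violated: look for a triple whose argument types subsume
    # the actual arguments -- a candidate metaphorical interpretation.
    for s, v, o in KNOWLEDGE_BASE:
        if isa(subj, s) and isa(obj, o):
            return f"metaphorical, interpreted as ({s},{v},{o})"
    return "anomalous"

print(analyse("aunt", "drink", "tea"))      # -> literal
print(analyse("car", "drink", "gasoline"))  # -> metaphorical, interpreted as (thing,use,energy_source)
```

For Example (17b), the preference (animal, drink, liquid) fails and the knowledge-base search recovers the explanatory triple (thing, use, energy_source), mirroring the met* behavior described above.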
Goatly (1997) identifies a set of linguistic cues, namely, lexical patterns indicating the presence of a metaphorical expression in running text, such as metaphorically speaking, utterly, completely, so to speak, and literally. This approach, however, is likely to find only a small proportion of metaphorical expressions, as the vast majority of them appear without any signaling context. We conducted a corpus study in order to investigate the effectiveness of linguistic cues as metaphor indicators. For each cue suggested by Goatly (1997) , we randomly sampled 50 sentences from the BNC containing it and manually annotated them for metaphoricity. The results are presented in Table 1 . The average precision (i.e., the proportion of identified expressions that were metaphorical) of the linguistic cue method according to these data is 0.40, which suggests that the set of metaphors that this method generates contains a great deal of noise. Thus the cues are unlikely to be sufficient for metaphor extraction on their own, but together with some additional filters, they could contribute to a more complex system.", "cite_spans": [ { "start": 156, "end": 167, "text": "Fass (1991)", "ref_id": null }, { "start": 199, "end": 211, "text": "Wilks (1978)", "ref_id": "BIBREF100" }, { "start": 1265, "end": 1278, "text": "Goatly (1997)", "ref_id": "BIBREF33" }, { "start": 1794, "end": 1807, "text": "Goatly (1997)", "ref_id": "BIBREF33" } ], "ref_spans": [ { "start": 1946, "end": 1953, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Computational Approaches to Metaphor", "sec_num": "2.3" }, { "text": "The work of Peters and Peters (2000) concentrates on detecting figurative language in lexical resources. They mine WordNet (Fellbaum 1998 ) for examples of systematic polysemy, which allows them to capture metonymic and metaphorical relations. 
Their system searches for nodes that are relatively high in the WordNet hierarchy (i.e., are relatively general) and that share a set of common word forms among their descendants. Peters and Peters found that such nodes often happen to be in a metonymic (e.g., publisher – publication) or a metaphorical (e.g., theory – supporting structure) relation. The CorMet system (Mason 2004) is the first attempt at discovering source-target domain mappings automatically. It does this by finding systematic variations in domain-specific selectional preferences, which are inferred from texts on the Web. For example, Mason collects texts from the LAB domain and the FINANCE domain, in both of which pour would be a characteristic verb. In the LAB domain pour has a strong selectional preference for objects of type liquid, whereas in the FINANCE domain it selects for money. From this Mason's system infers the domain mapping FINANCE – LAB and the concept mapping MONEY IS LIQUID. He compares the output of his system against the Master Metaphor List (MML; Lakoff, Espenson, and Schwartz 1991) and reports a performance of 77% in terms of accuracy (i.e., the proportion of correctly induced mappings). Birke and Sarkar (2006) present a sentence clustering approach for non-literal language recognition, implemented in the TroFi system (Trope Finder). The idea behind their system originates from a similarity-based word sense disambiguation method developed by Karov and Edelman (1998). The latter uses a set of seed sentences annotated with respect to word sense. The system computes similarity between the sentence containing the word to be disambiguated and all of the seed sentences and selects the sense corresponding to the annotation in the most similar seed sentences. Birke and Sarkar adapt this algorithm to perform a two-way classification (literal vs. non-literal), not aiming to distinguish between specific kinds of tropes. An example for the verb pour in their database is shown in Figure 2.
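The seed-based classification idea that TroFi adapts from Karov and Edelman can be sketched as follows. This is a minimal sketch under simplifying assumptions: it uses plain cosine overlap between bags of words and an invented seed set for pour, whereas the original algorithm iteratively computes word-sentence similarities:

```python
# Sketch of seed-based literal/nonliteral classification: a target
# sentence receives the label of its most similar seed sentence.

from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two sentences as bags of words."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = sqrt(sum(v * v for v in ca.values())) * sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

SEEDS = [  # (sentence, label) -- invented seed set for the verb "pour"
    ("she poured the tea into a cup", "literal"),
    ("he poured wine from the bottle", "literal"),
    ("they poured money into the project", "nonliteral"),
    ("complaints poured in from angry customers", "nonliteral"),
]

def classify(sentence):
    """Assign the label of the most similar seed sentence."""
    return max(SEEDS, key=lambda s: cosine(sentence, s[0]))[1]

print(classify("investors poured record sums into annuities"))   # -> nonliteral
print(classify("cognac was poured from a freshly opened bottle"))  # -> literal
```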
They attain a performance of 0.54 in terms of F-measure (van Rijsbergen 1979).", "cite_spans": [ { "start": 12, "end": 36, "text": "Peters and Peters (2000)", "ref_id": "BIBREF79" }, { "start": 123, "end": 137, "text": "(Fellbaum 1998", "ref_id": "BIBREF28" }, { "start": 612, "end": 623, "text": "(Mason 2004", "ref_id": "BIBREF63" }, { "start": 1290, "end": 1326, "text": "Lakoff, Espenson, and Schwartz 1991)", "ref_id": "BIBREF51" }, { "start": 1431, "end": 1454, "text": "Birke and Sarkar (2006)", "ref_id": "BIBREF9" }, { "start": 1690, "end": 1714, "text": "Karov and Edelman (1998)", "ref_id": null } ], "ref_spans": [ { "start": 2227, "end": 2235, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Computational Approaches to Metaphor", "sec_num": "2.3" }, { "text": "The method of Gedigian et al. (2006) discriminates between literal and metaphorical use. The authors trained a maximum entropy classifier for this purpose. They collected their data using FrameNet (Fillmore, Johnson, and Petruck 2003) and PropBank (Kingsbury and Palmer 2002) annotations. FrameNet is a lexical resource for English containing information on words' semantic and syntactic combinatory possibilities, or valencies, in each of their senses. PropBank is a corpus annotated with verbal propositions and their arguments. Gedigian et al. (2006) extracted the lexical items whose frames are related to MOTION and CURE from FrameNet, then searched the PropBank Wall Street Journal corpus (Kingsbury and Palmer 2002) for sentences containing such lexical items and annotated them with respect to metaphoricity. For example, the verb run in the sentence \"Texas Air has run into difficulty\" was annotated as metaphorical, and in \"I was doing the laundry and nearly broke my neck running upstairs to see\" as literal. Gedigian et al. 
used PropBank annotation (arguments and their semantic types) as features to train the classifier, and report an accuracy of 95.12%. This result is, however, only 2.22 percentage points higher than the performance of the naive baseline assigning the majority class to all instances (92.90%). Such high performance of their system can be explained by the fact that 92.90% of the verbs of MOTION and CURE in their data are used metaphorically, thus making the data set unbalanced with respect to target categories and making the task easier.", "cite_spans": [ { "start": 14, "end": 36, "text": "Gedigian et al. (2006)", "ref_id": "BIBREF30" }, { "start": 197, "end": 234, "text": "(Fillmore, Johnson, and Petruck 2003)", "ref_id": "BIBREF29" }, { "start": 248, "end": 275, "text": "(Kingsbury and Palmer 2002)", "ref_id": "BIBREF42" }, { "start": 531, "end": 553, "text": "Gedigian et al. (2006)", "ref_id": "BIBREF30" }, { "start": 695, "end": 722, "text": "(Kingsbury and Palmer 2002)", "ref_id": "BIBREF42" }, { "start": 1020, "end": 1040, "text": "Gedigian et al. used", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Computational Approaches to Metaphor", "sec_num": "2.3" }, { "text": "An example of the data of Birke and Sarkar (2006): *nonliteral cluster* wsj04:7878 N As manufacturers get bigger, they are likely to pour more money into the battle for shelf space, raising the ante for new players. wsj25:3283 N Salsa and rap music pour out of the windows. wsj06:300 U Investors hungering for safety and high yields are pouring record sums into single-premium, interest-earning annuities. *literal cluster* wsj59:3286 L Custom demands that cognac be poured from a freshly opened bottle.", "cite_spans": [ { "start": 26, "end": 49, "text": "Birke and Sarkar (2006)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "Both Birke and Sarkar (2006) and Gedigian et al.
(2006) focus only on metaphors expressed by a verb. The approach of Krishnakumaran and Zhu (2007) additionally covers metaphors expressed by nouns and adjectives. Krishnakumaran and Zhu use the hyponymy relation in WordNet and word bigram counts to predict metaphors at the sentence level. Given a metaphor in a copula construction, or an IS-A metaphor (e.g., the famous quote by William Shakespeare, "All the world's a stage"), they verify whether the two nouns involved are in a hyponymy relation in WordNet; if they are not, the sentence is tagged as containing a metaphor. They also treat expressions containing a verb or an adjective used metaphorically (e.g., "He planted good ideas in their minds" or "He has a fertile imagination"). For those cases, they calculate bigram probabilities of verb-noun and adjective-noun pairs (including the hyponyms/hypernyms of the noun in question). If the combination is not observed in the data with sufficient frequency, the system tags the sentence as metaphorical. This idea is a modification of the selectional preference view of Wilks, although applied at the bigram level. Alternatively, one could extract verb-object relations from parsed text. Compared to the latter, Krishnakumaran and Zhu (2007) lose a great deal of information. The authors evaluated their system on a set of example sentences compiled from the Master Metaphor List, whereby highly conventionalized metaphors are taken to be negative examples. Thus they do not deal with literal examples as such. Essentially, the distinction Krishnakumaran and Zhu are making is between the senses included in WordNet, even if they are conventional metaphors (e.g., "capture an idea"), and those not included in WordNet (e.g., "planted good ideas").", "cite_spans": [ { "start": 5, "end": 28, "text": "Birke and Sarkar (2006)", "ref_id": "BIBREF9" }, { "start": 33, "end": 55, "text": "Gedigian et al.
(2006)", "ref_id": "BIBREF30" }, { "start": 117, "end": 146, "text": "Krishnakumaran and Zhu (2007)", "ref_id": "BIBREF48" }, { "start": 1247, "end": 1276, "text": "Krishnakumaran and Zhu (2007)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "One of the first computational accounts of metaphor interpretation is that of Martin (1990) . In his metaphor interpretation, denotation and acquisition system (MIDAS), Martin models the hierarchical organization of conventional metaphors. The main assumption underlying this approach is that more specific conventional metaphors (e.g., COMPUTATIONAL PROCESS viewed as a LIVING BEING in \"How can I kill a process?\") descend from more general ones (e.g., PROCESS [general, as a sequence of events] is a LIVING BEING). Given an example of a metaphorical expression, MIDAS searches its database for a corresponding conceptual metaphor that would explain the anomaly. If it does not find any, it abstracts from the example to more general concepts and repeats the search. If a suitable general metaphor is found, it creates a new mapping for its descendant, a more specific metaphor, based on this example. This is also how novel conceptual metaphors are acquired by the system. The metaphors are then organized into a resource called MetaBank (Martin 1994) . The knowledge is represented in MetaBank in the form of metaphor maps (Martin 1988) containing detailed information about source-target concept mappings and empirically derived examples. MIDAS has been integrated with Unix Consultant, a system that answers users' questions about Unix. The system first tries to find a literal answer to the question. 
If it is not able to, it calls MIDAS, which detects metaphorical expressions via selectional preference violation and searches its database for a metaphor explaining the anomaly in the question.", "cite_spans": [ { "start": 78, "end": 91, "text": "Martin (1990)", "ref_id": "BIBREF60" }, { "start": 1040, "end": 1053, "text": "(Martin 1994)", "ref_id": "BIBREF61" }, { "start": 1126, "end": 1139, "text": "(Martin 1988)", "ref_id": "BIBREF59" } ], "ref_spans": [], "eq_spans": [], "section": "Automatic Metaphor Interpretation.", "sec_num": "2.3.2" }, { "text": "Another cohort of approaches aims to perform inference about entities and events in the source and target domains for the purpose of metaphor interpretation. These include the KARMA system (Narayanan 1997, 1999; Feldman and Narayanan 2004) and the ATT-Meta project (Barnden and Lee 2002; Agerri et al. 2007). Within both systems the authors developed a metaphor-based reasoning framework in accordance with CMT. The reasoning process relies on manually coded knowledge about the world and operates mainly in the source domain. The results are then projected onto the target domain using the conceptual mapping representation. The ATT-Meta project concerns metaphorical and metonymic descriptions of mental states; reasoning about mental states is performed using first-order logic. Their system, however, does not take natural language sentences as input, but hand-coded logical expressions that are representations of small discourse fragments. KARMA, in turn, deals with a broad range of abstract actions and events and takes parsed text as input. Veale and Hao (2008) derive a "fluid knowledge representation for metaphor interpretation and generation" called Talking Points. Talking Points is a set of characteristics of concepts belonging to source and target domains and related facts about the world, which are acquired automatically from WordNet and from the Web.
Talking Points are then organized in Slipnet, a framework that allows for a number of insertions, deletions, and substitutions in definitions of such characteristics in order to establish a connection between the target and the source concepts. This work builds on the idea of slippage in knowledge representation for understanding analogies in abstract domains (Hofstadter and Mitchell 1994; Hofstadter 1995 ). The following is an example demonstrating how slippage operates to explain the metaphor Make-up is a Western burqa.", "cite_spans": [ { "start": 189, "end": 204, "text": "(Narayanan 1997", "ref_id": "BIBREF72" }, { "start": 205, "end": 222, "text": "(Narayanan , 1999", "ref_id": "BIBREF73" }, { "start": 223, "end": 250, "text": "Feldman and Narayanan 2004)", "ref_id": "BIBREF27" }, { "start": 276, "end": 298, "text": "(Barnden and Lee 2002;", "ref_id": "BIBREF5" }, { "start": 299, "end": 318, "text": "Agerri et al. 2007)", "ref_id": "BIBREF1" }, { "start": 1063, "end": 1083, "text": "Veale and Hao (2008)", "ref_id": "BIBREF98" }, { "start": 1746, "end": 1776, "text": "(Hofstadter and Mitchell 1994;", "ref_id": "BIBREF40" }, { "start": 1777, "end": 1792, "text": "Hofstadter 1995", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Automatic Metaphor Interpretation.", "sec_num": "2.3.2" }, { "text": "\u2261 typically worn by women \u2248 expected to be worn by women \u2248 must be worn by women \u2248 must be worn by Muslim women Burqa <= By doing insertions and substitutions, the system arrives from the definition \"typically worn by women\" to that of \"must be worn by Muslim women.\" Thus it establishes a link between the concepts of make-up and burqa. 
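The slippage illustrated above can be sketched as a search over small definitional edits. The operation list below is invented for this one example, and the original Slipnet framework is considerably richer:

```python
# Toy sketch of "slippage" over definitional features: apply small
# substitutions to a feature of the source concept until a feature of
# the target concept is reached (operations invented for illustration).

from collections import deque

OPERATIONS = [  # (pattern, replacement) -- one slippage step each
    ("typically worn", "expected to be worn"),
    ("expected to be worn", "must be worn"),
    ("by women", "by Muslim women"),
]

def slippage_chain(source_feature, target_feature, max_steps=5):
    """Breadth-first search over slippage operations; returns the chain
    of intermediate definitions, or None if the target is unreachable."""
    queue = deque([(source_feature, [source_feature])])
    seen = {source_feature}
    while queue:
        feature, chain = queue.popleft()
        if feature == target_feature:
            return chain
        if len(chain) > max_steps:
            continue
        for pattern, replacement in OPERATIONS:
            new = feature.replace(pattern, replacement)
            if new != feature and new not in seen:
                seen.add(new)
                queue.append((new, chain + [new]))
    return None

for step in slippage_chain("typically worn by women",
                           "must be worn by Muslim women"):
    print(step)
```

On this toy operation set, the search reproduces the chain shown above, from "typically worn by women" to "must be worn by Muslim women" in three steps.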
Veale and Hao, however, did not evaluate to what extent their system is able to interpret metaphorical expressions in real-world text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Make-up =>", "sec_num": null }, { "text": "The next sections of the paper are devoted to our own experiments on metaphor identification and interpretation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Make-up =>", "sec_num": null }, { "text": "The first task for metaphor processing within NLP is its identification in text. As discussed earlier, previous approaches to this problem either utilize hand-coded knowledge (Fass 1991; Krishnakumaran and Zhu 2007) or reduce the task to searching for metaphors of a specific domain defined a priori (e.g., MOTION metaphors) in a specific type of discourse (e.g., the Wall Street Journal [Gedigian et al. 2006] ). In contrast, the search space in our experiments is the entire BNC and the domain of the expressions identified is unrestricted. In addition, the developed technique does not rely on any hand-crafted lexical or world knowledge, but rather captures metaphoricity by means of verb and noun clustering in a data-driven manner.", "cite_spans": [ { "start": 175, "end": 186, "text": "(Fass 1991;", "ref_id": null }, { "start": 187, "end": 215, "text": "Krishnakumaran and Zhu 2007)", "ref_id": "BIBREF48" }, { "start": 388, "end": 410, "text": "[Gedigian et al. 2006]", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Metaphor Identification Method and Experiments", "sec_num": "3." }, { "text": "The motivation behind the use of clustering methods for the metaphor identification task lies in CMT. The patterns of conceptual metaphor (e.g., FEELINGS ARE LIQUIDS) always operate on semantic classes, that is, groups of related concepts, defined by Lakoff and Johnson as conceptual domains (FEELINGS include love, anger, hatred, etc.; LIQUIDS include water, tea, petrol, beer, etc.) . 
Thus modeling metaphorical mechanisms in accordance with CMT would involve capturing such semantic classes automatically. Previous research on corpus-based lexical semantics has shown that it is possible to automatically induce semantic word classes from corpus data via clustering of contextual cues (Pereira, Tishby, and Lee 1993; Lin 1998; Schulte im Walde 2006) . The current consensus is that the lexical items showing similar behavior in a large body of text most likely have related meanings.", "cite_spans": [ { "start": 292, "end": 384, "text": "(FEELINGS include love, anger, hatred, etc.; LIQUIDS include water, tea, petrol, beer, etc.)", "ref_id": null }, { "start": 688, "end": 719, "text": "(Pereira, Tishby, and Lee 1993;", "ref_id": "BIBREF78" }, { "start": 720, "end": 729, "text": "Lin 1998;", "ref_id": "BIBREF56" }, { "start": 730, "end": 752, "text": "Schulte im Walde 2006)", "ref_id": "BIBREF89" } ], "ref_spans": [], "eq_spans": [], "section": "Metaphor Identification Method and Experiments", "sec_num": "3." }, { "text": "The second reason for the use of unsupervised and weakly supervised methods is suggested by the results of corpus-based studies of conceptual metaphor. The analysis of conceptual mappings in unrestricted text, conducted by Shutova and Teufel (2010) , although confirming some aspects of CMT, uncovered a number of fundamental difficulties. One of these is the choice of the level of abstraction and granularity of categories (i.e., labels for source and target domains). This suggests that it is hard to define a comprehensive inventory of labels for source and target domains. Thus a computational model of metaphorical associations should not rely on explicit domain labels. 
Unsupervised methods allow us to recover patterns in data without assigning any explicit labels to concepts, and thus to model interconceptual mappings implicitly.", "cite_spans": [ { "start": 223, "end": 248, "text": "Shutova and Teufel (2010)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Metaphor Identification Method and Experiments", "sec_num": "3." }, { "text": "The method behind our metaphor identification system relies on distributional clustering. Noun clustering, specifically, is central to the approach. It is traditionally assumed that noun clusters produced using distributional clustering contain concepts that are similar to each other. This is true only in part, however. There exist two types of concepts: concrete concepts, which denote physical entities or physical experiences (e.g., chair, apple, house, rain), and abstract concepts, which do not physically exist at any particular time or place, but rather exist as a type of thing or as an idea (e.g., justice, love, democracy). It is abstract concepts, rather than concrete ones, that tend to be described metaphorically. Humans use metaphor when attempting to gain a better understanding of an abstract concept by comparing it to their physical experiences. As a result, abstract concepts exhibit different distributional behavior in a corpus. This in turn affects the application of clustering techniques: the clusters obtained for concrete and abstract concepts are structured differently. Consider the example in Figure 3 . The figure shows a cluster containing concrete concepts (on the right) that are various kinds of mechanisms; a cluster containing verbs co-occurring with mechanisms in the corpus (at the bottom); and a cluster containing abstract concepts (on the left) that tend to co-occur with these verbs. Such abstract concepts, despite having quite distinct meanings (e.g., marriage and democracy), are observed in similar lexico-syntactic environments. 
This is because they are systematically used metaphorically with verbs from the domain of MECHANISM. Hence, they are automatically assigned to the same cluster. The following examples illustrate this phenomenon in textual data.", "cite_spans": [], "ref_spans": [ { "start": 1140, "end": 1148, "text": "Figure 3", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Metaphor Identification Method and Experiments", "sec_num": "3." }, { "text": "(18) Our relationship is not really working. Such a structure of the abstract clusters can be explained by the fact that relationships, marriages, collaborations, and political systems are all cognitively mapped to the same source domain of MECHANISM. In contrast to concrete concepts, such as tea, water, coffee, beer, drink, liquid, that are clustered together when they have similar meanings, abstract concepts tend to be clustered together if they are associated with the same source domain. We define this phenomenon as clustering by association, and it is central to the system design. The expectation is that clustering by association would allow the harvesting of new target domains that are associated with the same source domain, and thus the identification of new metaphors. The metaphor identification system starts from a small set of seed metaphorical expressions, that is, annotated metaphors (such as those in Examples (18) or (19)), which serve as training data. Note that seed annotation only concerns linguistic metaphors; metaphorical mappings are not annotated. 
The system then (1) creates source domains describing these examples by means of verb clustering (such as the verb cluster in Figure 3 ); (2) identifies new target domains associated with the same source domain by means of noun clustering (see, e.g., ABSTRACT cluster in Figure 3 ), and (3) establishes a link between the source and the target clusters based on the seed examples.", "cite_spans": [], "ref_spans": [ { "start": 1200, "end": 1208, "text": "Figure 3", "ref_id": "FIGREF4" }, { "start": 1345, "end": 1353, "text": "Figure 3", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Metaphor Identification Method and Experiments", "sec_num": "3." }, { "text": "Thus the system captures metaphorical associations implicitly. It generalizes over the associated domains by means of verb and noun clustering. The obtained clusters then represent source and target concepts between which metaphorical associations hold. The knowledge of such associations is then used to identify new metaphorical expressions in a large corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metaphor Identification Method and Experiments", "sec_num": "3." }, { "text": "In addition to this, we build a selectional preference-based metaphor filter. This idea stems from the view of Wilks (1978) , but is, however, a modification of it. The filter assumes that the verbs exhibiting weak selectional preferences, namely, verbs cooccurring with any argument class in linguistic data (remember, influence, etc.) generally have no or only weak potential for being a metaphor. It has been previously shown that it is possible to quantify verb selectional preferences on the basis of corpus data, using, for example, a measure defined by Resnik (1993) . 
Once the candidate metaphors are identified in the corpus using clustering methods, those displaying weak selectional preferences can be filtered out.", "cite_spans": [ { "start": 111, "end": 123, "text": "Wilks (1978)", "ref_id": "BIBREF100" }, { "start": 560, "end": 573, "text": "Resnik (1993)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Metaphor Identification Method and Experiments", "sec_num": "3." }, { "text": "Figures 4 and 5 depict the metaphor identification pipeline: first, the identification of metaphorical associations and then that of metaphorical expressions in text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metaphor Identification Method and Experiments", "sec_num": "3." }, { "text": "Learning metaphorical associations by means of verb and noun clustering and using the seed set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "In summary, the system (1) starts from a seed set of metaphorical expressions exemplifying a range of source-target domain mappings; (2) performs noun clustering in order to harvest various target concepts associated with the same source domain; (3) creates a source domain verb lexicon by means of verb clustering; (4) searches the corpus for metaphorical expressions describing the target domain concepts using the verbs from the source domain lexicon; and (5) filters out the candidates exhibiting weak selectional preference strength as non-metaphorical.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metaphor Identification Method and Experiments", "sec_num": "3." }, { "text": "The identification system takes a list of seed phrases as input. Seed phrases contain manually annotated linguistic metaphors. The system generalizes from these linguistic metaphors to the respective conceptual metaphors by means of clustering. This generalization is then used to harvest a large number of new metaphorical expressions in unseen text. 
Thus the data needed for the identification experiment consist of a seed set, data sets of verbs and nouns that are subsequently clustered, and an evaluation corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Data", "sec_num": "3.1" }, { "text": "3.1.1 Metaphor Corpus and Seed Phrases. The data to test the identification module were extracted from the metaphor corpus created by Shutova and Teufel (2010) . Their corpus is a subset of the BNC (Burnard 2007) and, as such, it provides a suitable platform for testing the metaphor processing system on real-world general-domain expressions in contemporary English. Our data set consists of verb-subject and verb-direct object metaphorical expressions. In order to avoid extra noise, we enforced some additional selection criteria. All phrases were included unless they fell in one of the following categories:", "cite_spans": [ { "start": 134, "end": 159, "text": "Shutova and Teufel (2010)", "ref_id": null }, { "start": 198, "end": 212, "text": "(Burnard 2007)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Data", "sec_num": "3.1" }, { "text": "• Phrases where the subject or object referent is unknown (e.g., containing pronouns such as in \"in which they [changes] operated\") or represented by a named entity (e.g., \"Then Hillary leapt into the conversation\"). These cases were excluded from the data set because their processing would involve the use of additional modules for coreference resolution and named entity recognition, which in turn may introduce additional errors into the system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Data", "sec_num": "3.1" }, { "text": "• Phrases whose metaphorical meaning is realized solely in passive constructions (e.g., \"sociologists have been inclined to [..]\"). 
These cases were excluded because for many such examples it was hard for humans to produce a literal paraphrase realized in the form of the same syntactic construction. Thus their paraphrasing was deemed to be an unfairly hard task for the system. • Multiword metaphors (e.g., \"whether we go on pilgrimage with Raleigh or put out to sea with Tennyson\"). The current system is designed to identify and paraphrase single-word, lexical metaphors. In the future the system needs to be modified to process multiword metaphorical expressions; this is, however, outside the scope of the current experiments.
BNC represents a suitable source for such nouns because the corpus is balanced with respect to genre, style, and theme.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Verb and Noun Data Sets.", "sec_num": "3.1.2" }, { "text": "The verb data set is a subset of VerbNet (Kipper et al. 2006) . VerbNet is the largest resource for general-domain verbs organized into semantic classes as proposed by Levin (1993) . The data set includes all the verbs in VerbNet with the exception of highly infrequent ones. The frequency of the verbs was estimated from the data collected by Korhonen, Krymolowski, and Briscoe (2006) for the construction of the VALEX lexicon, which to date is one of the largest automatically created verb resources. The verbs from VerbNet that appear less than 150 times in this data were excluded. The resulting data set consists of 1,610 general-domain verbs.", "cite_spans": [ { "start": 41, "end": 61, "text": "(Kipper et al. 2006)", "ref_id": "BIBREF43" }, { "start": 168, "end": 180, "text": "Levin (1993)", "ref_id": "BIBREF55" }, { "start": 344, "end": 385, "text": "Korhonen, Krymolowski, and Briscoe (2006)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Verb and Noun Data Sets.", "sec_num": "3.1.2" }, { "text": "The evaluation data for metaphor identification was the BNC parsed by the RASP parser (Briscoe, Carroll, and Watson 2006) . We used the grammatical relation (GR) output of RASP for the BNC created by Andersen et al. (2008) . The system searched the corpus for the source and target domain vocabulary within a particular grammatical relation (verb-direct object or verb-subject).", "cite_spans": [ { "start": 86, "end": 121, "text": "(Briscoe, Carroll, and Watson 2006)", "ref_id": "BIBREF11" }, { "start": 200, "end": 222, "text": "Andersen et al. 
(2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Corpus.", "sec_num": "3.1.3" }, { "text": "The main components of the method include (1) distributional clustering of verbs and nouns, (2) search through the parsed corpus, and (3) selectional preference-based filtering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3.2" }, { "text": "Method. The metaphor identification system relies on the clustering method of Sun and Korhonen. They use a rich set of syntactic and semantic features (GRs, verb subcategorization frames [SCFs] , and selectional preferences) and spectral clustering, a method particularly suitable for the resulting high-dimensional feature space. This algorithm has proved to be effective in previous verb clustering experiments (Brew and Schulte im Walde 2002) and in other NLP tasks involving high-dimensional data (Chen et al. 2006) .", "cite_spans": [ { "start": 179, "end": 185, "text": "[SCFs]", "ref_id": null }, { "start": 493, "end": 511, "text": "(Chen et al. 2006)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Verb and Noun Clustering", "sec_num": "3.2.1" }, { "text": "Spectral clustering partitions objects based on their similarity matrix. Given a set of data points, the similarity matrix records similarities between all pairs of points. The system of Sun and Korhonen constructs similarity matrices using the Jensen-Shannon divergence as a measure. 
Jensen-Shannon divergence between two feature vectors w_i and w_j is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Verb and Noun Clustering", "sec_num": "3.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "JSD(w_i, w_j) = \frac{1}{2} D(w_i \| m) + \frac{1}{2} D(w_j \| m)", "eq_num": "(1)" } ], "section": "Verb and Noun Clustering", "sec_num": "3.2.1" }, { "text": "where D is the Kullback-Leibler divergence, and m is the average of w_i and w_j. Spectral clustering can be viewed in abstract terms as the partitioning of a graph G over a set of words W. The weights on the edges of G are the similarities S_{ij}. The similarity matrix S thus represents the adjacency matrix for G. The clustering problem is then defined as identifying the optimal partition, or cut, of the graph into clusters, such that the intra-cluster weights are high and the inter-cluster weights are low. The system of Sun and Korhonen uses the MNCut algorithm of Meila and Shi (2001) for this purpose.", "cite_spans": [ { "start": 567, "end": 587, "text": "Meila and Shi (2001)", "ref_id": "BIBREF69" } ], "ref_spans": [], "eq_spans": [], "section": "Verb and Noun Clustering", "sec_num": "3.2.1" }, { "text": "Sun and Korhonen evaluated their clustering approach on 204 verbs from 17 Levin classes and obtained an F-measure of 80.4, which is the state-of-the-art performance level. The metaphor identification system uses the method of Sun and Korhonen to cluster both verbs and nouns (separately), while significantly extending its coverage to unrestricted general-domain data and applying the method to a considerably larger data set of 1,610 verbs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Verb and Noun Clustering", "sec_num": "3.2.1" }, { "text": "Experiments. For verb clustering, the best-performing features from Sun and Korhonen were adopted. 
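The Jensen-Shannon similarity computation underlying the clustering (Equation (1)) can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the feature vectors are toy counts, base-2 logarithms are assumed, and the conversion of divergence to similarity as 1 - JSD is our assumption (base-2 JSD lies in [0, 1]).

```python
from math import log2

def kl_divergence(p, q):
    # D(p || q) in bits; terms with p_i = 0 contribute nothing.
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(wi, wj):
    # JSD(wi, wj) = 1/2 D(wi || m) + 1/2 D(wj || m), with m the average of wi, wj.
    ti, tj = float(sum(wi)), float(sum(wj))
    wi = [x / ti for x in wi]          # normalize raw co-occurrence counts
    wj = [x / tj for x in wj]
    m = [(a + b) / 2 for a, b in zip(wi, wj)]
    return 0.5 * kl_divergence(wi, m) + 0.5 * kl_divergence(wj, m)

# Pairwise similarity matrix over toy feature vectors; this matrix is the
# adjacency matrix that spectral clustering partitions.
vectors = [[8, 2, 0], [6, 3, 1], [0, 1, 9]]
S = [[1.0 - jsd(a, b) for b in vectors] for a in vectors]
```

The matrix is symmetric with a unit diagonal, as required for the graph-cut view of spectral clustering described above.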
These include automatically acquired verb SCFs parameterized by their selectional preferences. These features were obtained using the SCF acquisition system of Preiss, Briscoe, and Korhonen (2007) . The system tags and parses corpus data using the RASP parser (Briscoe, Carroll, and Watson 2006) and extracts SCFs from the produced grammatical relations using a rule-based classifier which identifies 168 SCF types for English verbs. It produces a lexical entry for each verb and SCF combination occurring in corpus data. The selectional preference classes were obtained by clustering nominal arguments appearing in the subject and object slots of verbs in the resulting lexicon.", "cite_spans": [ { "start": 250, "end": 286, "text": "Preiss, Briscoe, and Korhonen (2007)", "ref_id": "BIBREF82" }, { "start": 350, "end": 385, "text": "(Briscoe, Carroll, and Watson 2006)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Extraction and Clustering", "sec_num": "3.2.2" }, { "text": "Following previous works on semantic noun classification (Pantel and Lin 2002; Bergsma, Lin, and Goebel 2008) , grammatical relations were used as features for noun clustering. More specifically, the frequencies of nouns and verb lemmas appearing in the subject, direct object, and indirect object relations in the RASP-parsed BNC were included in the feature vectors. 
For example, the feature vector for banana would contain the following entries:", "cite_spans": [ { "start": 57, "end": 78, "text": "(Pantel and Lin 2002;", "ref_id": "BIBREF77" }, { "start": 79, "end": 109, "text": "Bergsma, Lin, and Goebel 2008)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Extraction and Clustering", "sec_num": "3.2.2" }, { "text": "{eat-dobj: n_1, fry-dobj: n_2, sell-dobj: n_3, ..., eat with-iobj: n_i, look at-iobj: n_{i+1}, ..., rot-subj: n_k, grow-subj: n_{k+1}, ...}, where the n values are co-occurrence frequencies in the parsed corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Extraction and Clustering", "sec_num": "3.2.2" }, { "text": "We experimented with different clustering granularities, subjectively examined the obtained clusters, and determined that the number of clusters set to 200 is the most suitable setting for both nouns and verbs in our task. This was done by means of qualitative analysis of the clusters as representations of source and target domains, that is, by judging how complete and homogeneous the verb clusters were as lists of potential source domain vocabulary and how many new target domains associated with the same source domain were found correctly in the noun clusters. This analysis was performed on a randomly selected set of 10 clusters taken from different granularity settings and none of the seed expressions were used for it. Examples of such clusters are shown in Figures 6 (nouns) and 7 (verbs), respectively. The noun clusters represent target concepts associated with the same source concept (some suggested source concepts are given in Figure 6 , although the system only captures those implicitly). 
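The construction of such GR-based noun feature vectors can be sketched as follows; the GR triples here are hypothetical stand-ins for the RASP output, and the function name is ours.

```python
from collections import Counter, defaultdict

# Hypothetical parsed GR triples (verb_lemma, relation, noun_lemma),
# standing in for the RASP-parsed BNC.
grs = [
    ("eat", "dobj", "banana"), ("eat", "dobj", "banana"),
    ("fry", "dobj", "banana"), ("sell", "dobj", "banana"),
    ("rot", "subj", "banana"), ("grow", "subj", "banana"),
    ("stir", "dobj", "excitement"), ("spark", "dobj", "excitement"),
]

def noun_feature_vectors(triples):
    """Map each noun to a sparse vector of verb+relation co-occurrence counts."""
    vectors = defaultdict(Counter)
    for verb, rel, noun in triples:
        vectors[noun][f"{verb}-{rel}"] += 1
    return vectors

vecs = noun_feature_vectors(grs)
# vecs["banana"] holds entries such as {"eat-dobj": 2, "fry-dobj": 1, ...}
```

The resulting sparse count vectors are the noun-clustering input; in the experiments they would cover the 2,000 most frequent BNC nouns rather than this toy sample.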
The verb clusters contain lists of source domain vocabulary.", "cite_spans": [], "ref_spans": [ { "start": 945, "end": 953, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Feature Extraction and Clustering", "sec_num": "3.2.2" }, { "text": "Once the clusters have been obtained, the system proceeds to search the corpus for source and target domain terms within verb-object (both direct and indirect) and verb-subject relations. For each seed expression, a cluster is retrieved for the verb to form the source concept, and a cluster is retrieved for the noun to form a list of target concepts. The retrieved verb and noun clusters are then linked, and such links represent metaphorical associations. The system then classifies grammatical relations in the corpus as metaphorical if the lexical items in the grammatical relation appear in the linked source (verb) and target (noun) clusters. This search is performed on the BNC parsed by RASP. Consider the following example sentence extracted from the BNC (the BNC text ID is given in brackets, followed by the hypothetical conceptual metaphor):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Search.", "sec_num": "3.2.3" }, { "text": "(21) Few would deny that in the nineteenth century change was greatly accelerated. (ACA) - CHANGE IS MOTION", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Search.", "sec_num": "3.2.3" }, { "text": "Source: MECHANISM. Target Cluster: consensus relation tradition partnership resistance foundation alliance friendship contact reserve unity link peace bond myth identity hierarchy relationship connection balance marriage democracy defense faith empire distinction coalition regime division
Source: PHYSICAL OBJECT; LIVING BEING; STRUCTURE. Target Cluster: view conception theory concept ideal belief doctrine logic hypothesis interpretation proposition thesis assumption idea argument ideology conclusion principle notion philosophy
Source: STORY; JOURNEY. Target Cluster: politics practice trading reading occupation profession sport pursuit affair career thinking life
Source: LIQUID. Target Cluster: disappointment rage concern desire hostility excitement anxiety passion doubt panic delight anger fear curiosity shock terror surprise pride happiness pain enthusiasm alarm hope memory love satisfaction sympathy spirit frustration impulse instinct warmth beauty ambition thought guilt emotion sensation horror feeling laughter suspicion pleasure
Source: LIVING BEING; END. Target Cluster: defeat fall death tragedy loss collapse decline disaster destruction fate", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 6", "sec_num": null }, { "text": "Clustered nouns (the associated source domain labels are suggested by the authors for clarity; the system does not assign any labels, but models source and target domains implicitly). The relevant GRs identified by the parser are presented in Figure 8 . The relation between the verb accelerate and its semantic object change in Example (21) is expressed in the passive voice and is, therefore, tagged by RASP as an ncsubj GR. Because this GR contains terminology from associated source (MOTION) and target (CHANGE) domains, it is marked as metaphorical and so is the term accelerate, which belongs to the source domain of MOTION.", "cite_spans": [], "ref_spans": [ { "start": 243, "end": 251, "text": "Figure 8", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Figure 6", "sec_num": null }, { "text": "Filter. In the previous step a set of candidate verb metaphors and the associated grammatical relations were extracted from the BNC. These now need to be filtered based on selectional preference strength. To do this, we automatically acquire selectional preference distributions for verb-subject and verb-direct object relations from the RASP-parsed BNC. 
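The cluster-linking search of Section 3.2.3 can be sketched as follows. The clusters and seeds are toy stand-ins for the induced 200-cluster partitions, and the function names are ours; only the linking and classification logic mirrors the description above.

```python
# Toy verb (source) and noun (target) clusters; cluster contents are
# illustrative stand-ins for the automatically induced clusters.
verb_clusters = [{"stir", "pour", "boil"}, {"accelerate", "drive", "brake"}]
noun_clusters = [{"excitement", "anger", "fear"}, {"change", "reform", "growth"}]

def cluster_of(item, clusters):
    """Index of the cluster containing `item`, or None."""
    for i, cluster in enumerate(clusters):
        if item in cluster:
            return i
    return None

def link_clusters(seeds):
    """Each seed (verb, noun) links the verb's source cluster to the noun's
    target cluster; the links represent metaphorical associations."""
    return {(cluster_of(v, verb_clusters), cluster_of(n, noun_clusters))
            for v, n in seeds}

def is_metaphorical(verb, noun, links):
    """A grammatical relation is tagged metaphorical if its verb and noun
    fall in a linked source-target cluster pair."""
    return (cluster_of(verb, verb_clusters),
            cluster_of(noun, noun_clusters)) in links

links = link_clusters([("stir", "excitement"), ("accelerate", "change")])
# The unseen pair "pour anger" is tagged via the seed "stir excitement".
```

In the full system this classification runs over every verb-subject and verb-object GR in the parsed BNC before the selectional preference filter is applied.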
The noun clusters obtained using Sun and Korhonen's method as described earlier form the selectional preference classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selectional Preference Strength", "sec_num": "3.2.4" }, { "text": "To quantify selectional preferences, we adopt the selectional preference strength (SPS) measure of Resnik (1993) . Resnik models selectional preferences of a verb in probabilistic terms as the difference between the posterior distribution of noun classes in a particular relation with the verb and their prior distribution in that syntactic position irrespective of the identity of the verb. He quantifies this difference using the Kullback-Leibler divergence and defines selectional preference strength as follows:", "cite_spans": [ { "start": 99, "end": 112, "text": "Resnik (1993)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Selectional Preference Strength", "sec_num": "3.2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "S_R(v) = D(P(c|v) \| P(c)) = \sum_c P(c|v) \log \frac{P(c|v)}{P(c)}", "eq_num": "(2)" } ], "section": "Selectional Preference Strength", "sec_num": "3.2.4" }, { "text": "where P(c) is the prior probability of the noun class, P(c|v) is the posterior probability of the noun class given the verb, and R is the grammatical relation in question. 
In order to quantify how well a particular argument class fits the verb, Resnik defines another measure called selectional association:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selectional Preference Strength", "sec_num": "3.2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "A_R(v, c) = \frac{1}{S_R(v)} P(c|v) \log \frac{P(c|v)}{P(c)}", "eq_num": "(3)" } ], "section": "Selectional Preference Strength", "sec_num": "3.2.4" }, { "text": "which stands for the contribution of a particular argument class to the overall selectional preference strength of a verb. The probabilities P(c|v) and P(c) were estimated from the corpus data as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selectional Preference Strength", "sec_num": "3.2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(c|v) = \frac{f(v, c)}{\sum_k f(v, c_k)}", "eq_num": "(4)" } ], "section": "Selectional Preference Strength", "sec_num": "3.2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(c) = \frac{f(c)}{\sum_k f(c_k)}", "eq_num": "(5)" } ], "section": "Selectional Preference Strength", "sec_num": "3.2.4" }, { "text": "where f(v, c) is the number of times the predicate v co-occurs with the argument class c in the relation R, and f(c) is the number of times the argument class occurs in the relation R regardless of the identity of the predicate. Thus for each verb, its SPS can be calculated for specific grammatical relations. This measure was used to filter out the verbs with weak selectional preferences. The expectation is that such verbs are unlikely to be used metaphorically. 
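Equations (2)-(5) can be computed directly from co-occurrence counts. The sketch below uses invented counts and class labels and assumes base-2 logarithms; it illustrates Resnik's measures rather than reproducing the authors' code.

```python
from math import log2
from collections import Counter

def sps_and_association(pair_counts):
    """Build S_R(v) (Eq. 2) and A_R(v, c) (Eq. 3) from counts f(v, c)
    for a single grammatical relation R."""
    verb_totals, class_totals = Counter(), Counter()
    for (v, c), f in pair_counts.items():
        verb_totals[v] += f
        class_totals[c] += f
    total = sum(class_totals.values())

    def p_c(c):                 # prior P(c), Eq. (5)
        return class_totals[c] / total

    def p_c_given_v(c, v):      # posterior P(c|v), Eq. (4)
        return pair_counts[(v, c)] / verb_totals[v]

    def sps(v):                 # selectional preference strength, Eq. (2)
        return sum(p_c_given_v(c, v) * log2(p_c_given_v(c, v) / p_c(c))
                   for c in class_totals if pair_counts[(v, c)] > 0)

    def association(v, c):      # selectional association, Eq. (3)
        return p_c_given_v(c, v) * log2(p_c_given_v(c, v) / p_c(c)) / sps(v)

    return sps, association

# Toy counts for one relation (verb-direct object); class labels are invented.
counts = Counter({("drink", "LIQUID"): 95, ("drink", "IDEA"): 5,
                  ("remember", "LIQUID"): 50, ("remember", "IDEA"): 50})
sps, assoc = sps_and_association(counts)
# "remember" accepts both classes equally, so its preference strength is weak;
# the filter discards verbs whose SPS falls below the experimentally set threshold.
```

By construction, the selectional association values of a verb sum to one over the argument classes it occurs with.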
The optimal selectional preference strength threshold was set experimentally for both verb-subject and verb-object relations on a small held-out data set (via qualitative analysis of the data). It approximates to 1.32. The system excludes expressions containing the verbs with preference strength below this threshold from the set of candidate metaphors. Examples of verbs with weak and strong direct object SPs are shown in Tables 2 and 3, respectively. Given the SPS threshold of 1.32, the filter discards 31% of candidate expressions initially identified in the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selectional Preference Strength", "sec_num": "3.2.4" }, { "text": "In order to show that the described metaphor identification method generalizes well over the seed set and that it operates beyond synonymy, its output was compared to that of a baseline using WordNet. In the baseline system, WordNet synsets represent source and target domains. The quality of metaphor identification for both the system and the baseline was evaluated in terms of precision with the aid of human judges.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.3" }, { "text": "To compare the coverage of the system to that of the baseline in quantitative terms we assessed how broadly they expand on the seed set. To do this, we estimated the number of word senses captured by the two systems and the proportion of identified metaphors that are not synonymous with any of those seen in the seed set, according to WordNet. This type of evaluation assesses how well clustering methods are suited to identify new metaphors not directly related to those in the seed set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.3" }, { "text": "Baseline. The baseline system was implemented using synonymy information from WordNet to expand on the seed set. 
Source and target domain vocabularies were thus represented as sets of synonyms of verbs and nouns in seed expressions. The baseline system then searched the corpus for phrases composed of lexical items belonging to those vocabularies. For example, given a seed expression \"stir excitement,\" the baseline finds phrases such as \"arouse fervour, stimulate agitation, stir turmoil,\" and so forth. It is not able to generalize over the concepts to broad semantic classes, however; for example, it does not find other FEELINGS such as rage, fear, anger, pleasure. This, however, is necessary to fully characterize the target domain. Similarly, in the source domain, the system only has access to direct synonyms of stir, rather than to other verbs characteristic of the domain of LIQUIDS (pour, flow, boil, etc.).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with WordNet", "sec_num": "3.3.1" }, { "text": "To compare the coverage achieved by the system using clustering to that of the baseline in quantitative terms, we estimated the number of WordNet synsets, that is, different word senses, in the metaphorical expressions captured by the two systems. We found that the baseline system covers only 13% of the data identified using clustering. This is because it does not reach beyond the concepts present in the seed set. In contrast, most metaphors tagged by the clustering method (87%) are non-synonymous to those in the seed set and some of them are novel. Together, these metaphors represent a considerably wider range of meanings. Given the seed metaphors \"stir excitement, throw remark, cast doubt,\" the system identifies previously unseen expressions \"swallow anger, hurl comment, spark enthusiasm,\" and so on, as metaphorical. Tables 4 and 5 show examples of how the system and the baseline expand on the seed set, respectively. Full sentences containing metaphors annotated by the system are shown in Figure 9 . 
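The baseline's seed expansion can be sketched as follows; the synonym table is a tiny hand-made stub standing in for WordNet synsets, and the function name is ours.

```python
# Stub synonym table; the real baseline draws these sets from WordNet synsets.
SYNONYMS = {
    "stir": {"stir", "arouse", "stimulate"},
    "excitement": {"excitement", "fervour", "agitation", "turmoil"},
}

def expand_seed(verb, noun):
    """Cross-product of the verb's and noun's synonyms; the baseline then
    searches the corpus for exactly these verb-noun pairs."""
    return {(v, n)
            for v in SYNONYMS.get(verb, {verb})
            for n in SYNONYMS.get(noun, {noun})}

pairs = expand_seed("stir", "excitement")
# Includes "arouse fervour" and "stimulate agitation", but never "pour anger":
# the baseline cannot generalize beyond direct synonyms of the seed words.
```

The contrast with the clustering method is visible in the search space: synonym cross-products stay within the seed's word senses, whereas linked clusters cover whole source and target domains.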
Twenty-one percent of the expressions identified by the system do not have their corresponding metaphorical senses included in WordNet, such as \"spark enthusiasm\"; the remaining 79% are, however, more common conventional metaphors. Starting with a seed set of only 62 examples, the system expands significantly on the seed set and identifies a total of 4,456 metaphorical expressions in the BNC. This suggests that the method has the potential to attain a broad coverage of the corpus given a large and representative seed set.", "cite_spans": [], "ref_spans": [ { "start": 844, "end": 858, "text": "Tables 4 and 5", "ref_id": "TABREF4" }, { "start": 1019, "end": 1027, "text": "Figure 9", "ref_id": null } ], "eq_spans": [], "section": "Comparison with WordNet", "sec_num": "3.3.1" }, { "text": "In order to assess the quality of metaphor identification by both systems, their output was evaluated against human judgments. For this purpose, we randomly sampled sentences containing metaphorical expressions as annotated by the system and by the baseline and asked human annotators to decide whether these were metaphorical or not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Against Human Judgments.", "sec_num": "3.3.2" }, { "text": "Participants: Five volunteers participated in the experiment. They were all native speakers of English and had no formal training in linguistics. Materials: The subjects were presented with a set of 78 randomly sampled sentences annotated by the two systems. Fifty percent of the data set were the sentences annotated by the identification system and the remaining 50% were annotated by the baseline; the sentences were presented in randomized order. The annotation was done electronically in Microsoft Word. An example of annotated sentences is given in Figure 10 . Task and guidelines: The subjects were asked to mark which of the expressions were metaphorical in their judgment. 
The participants were encouraged to rely on their own intuition of what a metaphor is in the annotation process. Additional guidance, however, in the form of the following definition of metaphor (Pragglejaz Group 2007) was also provided:", "cite_spans": [], "ref_spans": [ { "start": 538, "end": 547, "text": "Figure 10", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Evaluation Against Human Judgments.", "sec_num": "3.3.2" }, { "text": "1. For each verb establish its meaning in context and try to imagine a more basic meaning of this verb in other contexts. Basic meanings normally are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Against Human Judgments.", "sec_num": "3.3.2" }, { "text": "(1) more concrete;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Against Human Judgments.", "sec_num": "3.3.2" }, { "text": "(2) related to bodily action; (3) more precise (as opposed to vague); (4) historically older.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Against Human Judgments.", "sec_num": "3.3.2" }, { "text": "If you can establish a basic meaning that is distinct from the meaning of the verb in this context, the verb is likely to be used metaphorically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "CKM 391 Time and time again he would stare at the ground, hand on hip, if he thought he had received a bad call, and then swallow his anger and play tennis. AD9 3205 He tried to disguise the anxiety he felt when he found the comms system down, but Tammuz was nearly hysterical by this stage. AMA 349 We will halt the reduction in NHS services for long-term care and community health services which support elderly and disabled patients at home. ADK 634 Catch their interest and spark their enthusiasm so that they begin to see the product's potential. 
K2W 1771 The committee heard today that gangs regularly hurled abusive comments at local people, making an unacceptable level of noise and leaving litter behind them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "Sentences tagged by the system (metaphors in bold).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 9", "sec_num": null }, { "text": "Evaluation of metaphor identification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 10", "sec_num": null }, { "text": "Interannotator agreement Reliability was measured at \u03ba = 0.63 (n = 2, N = 78, k = 5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 10", "sec_num": null }, { "text": "The data suggest that the main source of disagreement between the annotators was the presence of conventional metaphors (e.g., verbs such as adopt, convey, decline).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 10", "sec_num": null }, { "text": "The system performance was then evaluated against the elicited judgments in terms of precision. The system output was compared to the gold standard constructed by merging the judgments, whereby the expressions tagged as metaphorical by at least three annotators were considered to be correct. This resulted in P = 0.79, with the baseline attaining P = 0.44. In addition, the system tagging was compared to that of each annotator pairwise, yielding an average P = 0.74 for the system and P = 0.41 for the baseline. In order to compare system performance to the human ceiling, pairwise agreement was additionally calculated in terms of precision between the majority gold standard and each judge. 
This corresponds to an average of P = 0.94.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "To show that the system performance is significantly different from that of the baseline, we annotated an additional 150 instances identified by both systems for correctness and conducted a one-tailed t-test for independent samples. The difference is statistically significant with t = 4.11 (df = 148, p < 0.0005).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "We have shown that the method leads to a considerable expansion on the seed set and operates with high precision: it produces high-quality annotations and identifies fully novel metaphorical expressions, relying only on the knowledge of source-target domain mappings that it learns automatically. By comparing its coverage to that of a WordNet baseline, we showed that the method reaches beyond synonymy and generalizes well over the source and target domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.4" }, { "text": "The observed discrepancy in precision between the clustering approach and the baseline can be explained by the fact that a large number of metaphorical senses are included in WordNet. This means that in WordNet synsets source domain verbs appear together with more abstract terms.
For instance, the metaphorical sense of shape in the phrase \"shape opinion\" is part of the synset \"(determine, shape, mold, influence, regulate).\" This results in the low precision of the baseline system, because it tags literal expressions (e.g., influence opinion) as metaphorical, assuming that all verbs from the synset belong to the source domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.4" }, { "text": "To perform a more comprehensive error analysis, we examined a larger subset of the metaphorical expressions identified by the system (200 sentences, equally covering verb-subject and verb-object constructions). System precision against the additional judgments by one of the authors was measured at 76% (48 instances were tagged incorrectly according to the judgments). The classification of system errors by type is presented in Table 6. Precision errors in the output of the system were also concentrated around the problem of conventionality of some metaphorical verbs, such as those in \"hold views, adopt traditions, tackle a problem.\" This conventionality is reflected in the data in that such verbs are frequently used in their \"metaphorical\" contexts. As a result, they are clustered together with literally used terms. For instance, the verb tackle is found in a cluster with solve, resolve, handle, confront, face, and so forth. This results in the system tagging \"resolve a problem\" as metaphorical if it has previously seen \"tackle a problem.\" A number of system errors affecting its precision are also due to cases of general polysemy and homonymy of both verbs and nouns. For example, the noun passage can mean both \"the act of passing from one state or place to the next\" and \"a section of text; particularly a section of medium length,\" as defined in WordNet. Sun and Korhonen's method performs hard clustering, that is, it does not distinguish between different word senses.
Hence the noun passage occurred in only one cluster, containing concepts like thought, word, sentence, expression, reference, address, description, and so on. This cluster models the \"textual\" meaning of passage. As a result of sense ambiguity within the cluster, given the seed phrase \"she blocked the thought,\" the system tags such expressions as \"block passage,\" \"impede passage,\" \"obstruct passage,\" and \"speed passage\" as metaphorical.", "cite_spans": [], "ref_spans": [ { "start": 430, "end": 437, "text": "Table 6", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Discussion", "sec_num": "3.4" }, { "text": "The errors that may cause low recall of the system are of a different nature. Whereas noun clustering considerably expands the seed set by identifying new associated target concepts (e.g., given the seed metaphor \"sell soul\" it identifies \"sell skin\" and \"launch pulse\" as metaphorical), the verb clusters sometimes miss a certain proportion of source domain vocabulary. For instance, given the seed metaphor \"example illustrates,\" the system identifies the following expressions: \"history illustrates,\" \"episode illustrates,\" \"tale illustrates,\" \"combination illustrates,\" \"event illustrates,\" and so forth. It does not, however, capture obvious verb-based expansions, such as \"episode portrays,\" present in the BNC. This is one of the problems that could lead to a lower recall of the system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.4" }, { "text": "Nevertheless, in many cases the system benefits not only from dissimilar concepts within the noun clusters used to detect new target domains, but also from dissimilar concepts in the verb clusters. Verb clusters produced automatically relying on contextual features may contain lexical items with distinct, or even opposite meanings (e.g., throw and catch, take off and land). 
They tend to belong to the same semantic domain, however (e.g., verbs of dealing with LIQUIDS, verbs describing a FIGHT). It is the diversity of verb meanings within the domain cluster that allows the generalization from a limited number of seed expressions to a broader spectrum of previously unseen and novel metaphors, non-synonymous with those in the seed set. The fact that the approach is seed-dependent is one of its possible limitations, affecting the coverage of the system. Wide coverage is essential for the practical use of the system. At this stage, however, it was impossible for us to reliably measure the recall of the system, because there is no large corpus annotated for metaphor available. In addition, because the current system was only tested with very few seeds (again, due to the lack of metaphor-annotated data), we expect the current overall recall of the system to be relatively low. In order to obtain full coverage of the corpus, a large and representative seed set is necessary. Although it is hard to capture the whole variety of metaphorical language in a limited set of examples, it is possible to compile a seed set representative of all common source-target domain mappings. The learning capabilities of the system can then be used to expand on those to the whole range of conventional and novel metaphorical mappings and expressions. In addition, because the precision of the system was measured on the data set produced by expanding individual seed expressions, we would expect the expansion of other, new seed expressions to yield a comparable quality of annotations. Incorporating new seed expressions is thus likely to result in increased recall without a significant loss in precision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.4" }, { "text": "The current system harvests a large and relatively clean set of metaphorical expressions from the corpus.
These annotations could provide a new platform for the development and testing of other metaphor systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.4" }, { "text": "As is the case in metaphor identification, the majority of existing approaches to metaphor interpretation also rely on task-specific hand-coded knowledge (Martin 1990; Fass 1991; Narayanan 1997 Narayanan , 1999 Barnden and Lee 2002; Feldman and Narayanan 2004; Agerri et al. 2007) and produce interpretations in a non-textual format (Veale and Hao 2008) . The ultimate objective of automatic metaphor processing, however, is a type of interpretation that can be directly embedded into other systems to enhance their performance. We thus define metaphor interpretation as a paraphrasing task and build a system that automatically derives literal paraphrases for metaphorical expressions in unrestricted text. Our method is also distinguished from previous work in that it does not rely on any hand-crafted knowledge about metaphor, but in contrast is corpus-based and uses automatically induced selectional preferences.", "cite_spans": [ { "start": 154, "end": 167, "text": "(Martin 1990;", "ref_id": "BIBREF60" }, { "start": 168, "end": 178, "text": "Fass 1991;", "ref_id": null }, { "start": 179, "end": 193, "text": "Narayanan 1997", "ref_id": "BIBREF72" }, { "start": 194, "end": 210, "text": "Narayanan , 1999", "ref_id": "BIBREF73" }, { "start": 211, "end": 232, "text": "Barnden and Lee 2002;", "ref_id": "BIBREF5" }, { "start": 233, "end": 260, "text": "Feldman and Narayanan 2004;", "ref_id": "BIBREF27" }, { "start": 261, "end": 280, "text": "Agerri et al. 2007)", "ref_id": "BIBREF1" }, { "start": 333, "end": 353, "text": "(Veale and Hao 2008)", "ref_id": "BIBREF98" } ], "ref_spans": [], "eq_spans": [], "section": "Metaphor Interpretation Method and Experiments", "sec_num": "4." 
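The reliability and significance figures reported in this section (kappa = 0.63 for five judges and two categories; t = 4.11 with df = 148) follow standard formulas: Fleiss' kappa for fixed-panel multi-rater agreement, and a pooled two-sample t-test on per-item correctness. The sketch below implements both; all ratings and correctness vectors are invented, not the study's data.

```python
import math

def fleiss_kappa(item_counts):
    """Fleiss' kappa for N items, each rated by the same number of judges
    into n categories; item_counts[i][c] = judges assigning category c."""
    N = len(item_counts)
    k = sum(item_counts[0].values())                     # judges per item
    cats = {c for item in item_counts for c in item}
    # Observed agreement: mean per-item proportion of agreeing judge pairs.
    p_obs = sum((sum(v * v for v in item.values()) - k) / (k * (k - 1))
                for item in item_counts) / N
    # Chance agreement from the marginal category proportions.
    marg = {c: sum(item.get(c, 0) for item in item_counts) / (N * k) for c in cats}
    p_exp = sum(p * p for p in marg.values())
    return (p_obs - p_exp) / (1 - p_exp)

def t_independent(a, b):
    """Pooled two-sample t statistic with df = len(a) + len(b) - 2."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ss = sum((x - ma) ** 2 for x in a) + sum((x - mb) ** 2 for x in b)
    se = math.sqrt(ss / (na + nb - 2) * (1 / na + 1 / nb))
    return (ma - mb) / se, na + nb - 2

# Invented 2-category ratings by 5 judges (NOT the experiment's data):
ratings = [{"met": 5, "lit": 0}, {"met": 4, "lit": 1},
           {"met": 1, "lit": 4}, {"met": 0, "lit": 5}]
kappa = fleiss_kappa(ratings)  # 0.6 on this toy sample

# Invented per-item correctness (1 = judged correct) for 75 + 75 items,
# giving df = 148 as in the comparison above:
t, df = t_independent([1] * 59 + [0] * 16, [1] * 33 + [0] * 42)
```

On binary correctness data the pooled t-test is a common approximation; the observed t depends entirely on the (here invented) correctness proportions.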
}, { "text": "The metaphor paraphrasing task can be divided into two subtasks: (1) generating paraphrases, that is, other ways of expressing the same meaning in a given context, and (2) discriminating between literal and metaphorical paraphrases. Consequently, the proposed approach is theoretically grounded in two ideas underlying each of these subtasks:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metaphor Interpretation Method and Experiments", "sec_num": "4." }, { "text": "- The meaning of a word in context emerges through interaction with the meaning of the words surrounding it. This assumption is widely accepted in lexical semantics theory (Pustejovsky 1995; Hanks and Pustejovsky 2005) and has been exploited for lexical acquisition (Schulte im Walde 2006; Sun and Korhonen 2009). It suggests that the context itself imposes certain semantic restrictions on the words that can occur within it. Given a large amount of linguistic data, it is possible to model these semantic restrictions in probabilistic terms (Lapata 2001). This can be done by deriving a ranking scheme for possible paraphrases that fit or do not fit in a specific context based on word co-occurrence evidence. This is how initial paraphrases are generated within the metaphor paraphrasing module.", "cite_spans": [ { "start": 172, "end": 190, "text": "(Pustejovsky 1995;", "ref_id": "BIBREF84" }, { "start": 191, "end": 218, "text": "Hanks and Pustejovsky 2005)", "ref_id": "BIBREF35" }, { "start": 530, "end": 543, "text": "(Lapata 2001)", "ref_id": "BIBREF53" } ], "ref_spans": [], "eq_spans": [], "section": "Metaphor Interpretation Method and Experiments", "sec_num": "4." }, { "text": "- Literalness can be detected via strong selectional preference. This idea is a mirror-image of the selectional preference violation view of Wilks (1978), who suggested that a violation of selectional preferences indicates a metaphor.
The key information that selectional preferences provide is whether there is an association between the predicate and its potential argument and how strong that association is. A literal paraphrase would normally come from the target domain (e.g., \"understand the explanation\") and be strongly associated with the target concept, whereas a metaphorical paraphrase would belong to the source domain (e.g., \"grasp the explanation\") and be associated with the concepts from this source domain more strongly than with the target concept. Hence we use a selectional preference model to measure the semantic fit of the generated paraphrases into the given context as opposed to all other contexts. The highest semantic fit then indicates the most literal paraphrase.", "cite_spans": [ { "start": 141, "end": 153, "text": "Wilks (1978)", "ref_id": "BIBREF100" } ], "ref_spans": [], "eq_spans": [], "section": "Metaphor Interpretation Method and Experiments", "sec_num": "4." }, { "text": "Thus the context-based probabilistic model is used for paraphrase generation and the selectional preference model for literalness detection. The key difference between the two models is that the former favors the paraphrases that co-occur with the words in the context more frequently than other paraphrases do, and the latter favors the paraphrases that co-occur with the words from the context more frequently than with any other lexical items in the corpus. This is the main intuition behind our approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metaphor Interpretation Method and Experiments", "sec_num": "4." }, { "text": "The system thus incorporates the following components:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metaphor Interpretation Method and Experiments", "sec_num": "4." 
}, { "text": "- a context-based probabilistic model that acquires paraphrases for metaphorical expressions from a large corpus;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metaphor Interpretation Method and Experiments", "sec_num": "4." }, { "text": "- a WordNet similarity component that filters out the irrelevant paraphrases based on their similarity to the metaphorical term (similarity is defined as sharing a common hypernym within three levels in the WordNet hierarchy);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metaphor Interpretation Method and Experiments", "sec_num": "4." }, { "text": "- a selectional preference model that discriminates literal paraphrases from the metaphorical ones. It re-ranks the paraphrases, de-emphasizing the metaphorical ones and emphasizing the literal ones.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metaphor Interpretation Method and Experiments", "sec_num": "4." }, { "text": "In addition, the system disambiguates the sense of the paraphrases using the WordNet inventory of senses. The context-based model together with the WordNet filter constitutes a metaphor paraphrasing baseline. By comparing the final system to this baseline, we demonstrate that simple context-based substitution, even when supported by the extensive knowledge contained in lexical resources, is not sufficient for metaphor interpretation and that a selectional preference model is needed to establish the literalness of the paraphrases. This section first provides an overview of paraphrasing and lexical substitution and relates these tasks to the problem of metaphor interpretation. It then describes the experimental data used to develop and test the paraphrasing system and the method itself, and finally concludes with the system evaluation and the presentation of results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metaphor Interpretation Method and Experiments", "sec_num": "4."
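The three components described above can be read as a simple pipeline. The sketch below is structural only and is not the authors' code: the three stage functions are placeholders for the models defined later in Section 4.3.

```python
def paraphrase(verb, context, rank_by_context, shares_hypernym, selectional_fit):
    """Structural sketch of the metaphor interpretation pipeline.
    verb: the metaphorical verb; context: [(word, grammatical_relation), ...].
    The three stage functions stand in for the models of Section 4.3."""
    # 1. Context-based model: candidate paraphrases ranked by corpus likelihood.
    candidates = rank_by_context(verb, context)
    # 2. WordNet filter: keep candidates sharing a hypernym with the verb
    #    within three levels of the hierarchy.
    candidates = [c for c in candidates if shares_hypernym(verb, c)]
    # 3. Selectional preference model: re-rank so that literal paraphrases,
    #    which fit the context's noun class most strongly, come first.
    return sorted(candidates, key=lambda c: selectional_fit(c, context), reverse=True)
```

With trivial stub functions, filtering and re-ranking behave as described: a candidate rejected by the WordNet filter disappears, and the remaining candidates are ordered by selectional fit rather than by raw context likelihood.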
}, { "text": "Paraphrasing can be viewed as a text-to-text generation problem, whereby a new piece of text is produced conveying the same meaning as the original text. Paraphrasing can be carried out at multiple levels (sentence-, phrase-, and word-level), and may involve both syntactic and lexical transformations. Paraphrasing by replacing individual words in a sentence is known as lexical substitution (McCarthy 2002). Because, in this article, we address the phenomenon of metaphor at the single-word level, our task is close in nature to lexical substitution. The task of lexical substitution originates from word sense disambiguation (WSD). The key difference between the two is that whereas WSD makes use of a predefined sense inventory to characterize the meaning of a word in context, lexical substitution is aimed at automatic induction of meanings. Thus the goal of lexical substitution is to generate the set of semantically valid substitutes for the word. Consider the following sentences from Preiss, Coonce, and Baker (2009). (22) His parents felt that he was a bright boy.", "cite_spans": [ { "start": 394, "end": 409, "text": "(McCarthy 2002)", "ref_id": "BIBREF64" }, { "start": 995, "end": 1027, "text": "Preiss, Coonce, and Baker (2009)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Paraphrasing and Lexical Substitution", "sec_num": "4.1" }, { "text": "(23) Our sun is a bright star.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paraphrasing and Lexical Substitution", "sec_num": "4.1" }, { "text": "Bright in Example (22) can be replaced by the word intelligent. The same replacement in the context of Example (23) will not produce an appropriate sentence.
A lexical substitution system needs to (1) find a set of candidate synonyms for the word and (2) select the candidate that matches the context of the word best.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paraphrasing and Lexical Substitution", "sec_num": "4.1" }, { "text": "Both sentence-or phrase-level paraphrasing and lexical substitution find a wide range of applications in NLP. These include summarization (Knight and Marcu 2000; Zhou et al. 2006) , information extraction (Shinyama and Sekine 2003) , machine translation (Kurohashi 2001; Callison-Burch, Koehn, and Osborne 2006) , text simplification (Carroll et al. 1999) , question answering (McKeown 1979; Lin and Pantel 2001) and textual entailment (Sekine et al. 2007) . Consequently, there has been a plethora of NLP approaches to paraphrasing (McKeown 1979; Meteer and Shaked 1988; Dras 1999; Barzilay and McKeown 2001; Lin and Pantel 2001; Barzilay and Lee 2003; Bolshakov and Gelbukh 2004; Quirk, Brockett, and Dolan 2004; Kauchak and Barzilay 2006; Zhao et al. 2009 ; Kok and Brockett 2010) and lexical substitution (McCarthy and Navigli 2007, 2009; Erk and Pad\u00f3 2009; Preiss, Coonce, and Baker 2009; Toral 2009; McCarthy, Keller, and Navigli 2010) .", "cite_spans": [ { "start": 138, "end": 161, "text": "(Knight and Marcu 2000;", "ref_id": null }, { "start": 162, "end": 179, "text": "Zhou et al. 2006)", "ref_id": "BIBREF105" }, { "start": 205, "end": 231, "text": "(Shinyama and Sekine 2003)", "ref_id": "BIBREF92" }, { "start": 254, "end": 270, "text": "(Kurohashi 2001;", "ref_id": null }, { "start": 271, "end": 311, "text": "Callison-Burch, Koehn, and Osborne 2006)", "ref_id": "BIBREF14" }, { "start": 334, "end": 355, "text": "(Carroll et al. 1999)", "ref_id": "BIBREF16" }, { "start": 377, "end": 391, "text": "(McKeown 1979;", "ref_id": "BIBREF68" }, { "start": 392, "end": 412, "text": "Lin and Pantel 2001)", "ref_id": "BIBREF57" }, { "start": 436, "end": 456, "text": "(Sekine et al. 
2007)", "ref_id": "BIBREF90" }, { "start": 533, "end": 547, "text": "(McKeown 1979;", "ref_id": "BIBREF68" }, { "start": 548, "end": 571, "text": "Meteer and Shaked 1988;", "ref_id": null }, { "start": 572, "end": 582, "text": "Dras 1999;", "ref_id": "BIBREF22" }, { "start": 583, "end": 609, "text": "Barzilay and McKeown 2001;", "ref_id": "BIBREF7" }, { "start": 610, "end": 630, "text": "Lin and Pantel 2001;", "ref_id": "BIBREF57" }, { "start": 631, "end": 653, "text": "Barzilay and Lee 2003;", "ref_id": "BIBREF7" }, { "start": 654, "end": 681, "text": "Bolshakov and Gelbukh 2004;", "ref_id": "BIBREF10" }, { "start": 682, "end": 714, "text": "Quirk, Brockett, and Dolan 2004;", "ref_id": "BIBREF84" }, { "start": 715, "end": 741, "text": "Kauchak and Barzilay 2006;", "ref_id": "BIBREF41" }, { "start": 742, "end": 758, "text": "Zhao et al. 2009", "ref_id": "BIBREF104" }, { "start": 809, "end": 822, "text": "(McCarthy and", "ref_id": "BIBREF64" }, { "start": 823, "end": 842, "text": "Navigli 2007, 2009;", "ref_id": null }, { "start": 843, "end": 861, "text": "Erk and Pad\u00f3 2009;", "ref_id": "BIBREF24" }, { "start": 862, "end": 893, "text": "Preiss, Coonce, and Baker 2009;", "ref_id": null }, { "start": 894, "end": 905, "text": "Toral 2009;", "ref_id": "BIBREF95" }, { "start": 906, "end": 941, "text": "McCarthy, Keller, and Navigli 2010)", "ref_id": "BIBREF65" } ], "ref_spans": [], "eq_spans": [], "section": "Paraphrasing and Lexical Substitution", "sec_num": "4.1" }, { "text": "Among paraphrasing methods one can distinguish (1) rule-based approaches, which rely on a set of hand-crafted (McKeown 1979; Zong, Zhang, and Yamamoto 2001) or automatically learned (Lin and Pantel 2001; Barzilay and Lee 2003; Zhao et al. 
2008) paraphrasing patterns; (2) thesaurus-based approaches, which generate paraphrases by substituting words in the sentence by their synonyms (Bolshakov and Gelbukh 2004; Kauchak and Barzilay 2006) ; (3) natural language generation-based approaches (Kozlowski, McCoy, and Vijay-Shanker 2003; Power and Scott 2005) , which transform a sentence into its semantic representation and generate a new sentence from it; and (4) SMT-based methods (Quirk, Brockett, and Dolan 2004) , operating as monolingual MT. A number of approaches to lexical substitution rely on manually constructed thesauri to find sets of candidate synonyms (McCarthy and Navigli 2007) , whereas others address the task in a fully unsupervised fashion. In order to derive and rank candidate substitutes, the latter systems make use of distributional similarity measures (Pucci et al. 2009; McCarthy, Keller, and Navigli 2010) , vector space models of word meaning (De Cao and Basili 2009; Erk and Pad\u00f3 2009) or statistical learning techniques, such as hidden Markov models and n-grams (Preiss, Coonce, and Baker 2009) .", "cite_spans": [ { "start": 110, "end": 124, "text": "(McKeown 1979;", "ref_id": "BIBREF68" }, { "start": 125, "end": 156, "text": "Zong, Zhang, and Yamamoto 2001)", "ref_id": "BIBREF106" }, { "start": 182, "end": 203, "text": "(Lin and Pantel 2001;", "ref_id": "BIBREF57" }, { "start": 204, "end": 226, "text": "Barzilay and Lee 2003;", "ref_id": "BIBREF7" }, { "start": 227, "end": 244, "text": "Zhao et al. 
2008)", "ref_id": "BIBREF103" }, { "start": 412, "end": 438, "text": "Kauchak and Barzilay 2006)", "ref_id": "BIBREF41" }, { "start": 490, "end": 532, "text": "(Kozlowski, McCoy, and Vijay-Shanker 2003;", "ref_id": null }, { "start": 533, "end": 554, "text": "Power and Scott 2005)", "ref_id": null }, { "start": 680, "end": 713, "text": "(Quirk, Brockett, and Dolan 2004)", "ref_id": "BIBREF84" }, { "start": 865, "end": 892, "text": "(McCarthy and Navigli 2007)", "ref_id": "BIBREF66" }, { "start": 1077, "end": 1096, "text": "(Pucci et al. 2009;", "ref_id": "BIBREF83" }, { "start": 1097, "end": 1132, "text": "McCarthy, Keller, and Navigli 2010)", "ref_id": "BIBREF65" }, { "start": 1175, "end": 1195, "text": "Cao and Basili 2009;", "ref_id": "BIBREF21" }, { "start": 1196, "end": 1214, "text": "Erk and Pad\u00f3 2009)", "ref_id": "BIBREF24" }, { "start": 1292, "end": 1324, "text": "(Preiss, Coonce, and Baker 2009)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Paraphrasing and Lexical Substitution", "sec_num": "4.1" }, { "text": "The metaphor interpretation task is different from the WSD task, because it is impossible to predefine a set of senses of metaphorical words, in particular for novel metaphors. Instead, the correct substitute for the metaphorical term needs to be generated in a data-driven manner, as for lexical substitution. The metaphor paraphrasing task, however, also differs from lexical substitution in the following two ways. Firstly, a suitable substitute needs to be used literally in the target context, or at least more conventionally than the original word. Secondly, by definition, the substitution is not required to be a synonym of the metaphorical word. Moreover, for our task this is not even desired, because there is the danger that synonymous paraphrasing may result in another metaphorical expression, rather than the literal interpretation of the original one. 
Metaphor paraphrasing therefore presents an additional challenge in comparison to lexical substitution, namely, that of discriminating between literal and metaphorical substitutes. This second, harder, and not previously addressed task is the main focus of the work presented in this section. The remainder of the section is devoted to the description of the metaphor paraphrasing experiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paraphrasing and Lexical Substitution", "sec_num": "4.1" }, { "text": "The paraphrasing system is first tested individually on a set of metaphorical expressions extracted from a manually annotated metaphor corpus of Shutova and Teufel (2010) . This is the same data set as the one used for seeding the identification module (see Section 3.1.1 for description). Because the paraphrasing evaluation described in this section is conducted independently from the identification experiment, and no part of the paraphrasing system relies on the output of the identification system and vice versa, the use of the same data set does not give any unfair advantage to the systems. In the later experiment (Section 5) when the identification and paraphrasing system are evaluated jointly, again the same seed set will be used for identification; paraphrasing, however, will be performed on the output of the identification system (i.e., the new identified metaphors) and both the identified metaphors and their paraphrases will be evaluated by human judges not used in the previous and the current experiments.", "cite_spans": [ { "start": 145, "end": 170, "text": "Shutova and Teufel (2010)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Data", "sec_num": "4.2" }, { "text": "The system takes phrases containing annotated single-word metaphors as input; where a verb is used metaphorically, its context is used literally. 
It generates a list of possible paraphrases of the verb that can occur in the same context and ranks them according to their likelihood, as derived from the corpus. It then identifies shared features of the paraphrases and the metaphorical verb using the WordNet hierarchy and removes unrelated concepts. Finally, it identifies the literal paraphrases among the remaining candidates based on the verb's automatically induced selectional preferences and the properties of the context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4.3" }, { "text": "Model. Terms replacing the metaphorical verb v will be called its interpretations i. We model the likelihood L of a particular paraphrase as the joint probability of the following events: the interpretation i co-occurring with the other lexical items from its context w_1, ..., w_N in syntactic relations r_1, ..., r_N, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-based Paraphrase Ranking", "sec_num": "4.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_i = P(i, (w_1, r_1), (w_2, r_2), \\dots, (w_N, r_N))", "eq_num": "(6)" } ], "section": "Context-based Paraphrase Ranking", "sec_num": "4.3.1" }, { "text": "where w_1, ..., w_N and r_1, ..., r_N represent the fixed context of the term used metaphorically in the sentence. In the system output, the context w_1, ..., w_N will be preserved, and the verb v will be replaced by the interpretation i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-based Paraphrase Ranking", "sec_num": "4.3.1" }, { "text": "We assume statistical independence between the relations of the terms in a phrase.
For instance, for a verb that stands in a relation with both a subject and an object, the verb-subject and verb-direct object relations are considered to be independent events within the model. The likelihood of an interpretation is then calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-based Paraphrase Ranking", "sec_num": "4.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(i, (w_1, r_1), (w_2, r_2), \\dots, (w_N, r_N)) = P(i) \\cdot P((w_1, r_1)|i) \\cdot \\dots \\cdot P((w_N, r_N)|i)", "eq_num": "(7)" } ], "section": "Context-based Paraphrase Ranking", "sec_num": "4.3.1" }, { "text": "The probabilities can be calculated using maximum likelihood estimation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-based Paraphrase Ranking", "sec_num": "4.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(i) = \\frac{f(i)}{\\sum_k f(i_k)}", "eq_num": "(8)" } ], "section": "Context-based Paraphrase Ranking", "sec_num": "4.3.1" }, { "text": "P(w_n, r_n|i) = \\frac{f(w_n, r_n, i)}{f(i)} (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-based Paraphrase Ranking", "sec_num": "4.3.1" }, { "text": "where f(i) is the frequency of the interpretation irrespective of its arguments, \\sum_k f(i_k) is the number of times its part-of-speech class is attested in the corpus, and f(w_n, r_n, i) is the number of times the interpretation co-occurs with context word w_n in relation r_n. By performing appropriate substitutions into Equation (7) one obtains", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-based Paraphrase Ranking", "sec_num": "4.3.1" }, { "text": "P(i, (w_1, r_1), (w_2, r_2), \\dots, (w_N, r_N)) = \\frac{f(i)}{\\sum_k f(i_k)} \\cdot \\frac{f(w_1, r_1, i)}{f(i)} \\cdot \\dots \\cdot \\frac{f(w_N, r_N, i)}{f(i)} = \\frac{\\prod_{n=1}^{N} f(w_n, r_n, i)}{(f(i))^{N-1} \\cdot \\sum_k f(i_k)} (10)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-based Paraphrase Ranking", "sec_num": "4.3.1" }, { "text": "This model is then used to rank the possible replacements of the term used metaphorically in the fixed context according to the data. The parameters of the model were estimated from the RASP-parsed BNC using the grammatical relations output created by Andersen et al. (2008) .", "cite_spans": [ { "start": 252, "end": 274, "text": "Andersen et al. (2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Context-based Paraphrase Ranking", "sec_num": "4.3.1" }, { "text": "The context-based model described in Section 4.3.1 overgenerates and hence there is a need to further narrow down the results. It is acknowledged in the linguistics community that metaphor is, to a great extent, based on similarity between the concepts involved (Gentner et al. 2001) . We exploit this fact to refine paraphrasing. After obtaining the initial list of possible substitutes for the metaphorical term, the system filters out the terms whose meanings do not share any common properties with that of the metaphorical term. Consider the computer science metaphor \"kill a process,\" which stands for \"terminate a process.\" The basic sense of kill implies an end truth\"). As the task is to identify the literal interpretation, however, the system should remove these.", "cite_spans": [ { "start": 262, "end": 283, "text": "(Gentner et al. 2001)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "WordNet Filter.", "sec_num": "4.3.2" }, { "text": "One way of dealing with both problems simultaneously is to use selectional preferences of the verbs.
Verbs used metaphorically are likely to demonstrate semantic preference for the source domain, e.g., suppress would select for MOVEMENTS (political) rather than IDEAS, or TRUTH (the target domain), whereas the ones used literally for the target domain (e.g., conceal) would select for TRUTH. Selecting the verbs whose preferences the noun in the metaphorical expression matches best should allow filtering out non-literalness, as well as unrelated terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet Filter.", "sec_num": "4.3.2" }, { "text": "We automatically acquired selectional preference distributions of the verbs in the paraphrase lists (for verb-subject and verb-direct object relations) from the RASPparsed BNC. As in the identification experiment, we derived selectional preference classes by clustering the 2,000 most frequent nouns in the BNC into 200 clusters using Sun and algorithm. In order to quantify how well a particular argument class fits the verb, we adopted the selectional association measure proposed by Resnik (1993) , identical to the one we used within the selectional preference-based filter for metaphor identification, as described in Section 3.2.4. To remind the reader, selectional association is defined as follows:", "cite_spans": [ { "start": 486, "end": 499, "text": "Resnik (1993)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "WordNet Filter.", "sec_num": "4.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "A R (v, c) = 1 S R (v) P(c|v) log P(c|v) P(c)", "eq_num": "(11)" } ], "section": "WordNet Filter.", "sec_num": "4.3.2" }, { "text": "where P(c) is the prior probability of the noun class, P(c|v) is the posterior probability of the noun class given the verb, and S R is the overall selectional preference strength of the verb in the grammatical relation R. 
We use selectional association as a measure of semantic fitness (i.e., literalness) of the paraphrases. The paraphrases are re-ranked based on their selectional association with the noun in the context. Those paraphrases that are not well suited or used metaphorically are dispreferred within this ranking. The new ranking is shown in Table 8 . The expectation is that the paraphrase in the first rank (i.e., the verb with which the noun in the context has the highest association) represents a literal interpretation.", "cite_spans": [], "ref_spans": [ { "start": 558, "end": 565, "text": "Table 8", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "WordNet Filter.", "sec_num": "4.3.2" }, { "text": "As in the case of identification, the paraphrasing system was tested on verb-subject and verb-direct object metaphorical expressions. These were extracted from the manually annotated metaphor corpus of Shutova and Teufel (2010) , as described in Section 3.1.1. We compared the output of the final selectional-preference based system to that of the WordNet filter acting as a baseline. We evaluated the quality of paraphrasing with the help of human judges in two different experimental settings. The first setting involved direct judgments of system output by humans. In the second setting, the subjects did not have access to system output and had to provide their own literal paraphrases for the metaphorical expressions in the data set. The system was then evaluated against human judgments in Setting 1 and a paraphrasing gold standard created by merging annotations in Setting 2.", "cite_spans": [ { "start": 202, "end": 227, "text": "Shutova and Teufel (2010)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Discussion", "sec_num": "4.4" }, { "text": "Output. The subjects were presented with a set of sentences containing metaphorical expressions and the top-ranked paraphrases produced by the system and by the baseline, randomized. 
They were asked to mark as correct the paraphrases that have the same meaning as the term used metaphorically if they are used literally in the given context. Subjects Seven volunteers participated in the experiment. They were all native speakers of English (one bilingual) and had little or no linguistics expertise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setting 1: Direct Judgment of System", "sec_num": "4.4.1" }, { "text": "Interannotator agreement The reliability was measured at \u03ba = 0.62 (n = 2, N = 95, k = 7).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setting 1: Direct Judgment of System", "sec_num": "4.4.1" }, { "text": "System evaluation against judgments We then evaluated the system performance against the subjects' judgments in terms of Precision at Rank 1, P(1). Precision at Rank (1) measures the proportion of correct literal interpretations among the paraphrases in rank 1. The results are shown in Table 9 . The system identifies literal paraphrases with a P(1) = 0.81 and the baseline with a P(1) = 0.55. 
We then conducted a one-tailed Sign test (Siegel and Castellan 1988) that showed that this difference in performance is statistically significant (N = 15, x = 1, p < 0.001).", "cite_spans": [ { "start": 436, "end": 463, "text": "(Siegel and Castellan 1988)", "ref_id": "BIBREF94" } ], "ref_spans": [ { "start": 287, "end": 294, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Setting 1: Direct Judgment of System", "sec_num": "4.4.1" }, { "text": "System and baseline P(1) and MAP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 9", "sec_num": null }, { "text": "System P(1) Baseline P(1) System MAP Baseline MAP ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation", "sec_num": null }, { "text": "The subjects were presented with a set of sentences containing metaphorical expressions and asked to write down all suitable literal paraphrases for the highlighted metaphorical verbs that they could think of. Subjects Five volunteer subjects who were different from the ones used in the previous setting participated in this experiment. They were all native speakers of English and some of them had a linguistics background (postgraduate-level degree in English).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setting 2: Creation of a Paraphrasing Gold Standard.", "sec_num": "4.4.2" }, { "text": "Gold Standard The elicited paraphrases combined together can be interpreted as a gold standard. For instance, the gold standard for the phrase \"brushed aside the accusations\" consists of the verbs rejected, ignored, disregarded, dismissed, overlooked, and discarded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setting 2: Creation of a Paraphrasing Gold Standard.", "sec_num": "4.4.2" }, { "text": "System evaluation by gold standard comparison The system output was compared against the gold standard using mean average precision (MAP) as a measure. 
MAP is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setting 2: Creation of a Paraphrasing Gold Standard.", "sec_num": "4.4.2" }, { "text": "MAP = 1 M M j=1 1 N j N j i=1 P ji (12)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setting 2: Creation of a Paraphrasing Gold Standard.", "sec_num": "4.4.2" }, { "text": "where M is the number of metaphorical expressions, N j is the number of correct paraphrases for the metaphorical expression j, P ji is the precision at each correct paraphrase (the number of correct paraphrases among the top i ranks). First, average precision is estimated for individual metaphorical expressions, and then the mean is computed across the data set. This measure allows one to assess ranking quality beyond rank 1, as well as the recall of the system. As compared with the gold standard, MAP of the paraphrasing system is 0.62 and that of the baseline is 0.56, as shown in Table 9 .", "cite_spans": [], "ref_spans": [ { "start": 588, "end": 595, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Setting 2: Creation of a Paraphrasing Gold Standard.", "sec_num": "4.4.2" }, { "text": "Given that the metaphor paraphrasing task is open-ended, any gold standard elicited on the basis of it cannot be exhaustive. Some of the correct paraphrases may not occur to subjects during the experiment. As an example, for the phrase \"stir excitement\" most subjects suggested only one paraphrase \"create excitement,\" which is found in rank 3, suggesting an average precision of 0.33 for this phrase. The top ranks of the system output are occupied by provoke and stimulate, however, which are intuitively correct, more precise paraphrases, despite none of the subjects having thought of them. 
Such examples contribute to the fact that the system's MAP is significantly lower than its precision at rank 1, because a number of correct paraphrases proposed by the system are not included in the gold standard. The selectional preference-based re-ranking yields a considerable improvement in precision at rank 1 (26%) over the baseline. This component is also responsible for some errors of the system, however. One of the potential limitations of selectional preferencebased approaches to metaphor paraphrasing is the presence of verbs exhibiting weak selectional preferences. This means that these verbs are not strongly associated with any of their argument classes. As noted in Section 3, such verbs tend to be used literally, and are therefore suitable paraphrases. Our selectional preference model deemphasizes them, however, and, as a result, they are not selected as literal paraphrases despite matching the context. This type of error is exemplified by the phrase \"mend marriage.\" For this phrase, the system ranking overruns the correct top suggestion of the baseline, \"improve marriage,\" and outputs \"repair marriage\" as the most likely literal interpretation, although it is in fact a metaphorical use. This is likely to be due to the fact that improve exposes a moderate selectional preference strength. Table 10 provides frequencies of the common errors of the system by type. The most common type of error is triggered by the conventionality of certain metaphorical verbs. Because they frequently co-occur with the target noun class in the corpus, they receive a high association score with that noun class. This results in a high ranking of conventional metaphorical paraphrases. 
Examples of top-ranked metaphorical paraphrases include \"confront a question\" for \"tackle a question,\" \"repair marriage\" for \"mend marriage,\" \"example pictures\" for \"example illustrates.\"", "cite_spans": [], "ref_spans": [ { "start": 1915, "end": 1923, "text": "Table 10", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Discussion.", "sec_num": "4.4.3" }, { "text": "These errors concern non-literalness of the produced paraphrases. A less frequently occurring error was paraphrasing with a verb that has a different meaning. One such example was the metaphorical expression \"tension mounted,\" for which the system produced a paraphrase \"tension lifted,\" which has the opposite meaning. This error is likely to have been triggered by the WordNet filter, whereby one of the senses of lift would have a common hypernym with the metaphorical verb mount. This results in lift not being discarded by the filter, and subsequently ranked top due to the conventionality of the expression \"tension lifted.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion.", "sec_num": "4.4.3" }, { "text": "Another important issue that the paraphrase analysis brought to the foreground is the influence of wider context on metaphorical interpretation. The current system processes only the information contained within the GR of interest, discarding the rest of the context. For some cases, however, this is not sufficient and the analysis of a wider context is necessary. For instance, given the phrase \"scientists focus\" the system produces a paraphrase \"scientists think,\" rather than the more likely paraphrase \"scientists study.\" Such ambiguity of focus could potentially be resolved by taking its wider context into account. The context-based paraphrase ranking model described in Section 4.3.1 allows for the incorporation of multiple relations of the metaphorical verb in the sentence. 
Although the paraphrasing system uses hand-coded lexical knowledge from WordNet, it is important to note that metaphor paraphrasing is not restricted to metaphorical senses included in WordNet. Even if a metaphorical sense is absent from WordNet, the system can still identify its correct literal paraphrase relying on the hyponymy relation and similarity between concepts, as described in Section 4.3.2. For example, the metaphorical sense of handcuff in \"research is handcuffed\" is not included in Wordnet, although the system correctly identifies its paraphrase confine (\"research is confined\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion.", "sec_num": "4.4.3" }, { "text": "Up to now, the identification and the paraphrasing systems were evaluated individually as modules. To determine to which extent the presented systems are applicable within NLP, we then ran the two systems together in a pipeline and evaluated the accuracy of the resulting text-to-text metaphor processing. First, the metaphor identification system was applied to naturally occurring text taken from the BNC and then the metaphorical expressions identified in those texts were paraphrased by the paraphrasing system. Some of the expressions identified and paraphrased by the integrated system are shown in Figure 11 . The system output was compared against human judgments in two phases. In phase 1, a small sample of sentences containing metaphors identified and paraphrased by the system was judged by multiple judges. In phase 2, a larger sample of phrases was judged by only one judge (one of the authors of this article). Agreement of the judgments of the latter with the other judges was measured on the data from phase 1. Because our goal was to evaluate both the accuracy of the integrated system and its usability by other NLP tasks, we assessed its performance in a two-fold fashion. 
Instances where metaphors were both correctly identified and paraphrased by the system were considered strictly correct, as they show that the system fully achieved CKM 391 Time and time again he would stare at the ground, hand on hip, if he thought he had received a bad call, and then swallow his anger and play tennis. CKM 391 Time and time again he would stare at the ground, hand on hip, if he thought he had received a bad call, and then suppress his anger and play tennis. AD9 3205 He tried to disguise the anxiety he felt when he found the comms system down, but Tammuz was nearly hysterical by this stage. AD9 3205 He tried to hide the anxiety he felt when he found the comms system down, but Tammuz was nearly hysterical by this stage.", "cite_spans": [], "ref_spans": [ { "start": 605, "end": 614, "text": "Figure 11", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Evaluation of Integrated System", "sec_num": "5." }, { "text": "AMA 349 We will halt the reduction in NHS services for long-term care and community health services which support elderly and disabled patients at home. AMA 349 We will prevent the reduction in NHS services for long-term care and community health services which support elderly and disabled patients at home.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Integrated System", "sec_num": "5." }, { "text": "J7F 77 An economist would frame this question in terms of a cost-benefit analysis: the maximization of returns for the minimum amount of effort injected. J7F 77 An economist would phrase this question in terms of a cost-benefit analysis: the maximization of returns for the minimum amount of effort injected. EEC 1362 In it, Younger stressed the need for additional alternatives to custodial sentences, which had been implicit in the decision to ask the Council to undertake the enquiry. 
EEC 1362 In it, Younger stressed the need for additional alternatives to custodial sentences, which had been implicit in the decision to ask the Council to initiate the enquiry.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Integrated System", "sec_num": "5." }, { "text": "A1F 24 Moreover, Mr Kinnock brushed aside the suggestion that he needed a big idea or unique selling point to challenge the appeal of Thatcherism. A1F 24 Moreover, Mr Kinnock dismissed the suggestion that he needed a big idea or unique selling point to challenge the appeal of Thatcherism.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Integrated System", "sec_num": "5." }, { "text": "Metaphors identified (first sentences) and paraphrased (second sentences) by the system. its goals. Instances where the paraphrasing retained the meaning and resulted in a literal paraphrase (including the cases where the identification module tagged a literal expression as a metaphor) were considered correct lenient. The intuition behind this evaluation setting is that correct paraphrasing of literal expressions by other literal expressions, albeit not demonstrating the positive contribution of metaphor processing, does not lead to any errors in system output and thus does not hamper the overall usability of the integrated system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 11", "sec_num": null }, { "text": "Three volunteer subjects participated in the experiment. They were all native speakers of English and had no formal training in linguistics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 1: Small Sample, Multiple Judges", "sec_num": "5.1" }, { "text": "Materials and task Subjects were presented with a set of sentences containing metaphorical expressions identified by the system and their paraphrases, as shown in Figure 12 . There were 35 such sentences in the sample. 
They were asked to do the following:", "cite_spans": [], "ref_spans": [ { "start": 163, "end": 172, "text": "Figure 12", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Phase 1: Small Sample, Multiple Judges", "sec_num": "5.1" }, { "text": "1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 1: Small Sample, Multiple Judges", "sec_num": "5.1" }, { "text": "Compare the sentences, decide whether the highlighted expressions have the same meaning, and record this in the box provided;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 1: Small Sample, Multiple Judges", "sec_num": "5.1" }, { "text": "2. Decide whether the verbs in both sentences are used metaphorically or literally and tick the respective boxes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 1: Small Sample, Multiple Judges", "sec_num": "5.1" }, { "text": "For the second task, the same definition of metaphor as in the identification evaluation (cf. Section 3.3.2) was provided for guidance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 1: Small Sample, Multiple Judges", "sec_num": "5.1" }, { "text": "Interannotator agreement The reliability of annotations was evaluated independently for judgments on similarity of paraphrases and their literalness. The interannotator agreement on the task of distinguishing metaphoricity from literalness was measured at \u03ba = 0.53 (n = 2, N = 70, k = 3). 
On the paraphrase (i.e., meaning retention) task, reliability was measured at \u03ba = 0.63 (n = 2, N = 35, k = 3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 1: Small Sample, Multiple Judges", "sec_num": "5.1" }, { "text": "System performance We then evaluated the integrated system performance against the subjects' judgments in terms of accuracy (both strictly correct and correct", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 1: Small Sample, Multiple Judges", "sec_num": "5.1" }, { "text": "Evaluation of metaphor identification and paraphrasing. lenient). Strictly correct accuracy in this task measures the proportion of metaphors both identified and paraphrased correctly in the given set of sentences. Correct lenient accuracy, which demonstrates applicability of the system, is represented by the overall proportion of paraphrases that retained their meaning and resulted in a literal paraphrase (i.e., including literal paraphrasing of literal expressions in original sentences). Human judgments were merged into a majority gold standard, which consists of those instances that were considered correct (i.e., identified metaphor correctly paraphrased by the system) by at least two judges. Compared to this majority gold standard, the integrated system operates with a strictly correct accuracy of 0.66 and correct lenient accuracy of 0.71. The average human agreement with the majority gold standard in terms of accuracy is 0.80 on the literalness judgments and 0.89 on the meaning retention judgments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 12", "sec_num": null }, { "text": "The system was also evaluated on a larger sample of automatically annotated metaphorical expressions (600 sentences) using one person's judgments produced following the procedure from phase 1. We measured how far these judgments agree with the judges used in phase 1. 
The agreement on meaning retention was measured at \u03ba = 0.59 (n = 2, N = 35, k = 4) and that on the literalness of paraphrases at \u03ba = 0.54 (n = 2, N = 70, k = 4). On this larger data set, the system achieved an accuracy of 0.54 (strictly correct) and 0.67 (correct lenient). The proportions of different tagging cases are shown in Table 11 . The table also shows the acceptability of tagging cases. Acceptability indicates whether or not this type of system paraphrasing would cause an error when hypothetically integrated with an external NLP application. Cases where the system produces correct literal paraphrases for metaphorical expressions identified in the text would benefit another NLP application, whereas cases where literal expressions are correctly paraphrased by other literal expressions are considered neutral. Both such cases are deemed acceptable, because they increase or preserve literalness of the text. All other tagging cases introduce errors, thus they are marked as unacceptable. 
Examples of different tagging cases are shown in Table 12 .", "cite_spans": [], "ref_spans": [ { "start": 598, "end": 606, "text": "Table 11", "ref_id": "TABREF0" }, { "start": 1321, "end": 1329, "text": "Table 12", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Phase 2: Larger Sample, One Judge", "sec_num": "5.2" }, { "text": "The accuracy of metaphor-to-literal paraphrasing (0.54) indicates the level of informative contribution of the system, and the overall accuracy of correct paraphrasing resulting in a literal expression (0.67) represents the level of its acceptability within NLP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 2: Larger Sample, One Judge", "sec_num": "5.2" }, { "text": "The results of integrated system evaluation suggest that the system is capable of providing useful information about metaphor for an external text processing application Table 12 Examples of different tagging cases.", "cite_spans": [], "ref_spans": [ { "start": 170, "end": 178, "text": "Table 12", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Discussion and Error Analysis", "sec_num": "5.3" }, { "text": "Correct paraphrase: met \u2192 lit throw an idea \u2192 express an idea Correct paraphrase: lit \u2192 lit adopt a recommendation \u2192 accept a recommendation Correct paraphrase: lit \u2192 met arouse memory \u2192 awaken memory Correct paraphrase: met \u2192 met work killed him \u2192 work exhausted him with a reasonable accuracy (0.67). It may, however, also introduce errors in the text by incorrect paraphrasing, as well as by producing metaphorical paraphrases. If the latter errors are rare (0.5%), the errors of the former type are sufficiently frequent (21.5%) to make the metaphor system less desirable for use in NLP. It is therefore important to address such errors. Table 13 shows the contribution of the individual system components to the overall error. The identification system tags 28% of all instances incorrectly (170). 
This yields a component performance of 72%. This result is slightly lower than that obtained in its individual evaluation in a setting with multiple judges (79%). This can be explained by the fact that the integrated system was evaluated by one judge only, rather than using a majority gold standard. When compared with the judgments of each annotator pairwise the system precision was measured at 74% (cf. Section 3.3.2). Some of the literal instances tagged as a metaphor by the identification component are then correctly paraphrased with a literal expression by the paraphrasing component. Such cases do not change the meaning of the text, and hence are considered acceptable. The resulting contribution of the identification component to the overall error of the integrated system is thus 15%.", "cite_spans": [], "ref_spans": [ { "start": 642, "end": 650, "text": "Table 13", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Tagging case Examples", "sec_num": null }, { "text": "As Table 13 shows, the paraphrasing component failed in 32% of all cases (196 instances out of 600 were paraphrased incorrectly). As mentioned previously, this error can be further split into paraphrasing without meaning retention (21.5%) and metaphorical paraphrasing (11%). Both of these error types are unacceptable and lead to lower performance of the integrated system. This error rate is also higher than that of the paraphrasing system when evaluated individually on a manually created data set (19%). The reasons for incorrect paraphrasing by the integrated system are manifold Table 13 System errors by component. Three categories are cases where the identification model incorrectly tagged a literal expression as metaphoric (false negatives from this module were not measured). 
The remaining two categories are for paraphrase errors on correctly identified metaphors.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Table 13", "ref_id": "TABREF0" }, { "start": 586, "end": 594, "text": "Table 13", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Tagging case Examples", "sec_num": null }, { "text": "Identification Paraphrasing and concern both the metaphor identification and paraphrasing components. One of the central problems stems from the initial tagging of literal expressions as metaphorical by the identification system. The paraphrasing system is not designed with literal-to-literal paraphrasing in mind. When it receives literal expressions which have been incorrectly identified as input, it searches for a more literal paraphrase for them. Not all literally used words have suitable substitutes in the given context, however. For instance, the literal expression \"approve conclusion\" is incorrectly paraphrased as \"evaluate conclusion.\" Similar errors occur when metaphorical expressions do not have any single-word literal paraphrases, for example, \"country functions according to...\". This is, however, a more fundamental problem for metaphor paraphrasing as a task. In such cases, the system, nonetheless, attempts to produce a substitute with approximately the same meaning, which often leads to either metaphorical or incorrect paraphrasing. For instance, \"country functions\" is paraphrased by \"country runs,\" with suggestions with lower rank being \"country works\" and \"country operates.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Type of error", "sec_num": null }, { "text": "Some errors that occur at the paraphrasing level are also due to the general word sense ambiguity of certain verbs or nouns. 
Consider the following paraphrasing example, where Example (24a) shows an automatically identified metaphor and Example (24b) its system-derived paraphrase: This error results from the fact that the verb succeed has a high selectional preference for life in one of its senses (\"attain success or reach a desired goal\") and is similar to follow in WordNet in another of its senses (\"be the successor [of]\"). The system merges these two senses in one, resulting in an incorrect paraphrase. One automatically identified example exhibited interaction of metaphor with metonymy at the interpretation level. In the phrase \"break word,\" the verb break is used metaphorically (although conventionally) and the noun word is a metonym standing for promise. This affected paraphrasing in that the system searched for verbs denoting actions that could be done with words, rather than promises, and suggested the paraphrase \"interrupt word(s).\" This paraphrase is interpretable in the context of a person giving a speech, but not in the context of a person giving a promise. This was the only case of metonymy in the analyzed data, however.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Type of error", "sec_num": null }, { "text": "Another issue that the evaluation on a larger data set revealed is the limitations of the WordNet filter used in the paraphrasing system. Despite being a wide-coverage general-domain database, WordNet does not include information about all possible relations that exist between particular word senses. This means that some of the correct paraphrases suggested by the context-based model get discarded by the WordNet filter due to missing information in WordNet. For instance, the system produces no paraphrase for the metaphors \"hurl comment,\" \"spark enthusiasm,\" and \"magnify thought\" that it correctly identified. 
This problem motivates the exploration of possible WordNetfree solutions for similarity detection in the metaphor paraphrasing task. The system could either rely entirely on such a solution, or back off to it in cases when the WordNetbased system fails. Table 14 provides a summary of system errors by type. The most common errors are caused by metaphor conventionality resulting in metaphorical paraphrasing (e.g., \"swallow anger \u2192 suppress anger,\" \"work killed him \u2192 work exhausted him\"), followed by the WordNet filter-and general polysemy-related errors (e.g. \"follow lives \u2192 succeed lives\"), resulting in incorrect paraphrasing or the system not producing any paraphrase at all. Metaphor paraphrasing by another conventional metaphor instead of a literal expression is undesirable, although it may still be useful if the paraphrases are more lexicalized than the original expression. The word sense ambiguity-and WordNet-based errors are more problematic, however, and need to be addressed in the future. SP re-ranking is responsible for the majority of incorrect paraphrasing of literal expressions. This may be due to the fact that the model is ignorant of the meaning retention aspect, but rather favors the paraphrases that are used literally (albeit incorrectly) in the given context. This shows that when building an integrated system, it is necessary to adapt the metaphor paraphrasing module to be able to also handle literal expressions, because the identification module is likely to produce at least some of them.", "cite_spans": [], "ref_spans": [ { "start": 870, "end": 878, "text": "Table 14", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Type of error", "sec_num": null }, { "text": "It is hard to directly compare the performance of the presented system to the other recent approaches to metaphor, because all of these approaches assume different task definitions, and hence use data sets and evaluation techniques of their own. 
Among the data-driven methods, however, the closest in nature to ours is Mason's (2004) CorMet system. Mason's system does not perform metaphor interpretation or identification of metaphorical expressions in text, but rather focuses on the detection of metaphorical links between distant domains. Our system also involves such detection. Whereas Mason relies on domain-specific selectional preferences for this purpose, however, our system uses information about verb subcategorization, as well as general selectional preferences, to perform distributional clustering of verbs and nouns and then link the clusters based on metaphorical seeds. Another fundamental difference is that whereas CorMet assigns explicit domain labels, our system models source and target domains implicitly. In the evaluation of the CorMet system, the acquired metaphorical mappings are compared to those in the manually created Master Metaphor List, demonstrating an accuracy of 77%. In our system, by contrast, metaphor acquisition is evaluated via extraction of naturally occurring metaphorical expressions, achieving a precision of 79%. In order to compare the new mapping acquisition ability of our system to that of CorMet, however, we performed an additional analysis of the mappings hypothesized by our noun clusters in relation to those in the MML. It was not possible to compare the new mappings discovered by our system to the MML directly as was done in Mason's experiments, because in our approach source domains are represented by clusters of their characteristic verbs. The analysis of the noun clusters with respect to expansion of the seed mappings taken from the MML, however, allowed us to evaluate the mapping acquisition by our system in terms of both precision and recall. 
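The seed-based linking of verb and noun clusters described above can be sketched in a few lines. In the toy fragment below, a seed pair such as "stir excitement" links the verb cluster containing "stir" to the noun cluster containing "excitement", and every cross-pair of the two linked clusters becomes a hypothesized metaphorical expression; the clusters and the seed are invented for illustration, not the system's actual output.

```python
# Toy sketch: expanding metaphorical seed pairs over distributional clusters.
# The clusters below are invented fragments for illustration only.
verb_clusters = [["stir", "swallow", "suppress"], ["follow", "chase"]]
noun_clusters = [["excitement", "anger", "fear"], ["life", "career"]]
seeds = [("stir", "excitement")]  # known metaphorical verb-object pairs

def expand_seeds(seeds, verb_clusters, noun_clusters):
    hypothesized = set()
    for verb, noun in seeds:
        for vc in verb_clusters:
            for nc in noun_clusters:
                if verb in vc and noun in nc:
                    # link the two clusters: all cross-pairs become candidates
                    hypothesized |= {(v, n) for v in vc for n in nc}
    return hypothesized

candidates = expand_seeds(seeds, verb_clusters, noun_clusters)
print(("swallow", "anger") in candidates)  # True: hypothesized metaphorical
```

Pairs outside the linked clusters (e.g., "follow life" here) are not hypothesized unless some seed links their clusters.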
The goal was to confirm our hypothesis that abstract concepts get clustered together if they are associated with the same source domain and to evaluate the quality of the newly acquired mappings.", "cite_spans": [ { "start": 319, "end": 333, "text": "Mason's (2004)", "ref_id": "BIBREF63" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison to the CorMet System", "sec_num": "5.4" }, { "text": "To do this, we randomly selected 10 target domain categories described in the MML and manually extracted all corresponding mappings (42 mappings in total). The categories included SOCIETY, IDEA, LIFE, OPPORTUNITY, CHANGE, LOVE, DIFFICULTY, CREATION, RELATIONSHIP, and COMPETITION. For the concept of OPPORTUNITY, for example, three mappings were present in the MML: OPPORTUNITIES ARE PHYSICAL OBJECTS, OPPORTUNITIES ARE MOVING ENTITIES, and OPPORTUNITIES ARE OPEN PATHS, whereas for the concept of COMPETITION the list describes only two mappings: COMPETITION IS A RACE, COMPETITION IS A WAR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to the CorMet System", "sec_num": "5.4" }, { "text": "We then extracted the system-produced clusters containing the selected 10 target concepts. Examples of the mappings and the corresponding clusters are shown in Figure 13. Our goal was to verify whether other concepts in the cluster containing the target concept are associated with the source domains given in the mappings. Each member of these clusters was analyzed for possible association with the respective source domains. For each concept in a cluster, we verified that it is associated with the respective source domain by finding a corresponding metaphorical expression and annotating the concepts accordingly. The degree of association of the members of the clusters with a given source domain was evaluated in terms of precision on the set of hypothesized mappings. 
The precision of the cluster's association with the source concept was calculated as the proportion of the associated concepts in it. [Figure 13. Examples of the conceptual mappings and the corresponding clusters. Conceptual mapping: RELATIONSHIP IS A MECHANISM (VEHICLE). Cluster: consensus relation tradition partnership resistance foundation alliance friendship contact reserve unity link peace bond myth identity hierarchy relationship connection balance marriage democracy defense faith empire distinction coalition regime division. Conceptual mapping: LIFE IS A JOURNEY. Cluster: politics practice trading reading occupation profession sport pursuit affair career thinking life. Conceptual mapping: SOCIETY is a (HUMAN) BODY. Cluster: class population nation state country family generation trade profession household kingdom business industry economy market enterprise world community institution society sector. Conceptual mapping: DIFFICULTY is DIFFICULTY IN MOVING (OBSTACLE); PHYSICAL HARDNESS. Cluster: threat crisis risk problem poverty obstacle dilemma challenge prospect danger discrimination barrier difficulty shortage. Conceptual mapping: OPPORTUNITY is a PHYSICAL OBJECT; MOVING ENTITY; OPEN PATH. Cluster: incentive attraction scope remedy chance choice solution option perspective range possibility contrast opportunity selection alternative focus.] 
Based on these results we computed the average precision (AP) as follows:", "cite_spans": [], "ref_spans": [ { "start": 160, "end": 169, "text": "Figure 13", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Comparison to the CorMet System", "sec_num": "5.4" }, { "text": "AP = \\frac{1}{M} \\sum_{j=1}^{M} \\frac{\\#\\text{associated concepts in cluster } c_j}{|c_j|} (13) where M is the number of hypothesized mappings and c_j is the cluster of target concepts corresponding to mapping j.", "cite_spans": [ { "start": 43, "end": 47, "text": "(13)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Comparison to the CorMet System", "sec_num": "5.4" }, { "text": "The annotation was carried out by one of the authors and its average precision is 0.82. This confirms the hypothesis of clustering by association and shows that our method compares favorably to Mason's system. This is only an approximate comparison, however. Direct comparison of metaphor acquisition by the two systems was not possible, as they produce the output in different formats and, as mentioned earlier, our system models conceptual mappings implicitly, both within the noun clusters and by linking them to the verb clusters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to the CorMet System", "sec_num": "5.4" }, { "text": "We then additionally evaluated the recall of mapping acquisition by our system against the MML. For each selected MML mapping, we manually extracted from the MML all alternative target concepts associated with the source domain in the mapping. For example, in the case of LIFE IS A JOURNEY we identified all target concepts associated with JOURNEY according to the MML and extracted them. These included LIFE, CAREER, LOVE, and CHANGE. We then verified whether the relevant system-produced noun clusters contained these concepts. 
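The average-precision computation in Equation (13) can be sketched as follows; the clusters, the set of associated concepts, and the mapping labels in the comments are invented for illustration.

```python
# Sketch of the average-precision computation in Equation (13).
# Each hypothesized mapping j has a cluster of target concepts c_j;
# the annotation marks which cluster members are associated with the
# source domain. The data below are invented for illustration.

def average_precision(clusters, associated):
    """clusters: list of concept lists, one per hypothesized mapping;
    associated: set of concepts judged associated with the source domain."""
    per_cluster = [
        sum(1 for concept in cluster if concept in associated) / len(cluster)
        for cluster in clusters
    ]
    return sum(per_cluster) / len(per_cluster)  # mean over the M mappings

clusters = [["life", "career", "politics", "sport"],      # LIFE IS A JOURNEY
            ["threat", "obstacle", "barrier", "crisis"]]  # DIFFICULTY IS AN OBSTACLE
associated = {"life", "career", "politics", "threat", "obstacle", "barrier"}
print(average_precision(clusters, associated))  # (3/4 + 3/4) / 2 = 0.75
```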
The recall was then calculated as the proportion of the concepts in this list that were found within a single cluster. For example, the concepts LIFE and CAREER are found in the same cluster, but LOVE and CHANGE are not. The overall recall of mapping acquisition was measured at 0.50.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to the CorMet System", "sec_num": "5.4" }, { "text": "These results show that the system is able to discover a large number of metaphorical connections in the data with high precision. Although the evaluation against the Master Metaphor List is subjective, it suggests that the use of statistical data-driven methods in general, and distributional clustering in particular, is a promising direction for computational modeling of metaphor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to the CorMet System", "sec_num": "5.4" }, { "text": "The 1980s and 1990s provided us with a wealth of ideas on the structure and mechanisms of metaphor. The computational approaches formulated back then are still highly influential, although their use of task-specific hand-coded knowledge is becoming increasingly less popular. The last decade witnessed a significant technological leap in natural language computation, whereby manually crafted rules gradually gave way to more robust corpus-based statistical methods. This is also the case for metaphor research. In this article, we presented the first integrated statistical system for metaphor processing in unrestricted text. Our method is distinguished from previous work in that it does not rely on any metaphor-specific hand-coded knowledge (besides the seed set in the identification experiments), operates on open-domain text, and produces interpretations in textual format. The system, consisting of independent metaphor identification and paraphrasing modules, operates with a high precision (0.79 for identification, 0.81 for paraphrasing, and 0.67 as an integrated system). 
Although the system has been tested only on verb-subject and verb-object metaphors at this stage, the described identification and paraphrasing methods should be similarly applicable to a wider range of syntactic constructions. This expectation rests on the fact that both distributional clustering and selectional preference induction techniques have been shown to model the meanings of a range of word classes (Hatzivassiloglou and McKeown 1993; Boleda Torrent and Alonso i Alemany 2003; Brockmann and Lapata 2003; Zapirain, Agirre, and M\u00e0rquez 2009). Extending the system to deal with metaphors represented by other word classes and constructions as well as multi-word metaphors is part of future work.", "cite_spans": [ { "start": 1497, "end": 1532, "text": "(Hatzivassiloglou and McKeown 1993;", "ref_id": "BIBREF36" }, { "start": 1533, "end": 1574, "text": "Boleda Torrent and Alonso i Alemany 2003;", "ref_id": "BIBREF10" }, { "start": 1575, "end": 1601, "text": "Brockmann and Lapata 2003;", "ref_id": "BIBREF12" }, { "start": 1602, "end": 1637, "text": "Zapirain, Agirre, and M\u00e0rquez 2009)", "ref_id": "BIBREF102" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "6." }, { "text": "Such an extension of the identification system would require the creation of a seed set exemplifying more syntactic constructions and the corpus search over further grammatical relations (e.g., verb-prepositional phrase [PP] complement relations: \"Hillary leapt in the conversation;\" adjectival modifier-noun relations: \"slippery mind, deep unease, heavy loss;\" noun-PP complement relations: \"a fraction of self-control, a foot of a mountain;\" verb-VP complement relations: \"aching to begin the day;\" and copula constructions: \"Death is the sorry end of the human story, not a mysterious prelude to a new one\"). Besides noun and verb clustering, it would also be necessary to perform clustering of adjectives and adverbs. 
Clusters of verbs, adjectives, adverbs, and concrete nouns would then represent source domains within the model. The data study of Shutova and Teufel (2010) suggested that it is sometimes difficult to choose the optimal level of abstraction of domain categories that would generalize well over the data. Although the system does not explicitly assign any domain labels, its domain representation is still restricted by the fixed level of generality of source concepts, defined by the chosen cluster granularity. To relax this constraint, one could attempt to automatically optimize cluster granularity to fit the data more accurately and to ensure that the generated clusters explain the metaphorical expressions in the data more comprehensively. A hierarchical clustering algorithm, such as that of Yu, Yu, and Tresp (2006), could be used for this purpose. Besides this, it would be desirable to be able to generalize metaphorical associations learned from one type of syntactic construction across all syntactic constructions, without providing explicit seed examples for the latter. For instance, given the seed phrase \"stir excitement,\" representing the conceptual mapping FEELINGS ARE LIQUIDS, the system should be able to discover not only that phrases such as \"swallow anger\" are metaphorical, but that phrases such as \"ocean of happiness\" are as well.", "cite_spans": [ { "start": 852, "end": 877, "text": "Shutova and Teufel (2010)", "ref_id": null }, { "start": 1521, "end": 1545, "text": "Yu, Yu, and Tresp (2006)", "ref_id": "BIBREF101" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "6." }, { "text": "The extension of the paraphrasing system to other syntactic constructions would involve the extraction of further grammatical relations from the corpus, such as those listed herein, and their incorporation into the context-based paraphrase selection model. 
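One simple way to let cluster granularity adapt to the data, as suggested above, is to stop agglomerative merging at a similarity threshold instead of fixing the number of clusters in advance. The sketch below uses average-link agglomeration over cosine similarity on invented co-occurrence vectors; it illustrates the idea of threshold-controlled granularity and is not the algorithm of Yu, Yu, and Tresp (2006).

```python
# Toy sketch of threshold-based agglomerative clustering: merging stops when
# no pair of clusters is similar enough, so the threshold (not a fixed number
# of clusters) determines the level of generality of the induced "domains".
# The vectors and the threshold below are invented for illustration.
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def agglomerate(vectors, threshold):
    clusters = [[w] for w in vectors]          # start with singletons
    while len(clusters) > 1:
        best, pair = -1.0, None                # most similar cluster pair
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                sim = sum(cosine(vectors[a], vectors[b])
                          for a in clusters[i] for b in clusters[j])
                sim /= len(clusters[i]) * len(clusters[j])  # average link
                if sim > best:
                    best, pair = sim, (i, j)
        if best < threshold:                   # granularity control
            break
        i, j = pair
        clusters[i] += clusters.pop(j)
    return clusters

vectors = {"life": [3, 1, 0], "career": [2, 1, 0], "anger": [0, 2, 3]}
print(agglomerate(vectors, threshold=0.8))  # [['life', 'career'], ['anger']]
```

Raising the threshold yields many fine-grained clusters; lowering it yields fewer, more general ones.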
Extending both the identification system and the paraphrasing system would require the application of the selectional preference model to other word classes. Although Resnik's selectional association measure has been used to model selectional preferences of verbs for their nominal arguments, it is in principle a generalizable measure of word association. Information-theoretic word association measures (e.g., mutual information [Church and Hanks 1990]) have repeatedly been applied successfully to a range of syntactic constructions in a number of NLP tasks (Hoang, Kim, and Kan 2009; Baldwin and Kim 2010). This suggests that applying a distributional association measure, such as the one proposed by Resnik, to other part-of-speech classes should still result in a realistic model of semantic fitness, which in our terms corresponds to a measure of \"literalness\" of the paraphrases.", "cite_spans": [ { "start": 700, "end": 711, "text": "Hanks 1990]", "ref_id": null }, { "start": 821, "end": 847, "text": "(Hoang, Kim, and Kan 2009;", "ref_id": "BIBREF38" }, { "start": 848, "end": 869, "text": "Baldwin and Kim 2010)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "6." }, { "text": "In addition, the selectional preference model can be improved by using an SP acquisition algorithm that can handle word sense ambiguity (e.g., Rooth et al. 1999; \u00d3 S\u00e9aghdha 2010; Reisinger and Mooney 2010). The current approach relies on SP classes produced by hard clustering and fails to accurately model word senses of generally polysemous words. This resulted in a number of errors in metaphor paraphrasing and therefore needs to be addressed in the future.", "cite_spans": [ { "start": 143, "end": 160, "text": "Rooth et al. 1999", "ref_id": "BIBREF87" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "6." 
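The information-theoretic association scores discussed above can be illustrated with pointwise mutual information (Church and Hanks 1990); the verb-object co-occurrence counts below are invented for a hypothetical corpus, and serve only to show how such a score could act as a proxy for the "literalness" of a paraphrase.

```python
# Sketch of a pointwise-mutual-information association score between a verb
# and an argument noun. All counts below are invented; in this setting a
# higher score would indicate a stronger (more "literal") association.
from math import log2

def pmi(pair_count, w1_count, w2_count, total_pairs):
    """PMI(w1, w2) = log2( p(w1, w2) / (p(w1) * p(w2)) )."""
    p_joint = pair_count / total_pairs
    p1, p2 = w1_count / total_pairs, w2_count / total_pairs
    return log2(p_joint / (p1 * p2))

total = 1_000_000  # hypothetical number of verb-object pairs in the corpus
# a frequent, strongly associated pair:
print(pmi(pair_count=120, w1_count=4_000, w2_count=3_000, total_pairs=total))
# a pair seen far less often than chance predicts (negative PMI):
print(pmi(pair_count=2, w1_count=4_000, w2_count=3_000, total_pairs=total))
```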
}, { "text": "The current version of the metaphor paraphrasing system still relies on some hand-coded knowledge in the form of WordNet. WordNet has been criticized for a lack of consistency, high granularity of senses, and neglect of some important semantic relations (Lenat, Miller, and Yokoi 1995). In addition, WordNet is a general-domain resource, which is less suitable if one wanted to apply the system to domain-specific data. For all of these reasons it would be preferable to develop a WordNet-free, fully automated approach to metaphor resolution. Vector space models of word meaning (Erk 2009; Rudolph and Giesbrecht 2010; Van de Cruys, Poibeau, and Korhonen 2011) might provide a solution, as they have proved efficient in general paraphrasing and lexical substitution settings (Erk and Pad\u00f3 2009). The feature similarity component of the paraphrasing system that is currently based on WordNet could be replaced by such a model.", "cite_spans": [ { "start": 269, "end": 300, "text": "(Lenat, Miller, and Yokoi 1995)", "ref_id": "BIBREF54" }, { "start": 593, "end": 603, "text": "(Erk 2009;", "ref_id": "BIBREF23" }, { "start": 604, "end": 632, "text": "Rudolph and Giesbrecht 2010;", "ref_id": "BIBREF88" }, { "start": 633, "end": 674, "text": "Van de Cruys, Poibeau, and Korhonen 2011)", "ref_id": "BIBREF97" }, { "start": 789, "end": 808, "text": "(Erk and Pad\u00f3 2009)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "6." }, { "text": "Another crucial problem that needs to be addressed is the coverage of the identification system. To enable high usability of the system, it is necessary to perform high-recall processing. One way to improve the coverage is the creation of a larger, more diverse seed set. 
Although it is hardly possible to describe the whole variety of metaphorical language, it is possible to compile a set representative of (1) the most common source-target domain mappings and (2) all types of syntactic constructions that exhibit metaphoricity. The existing metaphor resources, primarily the Master Metaphor List (Lakoff, Espenson, and Schwartz 1991), and examples from the linguistic literature on metaphor, could be a sensible starting point on the route to such a data set. Having a diverse seed set should enable the identification system to attain a broad coverage of the corpus.", "cite_spans": [ { "start": 599, "end": 636, "text": "(Lakoff, Espenson, and Schwartz 1991)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "6." }, { "text": "The proposed text-to-text representation of metaphor processing is directly transferable to other NLP tasks and applications that could benefit from the inclusion of a metaphor processing component. Overall, our results suggest that the system can provide useful and accurate information about metaphor to other NLP tasks relying on lexical semantics. In order to prove its usefulness for external applications, however, an extrinsic task-based evaluation is still outstanding. In the future, we intend to integrate metaphor processing with NLP applications, exemplified by MT and opinion mining, in order to demonstrate the contribution of this pervasive yet rarely addressed phenomenon to natural language semantics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "6." }, { "text": "http://translate.google.com/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://wordnet.princeton.edu/. 3 EuroWordNet is a multilingual database containing WordNets for several European languages (Dutch, Italian, Spanish, German, French, Czech, and Estonian). 
The WordNets are structured in the same way as the Princeton WordNet for English. URL: http://www.illc.uva.nl/EuroWordNet/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank the volunteer annotators for their help in the evaluations, as well as the Cambridge Overseas Trust (UK), the EU FP-7 PANACEA project, and the Royal Society (UK), who funded our work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "The list of paraphrases with the initial ranking (correct paraphrases are underlined). or termination of life. Thus termination is the shared element of the metaphorical verb and its literal interpretation. Such an overlap of properties can be identified using the hyponymy relations in the WordNet taxonomy. Within the initial list of paraphrases, the system selects the terms that are hypernyms of the metaphorical term, or share a common hypernym with it. To maximize the accuracy, we restrict the hypernym search to a depth of three levels in the taxonomy. Table 7 shows the filtered lists of paraphrases for some of the test phrases, together with their log-likelihood. 
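The shared-hypernym filter described above admits a compact sketch. The miniature taxonomy below is hand-coded for illustration (the actual system consults WordNet); a candidate paraphrase passes if it is a hypernym of the metaphorical verb or shares a common hypernym with it within three levels.

```python
# Sketch of the shared-hypernym filter: a candidate passes if it is a
# hypernym of the metaphorical verb, or if the two share a common hypernym
# within `depth` levels of the taxonomy. The tiny taxonomy below is an
# invented fragment; the real system uses the WordNet hierarchy.

TAXONOMY = {  # word -> list of direct hypernyms (invented fragment)
    "suppress": ["restrain"],
    "swallow": ["consume", "restrain"],
    "restrain": ["control"],
    "consume": ["ingest"],
    "control": [],
    "ingest": [],
}

def hypernyms_within(word, depth, taxonomy):
    """All hypernyms reachable from `word` in at most `depth` steps."""
    frontier, seen = {word}, set()
    for _ in range(depth):
        frontier = {h for w in frontier for h in taxonomy.get(w, [])}
        seen |= frontier
    return seen

def passes_filter(candidate, metaphor, taxonomy, depth=3):
    cand_h = hypernyms_within(candidate, depth, taxonomy)
    meta_h = hypernyms_within(metaphor, depth, taxonomy)
    return candidate in meta_h or bool(cand_h & meta_h)

# "suppress" shares the hypernym "restrain" with the metaphorical "swallow":
print(passes_filter("suppress", "swallow", TAXONOMY))  # True
print(passes_filter("control", "consume", TAXONOMY))   # False: no shared hypernym
```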
Selecting the highest ranked paraphrase from this list as a literal interpretation will serve as a baseline.", "cite_spans": [], "ref_spans": [ { "start": 561, "end": 568, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Table 7", "sec_num": null }, { "text": "Verb", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-likelihood Replacement", "sec_num": null }, { "text": "The lists which were generated contain some irrelevant paraphrases (e.g., \"contain the truth\" for \"hold back the truth\") and some paraphrases where the substitute itself is metaphorically used (e.g., \"suppress the", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Re-ranking Based on Selectional Preferences.", "sec_num": "4.3.3" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Fully unsupervised core-adjunct argument classification", "authors": [ { "first": "Omri", "middle": [], "last": "Abend", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "226--236", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abend, Omri and Ari Rappoport. 2010. Fully unsupervised core-adjunct argument classification. 
In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 226-236, Uppsala.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Metaphor, inference and domain-independent mappings", "authors": [ { "first": "Rodrigo", "middle": [], "last": "Agerri", "suffix": "" }, { "first": "John", "middle": [], "last": "Barnden", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Wallington", "suffix": "" } ], "year": 2007, "venue": "Proceedings of RANLP-2007", "volume": "", "issue": "", "pages": "17--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agerri, Rodrigo, John Barnden, Mark Lee, and Alan Wallington. 2007. Metaphor, inference and domain-independent mappings. In Proceedings of RANLP-2007, pages 17-23, Borovets.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Encoding information on metaphoric expressions in WordNet-like resources", "authors": [ { "first": "Antonietta", "middle": [], "last": "Alonge", "suffix": "" }, { "first": "Margherita", "middle": [], "last": "Castelli", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the ACL 2003 Workshop on Lexicon and Figurative Language", "volume": "", "issue": "", "pages": "10--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alonge, Antonietta and Margherita Castelli. 2003. Encoding information on metaphoric expressions in WordNet-like resources. 
In Proceedings of the ACL 2003 Workshop on Lexicon and Figurative Language, pages 10-17, Sapporo.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The BNC parsed with RASP4UIMA", "authors": [ { "first": "Oistein", "middle": [], "last": "Andersen", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Nioche", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" }, { "first": "John", "middle": [], "last": "Carroll", "suffix": "" } ], "year": 2008, "venue": "Proceedings of LREC 2008", "volume": "", "issue": "", "pages": "865--869", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andersen, Oistein, Julien Nioche, Ted Briscoe, and John Carroll. 2008. The BNC parsed with RASP4UIMA. In Proceedings of LREC 2008, pages 865-869, Marrakech.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Multiword expressions", "authors": [ { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Su", "middle": [ "Nam" ], "last": "Kim", "suffix": "" } ], "year": 2010, "venue": "Handbook of Natural Language Processing", "volume": "", "issue": "", "pages": "267--292", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baldwin, Timothy and Su Nam Kim. 2010. Multiword expressions. In N. Indurkhya and F. J. Damerau, editors, Handbook of Natural Language Processing, Second Edition. CRC Press, Taylor and Francis Group, Boca Raton, FL, pages 267-292.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "An artificial intelligence approach to metaphor understanding", "authors": [ { "first": "John", "middle": [], "last": "Barnden", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2002, "venue": "Theoria et Historia Scientiarum", "volume": "6", "issue": "1", "pages": "399--412", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barnden, John and Mark Lee. 2002. An artificial intelligence approach to metaphor understanding. 
Theoria et Historia Scientiarum, 6(1):399-412.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Learning to paraphrase: an unsupervised approach using multiple-sequence alignment", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" }, { "first": ";", "middle": [], "last": "", "suffix": "" }, { "first": "Kathryn", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, ACL '01", "volume": "1", "issue": "", "pages": "50--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barzilay, Regina and Lillian Lee. 2003. Learning to paraphrase: an unsupervised approach using multiple-sequence alignment. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology -Volume 1, NAACL '03, pages 16-23, Edmonton. Barzilay, Regina and Kathryn McKeown. 2001. Extracting paraphrases from a parallel corpus. In Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, ACL '01, pages 50-57, Toulouse.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Discriminative learning of selectional preference from unlabeled text", "authors": [ { "first": "Shane", "middle": [], "last": "Bergsma", "suffix": "" }, { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Randy", "middle": [], "last": "Goebel", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '08", "volume": "", "issue": "", "pages": "59--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bergsma, Shane, Dekang Lin, and Randy Goebel. 2008. Discriminative learning of selectional preference from unlabeled text. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '08, pages 59-68, Honolulu, HI.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A clustering approach for the nearly unsupervised recognition of nonliteral language", "authors": [ { "first": "Julia", "middle": [], "last": "Birke", "suffix": "" }, { "first": "Anoop", "middle": [], "last": "Sarkar", "suffix": "" } ], "year": 2006, "venue": "Proceedings of EACL-06", "volume": "", "issue": "", "pages": "329--336", "other_ids": {}, "num": null, "urls": [], "raw_text": "Birke, Julia and Anoop Sarkar. 2006. A clustering approach for the nearly unsupervised recognition of nonliteral language. In Proceedings of EACL-06, pages 329-336, Trento.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Synonymous paraphrasing using Wordnet and Internet", "authors": [ { "first": "Max", "middle": [], "last": "Black", "suffix": "" }, { "first": "N", "middle": [ "Y" ], "last": "Ithaca", "suffix": "" }, { "first": "Gemma", "middle": [], "last": "Boleda Torrent", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Alonso I Alemany", "suffix": "" }, { "first": ";", "middle": [], "last": "Budapest", "suffix": "" }, { "first": "Igor", "middle": [], "last": "Bolshakov", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Gelbukh", "suffix": "" } ], "year": 1962, "venue": "Proceedings of the 9th International Conference on Applications of Natural Language to Information Systems, NLDB 2004", "volume": "2", "issue": "", "pages": "117--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Black, Max. 1962. Models and Metaphors. Cornell University Press, Ithaca, NY. Boleda Torrent, Gemma and Laura Alonso i Alemany. 2003. Clustering adjectives for class acquisition. In Proceedings of the Tenth Conference of the European Chapter of the Association for Computational Linguistics - Volume 2, EACL '03, pages 9-16, Budapest. Bolshakov, Igor and Alexander Gelbukh. 2004. 
Synonymous paraphrasing using Wordnet and Internet. In Proceedings of the 9th International Conference on Applications of Natural Language to Information Systems, NLDB 2004, pages 312-323, Alicante. Brew, Chris and Sabine Schulte im Walde. 2002. Spectral clustering for German verbs. In Proceedings of EMNLP, pages 117-124, Philadelphia, PA.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The second release of the RASP system", "authors": [ { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" }, { "first": "John", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Watson", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the COLING/ACL on Interactive Presentation Sessions", "volume": "", "issue": "", "pages": "77--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Briscoe, Ted, John Carroll, and Rebecca Watson. 2006. The second release of the RASP system. In Proceedings of the COLING/ACL on Interactive Presentation Sessions, pages 77-80, Sydney.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Evaluating and combining approaches to selectional preference acquisition", "authors": [ { "first": "Carsten", "middle": [], "last": "Brockmann", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Tenth Conference of the European Chapter", "volume": "1", "issue": "", "pages": "27--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brockmann, Carsten and Mirella Lapata. 2003. Evaluating and combining approaches to selectional preference acquisition. 
In Proceedings of the Tenth Conference of the European Chapter of the Association for Computational Linguistics - Volume 1, EACL '03, pages 27-34, Budapest.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Reference Guide for the British National Corpus (XML Edition", "authors": [ { "first": "Lou", "middle": [], "last": "Burnard", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Burnard, Lou. 2007. Reference Guide for the British National Corpus (XML Edition). Available at http://www.natcorp. ox.ac.uk/docs/URG.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Improved statistical machine translation using paraphrases", "authors": [ { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Miles", "middle": [], "last": "Osborne", "suffix": "" } ], "year": 2006, "venue": "Proceedings of NAACL, HLT-NAACL '06", "volume": "", "issue": "", "pages": "17--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Callison-Burch, Chris, Philipp Koehn, and Miles Osborne. 2006. Improved statistical machine translation using paraphrases. In Proceedings of NAACL, HLT-NAACL '06, pages 17-24, New York, NY.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Metaphor in Educational Discourse. Continuum", "authors": [ { "first": "Lynne", "middle": [], "last": "Cameron", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cameron, Lynne. 2003. Metaphor in Educational Discourse. 
Continuum, London.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Simplifying text for language-impaired readers", "authors": [ { "first": "John", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Guido", "middle": [], "last": "Minnen", "suffix": "" }, { "first": "Darren", "middle": [], "last": "Pearce", "suffix": "" }, { "first": "Yvonne", "middle": [], "last": "Canning", "suffix": "" }, { "first": "Siobhan", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "John", "middle": [], "last": "Tait", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 9th Conference of the European Chapter of the Association for Computational Linguistics (EACL)", "volume": "", "issue": "", "pages": "269--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carroll, John, Guido Minnen, Darren Pearce, Yvonne Canning, Siobhan Devlin, and John Tait. 1999. Simplifying text for language-impaired readers. In Proceedings of the 9th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 269-270, Bergen.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Unsupervised relation disambiguation using spectral clustering", "authors": [ { "first": "Jinxiu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Donghong", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Zhengyu", "middle": [], "last": "Chew Lim Tan", "suffix": "" }, { "first": "", "middle": [], "last": "Niu", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the COLING/ACL", "volume": "16", "issue": "", "pages": "22--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Jinxiu, Donghong Ji, Chew Lim Tan, and Zhengyu Niu. 2006. Unsupervised relation disambiguation using spectral clustering. In Proceedings of the COLING/ACL, pages 89-96, Sydney. Church, Kenneth and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. 
Computational Linguistics, 16(1):22-29.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Wide-coverage efficient statistical parsing with CCG and log-linear models", "authors": [ { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "James", "middle": [], "last": "Curran", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "33", "issue": "4", "pages": "493--552", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clark, Stephen and James Curran. 2007. Wide-coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493-552.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Semi-productive polysemy and sense extension", "authors": [ { "first": "Ann", "middle": [], "last": "Copestake", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" } ], "year": 1995, "venue": "Journal of Semantics", "volume": "12", "issue": "", "pages": "15--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Copestake, Ann and Ted Briscoe. 1995. Semi-productive polysemy and sense extension. Journal of Semantics, 12:15-67.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Superior and efficient fully unsupervised pattern-based concept acquisition using an unsupervised parser", "authors": [ { "first": "Dmitry", "middle": [], "last": "Davidov", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning, CoNLL '09", "volume": "", "issue": "", "pages": "48--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "Davidov, Dmitry, Roi Reichart, and Ari Rappoport. 2009. Superior and efficient fully unsupervised pattern-based concept acquisition using an unsupervised parser. 
In Proceedings of the Thirteenth Conference on Computational Natural Language Learning, CoNLL '09, pages 48-56, Boulder, CO.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Combining distributional and paradigmatic information in a lexical substitution task", "authors": [ { "first": "Diego", "middle": [], "last": "De Cao", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Basili", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EVALITA Workshop, 11th Congress of Italian Association for Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "De Cao, Diego and Roberto Basili. 2009. Combining distributional and paradigmatic information in a lexical substitution task. In Proceedings of EVALITA Workshop, 11th Congress of Italian Association for Artificial Intelligence, Reggio Emilia.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Tree Adjoining Grammar and the Reluctant Paraphrasing of Text", "authors": [ { "first": "Mark", "middle": [], "last": "Dras", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dras, Mark. 1999. Tree Adjoining Grammar and the Reluctant Paraphrasing of Text. Ph.D. thesis, Macquarie University, Australia.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Representing words as regions in vector space", "authors": [ { "first": "Katrin", "middle": [], "last": "Erk", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "57--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erk, Katrin. 2009. Representing words as regions in vector space. 
In Proceedings of the Thirteenth Conference on Computational Natural Language Learning, pages 57-65, Boulder, CO.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Paraphrase assessment in structured vector space: exploring parameters and datasets", "authors": [ { "first": "Katrin", "middle": [], "last": "Erk", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Pad\u00f3", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Workshop on Geometrical Models of Natural Language Semantics", "volume": "", "issue": "", "pages": "57--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erk, Katrin and Diana McCarthy. 2009. Graded word sense assignment. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 440-449, Edinburgh. Erk, Katrin and Sebastian Pad\u00f3. 2009. Paraphrase assessment in structured vector space: exploring parameters and datasets. In Proceedings of the Workshop on Geometrical Models of Natural Language Semantics, pages 57-65, Athens. Fass, Dan. 1991. met*: A method for discriminating metonymy and metaphor by computer. Computational Linguistics, 17(1):49-90.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Preference semantics, ill-formedness, and metaphor", "authors": [ { "first": "Dan", "middle": [], "last": "Fass", "suffix": "" }, { "first": "Yorick", "middle": [], "last": "Wilks", "suffix": "" } ], "year": 1983, "venue": "Computational Linguistics", "volume": "9", "issue": "3-4", "pages": "178--187", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fass, Dan and Yorick Wilks. 1983. Preference semantics, ill-formedness, and metaphor. 
Computational Linguistics, 9(3-4):178-187.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "The Way We Think: Conceptual Blending and the Mind's Hidden Complexities", "authors": [ { "first": "Gilles", "middle": [], "last": "Fauconnier", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Turner", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fauconnier, Gilles and Mark Turner. 2002. The Way We Think: Conceptual Blending and the Mind's Hidden Complexities. Basic Books, New York, NY.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Embodied meaning in a neural theory of language", "authors": [ { "first": "Jerome", "middle": [], "last": "Feldman", "suffix": "" }, { "first": "Srini", "middle": [], "last": "Narayanan", "suffix": "" } ], "year": 2004, "venue": "Brain and Language", "volume": "89", "issue": "2", "pages": "385--392", "other_ids": {}, "num": null, "urls": [], "raw_text": "Feldman, Jerome. 2006. From Molecule to Metaphor: A Neural Theory of Language. The MIT Press, Cambridge, MA. Feldman, Jerome and Srini Narayanan. 2004. Embodied meaning in a neural theory of language. Brain and Language, 89(2):385-392.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "WordNet: An Electronic Lexical Database (ISBN: 0-262-06197-X)", "authors": [ { "first": "Christiane", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fellbaum, Christiane, editor. 1998. WordNet: An Electronic Lexical Database (ISBN: 0-262-06197-X). 
MIT Press, Cambridge, MA.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Background to FrameNet", "authors": [ { "first": "Charles", "middle": [], "last": "Fillmore", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Miriam", "middle": [], "last": "Petruck", "suffix": "" } ], "year": 2003, "venue": "International Journal of Lexicography", "volume": "16", "issue": "3", "pages": "235--250", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fillmore, Charles, Christopher Johnson, and Miriam Petruck. 2003. Background to FrameNet. International Journal of Lexicography, 16(3):235-250.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Catching metaphors", "authors": [ { "first": "Matt", "middle": [], "last": "Gedigian", "suffix": "" }, { "first": "John", "middle": [], "last": "Bryant", "suffix": "" }, { "first": "Srini", "middle": [], "last": "Narayanan", "suffix": "" }, { "first": "Branimir", "middle": [], "last": "Ciric", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 3rd Workshop on Scalable Natural Language Understanding", "volume": "", "issue": "", "pages": "41--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gedigian, Matt, John Bryant, Srini Narayanan, and Branimir Ciric. 2006. Catching metaphors. In Proceedings of the 3rd Workshop on Scalable Natural Language Understanding, pages 41-48, New York, NY.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Structure mapping: A theoretical framework for analogy", "authors": [ { "first": "Dedre", "middle": [], "last": "Gentner", "suffix": "" } ], "year": 1983, "venue": "Cognitive Science", "volume": "7", "issue": "", "pages": "155--170", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gentner, Dedre. 1983. Structure mapping: A theoretical framework for analogy. 
Cognitive Science, 7:155-170.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Metaphor is like analogy", "authors": [ { "first": "Dedre", "middle": [], "last": "Gentner", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Bowdle", "suffix": "" }, { "first": "Phillip", "middle": [], "last": "Wolff", "suffix": "" }, { "first": "Consuelo", "middle": [], "last": "Boronat", "suffix": "" } ], "year": 2001, "venue": "The Analogical Mind: Perspectives from Cognitive Science", "volume": "", "issue": "", "pages": "199--253", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gentner, Dedre, Brian Bowdle, Phillip Wolff, and Consuelo Boronat. 2001. Metaphor is like analogy. In D. Gentner, K. J. Holyoak, and B. N. Kokinov, editors, The Analogical Mind: Perspectives from Cognitive Science. MIT Press, Cambridge, MA, pages 199-253.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "The Language of Metaphors", "authors": [ { "first": "Andrew", "middle": [], "last": "Goatly", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goatly, Andrew. 1997. The Language of Metaphors. Routledge, London.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Foundations of Meaning: Primary Metaphors and Primary Scenes", "authors": [ { "first": "Joe", "middle": [], "last": "Grady", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grady, Joe. 1997. Foundations of Meaning: Primary Metaphors and Primary Scenes. Ph.D. 
thesis, University of California at Berkeley.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "A pattern dictionary for natural language processing", "authors": [ { "first": "Patrick", "middle": [], "last": "Hanks", "suffix": "" }, { "first": "James", "middle": [], "last": "Pustejovsky", "suffix": "" } ], "year": 2005, "venue": "", "volume": "10", "issue": "", "pages": "63--82", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hanks, Patrick and James Pustejovsky. 2005. A pattern dictionary for natural language processing. Revue Fran\u00e7aise de linguistique appliqu\u00e9e, 10(2):63-82.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Towards the automatic identification of adjectival scales: Clustering adjectives according to meaning", "authors": [ { "first": "Vasileios", "middle": [], "last": "Hatzivassiloglou", "suffix": "" }, { "first": "Kathleen", "middle": [ "R" ], "last": "Mckeown", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, ACL '93", "volume": "", "issue": "", "pages": "172--182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hatzivassiloglou, Vasileios and Kathleen R. McKeown. 1993. Towards the automatic identification of adjectival scales: Clustering adjectives according to meaning. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, ACL '93, pages 172-182, Columbus, OH.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Models and Analogies in Science", "authors": [ { "first": "Mary", "middle": [], "last": "Hesse", "suffix": "" } ], "year": 1966, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hesse, Mary. 1966. Models and Analogies in Science. 
Notre Dame University Press, Notre Dame, IN.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "A re-examination of lexical association measures", "authors": [ { "first": "Hung", "middle": [ "Huu" ], "last": "Hoang", "suffix": "" }, { "first": "Su", "middle": [ "Nam" ], "last": "Kim", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Workshop on Multiword Expressions", "volume": "", "issue": "", "pages": "31--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hoang, Hung Huu, Su Nam Kim, and Min-Yen Kan. 2009. A re-examination of lexical association measures. In Proceedings of the Workshop on Multiword Expressions, pages 31-39, Singapore.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought", "authors": [ { "first": "Douglas", "middle": [], "last": "Hofstadter", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hofstadter, Douglas. 1995. Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. HarperCollins Publishers, London.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "The Copycat Project: A model of mental fluidity and analogy-making", "authors": [ { "first": "Douglas", "middle": [], "last": "Hofstadter", "suffix": "" }, { "first": "Melanie", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 1994, "venue": "Advances in Connectionist and Neural Computation Theory. Ablex", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hofstadter, Douglas and Melanie Mitchell. 1994. The Copycat Project: A model of mental fluidity and analogy-making. In K. J. Holyoak and J. A. 
Barnden, editors, Advances in Connectionist and Neural Computation Theory. Ablex, New York, NY. Karov, Yael and Shimon Edelman. 1998. Similarity-based word sense disambiguation. Computational Linguistics, 24(1):41-59.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Paraphrasing for automatic evaluation", "authors": [ { "first": "David", "middle": [], "last": "Kauchak", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Main Conference on Human Language Technology, Conference of the North American Chapter of the Association of Computational Linguistics, HLT-NAACL '06", "volume": "", "issue": "", "pages": "455--462", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kauchak, David and Regina Barzilay. 2006. Paraphrasing for automatic evaluation. In Proceedings of the Main Conference on Human Language Technology, Conference of the North American Chapter of the Association of Computational Linguistics, HLT-NAACL '06, pages 455-462, New York, NY.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "From TreeBank to PropBank", "authors": [ { "first": "Paul", "middle": [], "last": "Kingsbury", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2002, "venue": "Proceedings of LREC-2002", "volume": "", "issue": "", "pages": "1989--1993", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kingsbury, Paul and Martha Palmer. 2002. From TreeBank to PropBank. 
In Proceedings of LREC-2002, pages 1989-1993, Gran Canaria, Canary Islands.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Statistics-based summarization-step one: Sentence compression", "authors": [ { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence", "volume": "", "issue": "", "pages": "703--710", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kipper, Karin, Anna Korhonen, Neville Ryant, and Martha Palmer. 2006. Extensive classifications of English verbs. In Proceedings of the 12th EURALEX International Congress, pages 1-15, Torino. Klein, Dan and Christopher Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 423-430, Sapporo. Knight, Kevin and Daniel Marcu. 2000. Statistics-based summarization-step one: Sentence compression. In Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence, pages 703-710, Austin, TX. Kok, Stanley and Chris Brockett. 2010. Hitting the right paraphrases in good time. 
In Human Language Technologies: The 2010", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "145--153", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10, pages 145-153, Los Angeles, CA.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "A large subcategorization lexicon for natural language processing applications", "authors": [ { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" }, { "first": "Yuval", "middle": [], "last": "Krymolowski", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" } ], "year": 2006, "venue": "Proceedings of LREC 2006", "volume": "", "issue": "", "pages": "1015--1020", "other_ids": {}, "num": null, "urls": [], "raw_text": "Korhonen, Anna, Yuval Krymolowski, and Ted Briscoe. 2006. A large subcategorization lexicon for natural language processing applications. In Proceedings of LREC 2006, pages 1015-1020, Genoa.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Generation of single-sentence paraphrases from predicate/argument structure using lexico-grammatical resources", "authors": [ { "first": "K", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "", "middle": [], "last": "Vijay-Shanker", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Second International Workshop on Paraphrasing", "volume": "16", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "McCoy, and K. Vijay-Shanker. 2003. Generation of single-sentence paraphrases from predicate/argument structure using lexico-grammatical resources. 
In Proceedings of the Second International Workshop on Paraphrasing - Volume 16, PARAPHRASE '03, pages 1-8, Sapporo.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Hunting elusive metaphors using lexical resources", "authors": [ { "first": "Saisuresh", "middle": [], "last": "Krishnakumaran", "suffix": "" }, { "first": "Xiaojin", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Workshop on Computational Approaches to Figurative Language", "volume": "", "issue": "", "pages": "13--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krishnakumaran, Saisuresh and Xiaojin Zhu. 2007. Hunting elusive metaphors using lexical resources. In Proceedings of the Workshop on Computational Approaches to Figurative Language, pages 13-20, Rochester, NY.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Japanese translation task", "authors": [], "year": null, "venue": "Proceedings of the SENSEVAL-2 Workshop", "volume": "", "issue": "", "pages": "37--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Japanese translation task. In Proceedings of the SENSEVAL-2 Workshop, pages 37-44, Toulouse.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "The master metaphor list", "authors": [ { "first": "George", "middle": [], "last": "Lakoff", "suffix": "" }, { "first": "Jane", "middle": [], "last": "Espenson", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Schwartz", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lakoff, George, Jane Espenson, and Alan Schwartz. 1991. The master metaphor list. 
Technical report, University of California at Berkeley.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Metaphors We Live By", "authors": [ { "first": "George", "middle": [], "last": "Lakoff", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lakoff, George and Mark Johnson. 1980. Metaphors We Live By. University of Chicago Press, Chicago, IL.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "The Acquisition and Modeling of Lexical Knowledge: A Corpus-Based Investigation of Systematic Polysemy", "authors": [ { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lapata, Mirella. 2001. The Acquisition and Modeling of Lexical Knowledge: A Corpus-Based Investigation of Systematic Polysemy. Ph.D. thesis, University of Edinburgh.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "CYC, WordNet, and EDR: Critiques and responses", "authors": [ { "first": "Doug", "middle": [], "last": "Lenat", "suffix": "" }, { "first": "George", "middle": [], "last": "Miller", "suffix": "" }, { "first": "Toshio", "middle": [], "last": "Yokoi", "suffix": "" } ], "year": 1995, "venue": "Commun. ACM", "volume": "38", "issue": "11", "pages": "45--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lenat, Doug, George Miller, and Toshio Yokoi. 1995. CYC, WordNet, and EDR: Critiques and responses. Commun. ACM, 38(11):45-48.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "English Verb Classes and Alternations", "authors": [ { "first": "Beth", "middle": [], "last": "Levin", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Levin, Beth. 1993. 
English Verb Classes and Alternations. University of Chicago Press, Chicago, IL.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Automatic retrieval and clustering of similar words", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 17th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "768--774", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, Dekang. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 17th International Conference on Computational Linguistics, pages 768-774, Montreal.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Discovery of inference rules for question answering", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2001, "venue": "Natural Language Engineering", "volume": "7", "issue": "", "pages": "343--360", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, Dekang and Patrick Pantel. 2001. Discovery of inference rules for question answering. Natural Language Engineering, 7:343-360.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "A current resource and future perspectives for enriching Wordnets with metaphor information", "authors": [ { "first": "Birte", "middle": [], "last": "L\u00f6nneker", "suffix": "" }, { "first": "Carina", "middle": [], "last": "Eilts", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Second International WordNet Conference-GWC 2004", "volume": "", "issue": "", "pages": "157--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "L\u00f6nneker, Birte. 2004. Lexical databases as resources for linguistic creativity: Focus on metaphor. In Proceedings of the LREC 2004 Workshop on Language Resources for Linguistic Creativity, pages 9-16, Lisbon. L\u00f6nneker, Birte and Carina Eilts. 2004. 
A current resource and future perspectives for enriching Wordnets with metaphor information. In Proceedings of the Second International WordNet Conference-GWC 2004, pages 157-162, Brno.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Representing regularities in the metaphoric lexicon", "authors": [ { "first": "James", "middle": [], "last": "Martin", "suffix": "" } ], "year": 1988, "venue": "Proceedings of the 12th Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "396--401", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin, James. 1988. Representing regularities in the metaphoric lexicon. In Proceedings of the 12th Conference on Computational Linguistics, pages 396-401, Budapest.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "A Computational Model of Metaphor Interpretation", "authors": [ { "first": "James", "middle": [], "last": "Martin", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin, James. 1990. A Computational Model of Metaphor Interpretation. Academic Press Professional, Inc., San Diego, CA.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Metabank: A knowledge-base of metaphoric language conventions", "authors": [ { "first": "James", "middle": [], "last": "Martin", "suffix": "" } ], "year": 1994, "venue": "Computational Intelligence", "volume": "10", "issue": "", "pages": "134--149", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin, James. 1994. Metabank: A knowledge-base of metaphoric language conventions. 
Computational Intelligence, 10:134-149.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "A corpus-based analysis of context effects on metaphor comprehension", "authors": [ { "first": "James", "middle": [], "last": "Martin", "suffix": "" } ], "year": 2006, "venue": "Corpus-Based Approaches to Metaphor and Metonymy", "volume": "", "issue": "", "pages": "214--236", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin, James. 2006. A corpus-based analysis of context effects on metaphor comprehension. In A. Stefanowitsch and S. T. Gries, editors, Corpus-Based Approaches to Metaphor and Metonymy. Mouton de Gruyter, Berlin, pages 214-236.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "Cormet: A computational, corpus-based conventional metaphor extraction system", "authors": [ { "first": "Zachary", "middle": [], "last": "Mason", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "1", "pages": "23--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mason, Zachary. 2004. Cormet: A computational, corpus-based conventional metaphor extraction system. Computational Linguistics, 30(1):23-44.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "Lexical substitution as a task for WSD evaluation", "authors": [ { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the ACL-02 Workshop on Word Sense Disambiguation: Recent Successes and Future Directions", "volume": "8", "issue": "", "pages": "109--115", "other_ids": {}, "num": null, "urls": [], "raw_text": "McCarthy, Diana. 2002. Lexical substitution as a task for WSD evaluation. 
In Proceedings of the ACL-02 Workshop on Word Sense Disambiguation: Recent Successes and Future Directions -Volume 8, WSD '02, pages 109-115, Philadelphia, PA.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "Getting synonym candidates from raw data in the English lexical substitution task", "authors": [ { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Keller", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 14th EURALEX International Congress", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "McCarthy, Diana, Bill Keller, and Roberto Navigli. 2010. Getting synonym candidates from raw data in the English lexical substitution task. In Proceedings of the 14th EURALEX International Congress, Leeuwarden.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "Semeval-2007 task 10: English lexical substitution task", "authors": [ { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 4th Workshop on Semantic Evaluations (SemEval-2007)", "volume": "", "issue": "", "pages": "48--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "McCarthy, Diana and Roberto Navigli. 2007. Semeval-2007 task 10: English lexical substitution task. In Proceedings of the 4th Workshop on Semantic Evaluations (SemEval-2007), pages 48-53, Prague.", "links": null }, "BIBREF67": { "ref_id": "b67", "title": "The English lexical substitution task. 
", "authors": [ { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2009, "venue": "Language Resources and Evaluation", "volume": "43", "issue": "", "pages": "139--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "McCarthy, Diana and Roberto Navigli. 2009. The English lexical substitution task. Language Resources and Evaluation, 43(2):139-159.", "links": null }, "BIBREF68": { "ref_id": "b68", "title": "Paraphrasing using given and new information in a question-answer system", "authors": [ { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 1979, "venue": "Proceedings of the 17th Annual Meeting of the Association for Computational Linguistics, ACL '79", "volume": "", "issue": "", "pages": "67--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "McKeown, Kathleen. 1979. Paraphrasing using given and new information in a question-answer system. In Proceedings of the 17th Annual Meeting of the Association for Computational Linguistics, ACL '79, pages 67-72, La Jolla, CA.", "links": null }, "BIBREF69": { "ref_id": "b69", "title": "A random walks view of spectral segmentation", "authors": [ { "first": "Marina", "middle": [], "last": "Meila", "suffix": "" }, { "first": "Jianbo", "middle": [], "last": "Shi", "suffix": "" } ], "year": 2001, "venue": "AISTATS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meila, Marina and Jianbo Shi. 2001. A random walks view of spectral segmentation. In AISTATS, Key West, FL. Meteer, Marie and Varda Shaked. 1988. Strategies for effective paraphrasing. 
In Proceedings of the 12th Conference on Computational Linguistics -Volume 2, COLING '88, pages 431-436, Budapest.", "links": null }, "BIBREF70": { "ref_id": "b70", "title": "Vector-based models of semantic composition", "authors": [ { "first": "Jeff", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "236--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell, Jeff and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL, pages 236-244, Columbus, OH.", "links": null }, "BIBREF71": { "ref_id": "b71", "title": "On metaphoric representation", "authors": [ { "first": "Gregory", "middle": [], "last": "Murphy", "suffix": "" } ], "year": 1996, "venue": "Cognition", "volume": "60", "issue": "", "pages": "173--204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Murphy, Gregory. 1996. On metaphoric representation. Cognition, 60:173-204.", "links": null }, "BIBREF72": { "ref_id": "b72", "title": "Knowledge-based Action Representations for Metaphor and Aspect (KARMA)", "authors": [ { "first": "Srini", "middle": [], "last": "Narayanan", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Narayanan, Srini. 1997. Knowledge-based Action Representations for Metaphor and Aspect (KARMA). Ph.D. thesis, University of California at Berkeley.", "links": null }, "BIBREF73": { "ref_id": "b73", "title": "Moving right along: A computational model of metaphoric reasoning about events", "authors": [ { "first": "Srini", "middle": [], "last": "Narayanan", "suffix": "" } ], "year": 1999, "venue": "Proceedings of AAAI 99", "volume": "", "issue": "", "pages": "121--128", "other_ids": {}, "num": null, "urls": [], "raw_text": "Narayanan, Srini. 1999. 
Moving right along: A computational model of metaphoric reasoning about events. In Proceedings of AAAI 99, pages 121-128, Orlando, FL.", "links": null }, "BIBREF74": { "ref_id": "b74", "title": "Poetic and prosaic metaphors", "authors": [ { "first": "Geoffrey", "middle": [], "last": "Nunberg", "suffix": "" } ], "year": 1987, "venue": "Proceedings of the 1987 Workshop on Theoretical Issues in Natural Language Processing", "volume": "", "issue": "", "pages": "198--201", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nunberg, Geoffrey. 1987. Poetic and prosaic metaphors. In Proceedings of the 1987 Workshop on Theoretical Issues in Natural Language Processing, pages 198-201, Stroudsburg, PA.", "links": null }, "BIBREF75": { "ref_id": "b75", "title": "Latent variable models of selectional preference", "authors": [ { "first": "", "middle": [], "last": "O S\u00e9aghdha", "suffix": "" }, { "first": "", "middle": [], "last": "Diarmuid", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "435--444", "other_ids": {}, "num": null, "urls": [], "raw_text": "O S\u00e9aghdha, Diarmuid. 2010. Latent variable models of selectional preference. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 435-444, Uppsala.", "links": null }, "BIBREF76": { "ref_id": "b76", "title": "Politics and the English Language", "authors": [ { "first": "George", "middle": [], "last": "Orwell", "suffix": "" } ], "year": 1946, "venue": "Horizon", "volume": "13", "issue": "76", "pages": "252--265", "other_ids": {}, "num": null, "urls": [], "raw_text": "Orwell, George. 1946. Politics and the English Language. 
Horizon, 13(76):252-265.", "links": null }, "BIBREF77": { "ref_id": "b77", "title": "Discovering word senses from text", "authors": [ { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" }, { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "613--619", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pantel, Patrick and Dekang Lin. 2002. Discovering word senses from text. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 613-619, Edmonton.", "links": null }, "BIBREF78": { "ref_id": "b78", "title": "Distributional clustering of English words", "authors": [ { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Naftali", "middle": [], "last": "Tishby", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 1993, "venue": "Proceedings of ACL-93", "volume": "", "issue": "", "pages": "183--190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pereira, Fernando, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. In Proceedings of ACL-93, pages 183-190, Morristown, NJ.", "links": null }, "BIBREF79": { "ref_id": "b79", "title": "Lexicalised systematic polysemy in Wordnet", "authors": [ { "first": "Wim", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Ivonne", "middle": [], "last": "Peters", "suffix": "" } ], "year": 2000, "venue": "Proceedings of LREC 2000", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peters, Wim and Ivonne Peters. 2000. Lexicalised systematic polysemy in Wordnet. 
In Proceedings of LREC 2000, Athens.", "links": null }, "BIBREF80": { "ref_id": "b80", "title": "The Stuff of Thought: Language as a Window into Human Nature", "authors": [ { "first": "Stephen", "middle": [], "last": "Pinker", "suffix": "" } ], "year": 2005, "venue": "Proceedings of IWP", "volume": "", "issue": "", "pages": "73--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pinker, Stephen. 2007. The Stuff of Thought: Language as a Window into Human Nature. Viking Adult, New York, NY. Power, Richard and Donia Scott. 2005. Automatic generation of large-scale paraphrases. In Proceedings of IWP, pages 73-79.", "links": null }, "BIBREF81": { "ref_id": "b81", "title": "MIP: A method for identifying metaphorically used words in discourse", "authors": [ { "first": "Pragglejaz", "middle": [], "last": "Group", "suffix": "" } ], "year": 2007, "venue": "Metaphor and Symbol", "volume": "22", "issue": "", "pages": "1--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pragglejaz Group. 2007. MIP: A method for identifying metaphorically used words in discourse. Metaphor and Symbol, 22:1-39.", "links": null }, "BIBREF82": { "ref_id": "b82", "title": "A system for large-scale acquisition of verbal, nominal and adjectival subcategorization frames from corpora", "authors": [ { "first": "Judita", "middle": [], "last": "Preiss", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Workshop on Natural Language Processing Methods and Corpora in Translation, Lexicography, and Language Learning, MCTLLL '09", "volume": "45", "issue": "", "pages": "21--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Preiss, Judita, Ted Briscoe, and Anna Korhonen. 2007. A system for large-scale acquisition of verbal, nominal and adjectival subcategorization frames from corpora. 
In Proceedings of ACL-2007, volume 45, page 912, Prague. Preiss, Judita, Andrew Coonce, and Brittany Baker. 2009. HMMs, GRs, and n-grams as lexical substitution techniques: are they portable to other languages? In Proceedings of the Workshop on Natural Language Processing Methods and Corpora in Translation, Lexicography, and Language Learning, MCTLLL '09, pages 21-27, Borovets.", "links": null }, "BIBREF83": { "ref_id": "b83", "title": "Unsupervised lexical substitution with a word space model", "authors": [ { "first": "Dario", "middle": [], "last": "Pucci", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Franco", "middle": [], "last": "Cutugno", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Lenci", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the EVALITA Workshop, 11th Congress of Italian Association for Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pucci, Dario, Marco Baroni, Franco Cutugno, and Alessandro Lenci. 2009. Unsupervised lexical substitution with a word space model. In Proceedings of the EVALITA Workshop, 11th Congress of Italian Association for Artificial Intelligence, Reggie Emilia.", "links": null }, "BIBREF84": { "ref_id": "b84", "title": "Monolingual machine translation for paraphrase generation", "authors": [ { "first": "James", "middle": [ ";" ], "last": "Pustejovsky", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Quirk", "suffix": "" }, { "first": "Chris", "middle": [], "last": "", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "William", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "142--149", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pustejovsky, James. 1995. 
The Generative Lexicon. MIT Press, Cambridge, MA. Quirk, Chris, Chris Brockett, and William Dolan. 2004. Monolingual machine translation for paraphrase generation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 142-149, Barcelona.", "links": null }, "BIBREF85": { "ref_id": "b85", "title": "Corpus-driven metaphor harvesting", "authors": [ { "first": "Astrid", "middle": [], "last": "Reining", "suffix": "" }, { "first": "Birte", "middle": [], "last": "L\u00f6nneker-Rodman", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the HLT/NAACL-07 Workshop on Computational Approaches to Figurative Language", "volume": "", "issue": "", "pages": "5--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reining, Astrid and Birte L\u00f6nneker-Rodman. 2007. Corpus-driven metaphor harvesting. In Proceedings of the HLT/NAACL-07 Workshop on Computational Approaches to Figurative Language, pages 5-12, Rochester, NY.", "links": null }, "BIBREF86": { "ref_id": "b86", "title": "Selection and Information: A Class-based Approach to Lexical Relationships", "authors": [ { "first": "Joseph", "middle": [], "last": "Reisinger", "suffix": "" }, { "first": "Raymond", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10", "volume": "", "issue": "", "pages": "1173--1182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reisinger, Joseph and Raymond Mooney. 2010. A mixture model with sharing for lexical semantics. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10, pages 1173-1182, Cambridge, MA. Resnik, Philip. 1993. Selection and Information: A Class-based Approach to Lexical Relationships. Ph.D. 
thesis, University of Pennsylvania.", "links": null }, "BIBREF87": { "ref_id": "b87", "title": "Inducing a semantically annotated lexicon via EM-based clustering", "authors": [ { "first": "Mats", "middle": [], "last": "Rooth", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" }, { "first": "Detlef", "middle": [], "last": "Prescher", "suffix": "" }, { "first": "Glenn", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Franz", "middle": [], "last": "Beil", "suffix": "" } ], "year": 1999, "venue": "Proceedings of ACL 99", "volume": "", "issue": "", "pages": "104--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rooth, Mats, Stefan Riezler, Detlef Prescher, Glenn Carroll, and Franz Beil. 1999. Inducing a semantically annotated lexicon via EM-based clustering. In Proceedings of ACL 99, pages 104-111, Maryland.", "links": null }, "BIBREF88": { "ref_id": "b88", "title": "Compositional matrix-space models of language", "authors": [ { "first": "Sebastian", "middle": [], "last": "Rudolph", "suffix": "" }, { "first": "Eugenie", "middle": [], "last": "Giesbrecht", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "907--916", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rudolph, Sebastian and Eugenie Giesbrecht. 2010. Compositional matrix-space models of language. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 907-916, Uppsala.", "links": null }, "BIBREF89": { "ref_id": "b89", "title": "Experiments on the automatic induction of German semantic verb classes", "authors": [ { "first": "Sabine", "middle": [], "last": "Schulte Im Walde", "suffix": "" } ], "year": 2006, "venue": "Computational Linguistics", "volume": "32", "issue": "2", "pages": "159--194", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schulte im Walde, Sabine. 2006. 
Experiments on the automatic induction of German semantic verb classes. Computational Linguistics, 32(2):159-194.", "links": null }, "BIBREF90": { "ref_id": "b90", "title": "Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing", "authors": [ { "first": "Satoshi", "middle": [], "last": "Sekine", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Inui", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" }, { "first": "Danilo", "middle": [], "last": "Giampiccolo", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sekine, Satoshi, Kentaro Inui, Ido Dagan, Bill Dolan, Danilo Giampiccolo, and Bernardo Magnini, editors. 2007. Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing. Prague.", "links": null }, "BIBREF91": { "ref_id": "b91", "title": "Analogy and metaphor", "authors": [ { "first": "Cosma", "middle": [], "last": "Shalizi", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shalizi, Cosma. 2003. Analogy and metaphor. Available at http://masi.cscs.lsa. umich.edu/\u223ccrshalizi/notabene.", "links": null }, "BIBREF92": { "ref_id": "b92", "title": "Paraphrase acquisition for information extraction", "authors": [ { "first": "Yusuke", "middle": [], "last": "Shinyama", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Sekine", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Second International Workshop on Paraphrasing", "volume": "16", "issue": "", "pages": "65--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shinyama, Yusuke and Satoshi Sekine. 2003. Paraphrase acquisition for information extraction. 
In Proceedings of the Second International Workshop on Paraphrasing - Volume 16, PARAPHRASE '03, pages 65-71, Sapporo.", "links": null }, "BIBREF93": { "ref_id": "b93", "title": "Metaphor corpus annotated for source-target domain mappings", "authors": [ { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" }, { "first": "Lin", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2010, "venue": "Proceedings of COLING 2010", "volume": "1", "issue": "", "pages": "255--258", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shutova, Ekaterina. 2010. Automatic metaphor interpretation as a paraphrasing task. In Proceedings of NAACL 2010, pages 1029-1037, Los Angeles, CA. Shutova, Ekaterina, Lin Sun, and Anna Korhonen. 2010. Metaphor identification using verb and noun clustering. In Proceedings of COLING 2010, pages 1,002-1,010, Beijing. Shutova, Ekaterina and Simone Teufel. 2010. Metaphor corpus annotated for source-target domain mappings. In Proceedings of LREC 2010, pages 3,255-3,261, Malta.", "links": null }, "BIBREF94": { "ref_id": "b94", "title": "Improving verb clustering with automatically acquired selectional preferences", "authors": [ { "first": "Sidney", "middle": [], "last": "Siegel", "suffix": "" }, { "first": "N. John", "middle": [], "last": "Castellan", "suffix": "" }, { "first": ";", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Lin", "middle": [], "last": "", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 1988, "venue": "Proceedings of EMNLP 2009", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siegel, Sidney and N. John Castellan. 1988. Nonparametric Statistics for the Behavioral Sciences. McGraw-Hill Book Company, New York, NY. Sun, Lin and Anna Korhonen. 2009. 
Improving verb clustering with automatically acquired selectional preferences. In Proceedings of EMNLP 2009, pages 638-647, Singapore. Sun, Lin and Anna Korhonen. 2011. Hierarchical verb clustering using graph factorization. In Proceedings of EMNLP, pages 1,023-1,033, Edinburgh.", "links": null }, "BIBREF95": { "ref_id": "b95", "title": "The lexical substitution task at EVALITA", "authors": [ { "first": "Antonio", "middle": [], "last": "Toral", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EVALITA Workshop, 11th Congress of Italian Association for Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Toral, Antonio. 2009. The lexical substitution task at EVALITA 2009. In Proceedings of EVALITA Workshop, 11th Congress of Italian Association for Artificial Intelligence, Regio Emilia.", "links": null }, "BIBREF96": { "ref_id": "b96", "title": "Understanding and appreciating metaphors", "authors": [ { "first": "Roger", "middle": [], "last": "Tourangeau", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Sternberg", "suffix": "" } ], "year": 1982, "venue": "Cognition", "volume": "11", "issue": "", "pages": "203--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tourangeau, Roger and Robert Sternberg. 1982. Understanding and appreciating metaphors. Cognition, 11:203-244.", "links": null }, "BIBREF97": { "ref_id": "b97", "title": "Latent vector weighting for word meaning in context", "authors": [ { "first": "Tim", "middle": [], "last": "Van De Cruys", "suffix": "" }, { "first": "Thierry", "middle": [], "last": "Poibeau", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 1979, "venue": "Proceedings of EMNLP", "volume": "1", "issue": "", "pages": "12--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "Van de Cruys, Tim, Thierry Poibeau, and Anna Korhonen. 2011. Latent vector weighting for word meaning in context. 
In Proceedings of EMNLP, pages 1,012-1,022, Edinburgh. van Rijsbergen, Keith. 1979. Information Retrieval, 2nd edition. Butterworths, London.", "links": null }, "BIBREF98": { "ref_id": "b98", "title": "A fluid knowledge representation for understanding and generating creative metaphors", "authors": [ { "first": "Tony", "middle": [], "last": "Veale", "suffix": "" }, { "first": "Yanfen", "middle": [], "last": "Hao", "suffix": "" } ], "year": 2008, "venue": "Proceedings of COLING 2008", "volume": "", "issue": "", "pages": "945--952", "other_ids": {}, "num": null, "urls": [], "raw_text": "Veale, Tony and Yanfen Hao. 2008. A fluid knowledge representation for understanding and generating creative metaphors. In Proceedings of COLING 2008, pages 945-952, Manchester.", "links": null }, "BIBREF99": { "ref_id": "b99", "title": "A preferential pattern-seeking semantics for natural language inference", "authors": [ { "first": "Yorick", "middle": [], "last": "Wilks", "suffix": "" } ], "year": 1975, "venue": "Artificial Intelligence", "volume": "6", "issue": "", "pages": "53--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wilks, Yorick. 1975. A preferential pattern-seeking semantics for natural language inference. Artificial Intelligence, 6:53-74.", "links": null }, "BIBREF100": { "ref_id": "b100", "title": "Making preferences more active", "authors": [ { "first": "Yorick", "middle": [], "last": "Wilks", "suffix": "" } ], "year": 1978, "venue": "Artificial Intelligence", "volume": "11", "issue": "3", "pages": "197--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wilks, Yorick. 1978. Making preferences more active. Artificial Intelligence, 11(3):197-223.", "links": null }, "BIBREF101": { "ref_id": "b101", "title": "Soft clustering on graphs. 
NIPS", "authors": [ { "first": "K", "middle": [], "last": "Yu", "suffix": "" }, { "first": "S", "middle": [], "last": "Yu", "suffix": "" }, { "first": "V", "middle": [], "last": "Tresp", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "1553--1561", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu, K., S. Yu, and V. Tresp. 2006. Soft clustering on graphs. NIPS, pages 1553-1561, Vancouver.", "links": null }, "BIBREF102": { "ref_id": "b102", "title": "Generalizing over lexical features: selectional preferences for semantic role classification", "authors": [ { "first": "Be\u00f1at", "middle": [], "last": "Zapirain", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "M\u00e0rquez", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the ACL-IJCNLP 2009 Conference Short Papers", "volume": "", "issue": "", "pages": "73--76", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zapirain, Be\u00f1at, Eneko Agirre, and Llu\u00eds M\u00e0rquez. 2009. Generalizing over lexical features: selectional preferences for semantic role classification. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 73-76, Singapore.", "links": null }, "BIBREF103": { "ref_id": "b103", "title": "Pivot approach for extracting paraphrase patterns from bilingual corpora", "authors": [ { "first": "S", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "H", "middle": [], "last": "Wang", "suffix": "" }, { "first": "T", "middle": [], "last": "Liu", "suffix": "" }, { "first": "S", "middle": [], "last": "Li", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08:HLT", "volume": "", "issue": "", "pages": "780--788", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhao, S., H. Wang, T. Liu, and S. Li. 2008. Pivot approach for extracting paraphrase patterns from bilingual corpora. 
In Proceedings of ACL-08:HLT, pages 780-788, Columbus, OH.", "links": null }, "BIBREF104": { "ref_id": "b104", "title": "Application-driven statistical paraphrase generation", "authors": [ { "first": "Shiqi", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Li", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", "volume": "2", "issue": "", "pages": "834--842", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhao, Shiqi, Xiang Lan, Ting Liu, and Sheng Li. 2009. Application-driven statistical paraphrase generation. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2, ACL '09, pages 834-842, Suntec.", "links": null }, "BIBREF105": { "ref_id": "b105", "title": "PARAEVAL: Using paraphrases to evaluate summaries automatically", "authors": [ { "first": "Liang", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Dragos", "middle": [], "last": "Stefan Munteanu", "suffix": "" }, { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" } ], "year": 2006, "venue": "Proceedings of HLT-NAACL", "volume": "", "issue": "", "pages": "447--454", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhou, Liang, Chin-Yew Lin, Dragos Stefan Munteanu, and Eduard H. Hovy. 2006. PARAEVAL: Using paraphrases to evaluate summaries automatically. 
In Proceedings of HLT-NAACL, pages 447-454, New York, NY.", "links": null }, "BIBREF106": { "ref_id": "b106", "title": "Approach to spoken Chinese paraphrasing based on feature extraction", "authors": [ { "first": "Chengqing", "middle": [], "last": "Zong", "suffix": "" }, { "first": "Yujie", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Kazuhide", "middle": [], "last": "Yamamoto", "suffix": "" } ], "year": 2001, "venue": "Proceedings of NLPRS", "volume": "", "issue": "", "pages": "551--556", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zong, Chengqing, Yujie Zhang, and Kazuhide Yamamoto. 2001. Approach to spoken Chinese paraphrasing based on feature extraction. In Proceedings of NLPRS, pages 551-556, Tokyo.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Examples of metaphor translation.", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "11) a. All of this stirred an uncontrollable excitement in her. b. All of this provoked an uncontrollable excitement in her. (12) a. a carelessly leaked report b. a carelessly disclosed report", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": "TIME IS MONEY (e.g., \"That flat tire cost me an hour\") r IDEAS ARE PHYSICAL OBJECTS (e.g., \"I cannot grasp his way of thinking\") r LINGUISTIC EXPRESSIONS ARE CONTAINERS (e.g., \"I would not be able to put all my feelings into words\") r EMOTIONS ARE VEHICLES (e.g., \"[...] she was transported with pleasure\") r FEELINGS ARE LIQUIDS (e.g., \"[...] all of this stirred an unfathomable excitement in her\")", "type_str": "figure", "num": null, "uris": null }, "FIGREF3": { "text": "Diana and Charles did not succeed in mending their marriage. 
(20) The wheels of Stalin's regime were well oiled and already turning.", "type_str": "figure", "num": null, "uris": null }, "FIGREF4": { "text": "Cluster of target concepts associated with MECHANISM.", "type_str": "figure", "num": null, "uris": null }, "FIGREF5": { "text": "Identification of new metaphorical expressions in text.", "type_str": "figure", "num": null, "uris": null }, "FIGREF6": { "text": "Clustered verbs.", "type_str": "figure", "num": null, "uris": null }, "FIGREF7": { "text": "Grammatical relations output for metaphorical expressions.", "type_str": "figure", "num": null, "uris": null }, "FIGREF8": { "text": "24) a. B71 852 Craig Packer and Anne Pusey of the University of Chicago have continued to follow the life and loves of these Tanzanian lions. b. B71 852 Craig Packer and Anne Pusey of the University of Chicago have continued to succeed the life and loves of these Tanzanian lions.", "type_str": "figure", "num": null, "uris": null }, "FIGREF9": { "text": "Noun clusters.", "type_str": "figure", "num": null, "uris": null }, "TABREF0": { "html": null, "type_str": "table", "content": "
Cue  BNC frequency  Sample size  Metaphors  Precision
"metaphorically speaking"  77  7  5  0.71
"literally"  1,936  50  13  0.26
"figurative"  125  50  9  0.18
"utterly"  1,251  50  16  0.32
"completely"  8,339  50  13  0.26
"so to speak"  353  49  35  0.71
", "text": "Corpus statistics for linguistic cues.", "num": null }, "TABREF2": { "html": null, "type_str": "table", "content": "
SPS  Verb
1.3175  undo
1.3160  bud
1.3143  deplore
1.3138  seal
1.3131  slide
1.3126  omit
1.3118  reject
1.3097  augment
1.3094  frustrate
1.3087  restrict
1.3082  employ
1.3081  highlight
1.3081  correspond
1.3056  dab
1.3053  assist
1.3043  neglect
...
", "text": "Verbs with weak direct object SPs.", "num": null }, "TABREF3": { "html": null, "type_str": "table", "content": "
SPS  Verb    SPS  Verb
...
3.0810  aggravate    2.9434  coop
3.0692  dispose    2.9326  hobble
3.0536  rim    2.9285  paper
3.0504  deteriorate    2.9212  sip
3.0372  mourn    ...
3.0365  tread    1.7889  schedule
3.0348  cadge    1.7867  cheat
3.0254  intersperse    1.7860  update
3.0225  activate    1.7840  belt
3.0085  predominate    1.7835  roar
3.0033  lope    1.7824  intensify
2.9957  bone    1.7811  read
2.9955  pummel    1.7805  unnerve
2.9868  disapprove    1.7776  arrive
2.9838  hoover    1.7775  publish
2.9824  beam    1.7775  reason
2.9807  amble    1.7774  bond
2.9760  diversify    1.7770  issue
2.9759  mantle    1.7760  verify
2.9730  pulverize    1.7734  vomit
2.9604  skim    1.7728  impose
2.9539  slam    1.7726  phone
2.9523  archive    1.7723  purify
2.9504  grease    ...
", "text": "Verbs with strong direct object SPs.", "num": null }, "TABREF4": { "html": null, "type_str": "table", "content": "
Seed phrase  Harvested metaphors  BNC frequency
reflect concern (V-O):  reflect concern  78
reflect interest  74
reflect commitment  26
reflect preference  22
reflect wish  17
reflect determination  12
reflect intention  8
reflect willingness  4
reflect sympathy  3
reflect loyalty  2
disclose interest  10
disclose intention  3
disclose concern  2
disclose sympathy  1
disclose commitment  1
disguise interest  6
disguise intention  3
disguise determination  2
obscure interest  1
obscure determination  1
cast doubt (V-O):  cast doubt  197
cast fear  3
cast suspicion  2
catch feeling  3
catch suspicion  2
catch enthusiasm  1
catch emotion  1
spark fear  10
spark enthusiasm  3
spark passion  1
spark feeling  1
campaign surged (S-V):  campaign surged  1
charity boomed  1
effort decreased  1
expedition doubled  1
effort doubled  1
campaign shrank  1
campaign soared  1
drive spiraled  1
", "text": "Examples of seed set expansion by the system.", "num": null }, "TABREF5": { "html": null, "type_str": "table", "content": "
Seed phrase  Harvested metaphors  BNC frequency
reflect concern (V-O):  reflect concern  78
ponder business  1
ponder headache  1
reflect business  4
reflect care  2
reflect fear  19
reflect worry  3
cast doubt (V-O):  cast doubt  197
cast question  11
couch question  1
drop question  2
frame question  21
purge doubt  2
put doubt  12
put question  151
range question  1
roll question  1
shed doubt  2
stray question  1
throw doubt  35
throw question  17
throw uncertainty  1
campaign surged (S-V):  campaign surged  1
campaign soared  1
", "text": "Examples of seed set expansion by the baseline.", "num": null }, "TABREF6": { "html": null, "type_str": "table", "content": "
Source of error  Subject-Verb  Verb-Object  Totals
Metaphor conventionality  7  14  21
General polysemy  9  6  15
Verb clustering  4  5  9
Noun clustering  2  1  3
SP filter  0  0  0
Totals  22  26  48
", "text": "Common system errors by type.", "num": null }, "TABREF7": { "html": null, "type_str": "table", "content": "
Association  Replacement
Verb-DirectObject
hold back truth:
0.1161  conceal
0.0214  keep
0.0070  suppress
0.0022  contain
0.0018  defend
0.0006  hold
stir excitement:
0.0696  provoke
0.0245  elicit
0.0194  arouse
0.0061  conjure
0.0028  create
0.0001  stimulate
\u2248 0  raise
\u2248 0  make
\u2248 0  excite
leak report:
0.1492  disclose
0.1463  discover
0.0674  reveal
0.0597  issue
\u2248 0  emerge
\u2248 0  expose
Subject-Verb
campaign surge:
0.0086  improve
0.0009  run
\u2248 0  soar
\u2248 0  lift
", "text": "Paraphrases re-ranked by SP model (correct paraphrases are underlined).", "num": null }, "TABREF9": { "html": null, "type_str": "table", "content": "
Source of error  Subject-Verb  Verb-Object  Totals
Metaphor conventionality  0  5  5
General polysemy/WordNet filter  1  1  2
SP re-ranking  0  1  1
Lack of context  1  1  2
Totals  2  8  10
", "text": "Common system errors by type.", "num": null }, "TABREF10": { "html": null, "type_str": "table", "content": "
Tagging case  Acceptability  Percentage
Correct paraphrase: metaphorical \u2192 literal  \u2714  53.8
Correct paraphrase: literal \u2192 literal  \u2714  13.5
Correct paraphrase: literal \u2192 metaphorical  \u2717  0.5
Correct paraphrase: metaphorical \u2192 metaphorical  \u2717  10.7
Incorrect paraphrase  \u2717  21.5
", "text": "Integrated system performance.", "num": null }, "TABREF12": { "html": null, "type_str": "table", "content": "
Source of error  Met\u2192Met  Lit\u2192Met  Incorr. for Lit  Incorr. for Met  Total
No literal paraphrase exists  11  0  5  2  18
Metaphor conventionality  53  3  0  0  56
General polysemy  0  0  13  10  23
WordNet filter  0  0  21  21  42
SP re-ranking  0  0  41  7  48
Lack of context  0  0  6  2  8
Interaction with metonymy  0  0  0  1  1
Totals  64  3  86  43  196
", "text": "Errors of the paraphrasing component by type.", "num": null } } } }