{
"paper_id": "W16-0203",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:59:42.868256Z"
},
"title": "Intersecting Word Vectors to Take Figurative Language to New Heights",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Gagliano",
"suffix": "",
"affiliation": {},
"email": "andrea.gagliano@berkeley.edu"
},
{
"first": "Emily",
"middle": [],
"last": "Paul",
"suffix": "",
"affiliation": {},
"email": "emily.paul@berkeley.edu"
},
{
"first": "Kyle",
"middle": [],
"last": "Booten",
"suffix": "",
"affiliation": {},
"email": "kbooten@berkeley.edu"
},
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": "",
"affiliation": {},
"email": "hearst@berkeley.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper proposes a technique to create figurative relationships using Mikolov et al.'s word vectors. Drawing on existing work on figurative language, we start with a pair of words and use the intersection of word vector similarity sets to blend the distinct semantic spaces of the two words. We conduct preliminary quantitative and qualitative observations to compare the use of this novel intersection method with the standard word vector addition method for the purpose of supporting the generation of figurative language.",
"pdf_parse": {
"paper_id": "W16-0203",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper proposes a technique to create figurative relationships using Mikolov et al.'s word vectors. Drawing on existing work on figurative language, we start with a pair of words and use the intersection of word vector similarity sets to blend the distinct semantic spaces of the two words. We conduct preliminary quantitative and qualitative observations to compare the use of this novel intersection method with the standard word vector addition method for the purpose of supporting the generation of figurative language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "\"I sit in my chair all day and work and work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Measuring words against each other.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "-Conrad Aiken Improvisations: Light And Snow",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While metaphors are part of everyday language, in poetry they are vital. Metaphorical language, in contrast with literal or non-metaphorical language, \"mak[es] use of structure imported from a completely different conceptual domain\" (Lakoff and Turner, 1989) .",
"cite_spans": [
{
"start": 233,
"end": 258,
"text": "(Lakoff and Turner, 1989)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Lakoff and Turner analyze famous poems and show how they can be understood as the blending of concepts from multiple metaphorical frames. For example, they state that a common metaphor is DEATH AS DEPARTURE and provide an example of an Emily Dickinson poem in which she merely needs to mention the words \"death\" and \"carriage\" in the same set of stanzas for the reader to know that the carriage is not taking a spin around the block, but rather a one-way trip with no return.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\"Because I could not stop for Death -He kindly stopped for me - The Carriage held but just Ourselves -And Immortality.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Lakoff and Turner convincingly argue that there are basic conceptual metaphors that hold for how we conceive of, and therefore talk about, death (e.g., DEATH IS WINTER, DEATH IS REST or DEATH IS FREEDOM FROM BONDAGE) and these are combined in poetry with other metaphors, such as LIFE IS A JOURNEY, A LIFETIME IS A YEAR, NIGHT IS A COVER, PEOPLE ARE PLANTS, and so on. Our goal in this work is to develop new methods of automatically suggesting words that link together concepts across semantic spaces or frames, and so to aid both programs and people in the generation of poetic and figurative language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We add to the body of work on poetry analysis and generation by exploring a method to generate a set of words (connector words) that can be used to create a figurative relationship with a pair of anchor words, as shown in Figure 1 . We do this by making use of recent advances in statistical word similarity generation methods, in particular the word2vec embedding technology. Mikolov vector space, actualized in the word2vec technology, identifies semantic relationships between words using word vector algebra. These word vectors perform remarkably well at identifying semantic analogy relationships, e.g., capital city to country, currency to country, city to state, and man to woman (Mikolov et al., 2013a; Mikolov et al., 2013b; Mikolov et al., 2013c) . The classic example showcasing the power of the word vector algebra is:",
"cite_spans": [
{
"start": 377,
"end": 384,
"text": "Mikolov",
"ref_id": null
},
{
"start": 687,
"end": 710,
"text": "(Mikolov et al., 2013a;",
"ref_id": null
},
{
"start": 711,
"end": 733,
"text": "Mikolov et al., 2013b;",
"ref_id": null
},
{
"start": 734,
"end": 756,
"text": "Mikolov et al., 2013c)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 222,
"end": 230,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "vector('King') -vector('Man') + vector('Woman') = vector that is closest to vector('Queen')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
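The vector-offset computation quoted above can be sketched with toy embeddings. This is a minimal illustration only: the 3-dimensional vectors below are hypothetical values chosen by hand, not real word2vec embeddings (which are high-dimensional and learned from text).

```python
import numpy as np

# Hypothetical 3-d embeddings chosen so the offset lands on "queen".
vectors = {
    "king":   np.array([0.9, 0.8, 0.1]),
    "man":    np.array([0.9, 0.1, 0.1]),
    "woman":  np.array([0.1, 0.1, 0.9]),
    "queen":  np.array([0.1, 0.8, 0.9]),
    "throne": np.array([0.8, 0.5, 0.2]),
}

def cosine(u, v):
    # Cosine similarity, the distance measure word2vec ranks by.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# vector('King') - vector('Man') + vector('Woman') ...
target = vectors["king"] - vectors["man"] + vectors["woman"]

# ... is closest to vector('Queen') among the remaining vocabulary.
best = max((w for w in vectors if w not in ("king", "man", "woman")),
           key=lambda w: cosine(vectors[w], target))
```

With these toy values `best` is `"queen"`; a real word2vec model performs the same ranking over its full vocabulary.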
{
"text": "In this paper, we extend the use of these vectors beyond their primary application of identifying such analogous type relationships: we explore their use to draw together two semantic spaces for the creation of figurative relationships.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Specifically, starting with a pair of anchor words, made up of a concrete noun and a poetic theme, we leverage Mikolov et al.'s word vectors to return the sets of words most similar to either anchor word in the pair. By finding the intersection of these two sets, we can identify connector words that draw together the anchor words to create figurative relationships. Some examples of the anchor and connector words can be seen in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 431,
"end": 438,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For each pair of anchor words we also generate a list of suggested connector words by using Mikolov et al.'s word vector algebra. We then quantitatively observe the difference between the lists of connector words produced by intersection and by addition. For the connector words from the intersection list, we observe a more balanced similarity score between the connector words and each anchor word than we do for the connector words from the addition list. We then construct an initial dataset to explore qualitatively the figurative relationships generated using connector words from the addition list and those generated using connector words from the intersection list.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper provides an overview of related work on figurative language and semantic relationships; outlines the computational methods used to retrieve connector words; describes quantitative and qualitative observations of the retrieved connector words; and discusses future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\"Figurative language,\" in particular metaphor, plays a crucial role in poetry. Lakoff and Turner (1989) explicate in great detail how metaphors are combined in everyday language, and how poets extend and elaborate on conventional metaphors in new ways. A metaphor can be thought of as a linguistic structure that creates a \"mapping\" of two conceptual spaces or frames (Lakoff and Turner, 1989) or the \"blending\" of two input spaces (Fauconnier and Turner, 2008) . Veale et al. (2000) point to this as an important theoretical model for computational work on metaphor, and emphasize the highly structured relationships that metaphors create between two terms. In one of their examples, to see Scientists as Priests metaphorically could mean to see their lab-benches as altars. In this case, the general terms are metaphorically connected based on some more specific attribute that is common to each. This commonality, however, is not immediately obvious, so a good metaphor reveals something surprising about a conceptual space by combining it with another conceptual space. Veale et al. (2000) offer examples of systems that create such relationships between two input terms to form metaphors. More recently, researchers have generated poetic metaphorical relationships between two terms by leveraging large corpora. Veale and Hao (2007) found metaphorical relationships between two terms by mining Google search results for adjectives used to describe both terms. Veale and Hao (2008) applied a similar approach to mine more complex metaphors from WordNet. Such techniques have been deployed in \"computer poetry\" applications that automatically generate verse (Veale, 2013; Harmon, 2015) .",
"cite_spans": [
{
"start": 79,
"end": 103,
"text": "Lakoff and Turner (1989)",
"ref_id": "BIBREF4"
},
{
"start": 368,
"end": 393,
"text": "(Lakoff and Turner, 1989)",
"ref_id": "BIBREF4"
},
{
"start": 432,
"end": 461,
"text": "(Fauconnier and Turner, 2008)",
"ref_id": null
},
{
"start": 464,
"end": 483,
"text": "Veale et al. (2000)",
"ref_id": "BIBREF11"
},
{
"start": 1074,
"end": 1093,
"text": "Veale et al. (2000)",
"ref_id": "BIBREF11"
},
{
"start": 1317,
"end": 1337,
"text": "Veale and Hao (2007)",
"ref_id": "BIBREF9"
},
{
"start": 1465,
"end": 1485,
"text": "Veale and Hao (2008)",
"ref_id": "BIBREF10"
},
{
"start": 1661,
"end": 1674,
"text": "(Veale, 2013;",
"ref_id": "BIBREF12"
},
{
"start": 1675,
"end": 1688,
"text": "Harmon, 2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work 2.1 Work on figurative language",
"sec_num": "2"
},
{
"text": "Literary theorist William Empson (2004) argued for the importance of reading the \"ambiguities\" present in verse. The most basic type of ambiguity occurs when a metaphor simultaneously draws on different qualities of the items brought into metaphorical relation, and so it is \"effective in several ways at once.\" For instance, eyes are like sun for multiple reasons (e.g., both are literally round, and both may be \"bright\"). More complicated ambiguities may be generated through puns, in which a word simultaneously carries two distinct and ironically opposite meanings, each relevant to the context. To use Empson's example: in Pope's Dunciad, a character sleeping \"in port\" may be both safe at harbor and drunk (on port wine). In this case, two distinct conceptual spaces are activated by one word. Furthermore, as Empson argues, poets themselves are not always fully in control of the meanings of their words and, \"discovering his idea in the act of writing\" may create \"a simile which applies to nothing exactly, but lies half-way between two things,\" a subtly mixed metaphor. He gives the example of a passage from a verse-play by John Ford in which the term \"gall\" at first seems to mean \"boldness\"-but, when the author later mentions a \"well-grown oak,\" comes to retroactively signal \"oak-galls,\" a horticultural disease. \"Figurative language\" is not merely the sort of ordered and symmetrical matching of cognitive structures evinced by clear metaphors; it is also what happens when poets get caught up in loose and chaotic association between words, the way jazz musicians zig and zag between notes.",
"cite_spans": [
{
"start": 18,
"end": 39,
"text": "William Empson (2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work 2.1 Work on figurative language",
"sec_num": "2"
},
{
"text": "Distributional approaches to representing word meaning have a long history in computational linguistics, and are motivated by the notion that \"You shall know a word by the company it keeps!\" (Firth, 1957) and the Wittgenstein-inspired notion that a concept is not an isolated thing but really a constellation of concepts linked by family resemblances (Rosch and Mervis, 1975) . Sch\u00fctze (1993) made early attempts to represent the meaning of concepts by creating n-dimensional word spaces. More recent attempts which use larger collections and innovations in algorithms have yielded more accurate representations of semantic relatedness. These include Mikolov et al.'s work on word embeddings learned using a Continuous Skipgram Model. It extends beyond bag-of-words models by accounting for the context a word appears in. This model is implemented in Google's word2vec tool 1 which has shown significant success in finding both syntactic and semantic relationships between words (Mikolov et al., 2013b; Mikolov et al., 2013a; Mikolov et al., 2013c) .",
"cite_spans": [
{
"start": 191,
"end": 204,
"text": "(Firth, 1957)",
"ref_id": "BIBREF2"
},
{
"start": 351,
"end": 375,
"text": "(Rosch and Mervis, 1975)",
"ref_id": "BIBREF6"
},
{
"start": 378,
"end": 392,
"text": "Sch\u00fctze (1993)",
"ref_id": "BIBREF7"
},
{
"start": 979,
"end": 1002,
"text": "(Mikolov et al., 2013b;",
"ref_id": null
},
{
"start": 1003,
"end": 1025,
"text": "Mikolov et al., 2013a;",
"ref_id": null
},
{
"start": 1026,
"end": 1048,
"text": "Mikolov et al., 2013c)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic relationships from word embeddings",
"sec_num": "2.2"
},
{
"text": "Mikolov et al. find semantic relationships from word pairs which can be categorized into 79 specific word relations, such as Cause:Effect or Action:Goal as identified in the SemEval-2012 Task 2, Measuring Relation Similarity (Jurgens et al., 2012; Turney, 2012) . For example, the word pair clothing:shirt falls into the Class Inclusion:Singular Collective relation. Mikolov et al. (2013c) then use a vector offset technique to understand the validity of resulting analogous relationships, such as \"clothing is to shirt as dish is to bowl\" as tested against the word relation data set presented by Jurgens et al. 2012. Similarly, our work aims to identify a semantic relationship between two words. It differs in that we aim to find figurative relationships between two words for poetic purposes, as opposed to analogous relationships between two pairs of words.",
"cite_spans": [
{
"start": 225,
"end": 247,
"text": "(Jurgens et al., 2012;",
"ref_id": "BIBREF3"
},
{
"start": 248,
"end": 261,
"text": "Turney, 2012)",
"ref_id": "BIBREF8"
},
{
"start": 367,
"end": 389,
"text": "Mikolov et al. (2013c)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic relationships from word embeddings",
"sec_num": "2.2"
},
{
"text": "In this section we outline two methods -an addition model and an intersection model -to retrieve connector words that support figurative relationships using word2vec.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational methods in word2vec to retrieve connector words",
"sec_num": "3"
},
{
"text": "The standard functionality of word2vec is to retrieve the top-ranked most similar words. Word2vec addition is optimized for such tasks, but here we are more interested in retrieving words to support figurative relationships. We aim to find the words in the overlap of the family resemblance spaces of each of the anchor words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational methods in word2vec to retrieve connector words",
"sec_num": "3"
},
{
"text": "We do so by retrieving words from word2vec similarity lists that are common to each anchor word. In contrast to the addition model, the intersection model retrieves words from further down on the sim- ilarity lists for each anchor word (moving towards the outer edges of their respective family resemblance spaces). The resulting words in the shared space maintain a balance between the two anchor words, thus drawing them together.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational methods in word2vec to retrieve connector words",
"sec_num": "3"
},
{
"text": "To narrow the scope, the work in this paper focuses on anchor word pairs comprising one concrete noun and one poetic theme. We chose this focus from our observations that poetry often relies on a connection between a concrete concept and a more abstract theme, which is consistent with Kao and Jurafsky's (2015) findings that professional poetry contains more concreteness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational methods in word2vec to retrieve connector words",
"sec_num": "3"
},
{
"text": "For the investigation here, we randomly generate anchor word pairs from a list of concrete nouns (see Table 2 ) and a list of poetic themes 2 (see Table 3 ). 3",
"cite_spans": [],
"ref_spans": [
{
"start": 102,
"end": 109,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 147,
"end": 154,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Selecting anchor word pairs",
"sec_num": "3.1"
},
{
"text": "2 Created from a list of poetic themes from http://www.poetseers.org/themes/ then expanded to include the top 5 most similar words, using word2vec. The expanded list was normalized to lower-case words. Overly-specific words, including \"rainbows\", \"cats,\" \"pets,\" \"rabbits,\" \"dogs,\" \"Iraq,\" and \"sewage\", were removed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting anchor word pairs",
"sec_num": "3.1"
},
{
"text": "3 These particular lists were chosen for expediency; the rigorous definition of concrete nouns and poetic themes is not central to our exploration. The set of concrete nouns comes from existing poetry. 4 The words are selected based on their word frequency across the corpus, number of noun senses in WordNet (WordNet, 2010), and degree of concreteness using the word concreteness dataset developed by Brysbaert et al. (2013). The frequency measure is normalized across the corpus. The mean concreteness ratings from Brysbaert et al. (2013) range from 0.0 to 5.0 and include standard deviations. The concrete noun list is composed of nouns with a normalized frequency ranging from 0.0 to 0.1; the number of noun senses ranging from 0 to 4; and the degree of concreteness ranging from 3.5 to 5.0, with the degree of concreteness standard deviation ranging from 0.0 to 3.0.",
"cite_spans": [
{
"start": 202,
"end": 203,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting anchor word pairs",
"sec_num": "3.1"
},
{
"text": "In generating anchor word pairs, we randomly select a concrete noun and a poetic theme pair where the two words occupy distinct semantic spaces. For the purposes of this paper, we use a cosine similarity score of less than 0.4 as a threshold. The similarity scores of candidate concrete nouns to candidate poetic themes range from -0.15 (dissimilar) to 0.45 Top 10 words from word2vec addition for storm + surrendering surrendered hurricane storms snowstorm rainstorm tornado blizzard typhoon twister squall Table 4 : Top 10 words retrieved when adding anchor words storm and surrendering using word2vec addition.",
"cite_spans": [],
"ref_spans": [
{
"start": 508,
"end": 515,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Selecting anchor word pairs",
"sec_num": "3.1"
},
{
"text": "(similar). This similarity check is used to create an anchor word pair comprising two words with different semantic spaces. If the two anchor words are too similar, they will rely on a synonymous connection and thus will not provide two distinct semantic spaces to blend.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting anchor word pairs",
"sec_num": "3.1"
},
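The dissimilarity check on candidate anchor pairs (cosine similarity below the 0.4 threshold) can be sketched as follows. The vectors and the helper name `is_valid_anchor_pair` are illustrative assumptions, not the authors' code; real embeddings would come from a trained word2vec model.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def is_valid_anchor_pair(c_vec, t_vec, threshold=0.4):
    """Accept a (concrete noun, poetic theme) pair only if the two words
    occupy distinct semantic spaces: cosine similarity < threshold."""
    return cosine(c_vec, t_vec) < threshold

# Hypothetical embeddings: "storm"/"surrendering" are dissimilar,
# while "storm"/"tempest" are near-synonyms and would be rejected.
storm        = np.array([0.9, 0.1, 0.2])
surrendering = np.array([0.1, 0.9, 0.3])
tempest      = np.array([0.85, 0.15, 0.25])
```

Here `is_valid_anchor_pair(storm, surrendering)` holds (similarity about 0.27), while `is_valid_anchor_pair(storm, tempest)` fails, matching the intuition that near-synonymous anchors give no distinct spaces to blend.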
{
"text": "The addition model retrieves a set of connector words which are the most similar to a pair of anchor words using word2vec's existing vector addition approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Addition model in word2vec to retrieve connector words",
"sec_num": "3.2"
},
{
"text": "Implementation of this addition model involves starting with word2vec's word vector representations of the concrete noun, c, and the poetic theme, t, of the anchor word pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Addition model in word2vec to retrieve connector words",
"sec_num": "3.2"
},
{
"text": "The word vector, a, is then defined such that a = c + t. Word2vec then searches for the word vectors with the greatest cosine similarity to a, which approximates their similarity (Mikolov et al., 2013c) . We use this vector addition to find a set A containing n words closest to a. For example, if we take the concrete noun \"storm\" and the poetic theme \"surrendering\" as an anchor word pair, we can retrieve the list of words in Table 4 . 5 The resulting list contains primarily words that are synonyms to one of the two anchor words. However, our goal is to retrieve connector words that blend the semantic spaces of the two anchor words, so we investigate an alternative computation in word2vec, the intersection model.",
"cite_spans": [
{
"start": 179,
"end": 202,
"text": "(Mikolov et al., 2013c)",
"ref_id": "BIBREF5"
},
{
"start": 439,
"end": 440,
"text": "5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 429,
"end": 436,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Addition model in word2vec to retrieve connector words",
"sec_num": "3.2"
},
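The addition model can be sketched with a toy vocabulary. Everything below is a hypothetical miniature (hand-picked 3-d vectors, a made-up `addition_model` helper), not the authors' implementation; word2vec's own `most_similar(positive=[c, t])` performs the analogous ranking over a real vocabulary.

```python
import numpy as np

# Toy 3-d embeddings: dimensions 1 and 2 loosely stand for the "storm"
# and "surrendering" senses, dimension 3 for an unrelated direction.
vocab = {
    "storm":        np.array([1.0, 0.0, 0.0]),
    "surrendering": np.array([0.0, 1.0, 0.0]),
    "storms":       np.array([0.9, 0.0, 0.05]),
    "squall":       np.array([0.95, 0.0, 0.1]),
    "ceding":       np.array([0.0, 0.9, 0.05]),
    "yielding":     np.array([0.0, 0.95, 0.1]),
    "barrage":      np.array([0.3, 0.3, 0.9]),
    "truce":        np.array([0.28, 0.32, 0.9]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def addition_model(c_word, t_word, n=4):
    """Rank the vocabulary by cosine similarity to a = c + t,
    excluding the anchor words themselves."""
    a = vocab[c_word] + vocab[t_word]
    candidates = [w for w in vocab if w not in (c_word, t_word)]
    return sorted(candidates, key=lambda w: cosine(vocab[w], a), reverse=True)[:n]

top = addition_model("storm", "surrendering")
```

Even in this miniature, the top of the addition list is filled by near-synonyms of one anchor or the other ("storms", "squall", "ceding", "yielding"), echoing the Table 4 pattern the text describes.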
{
"text": "For the intersection model, we start with word2vec's vector representations of the concrete noun, c, and the poetic theme, t, of the anchor word pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection model in word2vec to retrieve connector words",
"sec_num": "3.3"
},
{
"text": "Using word2vec, we then find a set, C, which contains the top n = 1000 word vectors that have the greatest cosine similarity to c. Similarly, we find a set, T , which contains the top n = 1000 word vectors that have the greatest cosine similarity to t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection model in word2vec to retrieve connector words",
"sec_num": "3.3"
},
{
"text": "Looking at the intersection, I, of the two sets I = C \u2229 T , we find words that relate both to the initial concrete noun, ( c), and the poetic theme, ( t).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection model in word2vec to retrieve connector words",
"sec_num": "3.3"
},
{
"text": "The resulting set, I, varies significantly in size depending on the concrete noun and poetic theme pair chosen. The depth n also contributes to the size of the set, I. In our analyses, we elected to use n = 1000 because it elicited plentiful yet meaningful results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection model in word2vec to retrieve connector words",
"sec_num": "3.3"
},
{
"text": "It is the case that for any sets I and A of similar size, there are likely to be words unique to each set, as well as words that are shared between these sets. A proof of this appears in Appendix A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection model in word2vec to retrieve connector words",
"sec_num": "3.3"
},
{
"text": "Since A and I have overlapping words, but also contain unique words, we remove the overlapping words to focus on the resulting set U I = I \\ A, containing the words unique to the intersection set, and U A = A \\ I, containing the words unique to the addition set. We focus on these unique words sets to facilitate our observations of the differences between the two models. Quantitatively, we observe differences in the range of similarity scores between the anchor word-connector word pairs. Qualitatively, we use the unique words from each set to consider the potential to support figurative language by combining the semantic spaces of the two anchor words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection model in word2vec to retrieve connector words",
"sec_num": "3.3"
},
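The intersection model and the unique sets U_I = I \ A and U_A = A \ I can be sketched on the same kind of toy vocabulary. Again, the vectors and helper names are hypothetical (and n is shrunk from 1000 to 4 to fit the miniature vocabulary); this is an illustration of the set construction, not the authors' code.

```python
import numpy as np

# Hypothetical 3-d embeddings (same toy scheme as the addition sketch).
vocab = {
    "storm":        np.array([1.0, 0.0, 0.0]),
    "surrendering": np.array([0.0, 1.0, 0.0]),
    "storms":       np.array([0.9, 0.0, 0.05]),
    "squall":       np.array([0.95, 0.0, 0.1]),
    "ceding":       np.array([0.0, 0.9, 0.05]),
    "yielding":     np.array([0.0, 0.95, 0.1]),
    "barrage":      np.array([0.3, 0.3, 0.9]),
    "truce":        np.array([0.28, 0.32, 0.9]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def top_n(word, n):
    """Words ranked by cosine similarity to `word` (itself excluded)."""
    others = [w for w in vocab if w != word]
    return sorted(others, key=lambda w: cosine(vocab[w], vocab[word]), reverse=True)[:n]

def connector_words(c_word, t_word, n=4):
    C = set(top_n(c_word, n))      # neighbourhood of the concrete noun
    T = set(top_n(t_word, n))      # neighbourhood of the poetic theme
    I = C & T                      # intersection model
    a = vocab[c_word] + vocab[t_word]
    A = set(sorted((w for w in vocab if w not in (c_word, t_word)),
                   key=lambda w: cosine(vocab[w], a), reverse=True)[:n])
    return I - A, A - I            # (U_I, U_A)

U_I, U_A = connector_words("storm", "surrendering")
```

With these toy values, U_I contains the balanced connectors "barrage" and "truce" (each roughly 0.28-0.32 similar to both anchors), while U_A collects near-synonyms of a single anchor, mirroring the contrast the quantitative observations in Section 4 report.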
{
"text": "With the example anchor words, storm and surrendering, we see the resulting unique word lists in Table 5 . In the next two sections we quantitatively and qualitatively observe these two lists. ",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 104,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Intersection model in word2vec to retrieve connector words",
"sec_num": "3.3"
},
{
"text": "In observing the cosine similarities between the words in U I and each anchor word, and the words in U A and each anchor word, we begin to see a pattern where the words from U I fall within a smaller band of similarity than of those in U A (see Tables 6 and 7 for example with words from Table 5 ). Note that the highest possible cosine similarity score is 1, indicating maximum similarity, and the lowest is -1, indicating dissimilarity.",
"cite_spans": [],
"ref_spans": [
{
"start": 245,
"end": 260,
"text": "Tables 6 and 7",
"ref_id": "TABREF9"
},
{
"start": 289,
"end": 296,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Quantitative observations of retrieved connector words",
"sec_num": "4"
},
{
"text": "Across 10 different randomly selected anchor word pairs, we see that this same pattern holds. The words in U I fall within a band of similarity ranging from approximately 0.25 to approximately 0.30 where the average spread between the two similarities is 0.06. By comparison, the similarities between connector words in U A and each anchor word falls within a larger band of similarity ranging from approximately 0.1 to approximately 0.6 where the average spread between the two similarities is 0.44. Table 8 shows these ranges for each anchor word pair. The connector words in U I are more balanced between both of the anchor words, whereas the con- nector words in U A are more closely related to a single anchor word.",
"cite_spans": [],
"ref_spans": [
{
"start": 501,
"end": 508,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Quantitative observations of retrieved connector words",
"sec_num": "4"
},
{
"text": "Next we qualitatively explore the potential of each model to retrieve words in the shared space between two anchor words using a crowd-sourced dataset of figurative relationships. We annotate these relationships based on the types of connections made between the connector words and anchor word pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative observations of retrieved connector words",
"sec_num": "5"
},
{
"text": "We construct a dataset made up of sentences stating the figurative relationships tying the connector words from the addition and intersection lists to pairs of anchor words. This dataset allows us to explore the potential of the connector words pro- Average spread between similarity scores: 0.56 vided by each approach to blend the distinct semantic spaces of the two anchor words. To generate this dataset, we presented crowd-sourced workers from Mechanical Turk with a list of words, either those unique to the I set (U I ) or those unique to the A set (U A ). The words in the provided sets were normalized to exclude proper nouns, lower-case all characters, and eliminate morphological duplicates. If the unique word list exceeded 10 words, a random sample of 10 words was shown. The Mechanical Turk workers then selected a single connector word from the list and wrote a sentence to describe the relationship between the anchor words and the connector word. Mechanical Turk workers were provided the diagram in Figure 1 with the concrete noun and poetic theme words populated. We informed workers that they should select the connector word that \"best connects the anchor words in a poetic sense (e.g., using a double meaning, creating a new image, creating an interesting relationship, etc.)\".",
"cite_spans": [],
"ref_spans": [
{
"start": 1017,
"end": 1025,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Dataset construction",
"sec_num": "5.1"
},
{
"text": "The workers were prompted to fill in text to complete a template sentence of the form: \" [ With 25 workers writing 4 sentences each across 10 anchor word pairs, the constructed dataset contained 100 generated sentences.",
"cite_spans": [
{
"start": 89,
"end": 90,
"text": "[",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset construction",
"sec_num": "5.1"
},
{
"text": "The following sections provide examples of the figurative relationships among the selected connector word and anchor word pairs created by Mechanical Turk workers and discussion of whether the relationships created achieve heightened effects by drawing together two distinct semantic spaces.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset construction",
"sec_num": "5.1"
},
{
"text": "Using the dataset of generated sentences, we explore the potential for word2vec to provide connector words that blend the two distinct semantic spaces of the two anchor words using the addition and intersection operations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample data",
"sec_num": "5.2"
},
{
"text": "Below, we present detailed results for 3 of the assessed anchor word pairs. We show which words were chosen by Mechanical Turk workers as the best connector word (bolded), and the sentences describing the relationship among the connector word and the anchor words (underlined) as created by the workers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample data",
"sec_num": "5.2"
},
{
"text": "All connector words chosen are shown in Table 9 . Sample connection descriptions from Mechanical Turk workers as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 48,
"text": "Table 9",
"ref_id": "TABREF15"
}
],
"eq_spans": [],
"section": "Sample 1: color and earthly",
"sec_num": "5.2.1"
},
{
"text": "\"Radiant connects color and earthly because radiant means a bright color that looks like it's shining and at night, the earthly sky is radiant because it shines brightly with the stars.\" \"Hues connects color and earthly because hues imply various colors, shades, or characteristics and hues can be earthly in tone, such as blues, greens and browns.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample 1: color and earthly",
"sec_num": "5.2.1"
},
{
"text": "All connector words chosen are shown in Table 10 . Sample connection descriptions from Mechanical Turk workers are as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 49,
"text": "Table 10",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Sample 2: storm and surrendering",
"sec_num": "5.2.2"
},
{
"text": "\"Barrage connects storm and surrendering because a storm is a barrage of bad weather like winds and rain people surrender when they feel a barrage of overwhelming things coming at them.\" \"Hurricane connects storm and surrendering because it is a type of storm and those who surrender to it are spared, like grass and those who stand against it are devastated, like big trees.\" ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample 2: storm and surrendering",
"sec_num": "5.2.2"
},
{
"text": "All connector words chosen are shown in Table 11 . Sample connection descriptions from Mechanical Turk workers are as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 49,
"text": "Table 11",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Sample 3: flame and caring",
"sec_num": "5.2.3"
},
{
"text": "\"Cook connects caring and flame because it is related to flame as flames are used in cooking and cooking can be a symbol of caring for someone with good food.\" \"Torch connects caring and flame because when someone cares about someone else it's often said they are carrying a torch for them, while the visual of a torch itself tends to have a flame atop it.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample 3: flame and caring",
"sec_num": "5.2.3"
},
{
"text": "As stated above, our goal in suggesting connector words is to blend the distinct semantic spaces of the two anchor words to create figurative relationships.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion of qualitative observations",
"sec_num": "5.3"
},
{
"text": "While the sentences with synonyms do contain figurative language, they do not achieve our goal of using the connector word to blend the two anchor words. Instead, the connector word shares a semantic space with one of the anchor words and this shared semantic space is then blended with the semantic space of the other anchor word. The sentences reflect a figurative relationship that is present between the two anchor words (which does not depend on the connector word) rather than a new space created by the introduction of the connector word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion of qualitative observations",
"sec_num": "5.3"
},
{
"text": "We observe that the connector words that create a heightened effect by blending the two anchor words have a balanced cosine similarity to both anchor words (in the range of approximately 0.25 to approximately 0.30 as discussed in section 4 and shown in Table 8 ). This means that the connector word is not closer to one or the other anchor word, but rather occupies the shared space between the two anchor words. In contrast, the connector words that are synonymous with one of the anchor words, and thus do not blend the semantic spaces of the two anchor words, have imbalanced cosine similarities to the two anchor words. The connector word's shared semantic space with one of the anchor words is visible in a higher cosine similarity to that anchor word (approximately 0.6) and a much lower cosine similarity to the other anchor word (approximately 0.1). In this latter case, the connector word is not blending the semantic spaces of the two anchor words but is rather sharing the semantic space of one.",
"cite_spans": [],
"ref_spans": [
{
"start": 253,
"end": 260,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion of qualitative observations",
"sec_num": "5.3"
},
{
"text": "In the sentences that rely on synonyms for one part of the relationship, the connector word has a metaphorical relationship with one of the anchor words and a non-metaphorical with the other anchor word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relationships based on synonyms",
"sec_num": "5.3.1"
},
{
"text": "By looking at word2vec similarity scores of concrete noun to connector word and poetic theme to connector word, we can see that in the relationships that rely on synonym there is a relatively wide spread between the similarity scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relationships based on synonyms",
"sec_num": "5.3.1"
},
{
"text": "The examples below show figurative relationships that rely on synonym-based relationships between the connector word and one of the anchor words along with the similarity scores between the connector word and the concrete noun and between the connector word and the poetic theme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relationships based on synonyms",
"sec_num": "5.3.1"
},
{
"text": "\"Torch connects caring and flame because caring for someone can feel like a flame or a torch burns inside you for them.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples:",
"sec_num": null
},
{
"text": "Torch and flame are connected through a synonymous relationship; these two words are then connected to caring through a metaphor (caring is a torch burning) Similarity score torch-caring: 0.06 Similarity score torch-flame: 0.67 \"Hues connects color and earthly because hues imply various colors, shades, or characteristics and hues can be earthly in tone, such as blues, greens and browns.\" Hues and color are connected through a synonymous relationship; these two words are then connected to earthly through a metaphor (colors are earthly).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples:",
"sec_num": null
},
{
"text": "Similarity score hues-color: 0.61 Similarity score hues-earthly: 0.09",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples:",
"sec_num": null
},
{
"text": "The figurative relationships that result in a heightened effect are created through a connector word retrieved from the overlapping semantic space between the two anchor words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relationships blending distinct semantic spaces",
"sec_num": "5.3.2"
},
{
"text": "In these relationships, the word2vec similarity scores of concrete noun to connector word and poetic theme to connector word are close, indicating a balanced relationship.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relationships blending distinct semantic spaces",
"sec_num": "5.3.2"
},
{
"text": "The examples below show figurative relationships that use a connector word that blends the two distinct semantic spaces of the anchor words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relationships blending distinct semantic spaces",
"sec_num": "5.3.2"
},
{
"text": "\"Barrage connects storm and surrendering because a storm is a barrage of bad weather like winds and rain people surrender when they feel a barrage of overwhelming things coming at them.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples:",
"sec_num": null
},
{
"text": "A storm is a barrage of bad weather and life can be a barrage to which you surrender. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples:",
"sec_num": null
},
{
"text": "In the dataset of sentences generated by Mechanical Turk workers drawing the connections between anchor words and connector words, we observe the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of observations",
"sec_num": "6"
},
{
"text": "\u2022 Instances where the cosine similarity scores between each of the anchor words and the connector word are unbalanced tend to lead to a synonymous relationship between one anchor word-connector word pair (a nonmetaphorical relation) and a shared figurative relationship with the second anchor word (a metaphorical relation). In these cases the connector word is not drawing together the family resemblance semantic spaces of the two anchor words, because it already exists in the semantic space of one of them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of observations",
"sec_num": "6"
},
{
"text": "\u2022 Instances where the cosine similarity scores between each of the anchor words and the connector word are balanced tend to lead to a heightened effect relationship blending the two distinct semantic spaces of the anchor word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of observations",
"sec_num": "6"
},
{
"text": "As seen in Table 8 , the band of similarity scores resulting from words in U I is smaller than the band of similarity resulting from connector words in U A , suggesting a more balanced relationship.",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 18,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Summary of observations",
"sec_num": "6"
},
{
"text": "We have observed figurative relationships resulting from the introduction of a connector word to an anchor word pair. We notice that balanced cosine similarity scores between the connector word and each anchor word tend to lead to heightened effects by blending the two distinct semantic spaces of the anchor word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "7"
},
{
"text": "The words unique to the intersection list proposed here have balanced cosine similarity scores ranging from approximately 0.25 to 0.30, suggesting that finding the words unique to the intersection list prioritizes the retrieval of words that blend the distinct semantic spaces of two anchor words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "7"
},
{
"text": "The next step in this work is to test this hypothesis with an evaluation. Such an evaluation may include looking at the band of similarity from 0.25 to 0.30 directly, by way of the unique words to intersection and unique words to addition sets, and/or by way of the complete intersection and complete addition sets. We could also conduct threshold testing for varying word2vec settings and top n settings. In evaluating this work, it would be interesting to see if everyday people and practicing poets judge the relationships differently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "7"
},
{
"text": "Additionally, through further evaluation, the nature of other bands of similarity outside of the 0.25 to 0.30 range could be tested, as well as the presence of such a band of similarity when expanding beyond the concrete noun-poetic theme scope.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "7"
},
{
"text": "Expanding beyond the concrete noun-poetic theme scope could also involve grounding the anchor pair selection more explicitly in the metaphors proposed by Lakoff and Turner (1989) .",
"cite_spans": [
{
"start": 154,
"end": 178,
"text": "Lakoff and Turner (1989)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "7"
},
{
"text": "Further related work may include consideration of more computations within word2vec to see what types of word relations such computations support.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "7"
},
{
"text": "Once a conceptual understanding is more established, research could then be conducted regarding the various applications that such findings could be used for. Such applications may include poetry generation or tools to assist in creative writing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "7"
},
{
"text": "Overall, we hope that this work will continue to promote the development of computational approaches to figurative language, because:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "7"
},
{
"text": "\"By metaphor you paint A thing.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "7"
},
{
"text": "-Wallace Stevens Poem Written At Morning",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "7"
},
{
"text": "Throughout this paper, we are using the Gensim implementation of word2vec (\u0158eh\u016f\u0159ek and Sojka, 2010), trained on 'pruned.word2vec.txt'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Existing poetry from a corpus of 2,860 poems downloaded from the \"19th Century American Poetry\" section of http://famouspoetsandpoems.com.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Proper nouns were removed from the list and morphological duplicates removed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank David Bamman for his guidance and valuable insight,s and the reviewers for their very thoughtful and thorough feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "Formally, the set A contains the top n most similar word vectors, w, such that cos( w, a) \u2265 \u03b1, where \u03b1 is a minimum similarity threshold resulting from selecting the top n words. As such:The set I contains all word vectors, w, such that cos( w, c) \u2265 \u03b2 and cos( w, t) \u2265 \u03b3, where \u03b2 and \u03b3 are minimum similarity thresholds resulting from selecting the top n words from each list.If we were finding the single word vector that maximized (1) and (2), the two equations would be equivalent, as shown by Levy and Goldberg (2014) . Rather, in the addition model, we are finding the set of words that satisfy (1), and, in the intersection model, we are finding the set of words that satisfy (2). We can see that (1) and (2) are not necessarily equivalent. If they were, we would have a connector word, w, such that (1) and (2) were always both satisfied. As such, we would need to satisfy (3):Note that (3) assumes the word vectors are lengthnormalized. We then expand (3) as follows:We can solve (4) as follows:is not necessarily always true. Thus, the initial assumption that the addition and intersection models contain the same word vectors is contradicted, which confirms that A does not necessarily equal I.",
"cite_spans": [
{
"start": 497,
"end": 521,
"text": "Levy and Goldberg (2014)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix: Proof",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Concreteness Ratings for 40 Thousand Generally Known English Words and Lemmas",
"authors": [],
"year": 2013,
"venue": "Behavior Research Methods",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "References [Brysbaert et al.2013] Marc Brysbaert, Amy Beth War- riner, and Victor Kuperman. 2013. Concreteness Rat- ings for 40 Thousand Generally Known English Words and Lemmas. In Behavior Research Methods, pages 1-8.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Random House. [Fauconnier and Turner2008] Gilles Fauconnier and Mark Turner",
"authors": [
{
"first": "William",
"middle": [],
"last": "Empson",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "645",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Empson. 2004. Seven types of ambiguity, volume 645. Random House. [Fauconnier and Turner2008] Gilles Fauconnier and Mark Turner. 2008. The way we think: Conceptual blending and the mind's hidden complexities. Basic Books.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A synopsis of linguistic theory, 1930-1955. Studies in linguistic analysis (Special",
"authors": [
{
"first": "John",
"middle": [],
"last": "Rupert",
"suffix": ""
},
{
"first": "Firth",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1957,
"venue": "Philological Society)",
"volume": "",
"issue": "",
"pages": "168--205",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Rupert Firth. 1957. A synopsis of lin- guistic theory, 1930-1955. Studies in linguistic analy- sis (Special volume of the Philological Society), pages 1-31. Reprinted in: Frank R. Palmer (ed.) Selected papers of J. R. Firth 1952-59, Longmans, Green and Co Ltd, London and Harlow, UK, 168-205; citation on page 179.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Semeval-2012 task 2: Measuring degrees of relational similarity",
"authors": [
{
"first": "Sarah",
"middle": [
"Harmon"
],
"last": "",
"suffix": ""
},
{
"first": ". ; David",
"middle": [],
"last": "Jurgens",
"suffix": ""
},
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Holyoak",
"suffix": ""
}
],
"year": 2012,
"venue": "A computational analysis of poetic style: Imagism and its influence on modern professional and amateur poetry",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah Harmon. 2015. Figure8: A novel system for generating and evaluating figurative lan- guage. In Proceedings of the Sixth International Con- ference on Computational Creativity June, page 71. [Jurgens et al.2012] David Jurgens, Saif Mohammad, Pe- ter Turney, and Keith Holyoak. 2012. Semeval-2012 task 2: Measuring degrees of relational similarity. In In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics (SemEval 2012), page 356364. Association for Computational Linguistics. [Kao and Jurafsky2015] Justine Kao and Dan Jurafsky. 2015. A computational analysis of poetic style: Imag- ism and its influence on modern professional and ama- teur poetry. Linguistic Issues in Language Technology, 12(3).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Linguistic Regularities in Sparse and Explicit Word Representations",
"authors": [
{
"first": "Turner1989] George",
"middle": [],
"last": "Lakoff",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Turner",
"suffix": ""
}
],
"year": 1989,
"venue": "Distributed Representations of Words and Phrases and their Compositionality",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "and Turner1989] George Lakoff and Mark Turner. 1989. More than cool reason: A field guide to poetic metaphor. University of Chicago Press. [Levy and Goldberg2014] Omer Levy and Yoav Gold- berg. 2014. Linguistic Regularities in Sparse and Ex- plicit Word Representations. In CoNLL. [Mikolov et al.2013a] Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean, and Dan Jurafsky. 2013a. Ef- ficient Estimation of Word Representations in Vector Space. In arXiv preprint arXiv:1301.3781. [Mikolov et al.2013b] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013b. Distributed Representations of Words and Phrases and their Compositionality. In arXiv:1310.4546 [cs.CL].",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Linguistic Regularities in Continuous Space Word Representations",
"authors": [
{
"first": "[",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2013,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Mikolov et al.2013c] Tomas Mikolov, Wen-tau Yih, Greg Corrado, and Jeffrey Dean. 2013c. Linguistic Regularities in Continuous Space Word Representa- tions. In HLT-NAACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Software Framework for Topic Modelling with Large Corpora",
"authors": [
{
"first": "Petr",
"middle": [],
"last": "Radim\u0159eh\u016f\u0159ek",
"suffix": ""
},
{
"first": "Eleanor",
"middle": [],
"last": "Sojka",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [
"B"
],
"last": "Rosch",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mervis",
"suffix": ""
}
],
"year": 1975,
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks",
"volume": "7",
"issue": "",
"pages": "573--605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[\u0158eh\u016f\u0159ek and Sojka2010] Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Valletta, Malta, May. ELRA. [Rosch and Mervis1975] Eleanor Rosch and Carolyn B Mervis. 1975. Family resemblances: Studies in the internal structure of categories. Cognitive psychology, 7(4):573-605.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Word space",
"authors": [
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1993,
"venue": "Advances in Neural Information Processing Systems 5",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinrich Sch\u00fctze. 1993. Word space. In Advances in Neural Information Processing Systems 5.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Domain and Function: A Dual-Space Model of Semantic Relations and Compositions",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Artificial Intelligence Research",
"volume": "",
"issue": "",
"pages": "533--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Turney. 2012. Domain and Function: A Dual-Space Model of Semantic Relations and Com- positions. Journal of Artificial Intelligence Research, pages 533-585.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Comprehending and generating apt metaphors: a webdriven, case-based approach to figurative language",
"authors": [
{
"first": "Tony",
"middle": [],
"last": "Veale",
"suffix": ""
},
{
"first": "Yanfen",
"middle": [],
"last": "Hao",
"suffix": ""
}
],
"year": 2007,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "1471--1476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Veale and Hao2007] Tony Veale and Yanfen Hao. 2007. Comprehending and generating apt metaphors: a web- driven, case-based approach to figurative language. In AAAI, volume 2007, pages 1471-1476.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A fluid knowledge representation for understanding and generating creative metaphors",
"authors": [
{
"first": "Tony",
"middle": [],
"last": "Hao2008",
"suffix": ""
},
{
"first": "Yanfen",
"middle": [],
"last": "Veale",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hao",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "945--952",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Veale and Hao2008] Tony Veale and Yanfen Hao. 2008. A fluid knowledge representation for understanding and generating creative metaphors. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 945-952. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Computation and blending",
"authors": [
{
"first": "",
"middle": [],
"last": "Veale",
"suffix": ""
}
],
"year": 2000,
"venue": "Cognitive Linguistics",
"volume": "11",
"issue": "3/4",
"pages": "253--282",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Veale et al.2000] Tony Veale, Diarmuid O Donoghue, and Mark T. Keane. 2000. Computation and blend- ing. Cognitive Linguistics, 11(3/4):253-282.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Less rhyme, more reason: Knowledge-based poetry generation with feeling, insight and wit",
"authors": [
{
"first": "Tony",
"middle": [],
"last": "Veale",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the International Conference on Computational Creativity",
"volume": "",
"issue": "",
"pages": "152--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tony Veale. 2013. Less rhyme, more rea- son: Knowledge-based poetry generation with feeling, insight and wit. In Proceedings of the International Conference on Computational Creativity, pages 152- 159.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Connector word drawing together the two semantic spaces of the anchor words.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "Life & road connected by journey is an example of how the framework inFigure 1maps to Lakoff's LIFE IS A JOUR-NEY metaphor.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"text": "8: The low end of the ranges is the average of the minimum similarity scores across all the connector words to each of the words in the anchor word pair. The upper end of the ranges is the average of the maximums. A smaller range means that the anchor words have more balanced similarity to the connector word. comp. = compassionate; surr. = surrendering. because...\". For example,\"Barrage connects storm and surrendering because...\".",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"text": "Similarity score barrage-storm: 0.25 Similarity score barrage-surrendering: 0.20 \"Cook connects caring and flame because it is related to flame as flames are used in cooking and cooking can be a symbol of caring for someone with good food.\" Providing nourishment by cooking requires flames and is caring. Similarity score cook-caring: 0.26 Similarity score cook-flame: 0.22",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF0": {
"text": "et al.'s work on word representations in",
"type_str": "table",
"content": "<table><tr><td>Anchor word pairs</td><td>Connector words</td></tr><tr><td>surrendering &amp; storm</td><td>barrage</td></tr><tr><td>caring &amp; flame</td><td>cook</td></tr><tr><td>life &amp; road</td><td>journey</td></tr></table>",
"html": null,
"num": null
},
"TABREF1": {
"text": "Examples of anchor word pairs and connector words.",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF3": {
"text": "Pool of concrete nouns used in the selection of anchor pairs.",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF5": {
"text": "Pool of poetic themes used in the selection of anchor pairs.",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF7": {
"text": "Connector words for storm and surrendering retrieved from the words unique to I and the words unique to A.",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF9": {
"text": "Similarity scores between connector words found in UI to anchor words storm and surrendering. The average spread between the scores of 0.05 indicates the small band of similarity the words exist in, showing the balanced similarity the connector word has with each of the anchor words.",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF11": {
"text": "Similarity scores between connector words found inUA to anchor words storm and surrendering. The average spread between the scores of 0.56 shows the wide range of similarity scores.",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF13": {
"text": "",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF15": {
"text": "Figurative ties between color and earthly. Bolded words were selected by Mechanical Turk workers as the best word to create the figurative tie.",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF17": {
"text": "Figurative ties between storm and surrendering.Bolded words were selected by Mechanical Turk workers as the best word to create the figurative tie.",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF19": {
"text": "Figurative ties between flame and caring. Bolded words were selected by Mechanical Turk workers as the best word to create the figurative tie.",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
}
}
}
}