{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:57:29.440373Z"
},
"title": "Decoding Brain Activity Associated with Literal and Metaphoric Sentence Comprehension Using Distributional Semantic Models",
"authors": [
{
"first": "Vesna",
"middle": [
"G"
],
"last": "Djokic",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Amsterdam",
"location": {
"country": "The Netherlands"
}
},
"email": ""
},
{
"first": "Jean",
"middle": [],
"last": "Maillard",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {
"country": "United Kingdom"
}
},
"email": ""
},
{
"first": "Luana",
"middle": [],
"last": "Bulat",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {
"country": "United Kingdom"
}
},
"email": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Shutova",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Amsterdam",
"location": {
"country": "The Netherlands"
}
},
"email": "e.shutova@uva.nl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recent years have seen a growing interest within the natural language processing (NLP) community in evaluating the ability of semantic models to capture human meaning representation in the brain. Existing research has mainly focused on applying semantic models to decode brain activity patterns associated with the meaning of individual words, and, more recently, this approach has been extended to sentences and larger text fragments. Our work is the first to investigate metaphor processing in the brain in this context. We evaluate a range of semantic models (word embeddings, compositional, and visual models) in their ability to decode brain activity associated with reading of both literal and metaphoric sentences. Our results suggest that compositional models and word embeddings are able to capture differences in the processing of literal and metaphoric sentences, providing support for the idea that the literal meaning is not fully accessible during familiar metaphor comprehension.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Recent years have seen a growing interest within the natural language processing (NLP) community in evaluating the ability of semantic models to capture human meaning representation in the brain. Existing research has mainly focused on applying semantic models to decode brain activity patterns associated with the meaning of individual words, and, more recently, this approach has been extended to sentences and larger text fragments. Our work is the first to investigate metaphor processing in the brain in this context. We evaluate a range of semantic models (word embeddings, compositional, and visual models) in their ability to decode brain activity associated with reading of both literal and metaphoric sentences. Our results suggest that compositional models and word embeddings are able to capture differences in the processing of literal and metaphoric sentences, providing support for the idea that the literal meaning is not fully accessible during familiar metaphor comprehension.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Distributional semantics aims to represent the meaning of linguistic fragments as high-dimensional dense vectors. It has been successfully used to model the meaning of individual words in semantic similarity and analogy tasks (Mikolov et al., 2013; Pennington et al., 2014) ; as well as the meaning of larger linguistic units in a variety of tasks, such as translation (Bahdanau et al., 2014) and natural language inference (Bowman et al., 2015) . Recent research has also demonstrated the ability of distributional models to predict patterns of brain activity associated with the meaning of words, obtained via functional magnetic resonance imaging (fMRI) (Mitchell et al., 2008; Devereux et al., 2010; Pereira et al., 2013) . Following in their steps, Anderson et al. (2017b) have investigated visually grounded semantic models in this context. They found that while both visual and text-based models can equally decode concrete words, textbased models show an overall advantage over visual models when decoding more abstract words.",
"cite_spans": [
{
"start": 226,
"end": 248,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF42"
},
{
"start": 249,
"end": 273,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF47"
},
{
"start": 369,
"end": 392,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF6"
},
{
"start": 424,
"end": 445,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF13"
},
{
"start": 657,
"end": 680,
"text": "(Mitchell et al., 2008;",
"ref_id": "BIBREF43"
},
{
"start": 681,
"end": 703,
"text": "Devereux et al., 2010;",
"ref_id": "BIBREF22"
},
{
"start": 704,
"end": 725,
"text": "Pereira et al., 2013)",
"ref_id": "BIBREF48"
},
{
"start": 754,
"end": 777,
"text": "Anderson et al. (2017b)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Other research has shown that data-driven semantic models can also successfully predict patterns of brain activity associated with the processing of sentences (Pereira et al., 2018) and larger narrative text passages (Wehbe et al., 2014; Huth et al., 2016) . Recently, Jain and Huth (2018) investigated long short-term memory (LSTM) recurrent neural networks and showed that semantic models that incorporate larger-sized context windows outperform those with smaller-sized context windows, as well as the baseline bagof-words model, in predicting brain activity associated with narrative listening. This suggests that compositional semantic models are sufficiently advanced to study the impact of linguistic context on semantic representation in the brain. In this paper, we investigate the extent to which lexical and compositional semantic models are able to capture differences in human meaning representations, resulting from meaning disambiguation of literal and metaphoric uses of words in context.",
"cite_spans": [
{
"start": 159,
"end": 181,
"text": "(Pereira et al., 2018)",
"ref_id": "BIBREF49"
},
{
"start": 217,
"end": 237,
"text": "(Wehbe et al., 2014;",
"ref_id": "BIBREF55"
},
{
"start": 238,
"end": 256,
"text": "Huth et al., 2016)",
"ref_id": null
},
{
"start": 269,
"end": 289,
"text": "Jain and Huth (2018)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Metaphoric uses of words involve a transfer of meaning, arising through semantic composition (Mohammad et al., 2016) . For instance, the meaning of the verb push is not intrinsically metaphorical; yet it receives a metaphorical interpretation when we talk about pushing agendas, pushing drugs, or pushing ourselves. Theories of metaphor comprehension differ in terms of the kinds of processes (and stages) involved in arriving at the metaphorical interpretation, mainly whether or not the abstract meaning is indirectly accessed via processing the literal meaning first or directly accessible largely bypassing the literal meaning (Bambini et al., 2016) . To this extent, the role that access to and retrieval of the literal meaning plays during metaphor processing is often debated. On the one hand, metaphor comprehension involves juxtaposing two unlike things and this may invite a search for common relational structure through a process of direct comparison. Inferences flow from the vehicle to the topic giving rise to the metaphoric interpretation (Gentner and Bowdle, 2005) . In a slightly different vein, Lakoff (1980) suggest that metaphor comprehension involves systematic mappings (between a concrete domain onto another typically more abstract domain) that become established through co-occurrences over the course of experience. This draws on mental imagery or the re-activation of neural representations involved during primary experience (i.e., sensorimotor simulation) allowing appropriate inferences to be made. Other theories, however, suggest that the literal meaning in metaphor comprehension may be largely bypassed if the abstract meaning is directly or immediately accessible involving more categorical processing (Glucksberg, 2003) . For example, the word used metaphorically could be immediately recognized as belonging to an abstract superordinate category of which both the vehicle and topic belong. 
Alternatively, it has been suggested that more familiar metaphors involve categorical processing, while comparatively novel metaphor will involve initially greater processing of the literal meaning (Desai et al., 2011) .",
"cite_spans": [
{
"start": 93,
"end": 116,
"text": "(Mohammad et al., 2016)",
"ref_id": "BIBREF44"
},
{
"start": 631,
"end": 653,
"text": "(Bambini et al., 2016)",
"ref_id": "BIBREF7"
},
{
"start": 1055,
"end": 1081,
"text": "(Gentner and Bowdle, 2005)",
"ref_id": "BIBREF28"
},
{
"start": 1114,
"end": 1127,
"text": "Lakoff (1980)",
"ref_id": "BIBREF40"
},
{
"start": 1738,
"end": 1756,
"text": "(Glucksberg, 2003)",
"ref_id": "BIBREF29"
},
{
"start": 2126,
"end": 2146,
"text": "(Desai et al., 2011)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To contribute to our understanding of metaphor comprehension, including the accessibility of the literal meaning, we investigate whether semantic models are able to decode patterns of brain activity associated with literal and metaphoric sentence comprehension, using the fMRI dataset of Djokic et al. (forthcoming). This dataset contains neural activity associated with the processing of both literal and familiar metaphorical uses of handaction verbs (such as push, grasp, squeeze, etc.) in the context of their nominal object. We experiment with several kinds of semantic models: (1) word-based models, namely, word embeddings of the verb and the nominal object; (2) compositional models, namely, vector addition and an LSTM recurrent neural network; and (3) visual models, learning visual representations of the verb and its nominal object. This choice of models allows us to investigate: (1) the role of the verb and its nominal object (captured by their respective word embeddings) in the interpretation of literal and metaphoric sentences; (2) the extent to which compositional models capture the patterns of human meaning representation in case of literal and metaphoric use; and (3) the role of visual information in literal and metaphor interpretation. We test these models in their ability to decode brain activity associated with literal and metaphoric sentence comprehension, using the similarity decoding method of Anderson et al. (2016) . We perform decoding at the whole brain level, as well as within specific regions implicated in linguistic, motor and visual processing.",
"cite_spans": [
{
"start": 453,
"end": 489,
"text": "(such as push, grasp, squeeze, etc.)",
"ref_id": null
},
{
"start": 1429,
"end": 1451,
"text": "Anderson et al. (2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
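The similarity-based decoding scheme of Anderson et al. (2016) compares similarity structure in model space and brain space. A minimal sketch under simplifying assumptions (toy data, Pearson correlation as the similarity measure, leave-two-out pairwise matching; all names are illustrative, not from the paper):

```python
import numpy as np

def pairwise_decode(model_vecs, brain_vecs, i, j):
    """Leave-two-out similarity decoding for held-out stimuli i and j.

    Each held-out stimulus is summarized by its vector of correlations
    with every *other* (non-held-out) stimulus, computed once in model
    space and once in brain space; the pair is decoded correctly if the
    matched assignment of brain profiles to model profiles correlates
    more strongly than the swapped assignment.
    """
    rest = [k for k in range(len(model_vecs)) if k not in (i, j)]

    def profile(vecs, idx):
        return np.array([np.corrcoef(vecs[idx], vecs[k])[0, 1] for k in rest])

    m_i, m_j = profile(model_vecs, i), profile(model_vecs, j)
    b_i, b_j = profile(brain_vecs, i), profile(brain_vecs, j)
    r = lambda a, b: np.corrcoef(a, b)[0, 1]
    matched = r(b_i, m_i) + r(b_j, m_j)
    swapped = r(b_i, m_j) + r(b_j, m_i)
    return matched > swapped
```

Decoding accuracy is then the fraction of all held-out pairs decoded correctly, compared against the 50% chance level.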
{
"text": "Our results demonstrate that several of our semantic models are able to predict patterns of brain activity associated with the meaning of literal and metaphorical sentences. We find that (1) compositional semantic models are superior in decoding both literal and metaphorical sentences as compared to the lexical (i.e., word-based) models; (2) semantic representations of the verb are superior compared to that of the nominal object in decoding literal phrases, whereas semantic representations of the object are superior to that of the verb in decoding metaphorical phrases; and (3) linguistic models capture both language-related and sensorimotor representations for literal sentencesin contrast, for metaphoric sentences, linguistic models capture language-related representations and the visual models captured sensorimotor representations in the brain. Although the results do not offer straightforward conclusions regarding the role of the literal meaning in metaphor comprehension, they provide some support to the idea that lexical-semantic relations associated with the literal meaning are not fully accessible during familiar metaphor comprehension, particularly within action-related brain regions. Mitchell et al. (2008) were the first to show that distributional representations of concrete nouns built from co-occurrence counts with 25 experiential verbs could predict brain activity elicited by these nouns. Later studies used the fMRI data of Mitchell et al. (2008) as a benchmark for testing a range of semantic models including topic modelbased semantic features learned from Wikipedia (Pereira et al., 2013) , feature-norm based semantic features (Devereux et al., 2010) , and skip-gram word embeddings (Bulat et al., 2017) . Anderson et al. (2013) demonstrate that visually grounded semantic models can also decode brain activity associated with concrete words and show the best results using multimodal models. Additionally, Anderson et al. 
(2015) show that text-based models are superior in predicting brain activity of concrete words in brain areas related to linguistic processing, and the visual models in those related to visual processing. Lastly, Anderson et al. (2017b) use image and text-based semantic models to decode an fMRI dataset containing nouns with varying degree of concreteness. They show that text-based models have an advantage decoding the more abstract words over the visual models, supporting the view that concrete concepts involve linguistic and visual codes, while abstract concepts mainly linguistic codes (Paivio, 1971) .",
"cite_spans": [
{
"start": 1210,
"end": 1232,
"text": "Mitchell et al. (2008)",
"ref_id": "BIBREF43"
},
{
"start": 1459,
"end": 1481,
"text": "Mitchell et al. (2008)",
"ref_id": "BIBREF43"
},
{
"start": 1604,
"end": 1626,
"text": "(Pereira et al., 2013)",
"ref_id": "BIBREF48"
},
{
"start": 1666,
"end": 1689,
"text": "(Devereux et al., 2010)",
"ref_id": "BIBREF22"
},
{
"start": 1722,
"end": 1742,
"text": "(Bulat et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 1745,
"end": 1767,
"text": "Anderson et al. (2013)",
"ref_id": "BIBREF1"
},
{
"start": 1946,
"end": 1968,
"text": "Anderson et al. (2015)",
"ref_id": "BIBREF2"
},
{
"start": 2175,
"end": 2198,
"text": "Anderson et al. (2017b)",
"ref_id": "BIBREF3"
},
{
"start": 2556,
"end": 2570,
"text": "(Paivio, 1971)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Subsequent studies have focused on evaluating the ability of distributional semantic models to encode brain activity elicited by larger text fragments. Pereira et al. (2018) showed that a regression model trained to map between word embeddings and the fMRI patterns of words could predict neural representations for unseen sentences. Adding to this, both Wehbe et al. (2014) and Huth et al. (2016) showed that distributional semantic models could predict neural activity associated with narrative comprehension. For instance, Wehbe et al. (2014) showed that a regression model that learned a mapping between several story features (distributional semantics, syntax, and discourse-related) and fMRI patterns associated with narrative reading could distinguish between two stories. These findings suggest that encoding models using word embeddings as features can predict brain activity associated with larger linguistic units. Other researchers have evaluated models that more directly consider the role played by the linguistic context and syntax (Anderson et al., 2019; Jain and Huth, 2018) . Jain and Huth (2018) showed that a regressionbased model mapping between fMRI patterns associated with narrative listening and contextual features obtained from an LSTM language model outperformed the bag-of-words model. Moreover, the performance increased when using LSTMs with larger context-windows.",
"cite_spans": [
{
"start": 152,
"end": 173,
"text": "Pereira et al. (2018)",
"ref_id": "BIBREF49"
},
{
"start": 355,
"end": 374,
"text": "Wehbe et al. (2014)",
"ref_id": "BIBREF55"
},
{
"start": 379,
"end": 397,
"text": "Huth et al. (2016)",
"ref_id": null
},
{
"start": 526,
"end": 545,
"text": "Wehbe et al. (2014)",
"ref_id": "BIBREF55"
},
{
"start": 1047,
"end": 1070,
"text": "(Anderson et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 1071,
"end": 1091,
"text": "Jain and Huth, 2018)",
"ref_id": "BIBREF34"
},
{
"start": 1094,
"end": 1114,
"text": "Jain and Huth (2018)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Brain Activity",
"sec_num": "2.1"
},
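The regression-based encoding approach described above (mapping semantic features to voxel activations, then predicting patterns for unseen stimuli) can be sketched with closed-form ridge regression on synthetic data; the dimensions, regularization strength, and toy mapping are illustrative assumptions, not values from the cited studies:

```python
import numpy as np

def fit_ridge(X, Y, lam=1.0):
    """Closed-form ridge regression mapping embeddings X (n x d) to
    voxel activations Y (n x v): W = (X'X + lam*I)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Toy illustration: learn the mapping on "training" words, then
# predict the neural pattern of an unseen stimulus from its embedding.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((50, 10))        # 50 word embeddings, 10-dim
W_true = rng.standard_normal((10, 25))         # hypothetical true mapping
Y_train = X_train @ W_true + 0.01 * rng.standard_normal((50, 25))
W = fit_ridge(X_train, Y_train, lam=0.1)
x_new = rng.standard_normal(10)                # embedding of an unseen word
y_pred = x_new @ W                             # predicted activation pattern
```

The same fitted mapping can then score unseen sentences by comparing predicted against observed activation patterns.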
{
"text": "In parallel to this work, several other works have been successful in decoding word-level and sentential meanings using semantic models based on human behavioral data. Chang et al. (2010) use taxonomic encodings of McRae et al. (2005) , while Fernandino et al. (2015) use semantic models based on human-elicited salience scores for sensorimotor attributes to decode neural activity associated with concrete concepts. Interestingly, the latter report that their model is unable to decode brain activity associated with the meaning of more abstract concepts. Lastly, other research has achieved similar success in decoding sentential meanings using neuro-cognitively driven features that more closely reflect human experience (Anderson et al., 2017a; . For example, Anderson et al. (2017a) showed that a multiple-regression model trained to map between 65-dimensional experiential attribute model of word meaning (e.g., motor, spatial, social-emotional) and the fMRI activations associated with words could predict neural activation of unseen sentences. These findings highlight the importance of considering the neurocognitive constraints on semantic representation in the brain.",
"cite_spans": [
{
"start": 168,
"end": 187,
"text": "Chang et al. (2010)",
"ref_id": "BIBREF19"
},
{
"start": 215,
"end": 234,
"text": "McRae et al. (2005)",
"ref_id": "BIBREF41"
},
{
"start": 243,
"end": 267,
"text": "Fernandino et al. (2015)",
"ref_id": "BIBREF27"
},
{
"start": 724,
"end": 748,
"text": "(Anderson et al., 2017a;",
"ref_id": "BIBREF0"
},
{
"start": 764,
"end": 787,
"text": "Anderson et al. (2017a)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Brain Activity",
"sec_num": "2.1"
},
{
"text": "Semantic processing is thought to depend on a number of brain regions functioning in concert as a unified semantic network linking language, memory, and modality-specific systems in the brain (Binder et al., 2009) . Xu et al. (2016) provide evidence in support of at least three functionally segregated systems that together comprise such a semantic network. A left-lateralized languagebased system spanning frontal-temporal (e.g., left inferior frontal gyrus [LIFG] , left posterior middle temporal gyrus [LMTP] ), but also parietal areas, is associated with lexical-semantics and syntactic processing. It preferentially responds to language tasks when compared to non-linguistic tasks of similar complexity (Fedorenko et al., 2011) . Notably, both Devereux et al. (2014) and Anderson et al. (2015) found that linguistic models could decode concrete concepts within brain areas in this system, mainly the LMTP. Importantly, this system works in tandem with a memorybased simulation system that interacts directly with medial-temporal areas critical in memory (and multimodal) processing. The memory-based simulation system retrieves memory images relevant to a concept and includes occipital areas such as the superior lateral occipital cortex, implicated in visual processing and which Anderson et al. (2015) showed could decode concrete concepts with visual models. This system also recruits modality-specific information. In line with this, Carota et al. (2017) showed that the semantic similarity of text-based models correlates with fMRI patterns of action words not only in languagerelated areas, but also in motor areas (left precentral gyrus [LPG] , left premotor cortex [LPM] ). Lastly, a fronto-parietal semantic control system manages interactions between these two systems, such as directing attention to different aspects of meaning depending on the linguistic context.",
"cite_spans": [
{
"start": 192,
"end": 213,
"text": "(Binder et al., 2009)",
"ref_id": "BIBREF11"
},
{
"start": 216,
"end": 232,
"text": "Xu et al. (2016)",
"ref_id": "BIBREF56"
},
{
"start": 460,
"end": 466,
"text": "[LIFG]",
"ref_id": null
},
{
"start": 506,
"end": 512,
"text": "[LMTP]",
"ref_id": null
},
{
"start": 709,
"end": 733,
"text": "(Fedorenko et al., 2011)",
"ref_id": "BIBREF26"
},
{
"start": 750,
"end": 772,
"text": "Devereux et al. (2014)",
"ref_id": "BIBREF23"
},
{
"start": 777,
"end": 799,
"text": "Anderson et al. (2015)",
"ref_id": "BIBREF2"
},
{
"start": 1288,
"end": 1310,
"text": "Anderson et al. (2015)",
"ref_id": "BIBREF2"
},
{
"start": 1445,
"end": 1465,
"text": "Carota et al. (2017)",
"ref_id": "BIBREF18"
},
{
"start": 1651,
"end": 1656,
"text": "[LPG]",
"ref_id": null
},
{
"start": 1680,
"end": 1685,
"text": "[LPM]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Representation in the Brain",
"sec_num": "2.2"
},
{
"text": "Prior neuroimaging experiments show that concrete concepts activate the relevant modalityspecific systems in the brain (Barsalou, 2008 (Barsalou, , 2009 , while the processing of abstract concepts has been found to engage mainly language-related brain regions in the left hemisphere and areas implicated in cognitive control Sabsevitz et al., 2005) . Relatedly, action-related words and literal phrases activate motor regions (e.g., to access motoric features of verbs) (Pulvermuller, 2005; Kemmerer et al., 2008) . In contrast, the degree to which action-related metaphors engage motor brain regions appears to depend on novelty, with more familiar metaphors (Desai et al., 2011) showing little to no activity in motor areas. In sum, concrete language involves modality-specific and language-related brain regions, while abstract language mainly language areas (Hoffman et al., 2015) .",
"cite_spans": [
{
"start": 119,
"end": 134,
"text": "(Barsalou, 2008",
"ref_id": "BIBREF8"
},
{
"start": 135,
"end": 152,
"text": "(Barsalou, , 2009",
"ref_id": "BIBREF9"
},
{
"start": 325,
"end": 348,
"text": "Sabsevitz et al., 2005)",
"ref_id": "BIBREF52"
},
{
"start": 470,
"end": 490,
"text": "(Pulvermuller, 2005;",
"ref_id": "BIBREF50"
},
{
"start": 491,
"end": 513,
"text": "Kemmerer et al., 2008)",
"ref_id": "BIBREF36"
},
{
"start": 862,
"end": 884,
"text": "(Hoffman et al., 2015)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Representation in the Brain",
"sec_num": "2.2"
},
{
"text": "To assess the role of linguistic versus visual information in literal and metaphor decoding, we investigated the extent to which our semantic models were able to decode literal and metaphoric sentences not only across the whole brain (and brain's lobes), but also within specific brain regions of interest (ROIs) implicated in visual, action, and language processing. The visual ROIs include high-level visual brain regions (left lateral occipital temporal cortex, left ventral temporal cortex), part of the ventral visual stream implicated in object recognition (Bugatus et al., 2017) . The action ROIs include sensorimotor brain re-gions (LPG, LPM) implicated in action-semantics (Kemmerer et al., 2008) . Lastly, the languagerelated ROIs include areas of the classic language network (LIFG, LMTP) implicated in lexicosemantic and syntactic processing (Foderenko et al., 2012) .",
"cite_spans": [
{
"start": 563,
"end": 585,
"text": "(Bugatus et al., 2017)",
"ref_id": "BIBREF16"
},
{
"start": 682,
"end": 705,
"text": "(Kemmerer et al., 2008)",
"ref_id": "BIBREF36"
},
{
"start": 854,
"end": 878,
"text": "(Foderenko et al., 2012)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Representation in the Brain",
"sec_num": "2.2"
},
{
"text": "We expect to find that lexical and compositional semantic models can capture differences in the processing of literal and metaphoric language in the brain. In line with the idea that literal language co-occurs more directly with our everyday perceptual experience, we expect that visual models will show an overall advantage in literal but perhaps not metaphor decoding across the whole brain (particularly within Occipital and Temporal lobes) and in visual (action) ROIs compared to languagerelated ROIs. In contrast, for metaphor decoding we expect that linguistic models will mainly show an advantage in language-related ROIs compared with visual (and action) ROIs due to their more abstract nature. Lastly, we expect compositional models to be superior to lexical models in metaphor decoding, which relies on semantic composition for meaning disambiguation in context. This allows investigating whether metaphor comprehension involves lingering access to the literal meaning including more grounded visual and sensorimotor representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Representation in the Brain",
"sec_num": "2.2"
},
{
"text": "Stimuli Stimuli consisted of sentences divided into five main conditions: 40 affirmative literal, 40 negated literal, 40 affirmative metaphor, 40 negated metaphor, and 40 affirmative literal paraphrases of the metaphor (used as control). A total of 31 unique hand-action verbs were used (9 verbs were re-used twice per condition). For each verb, the authors created four conditions: affirmative literal, affirmative metaphoric, negated literal, and negated metaphoric. All sentences were in the third person singular, present tense, progressive, see Figure 1 . Stimuli were created by the authors and normed for psycholinguistic variables (i.e., length, familiarity, concreteness) by an independent set of participants in a behavioral experiment.",
"cite_spans": [],
"ref_spans": [
{
"start": 550,
"end": 558,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Brain Imaging Data",
"sec_num": "3"
},
{
"text": "Participants Fifteen adults (8 women, ages 18 to 35) were involved in the fMRI study. All participants were right-handed, native English speakers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brain Imaging Data",
"sec_num": "3"
},
{
"text": "Experimental Paradigm Participants were instructed to passively read the object of the sentence (e.g., ''the yellow lemon''), briefly shown on screen first, followed by the sentence (e.g., ''She's squeezing the lemon''). The object was shown on screen for 2 s, followed by a 0.5 s interval, then the sentence was presented for 4 s followed by a rest of 8 s. A total of 5 runs were completed, each lasting 10.15 minutes (3 participants only completed 4 runs). Stimulus presentation was randomized across participants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brain Imaging Data",
"sec_num": "3"
},
{
"text": "fMRI data acquisition fMRI images were acquired with a Siemens MAGNETOM Trio 3T System with a 32-channel head matrix coil. Highresolution anatomical scans were acquired with a structural T1-weighted magnetization prepared rapid gradient echo (MPRAGE) with TR = 1950 ms, TE = 2.26 ms, flip angle 10%, 256 \u00d7 256 mm matrix, 1 mm resolution, and 208 coronal slices. Whole brain functional images were obtained with a T2* weighted single-shot gradientrecalled echoplanar imaging, echo-planar sequence (EPI) using blood oxygenation-level-dependent contrast with TR = 2000 ms, TE = 30 ms, flip angle 90 degrees, 64\u00d764 mm matrix, 3.5 mm resolution. Each functional image consisted of 37 contiguous axial slices, acquired in interleaved mode.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brain Imaging Data",
"sec_num": "3"
},
{
"text": "All our linguistic models are based on GloVe (Pennington et al., 2014) 100-dimensional (dim) word vectors provided by the authors, trained on Wikipedia and the Gigaword corpus. 1 We investigate the following semantic models:",
"cite_spans": [
{
"start": 45,
"end": 70,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF47"
},
{
"start": 177,
"end": 178,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Models",
"sec_num": "4.1"
},
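Assuming the standard plain-text GloVe distribution (one word per line, followed by its whitespace-separated vector components), loading the vectors into a lookup table can be sketched as follows; the file name in the comment is an assumption about the authors' download, not a path from the paper:

```python
import numpy as np

def load_glove(path):
    """Parse the plain-text GloVe format: each line holds a word
    followed by its vector components, separated by spaces."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

# e.g. vecs = load_glove("glove.6B.100d.txt"); vecs["push"] is a (100,) array
```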
{
"text": "Individual Word Vectors In this model, stimulus phrases are represented as the individual Ddim word embeddings for their verb and direct object. We will refer to these models as VERB and OBJECT, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Models",
"sec_num": "4.1"
},
{
"text": "We then experiment with modelling phrase meanings as the 2D-dim concatenation of their verb and direct object embeddings (VERBOBJECT).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concatenation",
"sec_num": null
},
{
"text": "Addition This model takes the embeddings w 1 , . . . , w n for the words of the stimulus phrase, and computes the stimulus phrase representation as their average:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concatenation",
"sec_num": null
},
{
"text": "h = 1 n n i=1 w i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concatenation",
"sec_num": null
},
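The averaging step can be written out directly; the embeddings below are made-up 4-dim vectors for illustration, not actual GloVe entries:

```python
import numpy as np

def compose_by_addition(word_vectors):
    """Phrase representation h = (1/n) * sum_i w_i: the average of the
    word embeddings in the stimulus phrase."""
    return np.mean(np.stack(word_vectors), axis=0)

# Hypothetical 4-dim embeddings for "squeeze" and "lemon":
squeeze = np.array([0.2, 0.4, 0.0, 1.0])
lemon   = np.array([0.0, 0.2, 0.6, 0.0])
phrase  = compose_by_addition([squeeze, lemon])  # -> [0.1, 0.3, 0.3, 0.5]
```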
{
"text": "LSTM As a more sophisticated compositional model, we take the LSTM recurrent neural network architecture of Hochreiter and Schmidhuber (1997) . We trained the LSTM on a natural language inference task (Bowman et al., 2015), as it is a complex semantic task where we expect rich meaning representations to play an important role. Given two sentences, the goal of natural language inference is to decide whether the first entails or contradicts the second, or whether they are unrelated. We used the LSTM to compute compositional representations for each sentence, and then used a single-layer perceptron classifier (Bowman et al., 2016) to predict the correct relationship. The inputs to the LSTM were the same 100-dim GloVe embeddings used for the other models, and were updated during training. The model was optimized using Adam (Kingma and Ba, 2014). We extracted 100-dim vector representations from the hidden state of the LSTM for the verb-object phrases in our stimulus set.",
"cite_spans": [
{
"start": 108,
"end": 141,
"text": "Hochreiter and Schmidhuber (1997)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Concatenation",
"sec_num": null
},
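The compositional step itself (running the LSTM over the verb-object phrase and reading off the final hidden state) can be illustrated with a single NumPy LSTM cell; the random weights below stand in for the parameters trained on the NLI task, so the sketch shows only the mechanics, not the learned representations:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_phrase_vector(embeddings, W, U, b, hidden=100):
    """Run a single-layer LSTM over the word embeddings of a phrase
    and return the final hidden state as its representation."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in embeddings:
        z = W @ x + U @ h + b                 # all four gates, stacked
        i, f, o, g = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)            # update cell state
        h = o * np.tanh(c)                    # update hidden state
    return h

rng = np.random.default_rng(0)
dim, hid = 100, 100
W = 0.1 * rng.standard_normal((4 * hid, dim))
U = 0.1 * rng.standard_normal((4 * hid, hid))
b = np.zeros(4 * hid)
verb = rng.standard_normal(dim)               # stand-in GloVe vectors
obj = rng.standard_normal(dim)
phrase_vec = lstm_phrase_vector([verb, obj], W, U, b, hidden=hid)
```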
{
"text": "We use the MMfeat toolkit (Kiela, 2016) to obtain visual representations in line with Anderson et al. (2017b) . We retrieved 10 images for each word or phrase in our dataset using Google Image Search. We then extracted an embedding for each of the images from a deep convolutional neural network that was trained on the ImageNet classification task (Russakovsky et al., 2015) . We used an architecture consisting of five convolutional layers, followed by two fully connected rectified linear unit layers and a softmax layer for classification. To obtain an embedding for a given image we performed a forward pass through the network and extracted the 4096-dim fully connected layer that precedes the softmax layer. The visual representation of a word or a phrase is computed as the mean of its 10 individual image representations.",
"cite_spans": [
{
"start": 26,
"end": 39,
"text": "(Kiela, 2016)",
"ref_id": "BIBREF37"
},
{
"start": 86,
"end": 109,
"text": "Anderson et al. (2017b)",
"ref_id": "BIBREF3"
},
{
"start": 349,
"end": 375,
"text": "(Russakovsky et al., 2015)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visually Grounded Models",
"sec_num": "4.2"
},
{
"text": "We experiment with word-based models (VISUAL VERB and VISUAL OBJECT) and the following three visual compositional models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visually Grounded Models",
"sec_num": "4.2"
},
{
"text": "Concatenation This model represents the stimulus phrase as the concatenation of the two D-dim visual representations for the verb and the object (VISUAL VERBOBJECT).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visually Grounded Models",
"sec_num": "4.2"
},
{
"text": "Addition We take the average of the visual representations for the verb and object to give the representation for the phrase (VISUAL ADDITION).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visually Grounded Models",
"sec_num": "4.2"
},
{
"text": "Phrase We obtain visual representations for the phrase by querying Google Images for the verb-object phrase directly (VISUAL PHRASE).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visually Grounded Models",
"sec_num": "4.2"
},
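The three composition schemes above are simple vector operations; the following is an illustrative NumPy sketch, with toy low-dimensional vectors standing in for the 4096-dim CNN image embeddings (VISUAL PHRASE needs no composition step, since the phrase is queried directly and its image embeddings averaged):

```python
import numpy as np

def concatenation(verb_vec, obj_vec):
    # VISUAL VERBOBJECT: stack the two D-dim vectors into one 2D-dim vector
    return np.concatenate([verb_vec, obj_vec])

def addition(verb_vec, obj_vec):
    # VISUAL ADDITION: element-wise average of the two D-dim vectors
    return (verb_vec + obj_vec) / 2.0

# Toy 4-dim stand-ins for the real 4096-dim image embeddings
verb = np.array([1.0, 0.0, 2.0, 0.0])
obj = np.array([0.0, 2.0, 2.0, 4.0])

print(concatenation(verb, obj).shape)  # (8,)
print(addition(verb, obj))             # [0.5 1.  2.  2. ]
```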
{
"text": "For our experiments we limited the analysis to the 12 individuals who completed all runs. The runs were combined across time to form each participant's dataset and preprocessed (high-pass filtered, motion-corrected, linearly detrended) with FSL. 2 General Linear Modeling After fMRI preprocessing, we selected sentences within the affirmative literal and affirmative metaphoric conditions representative of the 31 unique verbs as conditions of interest for all our experiments. We fit a model of the hemodynamic response function to each stimulus presentation using a univariate general linear model with PyMVPA. 3 The entire stimulus presentation was modeled as an event lasting 6 s after taking into account the hemodynamic lag of 4 s. The model parameters (beta weights) were normalized to Z-scores. Each stimulus presentation was then represented as a single volume containing voxel-wise Z-score maps for each of the 31 affirmative literal and 31 metaphoric sentences. The literal and metaphoric neural estimates were then used separately to perform similarity-based decoding.",
"cite_spans": [
{
"start": 608,
"end": 609,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "fMRI Data Processing",
"sec_num": "5.1"
},
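The GLM step above (fit per-stimulus response amplitudes, then Z-score them) can be sketched with ordinary least squares. This is a simplified illustration, assuming a plain boxcar regressor per stimulus rather than the full HRF model the paper fits with PyMVPA; all array sizes and variable names are illustrative:

```python
import numpy as np

def fit_glm(bold, design):
    """Least-squares GLM. bold: (time, voxels); design: (time, conditions).
    Returns beta weights of shape (conditions, voxels)."""
    betas, *_ = np.linalg.lstsq(design, bold, rcond=None)
    return betas

def zscore(betas):
    # Normalize each stimulus's beta map to Z-scores across voxels
    return (betas - betas.mean(axis=1, keepdims=True)) / betas.std(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n_time, n_voxels = 120, 50
design = np.zeros((n_time, 2))
design[10:16, 0] = 1.0  # stimulus 1: 6 s boxcar (after the 4 s hemodynamic lag)
design[60:66, 1] = 1.0  # stimulus 2
# Synthetic BOLD data: true voxel patterns plus noise
bold = design @ rng.normal(size=(2, n_voxels)) + 0.1 * rng.normal(size=(n_time, n_voxels))
z = zscore(fit_glm(bold, design))
print(z.shape)  # (2, 50): one Z-scored voxel map per stimulus
```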
{
"text": "We performed feature selection by selecting the top 35% of voxels showing the highest sensitivity (F-statistics), using a univariate ANOVA as a feature-wise measure with two groups: the 31 affirmative literal sentences versus the 31 affirmative metaphoric sentences. F-statistics were computed for each feature as the standard ratio of between-group to within-group variance using PyMVPA. This selected voxels sensitive to univariate activation differences between the literal and metaphoric categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Voxel Selection",
"sec_num": null
},
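The voxel selection described above reduces to a per-voxel one-way ANOVA F-statistic followed by a top-fraction cut. A minimal NumPy sketch (the paper uses PyMVPA's implementation; the synthetic data below are illustrative):

```python
import numpy as np

def anova_f(group_a, group_b):
    """Two-group one-way ANOVA F-statistic per voxel: the ratio of
    between-group to within-group variance. Inputs: (samples, voxels)."""
    all_data = np.vstack([group_a, group_b])
    grand_mean = all_data.mean(axis=0)
    n_a, n_b = len(group_a), len(group_b)
    # Between-group sum of squares (df = 2 groups - 1 = 1)
    ss_between = (n_a * (group_a.mean(axis=0) - grand_mean) ** 2
                  + n_b * (group_b.mean(axis=0) - grand_mean) ** 2)
    # Within-group sum of squares (df = n_a + n_b - 2)
    ss_within = (((group_a - group_a.mean(axis=0)) ** 2).sum(axis=0)
                 + ((group_b - group_b.mean(axis=0)) ** 2).sum(axis=0))
    return (ss_between / 1.0) / (ss_within / (n_a + n_b - 2))

def select_top_voxels(group_a, group_b, fraction=0.35):
    f = anova_f(group_a, group_b)
    k = int(round(fraction * f.size))
    return np.argsort(f)[::-1][:k]  # indices of the most sensitive voxels

rng = np.random.default_rng(1)
literal = rng.normal(size=(31, 200))     # 31 literal sentence volumes
metaphoric = rng.normal(size=(31, 200))  # 31 metaphoric sentence volumes
metaphoric[:, :20] += 1.0                # make the first 20 voxels genuinely sensitive
print(len(select_top_voxels(literal, metaphoric)))  # 70 (= 35% of 200 voxels)
```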
{
"text": "Following Anderson et al. 2013, we performed decoding at the whole-brain level and across four gross anatomical divisions: the frontal, temporal, occipital, and parietal lobes. The masks were created using the Montreal Neurological Institute (MNI) Structural Atlas in FSL. We also defined the following a priori ROIs to compare the performance of literal and metaphoric decoding in visual and sensorimotor brain regions vs. language-related brain areas implicated in lexical-semantic processing: (1) visual ROIs (left lateral occipital temporal cortex [LLOTC] , left ventral temporal cortex (LVT)); (2) action ROIs (LPG, LPM); (3) language-related ROIs (LMTP, LIFG). The LLOTC and LVT were created manually in FSL using the anatomical landmarks of Bugatus et al. (2017) . The LPG and LPM were created using the Juelich Histological Atlas thresholded at 25% in FSL. The LMTP and LIFG were created using the Harvard-Oxford Cortical Probabilistic Atlas thresholded at 25% in FSL. Masks were transformed from MNI standard space into the participant's functional space.",
"cite_spans": [
{
"start": 552,
"end": 559,
"text": "[LLOTC]",
"ref_id": null
},
{
"start": 747,
"end": 768,
"text": "Bugatus et al. (2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Defining Regions of Interest",
"sec_num": "5.2"
},
{
"text": "We use similarity-based decoding (Anderson et al., 2016) to evaluate to what extent the representations produced by our semantic models are able to decode brain activity patterns associated with our stimuli. We first compute two similarity matrices (k stimuli \u00d7 k stimuli), containing similarities between all stimulus phrases in the dataset: the model similarity matrix (where similarities are computed using the semantic model vectors) and the brain similarity matrix (where similarities are computed using the brain activity vectors). The similarities were computed using the Pearson correlation coefficient. We then perform the decoding using a leave-two-out decoding scheme in this similarity space. Specifically, from the set of all possible pairs of stimuli (the number of possible pairs for k = 31 stimuli is 465), a single pair is selected at a time. Model similarity-codes are obtained for each stimulus in the pair by extracting the relevant column vectors for those stimuli from the model similarity matrix. In the same way, neural similarity-codes are extracted from the neural similarity matrix.",
"cite_spans": [
{
"start": 33,
"end": 56,
"text": "(Anderson et al., 2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity-Based Decoding",
"sec_num": "5.3"
},
{
"text": "Correlations with the stimulus pair itself are removed so as not to bias decoding. The model similarity-codes of the two held-out stimuli are correlated with their respective neural similarity-codes. If the correct labeling scheme produces a higher sum of correlation coefficients than the incorrect labeling scheme, this is counted as a correct classification, and otherwise as incorrect. When this procedure has been completed for all possible held-out pairs, the number of correct classifications over the total number of possible pairs yields a decoding accuracy. We perform group-level similarity-decoding by first averaging the neural similarity-codes across participants to yield group-level neural similarity-codes, equivalent to a fixed-effects analysis as in Anderson et al. (2016) . The group-level neural similarity-codes and model similarity-codes are then used to perform leave-two-out decoding as described above.",
"cite_spans": [
{
"start": 823,
"end": 845,
"text": "Anderson et al. (2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity-Based Decoding",
"sec_num": "5.3"
},
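The leave-two-out scheme described in the two paragraphs above can be sketched in a few lines of NumPy; this is an illustrative implementation of the general technique (Anderson et al., 2016-style), with synthetic stand-ins for the model and brain vectors:

```python
import numpy as np
from itertools import combinations

def similarity_decoding(model_vecs, brain_vecs):
    """Leave-two-out similarity-based decoding.
    model_vecs, brain_vecs: (k, dim) arrays of per-stimulus representations."""
    k = len(model_vecs)
    model_sim = np.corrcoef(model_vecs)  # k x k Pearson similarity matrices
    brain_sim = np.corrcoef(brain_vecs)
    r = lambda a, b: np.corrcoef(a, b)[0, 1]
    correct = 0
    pairs = list(combinations(range(k), 2))
    for i, j in pairs:
        keep = [m for m in range(k) if m not in (i, j)]  # drop the held-out pair
        mi, mj = model_sim[i, keep], model_sim[j, keep]  # model similarity-codes
        bi, bj = brain_sim[i, keep], brain_sim[j, keep]  # neural similarity-codes
        # correct labeling (i->i, j->j) vs. swapped labeling (i->j, j->i)
        if r(mi, bi) + r(mj, bj) > r(mi, bj) + r(mj, bi):
            correct += 1
    return correct / len(pairs)  # 465 pairs for k = 31

rng = np.random.default_rng(2)
brain = rng.normal(size=(31, 100))                 # synthetic neural estimates
model = brain + 0.5 * rng.normal(size=(31, 100))   # noisy copy: should decode well
print(similarity_decoding(model, brain))
```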
{
"text": "Statistical significance was assessed as in Anderson et al. (2016) using a non-parametric permutation test. The null hypothesis is that there is no correspondence between the model-based similarity-codes and the group-level neural similarity-codes. The null distribution was estimated using a permutation scheme: we randomly shuffled the rows and columns of the model-based similarity matrix, leaving the neural similarity matrix fixed. Following each permutation (n = 10,000), we performed group-level similarity-decoding, obtaining 10,000 decoding accuracies expected by chance under random labeling. The p-value is then the probability, under this null distribution, of obtaining a decoding accuracy at least as large as the observed accuracy score. We correct for the number of statistical tests performed using the False Discovery Rate (FDR) with a corrected error probability threshold of p = 0.05 (Benjamini and Hochberg, 1995) .",
"cite_spans": [
{
"start": 47,
"end": 69,
"text": "Anderson et al. (2016)",
"ref_id": "BIBREF5"
},
{
"start": 901,
"end": 931,
"text": "(Benjamini and Hochberg, 1995)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Significance",
"sec_num": "5.4"
},
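The permutation scheme above amounts to relabeling the stimuli in the model similarity matrix (shuffling its rows and columns with the same permutation) while the neural matrix stays fixed. A small-scale sketch, with a reduced stimulus count and permutation count for illustration (the paper uses k = 31 and n = 10,000):

```python
import numpy as np
from itertools import combinations

def l2o_accuracy(model_sim, brain_sim):
    # Leave-two-out decoding, operating directly on k x k similarity matrices
    k = len(model_sim)
    r = lambda a, b: np.corrcoef(a, b)[0, 1]
    pairs = list(combinations(range(k), 2))
    correct = 0
    for i, j in pairs:
        keep = [m for m in range(k) if m not in (i, j)]
        mi, mj = model_sim[i, keep], model_sim[j, keep]
        bi, bj = brain_sim[i, keep], brain_sim[j, keep]
        correct += r(mi, bi) + r(mj, bj) > r(mi, bj) + r(mj, bi)
    return correct / len(pairs)

def permutation_pvalue(model_sim, brain_sim, n_perm=100, seed=0):
    """Shuffle rows and columns of the model similarity matrix with the same
    permutation, leaving the neural matrix fixed; the p-value is the fraction
    of permuted accuracies at least as large as the observed accuracy."""
    rng = np.random.default_rng(seed)
    observed = l2o_accuracy(model_sim, brain_sim)
    null = [l2o_accuracy(model_sim[np.ix_(p, p)], brain_sim)
            for p in (rng.permutation(len(model_sim)) for _ in range(n_perm))]
    return observed, (1 + sum(a >= observed for a in null)) / (1 + n_perm)

rng = np.random.default_rng(3)
brain_vecs = rng.normal(size=(12, 100))
model_vecs = brain_vecs + 0.5 * rng.normal(size=(12, 100))  # strong correspondence
obs, p = permutation_pvalue(np.corrcoef(model_vecs), np.corrcoef(brain_vecs))
print(obs, p)  # high observed accuracy, small p-value expected
```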
{
"text": "We use group-level similarity-decoding to decode brain activity associated with literal and metaphoric sentences using each of our semantic models. We perform decoding at the sentence level for the literal and metaphor conditions (affirmative only), separately. Decoding was performed at the whole-brain level and across the brain's lobes, as well as within a priori defined ROIs implicated in visual, action, and language-related processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "6"
},
{
"text": "Literal sentences When decoding literal sentences with linguistic models across the brain's lobes, we found significant decoding accuracies surviving FDR correction for multiple testing for the ADDITION and VERB models. Metaphoric Sentences When decoding metaphor with linguistic models, we found significant decoding accuracies for the ADDITION, VERBOBJECT, and OBJECT models, mainly in the Temporal lobe, see Table 2 . A two-way ANOVA showed a main effect of model, F(4,16) = 18.77, p < 0.001, and brain lobe, F(4,16) = 7.58, p < 0.01. Post-hoc t-tests showed a significant advantage for the ADDITION model over the other models. We also found that the VERBOBJECT model significantly outperformed the LSTM (t = 3.89, p < 0.05, df = 4) and VERB (t = 4.36, p < 0.05, df = 4) models, while the OBJECT model also outperformed the VERB model (t = 5.42, p < 0.01, df = 4). Thus, models that incorporate the object directly (i.e., OBJECT, VERBOBJECT) outperform the VERB model. A post-hoc unpaired t-test confirmed that the performance of the OBJECT model was higher in metaphor versus literal decoding (t = 2.88, p < 0.05, df = 8). The results suggest that the ADDITION, VERBOBJECT, and OBJECT models are superior to the other models in decoding metaphoric sentences and, furthermore, that the OBJECT model more closely captures the variance associated with the metaphor versus literal category. Lastly, additional post-hoc t-tests showed that the Temporal lobe significantly outperformed the other lobes (except the Occipital lobe) across the models. This suggests an advantage for linguistic models in temporal areas, possibly pointing to an increased dependence on memory and language processing associated with medial and lateral temporal areas, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 413,
"end": 420,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Linguistic Models",
"sec_num": "6.1"
},
{
"text": "Literal Sentences When decoding literal sentences with visual models, we found significant decoding accuracies for the VISUAL OBJECT and VISUAL ADDITION models, mainly in the Occipital and Temporal lobes, see Table 1 . A two-way ANOVA showed a main effect of model, but this did not survive multiple-testing correction. The results suggest that visual models can decode brain activity associated with concrete concepts only in occipital-temporal areas, part of the ventral visual stream, possibly pointing to an increased reliance on these areas for object recognition; but see the ROI analysis in Section 6.3.",
"cite_spans": [],
"ref_spans": [
{
"start": 205,
"end": 212,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Visual Models",
"sec_num": "6.2"
},
{
"text": "Metaphoric sentences When decoding metaphoric sentences with visual models in the brain, we found significant decoding accuracies for both the VISUAL VERB and VISUAL PHRASE models in the Frontal lobe, see Table 2 . A two-way ANOVA showed a main effect of model, F(4,16) = 6.12, p < 0.01, and brain lobe, F(4,16) = 5.21, p < 0.01, although the main effect of brain lobe did not survive multiple-testing correction. Post-hoc t-tests showed that both the VISUAL VERB (t = 5.40, p < 0.01, df = 4) and VISUAL VERBOBJECT (t = 8.49, p < 0.01, df = 4) models outperformed the VISUAL OBJECT model across the lobes. This suggests that visual information about the verb is more relevant to metaphor decoding than that of the object. Relatedly, when comparing the performance of visual and linguistic models across the lobes, we found that the OBJECT model significantly outperformed the VISUAL OBJECT model, surviving correction for multiple comparisons. In sum, these results suggest that visual information corresponds more strongly to the concrete verb, whereas linguistic information corresponds more strongly with the abstract object in metaphor decoding, but see the ROI analysis in Section 6.3.",
"cite_spans": [],
"ref_spans": [
{
"start": 204,
"end": 211,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Visual Models",
"sec_num": "6.2"
},
{
"text": "Literal Sentences When comparing the performance of linguistic models across the ROIs, we found that the performance of linguistic models within language-related ROIs was on par with that within vision and action ROIs, see Table 3 . This suggests that the linguistic models may be capturing sensorimotor and visual representations in the brain during literal sentence processing. Adding to this, we observed that linguistic models significantly outperformed visual models in action ROIs (t = 6.83, p < 0.001, df = 9), suggesting that the linguistic models more closely capture the motoric features and action semantics relevant to literal sentence processing, even when compared to the more visually grounded models. In sum, the results suggest that literal sentence processing involves both language-related and perceptual/sensorimotor representations (relevant to action semantics) in the brain.",
"cite_spans": [],
"ref_spans": [
{
"start": 223,
"end": 230,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Region of Interest (ROI) analysis",
"sec_num": "6.3"
},
{
"text": "Metaphoric Sentences When comparing the performance of linguistic models across the ROIs (see Table 4 ), we observed that linguistic models were superior in decoding metaphoric sentences in language-related ROIs compared to visual (t = 3.11, p < 0.05, df = 9) and action ROIs (t = 2.97, p < 0.05, df = 9). This suggests that linguistic models mainly capture language-related representations in the brain during metaphor processing. Interestingly, we did observe that the visual models significantly outperformed the linguistic models in action related ROIs (t = 3.91, p < 0.01, df = 9) for metaphor decoding. Relatedly, we also observed that visual models were superior in decoding metaphoric sentences in action compared with language-related ROIs (t = 3.06, p < 0.05, df = 9), in contrast to literal sentences as described above.",
"cite_spans": [],
"ref_spans": [
{
"start": 94,
"end": 101,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Region of Interest (ROI) analysis",
"sec_num": "6.3"
},
{
"text": "A post-hoc unpaired t-test confirmed that the performance of visual models in action ROIs was significantly higher in metaphor versus literal decoding (t = 8.92, p < 0.001, df = 18). The results suggest that the visual models may correlate with information in action-related brain regions (e.g., sensorimotor representations). The significant values reported are those that survived correction for multiple comparisons in the ROI analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Region of Interest (ROI) analysis",
"sec_num": "6.3"
},
{
"text": "Addition vs. LSTM We found that the ADDITION model outperformed both the lexical models and the VERBOBJECT model. This suggests that compositional semantic models that average the semantic representations of the individual words in a phrase can decode brain activity associated with sentential meanings, irrespective of whether action verbs are used in a literal or metaphoric context. The findings complement prior work showing that regression-based models that use word embeddings as features can predict brain activity associated with larger linguistic units (Wehbe et al., 2014; Huth et al., 2016; Pereira et al., 2018) . The LSTM, however, did not outperform the other models. This is surprising given prior work showing that contextual representations from an unsupervised LSTM language model outperform a bag-of-words model (Jain and Huth, 2018) . The authors show increasing performance gains using representations from the second layer with longer context lengths (i.e., > 3 words). However, using the representations from the last layer together with a shorter context window sometimes showed inferior performance compared with the word-embedding encoding model. The latter finding is more closely aligned with our own parameters and findings. It is possible that the LSTM model shows the largest performance gain over the bag-of-words model when predicting brain activity associated with narrative listening (i.e., where the subject must keep track of entities and events over longer periods). In contrast, our sentence comprehension task depends on the next word for meaning disambiguation. It is also possible that semantic models trained on the NLI task may not be ideally suited for capturing differences in literal and metaphor processing.",
"cite_spans": [
{
"start": 554,
"end": 574,
"text": "(Wehbe et al., 2014;",
"ref_id": "BIBREF55"
},
{
"start": 575,
"end": 593,
"text": "Huth et al., 2016;",
"ref_id": null
},
{
"start": 594,
"end": 615,
"text": "Pereira et al., 2018)",
"ref_id": "BIBREF49"
},
{
"start": 825,
"end": 846,
"text": "(Jain and Huth, 2018)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "The Role of the Verb and the Object We found that the VERB model outperformed the other models (except the ADDITION model) in literal decoding. In contrast, in metaphor decoding we observed that models that incorporate the object directly (i.e., the VERBOBJECT and OBJECT models) outperformed the VERB model. Moreover, the performance of the VERB model was higher in literal versus metaphor decoding, while the OBJECT model showed the opposite pattern, with an advantage in metaphor decoding. It is possible that the VERB model more closely captures the variance associated with the overall concrete meaning in the brain. In support of this, the performance of the linguistic models, including that of the VERB model, was higher in action-related brain regions in literal compared to metaphoric decoding. On the other hand, the OBJECT model may best capture the variance associated with the overall abstract meaning in the brain. The objects (topic) in metaphoric sentences tend to be more abstract and capture the overall aboutness of the metaphoric meaning to a greater extent than the verb (vehicle). In support of this, in metaphor decoding the linguistic models exhibited higher performance in language-related areas than within visual and action-related areas. Critically, we restricted the analysis to voxels showing maximum variance between the univariate brain responses of the literal and metaphoric categories. Thus, the results mainly highlight models that can decode literal and metaphoric sentences to the extent that they are able to identify the largest differences between literal and metaphor processing in the brain, more generally. Therefore, the results do not necessarily suggest that the VERB model, for example, is not an adequate representation for metaphoric sentences, just that when distinguishing literal and metaphoric processing in the brain it more closely aligns with representations for literal sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "Alternatively, the VERB model may be superior in capturing the variance associated with the literal case, in particular compared to the OBJECT model, as the verbs were found to be significantly more frequent than their arguments for literal sentences in the training corpus. We also found that the metaphoric uses of the verbs are significantly more frequent than the literal uses in the training corpus, likely because written language often reflects more abstract topics. However, the VERB model showed higher performance in literal compared to metaphoric decoding, suggesting that frequency of usage in the corpus does not always impact decoding as might be expected. Importantly, the literal and metaphorical sentences did not differ in familiarity (i.e., subjective frequency), nor did we find significant differences in the cloze probabilities between the literal and metaphoric phrases in the training corpus, suggesting that this broader factor is not at play.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "Taken together, the results with the linguistic models suggest that one of the main ways lexical-semantic similarity differs in literal versus metaphor processing in the brain is along the concrete versus abstract dimension, as we might expect. The results are in line with prior neuroscientific studies showing that concrete concepts recruit more sensorimotor areas, whereas abstract concepts rely more heavily on language-related brain regions (Hoffman et al., 2015) . More specifically, the findings are in agreement with the idea that action-related words and sentences are embedded in action-perception circuits in the brain due to co-occurrences between the words and the action-percepts they denote (Pulvermuller, 2005) . However, the extent to which action-perception circuits are recruited may be modulated by the linguistic context (Desai et al., 2013) .",
"cite_spans": [
{
"start": 446,
"end": 468,
"text": "(Hoffman et al., 2015)",
"ref_id": "BIBREF31"
},
{
"start": 706,
"end": 726,
"text": "(Pulvermuller, 2005)",
"ref_id": "BIBREF50"
},
{
"start": 842,
"end": 862,
"text": "(Desai et al., 2013)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "These results also shed light on possible factors underlying the performance advantage we observed for the ADDITION model over the lexical models (and the VERBOBJECT model). The ADDITION model enhances common features present in the individual word embeddings of the verb and the object. Therefore, given the preference we observed for the VERB over the OBJECT in literal decoding (and vice versa for metaphor decoding), this suggests that adding the complementary embedding largely enhances lexical-semantic relations already present in either the VERB or the OBJECT alone, rather than providing other significant dimensions of variance per se. For literal decoding, the OBJECT may enhance variance already associated with the VERB by narrowing the range of relevant object-directed actions (e.g., actions on inanimate versus animate objects), highlighting more concrete information. In contrast, for metaphor decoding it is more likely that the VERB enhances variance associated with the OBJECT by narrowing in on abstract uses as opposed to literal uses of each object (e.g., ''writing the poem'' versus ''grasping the poem''), highlighting more abstract information in the process. It should be noted that this effect may be due to the fact that we used familiar metaphors well represented in the corpus, which will need to be investigated in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "We observed that the VISUAL OBJECT and VISUAL ADDITION models performed well in temporal-occipital areas. These results are in line with prior work showing that visual models can decode brain activity associated with concrete concepts in lateral occipital-temporal areas, part of the ventral visual stream implicated in object recognition (Anderson et al., 2015) . However, this was not specific to literal decoding. In fact, we observed that the VISUAL VERB and VISUAL VERBOBJECT models outperformed the VISUAL OBJECT model in metaphor decoding. Overall, we found that the visual models outperformed the linguistic models in action-related ROIs in metaphor decoding. The performance of the visual models in action ROIs was also significantly higher in metaphor versus literal decoding. The latter suggests that the visual models correlate with sensorimotor features and may play a role in metaphor processing in the brain. This could possibly indicate that different aspects of the literal meaning of the verb (distinct from its prototypical or salient literal use) play a role in metaphor processing in the brain. These less salient motoric aspects of the literal meaning captured by the visual verb models could reflect (a) more abstract sensorimotor representations, such as information about higher-level action goals, or (b) social-emotional factors associated with each action, such as information about people, bodies, or faces tied to interoceptive experience.",
"cite_spans": [
{
"start": 338,
"end": 361,
"text": "(Anderson et al., 2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Models",
"sec_num": null
},
{
"text": "It could also be the case that these aspects of the literal meaning are not necessarily less salient or prototypical, but are simply distinct from the specific literal uses of verbs in our stimuli (which contained primarily verb predicates with inanimate objects as arguments). It is possible that verb predicates with animate objects as arguments involving social interactions may also be relevant to the metaphoric meaning. Indeed, an important embodied dimension of variance for abstract concepts is social-emotional information (Barsalou, 2009) .",
"cite_spans": [
{
"start": 532,
"end": 548,
"text": "(Barsalou, 2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Models",
"sec_num": null
},
{
"text": "Additionally, it is possible that differences in overall visual statistics between our images for objects versus verbs across literal and metaphorical sentences may have biased decoding. Kiela et al. (2014) show that images for concrete objects are more internally homogeneous (less dispersed) than those for abstract concepts, which may have impacted the performance of the VISUAL OBJECT model in metaphor decoding. Importantly, however, differences in literal and metaphor decoding with the VISUAL VERB model should not be impacted by this, as the verbs used were the same. Therefore, the fact that the visual models in action-related areas had higher overall decoding accuracies in metaphor compared to literal decoding suggests that this effect is not driven by image dispersion. Rather, it suggests that the VISUAL VERB model may capture sensorimotor features relevant to metaphor decoding. Future studies will need to consider these possible confounding factors more carefully, and possibly experiment with video data in place of images.",
"cite_spans": [
{
"start": 187,
"end": 206,
"text": "Kiela et al. (2014)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Models",
"sec_num": null
},
{
"text": "Accessibility of the Literal Meaning Looking only at the linguistic models, the results appear largely in line with the direct view, or a categorical processing of familiar metaphor, in which the literal meaning is not fully accessible. The VERB model showed a clear advantage in literal compared to metaphor decoding. Moreover, the VERB model showed significant decoding accuracies in motor areas only in the literal but not the metaphoric case, suggesting that the literal meaning is not being fully simulated in the metaphoric case. This aligns with neuroimaging work showing that literal versus familiar metaphoric actions more reliably activate motor areas (Desai et al., 2011) .",
"cite_spans": [
{
"start": 660,
"end": 680,
"text": "(Desai et al., 2011)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Models",
"sec_num": null
},
{
"text": "Importantly, however, we found evidence that the VERB model showed some significant decoding accuracies for metaphor decoding in language-related brain regions (e.g., LMTP). Future work will need to determine whether this reflects distinct aspects of the literal meaning relevant to metaphor processing or reflects lexico-semantic information associated primarily with the more abstract sense of the verb. Adding to this, the poor temporal resolution of fMRI does not permit examining different temporal processing stages and, therefore, cannot rule out the idea that the literal meaning is initially fully accessed and subsequently (partially) discarded or suppressed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Models",
"sec_num": null
},
{
"text": "We also found further evidence to suggest that the linguistic context may modulate which representations associated with the verb are most accessible. Mainly, we found that visual models, including the VISUAL VERB model, were superior in decoding metaphoric versus literal sentences in action-related brain areas. This suggests that different aspects of the literal meaning (possibly less salient or prototypical literal meanings) may play a role in processing the metaphoric meaning. Thus, while the results do not definitively adjudicate between different putative stages of metaphor processing, they nevertheless inform our understanding of the debate in that they suggest that future studies will need to consider (control for) contextual effects of literality and their role in the study of metaphor comprehension. For instance, it may be useful to present subjects in the scanner with single words (grasp, push, etc.) to assess a prototypical brain response and then look at how different contexts (literal or metaphorical) modulate that response over time. This may reveal different kinds of processing stages and the influence of bottom-up (immediate and automatic) versus top-down (context- and inference-driven) influences at play during literal versus metaphor processing. This would permit a more careful assessment of the role of the literal meaning in metaphor comprehension.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Models",
"sec_num": null
},
{
"text": "We presented the first study evaluating a range of semantic models in their ability to decode brain activity associated with reading literal and metaphoric sentences. We found evidence to suggest that compositional models can decode sentences in the brain irrespective of figurativeness and that, at least for the linguistic models, the VERB model may be more closely associated with the literal (concrete) meaning and the OBJECT model with the metaphoric (abstract) meaning. This includes a closer relationship between the VERB model and action-related brain regions during literal sentence processing, in line with neuroimaging work showing that literal versus familiar metaphoric actions more reliably activate sensorimotor areas. This adds support to the idea that the literal meaning may not be as accessible for familiar metaphors. Taken together, the linguistic model results are in line with prior neuroscientific studies suggesting that differences between literal and metaphoric sentence processing align with concrete versus abstract concept processing in the brain, mainly with a greater reliance of concrete concepts on sensorimotor areas, while abstract concepts rely more heavily on language-related brain regions. Interestingly, however, the results with the visual models point to the need to also consider how metaphor (abstract language) may be grounded in more abstract knowledge about actions or social interaction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Directions",
"sec_num": "8"
},
{
"text": "Future studies will need to further investigate the accessibility of the literal (and abstract) meaning in metaphor comprehension using larger datasets, for example, by considering a wider range of metaphors (e.g., metaphoric uses of objects) that span different semantic domains and degrees of ambiguity. It may also be useful to consider event embeddings, which are optimized for learning representations of events and their thematic roles and may better handle different verb senses by learning non-linear compositions of predicates and their arguments (Tilk et al., 2016).",
"cite_spans": [
{
"start": 556,
"end": 575,
"text": "(Tilk et al., 2016)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Directions",
"sec_num": "8"
},
{
"text": "https://nlp.stanford.edu/projects/glove/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://fsl.fmrib.ox.ac.uk/fsl/fslwiki. http://www.pymvpa.org/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Predicting neural activity patterns associated with sentences using a neurobiologically motivated model of semantic representation",
"authors": [
{
"first": "Andrew",
"middle": [
"J"
],
"last": "Anderson",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"R"
],
"last": "Binder",
"suffix": ""
},
{
"first": "Leonardo",
"middle": [],
"last": "Fernandino",
"suffix": ""
},
{
"first": "Colin",
"middle": [
"J"
],
"last": "Humphries",
"suffix": ""
},
{
"first": "Lisa",
"middle": [
"L"
],
"last": "Conant",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Aguilar",
"suffix": ""
},
{
"first": "Xixi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Donias",
"middle": [],
"last": "Doko",
"suffix": ""
},
{
"first": "Rajeev",
"middle": [
"D S"
],
"last": "Raizada",
"suffix": ""
}
],
"year": 2017,
"venue": "Cerebral Cortex",
"volume": "27",
"issue": "9",
"pages": "4379--4395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew J. Anderson, Jeffrey R. Binder, Leonardo Fernandino, Colin J. Humphries, Lisa L. Conant, Mario Aguilar, Xixi Wang, Donias Doko, and Rajeev D.S. Raizada. 2017a. Pre- dicting neural activity patterns associated with sentences using a neurobiologically motivated model of semantic representation. Cerebral Cortex, 27(9):4379-4395.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Of words, eyes and brains: Correlating imagebased distributional semantic models with neural representations of concepts",
"authors": [
{
"first": "Andrew",
"middle": [
"J"
],
"last": "Anderson",
"suffix": ""
},
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Ulisse",
"middle": [],
"last": "Bordignon",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1960--1970",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew J. Anderson, Elia Bruni, Ulisse Bordignon, Massimo Poesio, and Marco Baroni. 2013. Of words, eyes and brains: Correlating image- based distributional semantic models with neural representations of concepts. In Proceed- ings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1960-1970. Seattle, Washington, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text",
"authors": [
{
"first": "Andrew",
"middle": [
"J"
],
"last": "Anderson",
"suffix": ""
},
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Lopopolo",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2015,
"venue": "NeuroImage",
"volume": "120",
"issue": "",
"pages": "309--322",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew J. Anderson, Elia Bruni, Alessandro Lopopolo, Massimo Poesio, and Marco Baroni. 2015. Reading visually embodied meaning from the brain: Visually grounded compu- tational models decode visual-object mental imagery induced by written text. NeuroImage, 120:309-322.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Visually grounded and textual semantic models differentially decode brain activity associated with concrete and abstract nouns",
"authors": [
{
"first": "Andrew",
"middle": [
"J"
],
"last": "Anderson",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "17--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew J. Anderson, Douwe Kiela, Stephen Clark, and Massimo Poesio. 2017b. Visually grounded and textual semantic models differ- entially decode brain activity associated with concrete and abstract nouns. Transactions of the Association for Computational Linguistics, 5:17-30.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multiple regions of a cortical network commonly encode the meaning of words in multiple grammatical positions of read sentences",
"authors": [
{
"first": "Andrew",
"middle": [
"J"
],
"last": "Anderson",
"suffix": ""
},
{
"first": "Edmund",
"middle": [
"C"
],
"last": "Lalor",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"R"
],
"last": "Binder",
"suffix": ""
},
{
"first": "Leonardo",
"middle": [],
"last": "Fernandino",
"suffix": ""
},
{
"first": "Colin",
"middle": [
"J"
],
"last": "Humphries",
"suffix": ""
},
{
"first": "Lisa",
"middle": [
"L"
],
"last": "Conant",
"suffix": ""
},
{
"first": "Rajeev",
"middle": [
"D S"
],
"last": "Raizada",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Grimm",
"suffix": ""
},
{
"first": "Xixi",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Cerebral Cortex",
"volume": "29",
"issue": "6",
"pages": "2396--2411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew J. Anderson, Edmund C. Lalor, Feng Lin, Jeffrey R. Binder, Leonardo Fernandino, Colin J. Humphries, Lisa L. Conant, Rajeev D. S. Raizada, Scott Grimm, and Xixi Wang. 2019. Multiple regions of a cortical network commonly encode the meaning of words in mul- tiple grammatical positions of read sentences. Cerebral Cortex, 29(6):2396-2411.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Representational similarity encoding for fMRI: Patternbased synthesis to predict brain activity using stimulus-model-similarities",
"authors": [
{
"first": "Andrew",
"middle": [
"J"
],
"last": "Anderson",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"D"
],
"last": "Zinszer",
"suffix": ""
},
{
"first": "Rajeev",
"middle": [
"D S"
],
"last": "Raizada",
"suffix": ""
}
],
"year": 2016,
"venue": "NeuroImage",
"volume": "128",
"issue": "",
"pages": "44--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew J. Anderson, Benjamin D. Zinszer, and Rajeev D.S. Raizada. 2016. Representa- tional similarity encoding for fMRI: Pattern- based synthesis to predict brain activity using stimulus-model-similarities. NeuroImage, 128:44-53.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Disentangling metaphor from context: An ERP study",
"authors": [
{
"first": "Valentina",
"middle": [],
"last": "Bambini",
"suffix": ""
},
{
"first": "Chiara",
"middle": [],
"last": "Bertini",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Schaeken",
"suffix": ""
},
{
"first": "Alessandra",
"middle": [],
"last": "Stella",
"suffix": ""
},
{
"first": "Francesco",
"middle": [
"Di"
],
"last": "Russo",
"suffix": ""
}
],
"year": 2016,
"venue": "Frontiers in Psychology",
"volume": "7",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valentina Bambini, Chiara Bertini, Walter Schaeken, Alessandra Stella, and Francesco Di Russo. 2016. Disentangling metaphor from context: An ERP study. Frontiers in Psychol- ogy, 7:559.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Grounded cognition",
"authors": [
{
"first": "Lawrence",
"middle": [
"W"
],
"last": "Barsalou",
"suffix": ""
}
],
"year": 2008,
"venue": "Annual Review of Psychology",
"volume": "59",
"issue": "1",
"pages": "617--645",
"other_ids": {
"PMID": [
"17705682"
]
},
"num": null,
"urls": [],
"raw_text": "Lawrence W. Barsalou. 2008. Grounded cognition. Annual Review of Psychology, 59(1):617-645. PMID: 17705682,",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Simulation, situated conceptualization, and prediction",
"authors": [
{
"first": "Lawrence",
"middle": [
"W"
],
"last": "Barsalou",
"suffix": ""
}
],
"year": 2009,
"venue": "Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences",
"volume": "364",
"issue": "",
"pages": "1281--89",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence W. Barsalou. 2009. Simulation, situ- ated conceptualization, and prediction. Philo- sophical Transactions of the Royal Society of London. Series B, Biological Sciences, 364(1521):1281-89.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Controlling the false discovery rate: A practical and powerful approach to multiple testing",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Benjamini",
"suffix": ""
},
{
"first": "Yosef",
"middle": [],
"last": "Hochberg",
"suffix": ""
}
],
"year": 1995,
"venue": "Journal of the Royal Statistical Society. Series B (Methodological)",
"volume": "57",
"issue": "1",
"pages": "289--300",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society. Series B (Methodological), 57(1):289-300.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies",
"authors": [
{
"first": "Jeffrey",
"middle": [
"R"
],
"last": "Binder",
"suffix": ""
},
{
"first": "Rutvik",
"middle": [
"H"
],
"last": "Desai",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Graves",
"suffix": ""
},
{
"first": "Lisa",
"middle": [
"L"
],
"last": "Conant",
"suffix": ""
}
],
"year": 2009,
"venue": "Cerebral Cortex",
"volume": "19",
"issue": "12",
"pages": "2767--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey R. Binder, Rutvik H. Desai, William W. Graves, and Lisa L. Conant. 2009. Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cerebral Cortex, 19(12):2767-96.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Distinct brain systems for processing concrete and abstract concepts",
"authors": [
{
"first": "Jeffrey",
"middle": [
"R"
],
"last": "Binder",
"suffix": ""
},
{
"first": "Chris",
"middle": [
"F"
],
"last": "Westbury",
"suffix": ""
},
{
"first": "Edward",
"middle": [
"T"
],
"last": "Possing",
"suffix": ""
},
{
"first": "Kristen",
"middle": [
"A"
],
"last": "Mckiernan",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Medler",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of Cognitive Neuroscience",
"volume": "17",
"issue": "6",
"pages": "905--922",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey R. Binder, Chris F. Westbury, Edward T. Possing, Kristen A. McKiernan, and David A. Medler. 2005. Distinct brain systems for pro- cessing concrete and abstract concepts. Journal of Cognitive Neuroscience, 17(6):905-17.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. CoRR, abs/1508.05326.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A fast unified model for parsing and sentence understanding",
"authors": [
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Gauthier",
"suffix": ""
},
{
"first": "Abhinav",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Raghav",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1466--1477",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1466-1477.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Association for Computational Linguistics",
"authors": [
{
"first": "Germany",
"middle": [],
"last": "Berlin",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Task alters category representations in prefrontal but not high-level visual cortex",
"authors": [
{
"first": "Lior",
"middle": [],
"last": "Bugatus",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"S"
],
"last": "Weiner",
"suffix": ""
},
{
"first": "Kalanit",
"middle": [],
"last": "Grill-Spector",
"suffix": ""
}
],
"year": 2017,
"venue": "Neuroimage",
"volume": "155",
"issue": "",
"pages": "437--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lior Bugatus, Kevin S. Weiner, and Kalanit Grill-Spector. 2017. Task alters category representations in prefrontal but not high-level visual cortex. Neuroimage, 155:437-449.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Speaking, seeing, understanding: Correlating semantic models with conceptual representation in the brain",
"authors": [
{
"first": "Luana",
"middle": [],
"last": "Bulat",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Shutova",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1092--1102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luana Bulat, Stephen Clark, and Ekaterina Shutova. 2017. Speaking, seeing, understanding: Correlating semantic models with conceptual representation in the brain. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1092-1102. Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Representational similarity mapping of distributional semantics in left inferior frontal, middle temporal, and motor cortex",
"authors": [
{
"first": "Francesca",
"middle": [],
"last": "Carota",
"suffix": ""
},
{
"first": "Nikolaus",
"middle": [],
"last": "Kriegeskorte",
"suffix": ""
},
{
"first": "Hamed",
"middle": [],
"last": "Nili",
"suffix": ""
},
{
"first": "Friedemann",
"middle": [],
"last": "Pulvermuller",
"suffix": ""
}
],
"year": 2017,
"venue": "Cerebral Cortex",
"volume": "27",
"issue": "1",
"pages": "294--309",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francesca Carota, Nikolaus Kriegeskorte, Hamed Nili, and Friedemann Pulvermuller. 2017. Rep- resentational similarity mapping of distribu- tional semantics in left inferior frontal, middle temporal, and motor cortex. Cerebral Cortex, 27(1):294-309.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Quantitative modeling of the neural representation of objects: How semantic feature norms can account for fMRI activation",
"authors": [
{
"first": "Kai-Min Kevin",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Marcel",
"middle": [
"Adam"
],
"last": "Just",
"suffix": ""
}
],
"year": 2010,
"venue": "Neuroimage: Special Issue on Multi-variate Decoding and Brain Reading",
"volume": "56",
"issue": "2",
"pages": "716--727",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai-min Kevin Chang, Tom Mitchell, and Marcel Adam Just. 2010. Quantitative modeling of the neural representation of objects: How semantic feature norms can account for fMRI activation. Neuroimage: Special Issue on Multi-variate Decoding and Brain Reading, 56(2):716-727.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The neural career of sensory-motor metaphors",
"authors": [
{
"first": "Rutvik",
"middle": [
"H"
],
"last": "Desai",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"R"
],
"last": "Binder",
"suffix": ""
},
{
"first": "Lisa",
"middle": [
"L"
],
"last": "Conant",
"suffix": ""
},
{
"first": "Quintino",
"middle": [
"R"
],
"last": "Mano",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"S"
],
"last": "Seidenberg",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Cognitive Neuroscience",
"volume": "23",
"issue": "9",
"pages": "2376--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rutvik H. Desai, Jeffrey R. Binder, Lisa L. Conant, Quintino R. Mano, and Mark S. Seidenberg. 2011. The neural career of sensory-motor me- taphors. Journal of Cognitive Neuroscience, 23(9):2376-86.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A piece of the action: Modulation of sensorymotor regions by action idioms and metaphors",
"authors": [
{
"first": "Rutvik",
"middle": [
"H"
],
"last": "Desai",
"suffix": ""
},
{
"first": "Lisa",
"middle": [
"L"
],
"last": "Conant",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"R"
],
"last": "Binder",
"suffix": ""
},
{
"first": "Haeil",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"S"
],
"last": "Seidenberg",
"suffix": ""
}
],
"year": 2013,
"venue": "NeuroImage",
"volume": "83",
"issue": "",
"pages": "862--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rutvik H. Desai, Lisa L. Conant, Jeffrey R. Binder, Haeil Park, and Mark S. Seidenberg. 2013. A piece of the action: Modulation of sensory- motor regions by action idioms and metaphors. NeuroImage, 83:862-69.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Using fMRI activation to conceptual stimuli to evaluate methods for extracting conceptual representations from corpora",
"authors": [
{
"first": "Barry",
"middle": [],
"last": "Devereux",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Kelly",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 First Workshop on Computational Neurolinguistics",
"volume": "",
"issue": "",
"pages": "70--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barry Devereux, Colin Kelly, and Anna Korhonen. 2010. Using fMRI activation to conceptual stimuli to evaluate methods for extracting conceptual representations from corpora. In Proceedings of the NAACL HLT 2010 First Workshop on Computational Neuro- linguistics, pages 70-78. Los Angeles, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The Centre for Speech, Language and the Brain (CSLB) concept property norms",
"authors": [
{
"first": "Barry",
"middle": [
"J"
],
"last": "Devereux",
"suffix": ""
},
{
"first": "Lorraine",
"middle": [
"K"
],
"last": "Tyler",
"suffix": ""
},
{
"first": "Jeroen",
"middle": [],
"last": "Geertzen",
"suffix": ""
},
{
"first": "Billi",
"middle": [],
"last": "Randall",
"suffix": ""
}
],
"year": 2014,
"venue": "Behavior Research Methods",
"volume": "46",
"issue": "4",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barry J. Devereux, Lorraine K. Tyler, Jeroen Geertzen, and Billi Randall. 2014. The Centre for Speech, Language and the Brain (CSLB) concept property norms. Behavior Research Methods, 46(4):1-9.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Affirmation and negation of metaphorical actions in the brain",
"authors": [
{
"first": "Vesna",
"middle": [
"G"
],
"last": "Djokic",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Shutova",
"suffix": ""
},
{
"first": "Elisabeth",
"middle": [],
"last": "Wehling",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Bergen",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Aziz-Zadeh",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vesna G. Djokic, Ekaterina Shutova, Elisabeth Wehling, Benjamin Bergen, and Lisa Aziz-Zadeh. forthcoming. Affirmation and negation of metaphorical actions in the brain.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Lexical and syntactic representations in the brain: An fMRI investigation with multivoxel pattern analyses",
"authors": [
{
"first": "Evelina",
"middle": [],
"last": "Fedorenko",
"suffix": ""
},
{
"first": "Alfonso",
"middle": [],
"last": "Nieto-Castanon",
"suffix": ""
},
{
"first": "Nancy",
"middle": [],
"last": "Kanwisher",
"suffix": ""
}
],
"year": 2012,
"venue": "Neuropsychologia",
"volume": "50",
"issue": "4",
"pages": "499--513",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evelina Fedorenko, Alfonso Nieto-Castanon, and Nancy Kanwisher. 2012. Lexical and syntactic representations in the brain: An fMRI investigation with multivoxel pattern analyses. Neuropsychologia, 50(4):499-513.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Functional specificity for high-level linguistic processing in the human brain",
"authors": [
{
"first": "Evelina",
"middle": [],
"last": "Fedorenko",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"K"
],
"last": "Behra",
"suffix": ""
},
{
"first": "Nancy",
"middle": [],
"last": "Kanwisher",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the National Academy of Sciences of the United States of America",
"volume": "108",
"issue": "39",
"pages": "16428--16433",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evelina Fedorenko, Michael K. Behra, and Nancy Kanwisher. 2011. Functional specificity for high-level linguistic processing in the human brain. Proceedings of the National Academy of Sciences of the United States of America, 108(39):16428-33.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Predicting brain activation patterns associated with individual lexical concepts based on five sensory-motor attributes",
"authors": [
{
"first": "Leonardo",
"middle": [],
"last": "Fernandino",
"suffix": ""
},
{
"first": "Colin",
"middle": [
"J"
],
"last": "Humphries",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"S"
],
"last": "Seidenberg",
"suffix": ""
},
{
"first": "William",
"middle": [
"L"
],
"last": "Gross",
"suffix": ""
},
{
"first": "Lisa",
"middle": [
"L"
],
"last": "Conant",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"R"
],
"last": "Binder",
"suffix": ""
}
],
"year": 2015,
"venue": "Neuropsychologia",
"volume": "76",
"issue": "",
"pages": "17--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leonardo Fernandino, Colin J. Humphries, Mark S. Seidenberg, William L. Gross, Lisa L. Conant, and Jeffrey R. Binder. 2015. Pre- dicting brain activation patterns associated with individual lexical concepts based on five sensory-motor attributes. Neuropsychologia, 76:17-26.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The career of metaphor",
"authors": [
{
"first": "Dedre",
"middle": [],
"last": "Gentner",
"suffix": ""
},
{
"first": "Brian",
"middle": [
"F"
],
"last": "Bowdle",
"suffix": ""
}
],
"year": 2005,
"venue": "Psychological Review",
"volume": "112",
"issue": "1",
"pages": "193--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dedre Gentner and Brian F. Bowdle. 2005. The career of metaphor. Psychological Review, 112(1):193-216.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "The psycholinguistics of metaphor",
"authors": [
{
"first": "Sam",
"middle": [],
"last": "Glucksberg",
"suffix": ""
}
],
"year": 2003,
"venue": "Trends in Cognitive Sciences",
"volume": "7",
"issue": "2",
"pages": "92--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sam Glucksberg. 2003. The psycholinguistics of metaphor. Trends in Cognitive Sciences, 7(2):92-96.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Differing contributions of inferior prefrontal and anterior temporal cortex to concrete and abstract conceptual knowledge",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Hoffman",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"J"
],
"last": "Binney",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"A"
],
"last": "Lambon Ralph",
"suffix": ""
}
],
"year": 2015,
"venue": "Cortex",
"volume": "63",
"issue": "",
"pages": "250--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Hoffman, Richard J. Binney, and Matthew A. Lambon Ralph. 2015. Differing contributions of inferior prefrontal and anterior temporal cortex to concrete and abstract conceptual knowledge. Cortex, 63:250-66.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Natural speech reveals the semantic maps that tile human cerebral cortex",
"authors": [
{
"first": "",
"middle": [],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Frederic",
"middle": [
"E"
],
"last": "Theunissen",
"suffix": ""
},
{
"first": "Jack",
"middle": [
"L"
],
"last": "Gallant",
"suffix": ""
}
],
"year": 2016,
"venue": "Nature",
"volume": "532",
"issue": "7600",
"pages": "453--458",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Griffiths, Frederic E. Theunissen, and Jack L. Gallant. 2016. Natural speech reveals the semantic maps that tile human cerebral cortex. Nature, 532(7600):453-458.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Incorporating context into language encoding models for fMRI",
"authors": [
{
"first": "Shailee",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Huth",
"suffix": ""
},
{
"first": ";",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Larochelle",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Grauman",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Cesa-Bianchi",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Garnett",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems 31",
"volume": "",
"issue": "",
"pages": "6628--6637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shailee Jain and Alexander Huth. 2018, Incor- porating context into language encoding models for fmri, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Pro- cessing Systems 31, pages 6628-6637, Curran Associates, Inc.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Neural representations of the concepts in simple sentences: Concept activation prediction and context effects",
"authors": [
{
"first": "Marcel",
"middle": [
"A"
],
"last": "Just",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [
"L"
],
"last": "Cherkassy",
"suffix": ""
}
],
"year": 2017,
"venue": "Neuroimage",
"volume": "157",
"issue": "",
"pages": "511--520",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcel A. Just, Jing Wang, and Vladimir L. Cherkassy. 2017. Neural representations of the concepts in simple sentences: Concept acti- vation prediction and context effects. Neuro- image, 157:511-520.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Neuroanatomical distribution of five semantic components of verbs: Evidence from fMRI",
"authors": [
{
"first": "David",
"middle": [],
"last": "Kemmerer",
"suffix": ""
},
{
"first": "Javier",
"middle": [
"G"
],
"last": "Castillo",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Talavage",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Patterson",
"suffix": ""
},
{
"first": "Cynthia",
"middle": [],
"last": "Wiley",
"suffix": ""
}
],
"year": 2008,
"venue": "Brain and Language",
"volume": "107",
"issue": "1",
"pages": "16--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Kemmerer, Javier G. Castillo, Thomas Talavage, Stephanie Patterson, and Cynthia Wiley. 2008. Neuroanatomical distribution of five semantic components of verbs: Evidence from fMRI. Brain and Language, 107(1):16-43.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "MMFeat: A toolkit for extracting multi-modal features",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL-2016 System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela. 2016. MMFeat: A toolkit for ex- tracting multi-modal features. In Proceedings of ACL-2016 System Demonstrations, pages 55-60, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Improving multi-modal representations using image dispersion: Why less is sometimes more",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "835--841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela, Felix Hill, Anna Korhonen, and Stephen Clark. 2014. Improving multi-modal representations using image dispersion: Why less is sometimes more. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 835-841. Baltimore, Maryland. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "CoRR",
"volume": "abs/1412.6980",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Metaphors We Live By",
"authors": [
{
"first": "George",
"middle": [],
"last": "Lakoff",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Lakoff. 1980. Metaphors We Live By, University of Chicago Press, Chicago.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Semantic feature production norms for a large set of living and nonliving things",
"authors": [
{
"first": "Ken",
"middle": [],
"last": "Mcrae",
"suffix": ""
},
{
"first": "George",
"middle": [
"S"
],
"last": "Cree",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"S"
],
"last": "Seidenberg",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Mcnorgan",
"suffix": ""
}
],
"year": 2005,
"venue": "Behavior Research Methods",
"volume": "37",
"issue": "4",
"pages": "547--559",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ken McRae, George S. Cree, Mark S. Seidenberg, and Chris McNorgan. 2005. Semantic feature production norms for a large set of living and nonliving things. Behavior Research Methods, 37(4):547-559.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "arXiv",
"volume": "abs/1301.3781v3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv, abs/1301.3781v3.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Predicting human brain activity associated with the meanings of nouns",
"authors": [
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [
"V"
],
"last": "Shinkareva",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "Kai-Min",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Vicente",
"middle": [
"L"
],
"last": "Malave",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"A"
],
"last": "Mason",
"suffix": ""
},
{
"first": "Marcel",
"middle": [
"Adam"
],
"last": "Just",
"suffix": ""
}
],
"year": 2008,
"venue": "Science",
"volume": "320",
"issue": "5880",
"pages": "1191--1195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom M. Mitchell, Svetlana V. Shinkareva, Andrew Carlson, Kai-Min Chang, Vicente L. Malave, Robert A. Mason, and Marcel Adam Just. 2008. Predicting human brain activity associated with the meanings of nouns. Science, 320(5880):1191-1195.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Metaphor as a medium for emotion: An empirical study",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Shutova",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "23--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad, Ekaterina Shutova, and Peter Turney. 2016. Metaphor as a medium for emotion: An empirical study. In Proceed- ings of the Fifth Joint Conference on Lexi- cal and Computational Semantics, pages 23-33, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Imagery and Verbal Processes",
"authors": [
{
"first": "Allan",
"middle": [],
"last": "Paivio",
"suffix": ""
}
],
"year": 1971,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allan Paivio. 1971. Imagery and Verbal Pro- cesses, Holt, Rinehart, & Winston, New York.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543. Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Using Wikipedia to learn semantic feature representations of concrete concepts in neuroimaging experiments",
"authors": [
{
"first": "Francisco",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Botvinick",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Detre",
"suffix": ""
}
],
"year": 2013,
"venue": "Artificial Intelligence",
"volume": "194",
"issue": "",
"pages": "240--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francisco Pereira, Matthew Botvinick, and Greg Detre. 2013. Using Wikipedia to learn semantic feature representations of concrete concepts in neuroimaging experiments. Artificial Intel- ligence, 194:240-252.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Toward a universal decoder of linguistic meaning from brain activation",
"authors": [
{
"first": "Francisco",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Lou",
"suffix": ""
},
{
"first": "Brianna",
"middle": [],
"last": "Pritchett",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"J"
],
"last": "Gershman",
"suffix": ""
},
{
"first": "Nancy",
"middle": [],
"last": "Kanwisher",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Botvinick",
"suffix": ""
},
{
"first": "Evelina",
"middle": [],
"last": "Fedorenko",
"suffix": ""
}
],
"year": 2018,
"venue": "Nature Communications",
"volume": "9",
"issue": "",
"pages": "963",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francisco Pereira, Bin Lou, Brianna Pritchett, Samuel Ritter, Samuel J. Gershman, Nancy Kanwisher, Matthew Botvinick, and Evelina Fedorenko. 2018. Toward a universal decoder of linguistic meaning from brain activation. Nature Communications, 9:963.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Brain mechanisms linking language and action",
"authors": [
{
"first": "Friedemann",
"middle": [],
"last": "Pulvermüller",
"suffix": ""
}
],
"year": 2005,
"venue": "Nature Reviews Neuroscience",
"volume": "6",
"issue": "",
"pages": "576--582",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Friedemann Pulvermüller. 2005. Brain mecha- nisms linking language and action. Nature Re- views Neuroscience, 6:576-582.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "ImageNet Large Scale Visual Recognition Challenge",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Russakovsky",
"suffix": ""
},
{
"first": "Jia",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Krause",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Satheesh",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Andrej",
"middle": [],
"last": "Karpathy",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Khosla",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Bernstein",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"C"
],
"last": "Berg",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2015,
"venue": "IJCV",
"volume": "115",
"issue": "",
"pages": "211--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 2015. ImageNet Large Scale Visual Recognition Challenge. IJCV, 115(3):211-252.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Modulation of the semantic system by word imageability",
"authors": [
{
"first": "David",
"middle": [
"S"
],
"last": "Sabsevitz",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Medler",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Seidenberg",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"R"
],
"last": "Binder",
"suffix": ""
}
],
"year": 2005,
"venue": "NeuroImage",
"volume": "27",
"issue": "1",
"pages": "188--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David S. Sabsevitz, David A. Medler, Michael Seidenberg, and Jeffrey R. Binder. 2005. Mod- ulation of the semantic system by word image- ability. NeuroImage, 27(1):188-200.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Event participant modelling with neural networks",
"authors": [
{
"first": "Ottokar",
"middle": [],
"last": "Tilk",
"suffix": ""
},
{
"first": "Vera",
"middle": [],
"last": "Demberg",
"suffix": ""
},
{
"first": "Asad",
"middle": [],
"last": "Sayeed",
"suffix": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Klakow",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Thater",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "171--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ottokar Tilk, Vera Demberg, Asad Sayeed, Dietrich Klakow, and Stefan Thater. 2016. Event participant modelling with neural net- works. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 171-182. Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Predicting the brain activation pattern associated with the propositional content of a sentence: Modeling neural representations of events and states",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [
"L"
],
"last": "Cherkassky",
"suffix": ""
},
{
"first": "Marcel",
"middle": [
"A"
],
"last": "Just",
"suffix": ""
}
],
"year": 2017,
"venue": "Human Brain Mapping",
"volume": "38",
"issue": "10",
"pages": "4865--4881",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing Wang, Vladimir L. Cherkassky, and Marcel A. Just. 2017. Predicting the brain activa- tion pattern associated with the propositional content of a sentence: Modeling neural repre- sentations of events and states. Human Brain Mapping, 38(10):4865-4881.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Simultaneously uncovering the patterns of brain regions involved in different story reading subprocesses",
"authors": [
{
"first": "Leila",
"middle": [],
"last": "Wehbe",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Talukdar",
"suffix": ""
},
{
"first": "Alona",
"middle": [],
"last": "Fyshe",
"suffix": ""
},
{
"first": "Aaditya",
"middle": [],
"last": "Ramdas",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2014,
"venue": "PLoS ONE",
"volume": "9",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leila Wehbe, Brian Murphy, Partha Talukdar, Alona Fyshe, Aaditya Ramdas, and Tom Mitchell. 2014. Simultaneously uncovering the patterns of brain regions involved in differ- ent story reading subprocesses. PLoS ONE, 9(1):e112575.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Intrinsic functional network architecture of human semantic processing: Modules and hubs",
"authors": [
{
"first": "Yangwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Qixiang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zaizhu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yanchao",
"middle": [],
"last": "Bi",
"suffix": ""
}
],
"year": 2016,
"venue": "NeuroImage",
"volume": "132",
"issue": "",
"pages": "542--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yangwen Xu, Qixiang Lin, Zaizhu Han, Yong He, and Yanchao Bi. 2016. Intrinsic functional network architecture of human semantic pro- cessing: Modules and hubs. NeuroImage, 132:542-55.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Sample stimuli for the verb push.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"text": "Literal Decoding. Leave-2-out decoding accuracies, significant values (p < 0.05) surviving FDR correction for multiple comparisons indicated in bold by an asterisk (*).",
"html": null,
"num": null
},
"TABREF2": {
"type_str": "table",
"content": "<table/>",
"text": "",
"html": null,
"num": null
},
"TABREF4": {
"type_str": "table",
"content": "<table/>",
"text": "Region of Interest: Literal Decoding. Leave-2-out decoding accuracies, significant values (p < 0.05) surviving FDR correction for multiple comparisons for the ROIs indicated in bold by an asterisk (*).",
"html": null,
"num": null
},
"TABREF6": {
"type_str": "table",
"content": "<table/>",
"text": "Region of Interest: Metaphor Decoding. Leave-2-out decoding accuracies, significant values (p < 0.05) surviving FDR correction for multiple comparisons for the ROIs indicated in bold by an asterisk (*).",
"html": null,
"num": null
}
}
}
}