{
"paper_id": "Q17-1002",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:12:04.222739Z"
},
"title": "Visually Grounded and Textual Semantic Models Differentially Decode Brain Activity Associated with Concrete and Abstract Nouns",
"authors": [
{
"first": "Andrew",
"middle": [
"J"
],
"last": "Anderson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Rochester",
"location": {}
},
"email": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {}
},
"email": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {}
},
"email": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Essex",
"location": {}
},
"email": "poesio@essex.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Important advances have recently been made using computational semantic models to decode brain activity patterns associated with concepts; however, this work has almost exclusively focused on concrete nouns. How well these models extend to decoding abstract nouns is largely unknown. We address this question by applying state-of-the-art computational models to decode functional Magnetic Resonance Imaging (fMRI) activity patterns, elicited by participants reading and imagining a diverse set of both concrete and abstract nouns. One of the models we use is linguistic, exploiting the recent word2vec skipgram approach trained on Wikipedia. The second is visually grounded, using deep convolutional neural networks trained on Google Images. Dual coding theory considers concrete concepts to be encoded in the brain both linguistically and visually, and abstract concepts only linguistically. Splitting the fMRI data according to human concreteness ratings, we indeed observe that both models significantly decode the most concrete nouns; however, accuracy is significantly greater using the text-based models for the most abstract nouns. More generally this confirms that current computational models are sufficiently advanced to assist in investigating the representational structure of abstract concepts in the brain.",
"pdf_parse": {
"paper_id": "Q17-1002",
"_pdf_hash": "",
"abstract": [
{
"text": "Important advances have recently been made using computational semantic models to decode brain activity patterns associated with concepts; however, this work has almost exclusively focused on concrete nouns. How well these models extend to decoding abstract nouns is largely unknown. We address this question by applying state-of-the-art computational models to decode functional Magnetic Resonance Imaging (fMRI) activity patterns, elicited by participants reading and imagining a diverse set of both concrete and abstract nouns. One of the models we use is linguistic, exploiting the recent word2vec skipgram approach trained on Wikipedia. The second is visually grounded, using deep convolutional neural networks trained on Google Images. Dual coding theory considers concrete concepts to be encoded in the brain both linguistically and visually, and abstract concepts only linguistically. Splitting the fMRI data according to human concreteness ratings, we indeed observe that both models significantly decode the most concrete nouns; however, accuracy is significantly greater using the text-based models for the most abstract nouns. More generally this confirms that current computational models are sufficiently advanced to assist in investigating the representational structure of abstract concepts in the brain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Since the work of Mitchell et al. (2008) , there has been increasing interest in using computational semantic models to interpret neural activity patterns scanned as participants engage in conceptual tasks. This research has almost exclusively focused on brain activity elicited as participants comprehend concrete nouns as experimental stimuli. Different modelling approaches -predominantly distributional semantic models (Mitchell et al., 2008; Devereux et al., 2010; Murphy et al., 2012; Pereira et al., 2013; Carlson et al., 2014) and semantic models based on human behavioural estimation of conceptual features (Palatucci et al., 2009; Sudre et al., 2012; Chang et al., 2010; Bruffaerts et al., 2013; Fernandino et al., 2015) -have elucidated how different brain regions contribute to semantic representation of concrete nouns; however, how these results extend to non-concrete nouns is unknown.",
"cite_spans": [
{
"start": 18,
"end": 40,
"text": "Mitchell et al. (2008)",
"ref_id": "BIBREF27"
},
{
"start": 423,
"end": 446,
"text": "(Mitchell et al., 2008;",
"ref_id": "BIBREF27"
},
{
"start": 447,
"end": 469,
"text": "Devereux et al., 2010;",
"ref_id": "BIBREF15"
},
{
"start": 470,
"end": 490,
"text": "Murphy et al., 2012;",
"ref_id": "BIBREF28"
},
{
"start": 491,
"end": 512,
"text": "Pereira et al., 2013;",
"ref_id": "BIBREF31"
},
{
"start": 513,
"end": 534,
"text": "Carlson et al., 2014)",
"ref_id": "BIBREF12"
},
{
"start": 616,
"end": 640,
"text": "(Palatucci et al., 2009;",
"ref_id": "BIBREF30"
},
{
"start": 641,
"end": 660,
"text": "Sudre et al., 2012;",
"ref_id": "BIBREF37"
},
{
"start": 661,
"end": 680,
"text": "Chang et al., 2010;",
"ref_id": "BIBREF13"
},
{
"start": 681,
"end": 705,
"text": "Bruffaerts et al., 2013;",
"ref_id": "BIBREF9"
},
{
"start": 706,
"end": 730,
"text": "Fernandino et al., 2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In computational modelling there has been increasing importance attributed to grounding semantic models in sensory modalities, e.g., Bruni et al. (2014) , Kiela and Bottou (2014) . Andrews et al. (2009) demonstrated that multi-modal models formed by combining text-based distributional information with behaviourally generated conceptual properties (as a surrogate for perceptual experience) provide a better proxy for human-like intelligence. However, both the text-based and behaviourallybased components of their model were ultimately derived from linguistic information. Since then, in analyses of brain data, Anderson et al. (2013) have applied multi-modal models incorporating features that are truly grounded in natural image statistics to further support this claim. In addition, Anderson et al. (2015) have demonstrated that visually grounded models describe brain activity associated with internally induced visual features of objects as the ob-jects names are read and comprehended.",
"cite_spans": [
{
"start": 133,
"end": 152,
"text": "Bruni et al. (2014)",
"ref_id": "BIBREF10"
},
{
"start": 155,
"end": 178,
"text": "Kiela and Bottou (2014)",
"ref_id": "BIBREF21"
},
{
"start": 181,
"end": 202,
"text": "Andrews et al. (2009)",
"ref_id": "BIBREF4"
},
{
"start": 614,
"end": 636,
"text": "Anderson et al. (2013)",
"ref_id": "BIBREF0"
},
{
"start": 788,
"end": 810,
"text": "Anderson et al. (2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Having both image-and text-based models of semantic representation, and neural activity patterns associated with concrete and abstract nouns, enables a natural test of Dual coding theory (Paivio, 1971) . Dual coding posits that concrete concepts are represented in the brain in terms of a visual and linguistic code, whereas abstract concepts are only represented by a linguistic code. Whereas previous work has demonstrated that image-and text-based semantic models contribute to explaining neural activity patterns associated with concrete nouns, it remains unclear whether either text-or image-based semantic models can decode neural activity patterns associated with abstract words.",
"cite_spans": [
{
"start": 187,
"end": 201,
"text": "(Paivio, 1971)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We extend previous work by applying image-and text-based computational semantic models to decode an fMRI data set spanning a diverse set of nouns of varying concreteness. The 70-word stimuli for the fMRI experiment (listed in Table 1 ) are semantically structured according to taxonomic categories and domains embedded in WordNet (Fellbaum, 1998 ) and its extensions. Participants read the noun and were instructed to imagine a situation that they personally associate with the noun. In this sense, the data solicited was targetting deep thought patterns (deeper than might be anticipated for rapid semantic processing required in conversations and many real time interactions with the world). In the analysis we split the fMRI data set into the most concrete and most abstract words based on behavioural concreteness ratings. Our key contribution is in demonstrating a decoding advantage for text-based semantic models over the image-based models when decoding the more abstract nouns. In line with the previous results of Anderson et al. (2013) and Anderson et al. (2015) , both visual and textual models decode the more concrete nouns.",
"cite_spans": [
{
"start": 330,
"end": 345,
"text": "(Fellbaum, 1998",
"ref_id": null
},
{
"start": 1024,
"end": 1046,
"text": "Anderson et al. (2013)",
"ref_id": "BIBREF0"
},
{
"start": 1051,
"end": 1073,
"text": "Anderson et al. (2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 226,
"end": 233,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The image-and text-based computational models we use have recently been developed using neural networks (Mikolov et al., 2013; Jia et al., 2014) . The image-based model is built using a deep convolutional neural network approach, similar in nature to those recently used to study neural representations of visual stimuli (see Kriegeskorte (2015) , although note this is the first application to study word elicited neural activation known to the authors). For decoding we use a recently introduced algorithm ) that abstracts the decoding task to representational similarity space, and achieve decoding accuracies on par with those conventionally achieved through discriminating concrete nouns (and higher if we combine data to exploit grouplevel regularities).",
"cite_spans": [
{
"start": 104,
"end": 126,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF26"
},
{
"start": 127,
"end": 144,
"text": "Jia et al., 2014)",
"ref_id": "BIBREF20"
},
{
"start": 326,
"end": 345,
"text": "Kriegeskorte (2015)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Because the fMRI experiments were performed in Italian on native Italians, and because approximately comparable text corpora in content were available in English and Italian (English and Italian Wikipedia), we were able to compare how well English and Italian text-based semantic models can decode neural activity patterns. Whilst Italian Wikipedia could reasonably be expected to be advantaged by supporting culturally appropriate nuances of semantic structure, it is disadvantaged by being considerably smaller than English Wikipedia. Taking inspiration from previous work exploiting cross-lingual resources (Richman and Schone, 2008; Shi et al., 2010; Darwish, 2013) we combined Italian and English text-based models in our decoding analyses in an attempt to leverage the benefits of both.",
"cite_spans": [
{
"start": 610,
"end": 636,
"text": "(Richman and Schone, 2008;",
"ref_id": "BIBREF33"
},
{
"start": 637,
"end": 654,
"text": "Shi et al., 2010;",
"ref_id": "BIBREF34"
},
{
"start": 655,
"end": 669,
"text": "Darwish, 2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although combined language and English models tended to yield marginally better decoding accuracies, there were no significant differences between the different language models. Whilst we expect semantic structure on a grand scale to broadly straddle language boundaries for most concrete and abstract concepts (albeit with cultural specificities), this is proof of principle that cross linguistic commonalities are reflected in neural activity patterns measurable with current technology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We reanalyze the fMRI data originally collected by Anderson et al. (2014) , who investigated the relevance of different taxonomic categories and domains embedded in WordNet to the organization of conceptual knowledge in the brain.",
"cite_spans": [
{
"start": 51,
"end": 73,
"text": "Anderson et al. (2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Brain Data",
"sec_num": "2"
},
{
"text": "Anderson et al. (2014) systematically selected a list of 70 words intended to be representative of a broad range of abstract and concrete nouns. These were organised according to the domains of law and music, cross-classified with seven taxonomic categories. They began by identifying low-concreteness LAW MUSIC Ur-abstracts giustizia justice musica music liberta' liberty blues blues legge law jazz jazz corruzione corruption canto singing refurtiva loot punk punk Attribute giurisdizione jurisdiction sonorita' sonority cittadinanza citizenship ritmo rhythm impunita' impunity melodia melody legalita' legality tonality' tonality illegalita illegality intonazione pitch Communication divieto prohibition words in the norms of Barca et al. (2002) . They then linked these to WordNet to identify the taxonomic category of the dominant sense of each word. Six taxonomic categories that were heavily populated with abstract words, as well as one unambiguously concrete category, were chosen. All categories supported ample coverage of Law and Music domains (determined according to WordNet Domains (Bentivogli et al., 2004) ). Five law words and five music words were selected from each taxonomic category. Taxonomic categories and example stimulus words (translated into English) are as below:",
"cite_spans": [
{
"start": 772,
"end": 791,
"text": "Barca et al. (2002)",
"ref_id": "BIBREF5"
},
{
"start": 1140,
"end": 1165,
"text": "(Bentivogli et al., 2004)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 312,
"end": 749,
"text": "Ur-abstracts giustizia justice musica music liberta' liberty blues blues legge law jazz jazz corruzione corruption canto singing refurtiva loot punk punk Attribute giurisdizione jurisdiction sonorita' sonority cittadinanza citizenship ritmo rhythm impunita' impunity melodia melody legalita' legality tonality' tonality illegalita illegality intonazione pitch Communication divieto prohibition",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word stimuli",
"sec_num": "2.1"
},
{
"text": "Ur-abstract: Anderson et al.'s term for concepts that are classified as abstract in WordNet but do not belong to a clear subcategory, e.g., law or music. At-tribute: A construct whereby objects or individuals can be distinguished, e.g., legality, tonality. Communication: Something that is communicated by, to or between groups, e.g., accusation, symphony. Event/action: Something that happens at a given place and time, e.g., crime, festival. Person/Socialrole: Individual, someone, somebody, mortal, e.g., judge, musician. Location: Points or extents in space, e.g., court, theatre. Object/Tool: A class of unambiguously concrete nouns, e.g., handcuffs, violin.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word stimuli",
"sec_num": "2.1"
},
{
"text": "The full list of stimuli is in Table 1 . We split the stimulus nouns into the 35 most concrete and 35 most abstract words according to the behavioural concreteness ratings from Anderson et al. (2014) .",
"cite_spans": [
{
"start": 177,
"end": 199,
"text": "Anderson et al. (2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 31,
"end": 38,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Word stimuli",
"sec_num": "2.1"
},
{
"text": "Participants Nine right-handed native Italian speakers aged between 19 and 38 years (3 women) were recruited to take part in the study. Two were scanned after Anderson et al. (2014) to match the number of participants analysed by Mitchell et al. (2008) . Scanning had previously been halted at 7 instead of the planned 9 participants for a period due to equipment failure. All had normal or correctedto-normal vision.",
"cite_spans": [
{
"start": 159,
"end": 181,
"text": "Anderson et al. (2014)",
"ref_id": "BIBREF1"
},
{
"start": 230,
"end": 252,
"text": "Mitchell et al. (2008)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "fMRI Experiment",
"sec_num": "2.2"
},
{
"text": "The 70 stimulus words were presented as written words, in 5 runs (all runs were collected in one participant visit), with the order of presentations randomised across runs. In each run, a randomly selected word was presented every 10 seconds, and remained on screen for 3 seconds. On reading a stimulus word, participants thought of a situation that they individually associated with the noun. This process is similar to previous concrete noun tasks, e.g., Mitchell et al. (2008) , where participants were instructed to think of the properties of the noun. However, as people encounter difficulties eliciting properties of non-concrete concepts, compared to thinking of situations in which concepts played a role (Wiemer-Hastings and Xu, 2005) , the experimental paradigm was adapted to imagining situations. fMRI acquisition and preprocessing Anderson et al. (2014) recorded fMRI images on a 4T Bruker MedSpec MRI scanner. They used an Echo Planar Imaging (EPI) pulse sequence with a 1000 msec repetition time, an echo time of 33 msec, and a 26 \u2022 flip angle. A 64\u00d764 acquisition matrix was used, and 17 slices were imaged with a between-slice gap of 1 mm. Voxels had dimensions of 3mm\u00d73mm\u00d75mm. fMRI data were corrected for head motion, unwarped, and spatially normalized to the Montreal Neurological Institute and Hospital (MNI) template. Only voxels estimated to be grey matter were included in the subsequent analysis. For each participant, for each scanning run (where a run is a complete presentation of 70 words), voxel activity was corrected by removing linear trend and transformed to z scores (within each run). Each stimulus word was represented as a single volume by taking the voxel-wise mean of the 4 sec of data offset by 4 sec from the stimulus onset (to account for hemodynamic response).",
"cite_spans": [
{
"start": 457,
"end": 479,
"text": "Mitchell et al. (2008)",
"ref_id": "BIBREF27"
},
{
"start": 713,
"end": 743,
"text": "(Wiemer-Hastings and Xu, 2005)",
"ref_id": "BIBREF38"
},
{
"start": 844,
"end": 866,
"text": "Anderson et al. (2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "fMRI Experiment",
"sec_num": "2.2"
},
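The per-run normalisation and hemodynamic-window averaging described above can be sketched as follows. This is a minimal pure-Python illustration under stated assumptions, not the authors' pipeline: it assumes one volume per second (matching the 1000 msec repetition time), and the names `zscore_run` and `word_volume` are our own.

```python
from math import sqrt

def zscore_run(series):
    """z-score one voxel's time course within a single scanning run
    (mean 0, standard deviation 1), applied separately per run."""
    n = len(series)
    mean = sum(series) / n
    sd = sqrt(sum((v - mean) ** 2 for v in series) / n)
    return [(v - mean) / sd for v in series]

def word_volume(timecourse, onset_vol, delay=4, width=4):
    """Represent one stimulus word as the mean of `width` seconds of
    data starting `delay` seconds after stimulus onset, to account
    for the lag of the hemodynamic response."""
    window = timecourse[onset_vol + delay : onset_vol + delay + width]
    return sum(window) / len(window)
```

For a word presented at volume 0, the averaged window covers volumes 4 through 7, i.e., the 4 seconds of data offset by 4 seconds from onset.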
{
"text": "Voxel selection The 500 most stable grey matter voxels per participant were selected for analysis. This was undertaken within the leave-2-wordout decoding procedure detailed later in Section 4 using the same method as Mitchell et al. (2008) : Pearson's correlation of each voxel's activity between matched word lists in all scanning run pairs (10 unique run pairs giving 10 correlation coefficients of 68/70 words, where the other 2 words were test words to be decoded) was computed. The mean coefficient was used as stability measure. Voxels associated with the 500 largest stability measures were selected.",
"cite_spans": [
{
"start": 218,
"end": 240,
"text": "Mitchell et al. (2008)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "fMRI Experiment",
"sec_num": "2.2"
},
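The stability-based voxel selection above can be sketched in a few lines. A minimal sketch, assuming the fMRI data are given as nested lists (runs of per-word voxel vectors); the names `pearson` and `stable_voxels` are our own, not from the authors' code:

```python
from itertools import combinations
from math import sqrt

def pearson(x, y):
    """Pearson's correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def stable_voxels(runs, n_select):
    """runs: scanning runs, each a list of per-word voxel activity
    vectors (words x voxels), with word order matched across runs.
    Returns indices of the n_select voxels whose per-word activity
    correlates best, on average, across all unique run pairs."""
    n_vox = len(runs[0][0])
    scores = []
    for v in range(n_vox):
        coeffs = []
        for r1, r2 in combinations(runs, 2):  # 10 pairs for 5 runs
            a = [word[v] for word in r1]
            b = [word[v] for word in r2]
            coeffs.append(pearson(a, b))
        scores.append(sum(coeffs) / len(coeffs))  # mean = stability
    order = sorted(range(n_vox), key=lambda v: scores[v], reverse=True)
    return order[:n_select]
```

With 5 runs there are 10 unique run pairs, so each voxel's stability is the mean of 10 correlation coefficients, exactly as in the selection procedure described above.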
{
"text": "3 Semantic Models",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "fMRI Experiment",
"sec_num": "2.2"
},
{
"text": "Following previous work in multi-modal semantics (Bergsma and Van Durme, 2011; , we obtain a total of 20 images for each of the stimulus words from Google Images 1 . Images from Google have been shown to yield representations that are competitive in quality compared to alternative resources (Bergsma and Van Durme, 2011; Fergus et al., 2005) . Image representations are obtained by extracting the pre-softmax layer from a forward pass in a convolutional neural network (CNN) that has been trained on the ImageNet classification task using Caffe (Jia et al., 2014) . This approach is similar to e.g., Kriegeskorte (2015) , except that we only use the pre-softmax layer, which has been found to work particularly well in semantic tasks (Razavian et al., 2014; Kiela and Bottou, 2014) . Such CNNderived image representations have been found to be of higher quality than traditional bag of visual words models (Sivic and Zisserman, 2003) that were previously used in multi-modal semantics (Bruni et al., 2014; Kiela and Bottou, 2014) . We aggregate images associated with a stimulus word into an overall visually grounded representation by taking the mean of the individual image representations.",
"cite_spans": [
{
"start": 49,
"end": 78,
"text": "(Bergsma and Van Durme, 2011;",
"ref_id": "BIBREF8"
},
{
"start": 292,
"end": 321,
"text": "(Bergsma and Van Durme, 2011;",
"ref_id": "BIBREF8"
},
{
"start": 322,
"end": 342,
"text": "Fergus et al., 2005)",
"ref_id": "BIBREF17"
},
{
"start": 546,
"end": 564,
"text": "(Jia et al., 2014)",
"ref_id": "BIBREF20"
},
{
"start": 601,
"end": 620,
"text": "Kriegeskorte (2015)",
"ref_id": "BIBREF25"
},
{
"start": 735,
"end": 758,
"text": "(Razavian et al., 2014;",
"ref_id": "BIBREF32"
},
{
"start": 759,
"end": 782,
"text": "Kiela and Bottou, 2014)",
"ref_id": "BIBREF21"
},
{
"start": 907,
"end": 934,
"text": "(Sivic and Zisserman, 2003)",
"ref_id": "BIBREF36"
},
{
"start": 986,
"end": 1006,
"text": "(Bruni et al., 2014;",
"ref_id": "BIBREF10"
},
{
"start": 1007,
"end": 1030,
"text": "Kiela and Bottou, 2014)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Image-based semantic models",
"sec_num": "3.1"
},
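The aggregation step above is a simple component-wise mean over the 20 per-image feature vectors. A minimal sketch, assuming the CNN feature extraction (done with Caffe in the paper) has already produced one vector per image; the function name `visual_word_vector` is our own:

```python
def visual_word_vector(image_features):
    """Component-wise mean of per-image CNN feature vectors
    (hypothetical pre-softmax activations), yielding one visually
    grounded representation for the stimulus word."""
    n = len(image_features)
    dim = len(image_features[0])
    return [sum(vec[d] for vec in image_features) / n for d in range(dim)]
```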
{
"text": "Image search for abstract nouns The validity and success of the following analyses are dependent on having built the image-based models from a set of images that are indeed relevant to the abstract words. The Google Image searches we used to build the image-based models largely returned a selection of images systematically associated with our most abstract nouns. For instance, 'corruption' returns suited figures covertly exchanging money; 'law', 'justice', 'music', 'tonality' return pictures of gavels, weighing scales, musical notes and circles of fifths, respectively. For 'jurisdiction', the image search returns maps and law-related objects. However, there were also misleading cases such as 'pitch' where the image search, whilst returning potentially useful pictures of sinusoidal graphs, was heavily contaminated by images of football pitches. This problem is not exclusive to images, and the current text-based models are also not immune to the multiple senses of polysemous words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image-based semantic models",
"sec_num": "3.1"
},
{
"text": "For linguistic input, we use the continuous vector representations from the skip-gram model of Mikolov et al. (2013) . Specifically, we obtained 300-dimensional word embeddings by training a skip-gram model using negative sampling on recent Italian and English Wikipedia dumps (using Gensim with preprocessing from word2vec's demo script). For English, representations were built for the English translations of the 70 stimuli provided by Anderson et al. (2014) . The English model was trained for 1 iteration, whereas the Italian was trained for 5, since the Italian Wikipedia dump was smaller (5.2 vs 1.3 billion words respectively).",
"cite_spans": [
{
"start": 95,
"end": 116,
"text": "Mikolov et al. (2013)",
"ref_id": "BIBREF26"
},
{
"start": 439,
"end": 461,
"text": "Anderson et al. (2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based semantic models",
"sec_num": "3.2"
},
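As a sketch of the skip-gram model's input, the (target, context) pairs that word2vec trains on can be enumerated as below. This illustrates only the pair-extraction step, not the embedding training itself (negative sampling over 300 dimensions, done with Gensim in the paper); `skipgram_pairs` is our own illustrative name:

```python
def skipgram_pairs(tokens, window=5):
    """Generate (target, context) training pairs as in word2vec's
    skip-gram: each word predicts the words within `window`
    positions on either side of it."""
    pairs = []
    for i, target in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # a word is not its own context
                pairs.append((target, tokens[j]))
    return pairs
```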
{
"text": "Following previous work exploiting cross-lingual textual resources (Richman and Schone, 2008; Shi et al., 2010; Darwish, 2013) , we also applied Italian and English text-based models in combination. Model combination was achieved at the analysis stage, by fusing decoding outputs of Italian and English models as described in Section 4.1.",
"cite_spans": [
{
"start": 67,
"end": 93,
"text": "(Richman and Schone, 2008;",
"ref_id": "BIBREF33"
},
{
"start": 94,
"end": 111,
"text": "Shi et al., 2010;",
"ref_id": "BIBREF34"
},
{
"start": 112,
"end": 126,
"text": "Darwish, 2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based semantic models",
"sec_num": "3.2"
},
{
"text": "We decoded word-level fMRI representations using the semantic models following the procedure introduced by . The process of matching models to words is abstracted to representational similarity space: For both models and brain data, words are semantically re-represented by their similarities to other words by correlating all word pairs within the native model or brain space, using Pearson's correlation (see Figure 1 ). The result is two square matrices of word pair correlations: one for the fMRI data, another for the model. In the similarity space, each word is a vector of correlations with all other words, thereby allowing model and brain words (similarity vectors) to be directly matched to each other. In decoding, models were matched to fMRI data as follows (see Figure 2 ). Two test words were chosen. The 500 voxels estimated to have the most stable signal were selected using the strategy described in Section 2.2. Voxel selection was based on the fMRI data of the other 68/70 words. Selection on 68/70 rather than all 70 words was to allay any concern that voxel selection could have systematically biased the fMRI correlation structure (calculated next) to look like that of the semantic model, and consequently biased decoding performance. However, as similarity-based decoding does not optimise a mapping between fMRI data and semantic model, it is not prone to modelling and decoding fMRI noise as in classic cases of double dipping (Kriegeskorte et al., 2009) . Indeed, as we report later in this section, there were no significant differences in decoding accuracy arising from tests using voxel selection on 68/70 versus 70 words.",
"cite_spans": [
{
"start": 1453,
"end": 1480,
"text": "(Kriegeskorte et al., 2009)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 411,
"end": 419,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 775,
"end": 783,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Representational similarity-based decoding of brain activity",
"sec_num": "4"
},
{
"text": "A single representation of each word was built by taking the voxel-wise mean of all five presentations of the word for the 500 selected voxels. An fMRI similarity matrix for all 70 words was then calculated. Similarity vectors for the two test words were drawn from both the model and fMRI similarity matrices. Entries corresponding to the two test words in both model and fMRI similarity vectors were removed because these values could reveal the correct answer to decoding. The two model similarity vectors were then compared to the two fMRI similarity vectors by correlation, resulting in four correlation values. These correlation values were transformed using Fisher's r to z (arctanh). If the sum of z-transformed correlations between the correctly matched pair exceeded the sum of correlations for the incongruent pair, decoding was scored a success, otherwise a failure. This process was then repeated for all word pairs, with the mean accuracy of all test iterations giving a final measure of success.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representational similarity-based decoding of brain activity",
"sec_num": "4"
},
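The leave-2-out test described above can be sketched compactly. A minimal sketch, assuming the voxel selection and the two word-pair similarity matrices have already been computed; `decode_pair` and the toy matrices in the usage example are our own illustration, not the authors' code:

```python
from math import atanh, sqrt

def pearson(x, y):
    """Pearson's correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x)) * \
          sqrt(sum((b - my) ** 2 for b in y))
    return num / den

def decode_pair(model_sim, brain_sim, i, j):
    """One leave-2-out test. Similarity vectors for test words i and j
    are drawn from the model and fMRI similarity matrices, the entries
    for i and j themselves are removed (they could reveal the answer),
    and the congruent labelling wins if its summed r-to-z transformed
    correlations exceed those of the incongruent labelling."""
    keep = [k for k in range(len(model_sim)) if k not in (i, j)]
    vec = lambda mat, w: [mat[w][k] for k in keep]
    z = lambda a, b: atanh(pearson(a, b))  # Fisher's r to z
    congruent = z(vec(brain_sim, i), vec(model_sim, i)) + \
                z(vec(brain_sim, j), vec(model_sim, j))
    incongruent = z(vec(brain_sim, i), vec(model_sim, j)) + \
                  z(vec(brain_sim, j), vec(model_sim, i))
    return congruent > incongruent
```

Repeating `decode_pair` over all word pairs and taking the mean success rate gives the final decoding accuracy, exactly as in the procedure above.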
{
"text": "Fisher's r to z transform (arctanh) is typically used to test for differences between correlation coefficients. It transforms the correlation coefficient r to a value z, where z has amplified values at the tails of the correlation coefficient (r otherwise ranges between -1 and 1). This is to make the sampling distribution of z normally distributed, with approximately constant variance values across the population correlation coefficient. In the similarity-decoding method used here, z is evaluated in decoding because it is a more principled metric to compare and combine (as later undertaken in Section 4.1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representational similarity-based decoding of brain activity",
"sec_num": "4"
},
{
"text": "However, under most circumstances r to z is not critical to the procedure. z noticeably differs from r only when correlations exceed .5, and r to z changes decoding behaviour in select circumstances. Specifically r to z can influence how word labels are as- signed to similarity vectors by upweighting high value correlation coefficients at the final stage of decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representational similarity-based decoding of brain activity",
"sec_num": "4"
},
{
"text": "A hypothetical scenario to illustrate the above point is as follows. Let Pearson(X,Y) denote Pearson's correlation of vectors X and Y, and brainA correspond to a brain similarity vector \"A\" for an unknown word label, and model1 to a semantic model similarity vector for a known word label \"1\". In the final stage of analysis, there are two decoding alternatives given by (i) Pearson(brainA,model2)=.9 and Pearson(brainB,model1)=.9, which when summed gives 1.8; (ii) Pearson(brainA,model1)=.89, Pearson(brainB,model2)=.91. Here the sum is also 1.8 and therefore (i) and (ii) are identical. Applying the r to z transform would result in selection of (ii) because arctanh(.9)+arctanh(.9)=2.94, whereas arctanh(.89)+arctanh(.91)=2.95.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representational similarity-based decoding of brain activity",
"sec_num": "4"
},
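The tie-breaking arithmetic in this hypothetical scenario is easy to verify directly; the values below are the ones from the text:

```python
from math import atanh

# Raw correlation sums for the two labellings tie at 1.8:
raw_i = 0.9 + 0.9       # labelling (i)
raw_ii = 0.89 + 0.91    # labelling (ii)

# After Fisher's r to z (arctanh), tail values are upweighted
# and the tie breaks in favour of (ii):
z_i = atanh(0.9) + atanh(0.9)      # ~2.94
z_ii = atanh(0.89) + atanh(0.91)   # ~2.95
```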
{
"text": "Statistical significance of decoding accuracy was determined by permutation testing. Decoding was repeated multiple times using the following procedure: creating a vector of word-label indices and randomly shuffling these indices; applying the vector of shuffled indices to reorder both rows and columns of only one of the similarity matrices (whilst keeping the original correct row/column labels so that word-labels now mismatch matrix contents); and repeating the entire pair-matching decoding procedure described above. If word labels are randomly assigned to similarity vectors, we expect a chance-level decoding accuracy of 50%. Repetition of this process (here 10,000 repeats) supplies a null distribution of decoding accuracies achieved by chance. The p-value of decoding accuracy is calculated as the proportion of chance accuracies that are greater than or equal to the observed decoding accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representational similarity-based decoding of brain activity",
"sec_num": "4"
},
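The permutation test above can be sketched as follows. A minimal sketch: the `decode_accuracy` callback is a hypothetical stand-in for rerunning the full pair-matching decoding with the rows and columns of one similarity matrix reordered (the paper uses 10,000 repetitions; fewer are shown in the usage example for speed):

```python
import random

def permutation_pvalue(decode_accuracy, n_words, n_perm=10000, seed=0):
    """Permutation test for decoding accuracy. decode_accuracy(order)
    must rerun the full pair-matching decoding with rows and columns
    of ONE similarity matrix reordered by `order`; the identity order
    yields the observed accuracy. Returns the observed accuracy and
    the proportion of chance accuracies >= observed (the p-value)."""
    rng = random.Random(seed)
    observed = decode_accuracy(list(range(n_words)))
    null = []
    for _ in range(n_perm):
        order = list(range(n_words))
        rng.shuffle(order)  # word labels now mismatch matrix contents
        null.append(decode_accuracy(order))
    p = sum(acc >= observed for acc in null) / n_perm
    return observed, p
```

With labels assigned at random, decoding accuracy hovers around the 50% chance level, so a genuinely informative model yields an observed accuracy in the upper tail of the null distribution.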
{
"text": "For permutation testing only, voxel selection was undertaken a single time, per participant, on all 70 words (rather than on 68/70 words in each leave-2out decoding iteration). This was to reduce computation time that would otherwise have been prohibitive. This is very unlikely to have yielded any discernible difference in outcome. Unlike decoding strategies, that involve fitting a classification/encoding model to fMRI data (and are prone to fitting and subsequently decoding fMRI noise), similarity-based decoding does not learn a mapping between semantic-model and fMRI data and is robust to \"double dipping\" giving spurious decoding accuracies (see Kriegeskorte et al. (2009) for problems associated with double dipping).",
"cite_spans": [
{
"start": 656,
"end": 682,
"text": "Kriegeskorte et al. (2009)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Representational similarity-based decoding of brain activity",
"sec_num": "4"
},
{
"text": "As an empirical demonstration, we reran all 21 of our actual (non-permuted) model-based decoding analyses reported in Section 5.2, while selecting voxels using all 70 words (as opposed to leave-2-out voxel selection on 68/70 words). Specifically, decoding analyses were repeated for all 7 model combinations, tested first on all words, then on the most concrete words only, and finally on the most abstract words only. Mean decoding accuracies for the 9 participants obtained with and without leave-2-out voxel selection were compared using paired t-tests. There were no significant differences across all 21 tests. The largest (non-significant) individual difference was t=1.87, p=.09 (2-tailed), and in this case leave-2-out voxel selection gave the higher accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representational similarity-based decoding of brain activity",
"sec_num": "4"
},
{
"text": "To test whether the three semantic models (image-based, and Italian and English text-based) carried complementary information, we combined the models at evaluation time, allowing us to test whether accuracies achieved using model combinations were higher than those achieved with the models in isolation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model combination by ensemble averaging",
"sec_num": "4.1"
},
{
"text": "To combine the different models, we used an ensemble averaging strategy, running the similarity-based decoding analyses described above in parallel with each of the three semantic models. At each leave-2-out test iteration, this gave three arctanh-transformed 2\u00d72 correlation matrices (one per semantic model) that were used to evaluate decoding. Models were combined by summing their respective arctanh-transformed correlation matrices. Evaluation of the resulting 2\u00d72 summation matrix proceeded as before: first summing the two congruent values on the main diagonal of the matrix, then summing the two incongruent values on the counter-diagonal. If the congruent sum was greater than the incongruent sum, decoding was counted a success, otherwise a failure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model combination by ensemble averaging",
"sec_num": "4.1"
},
{
"text": "We split the stimulus nouns into the 35 most concrete and 35 most abstract words according to the behavioural concreteness ratings from Anderson et al. (2014), and ran analyses on all words combined and these two subsets. Due to limitations in word coverage of the semantic models, 'melody' was missing from the abstract words, and 'skeleton-key' and 'police-station' were missing from the most concrete words (hence 67/70 words were analysed).",
"cite_spans": [
{
"start": 136,
"end": 158,
"text": "Anderson et al. (2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Dual coding theory (Paivio, 1971) leads to the following hypotheses: (1) The text-based models will decode the more abstract nouns' neural activity patterns with higher accuracy than the image-based model; (2) both image and text-based models will decode the more concrete nouns' neural activity.",
"cite_spans": [
{
"start": 19,
"end": 33,
"text": "(Paivio, 1971)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hypotheses",
"sec_num": "5.1"
},
{
"text": "We also compared the decoding accuracy for the most concrete nouns achieved using the combined image- and text-based models to the unimodal models in isolation. Whilst previous analyses have observed advantages of multimodal models in describing concrete-noun fMRI data, it is not clear whether this effect will carry over to our noun data set. One reason is that many nouns in our most concrete half are \"less concrete\" than those of previous studies: according to Brysbaert et al. (2014)'s concreteness norms (where words were rated on a scale from 1 to 5), the mean \u00b1 SD rating of the 60 concrete nouns analysed by Mitchell et al. (2008) (and subsequently by Anderson et al. (2015)) is 4.87\u00b1.12, whereas the mean \u00b1 SD of the \"most concrete\" nouns analysed in the current article was significantly smaller at 4.42\u00b1.44 (independent samples t-test: t = 7.4, p < .0001, 2-tailed). A second reason is that the experimental task required participants to imagine a situation associated with the noun, rather than think of object properties. Therefore this analysis was of a more exploratory nature.",
"cite_spans": [
{
"start": 457,
"end": 480,
"text": "Brysbaert et al. (2014)",
"ref_id": "BIBREF11"
},
{
"start": 610,
"end": 632,
"text": "Mitchell et al. (2008)",
"ref_id": "BIBREF27"
},
{
"start": 654,
"end": 676,
"text": "Anderson et al. (2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hypotheses",
"sec_num": "5.1"
},
{
"text": "Decoding analyses were run using the image-based model and Italian and English text-based models in isolation, and also all combinations of these models as described in Section 4. Results are in Figure 3. In this section we use the abbreviations Img for the image-based model, and TXit and TXen for the Italian and English text-based models, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 195,
"end": 203,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Decoding Analysis",
"sec_num": "5.2"
},
{
"text": "In all tests, chance-level decoding accuracy (the expected accuracy if word labelling is random) is 50%. Mean\u00b1SE accuracies across all participants are displayed in the leftmost column of plots for all 7 model combinations. Individual-level results are displayed for only three model combinations to avoid cluttering the graphs (Img only, the combined TXit&TXen, and the combined Img&TXit&TXen). To simplify the following discussion of results, we mainly focus on these three models. The choice to focus on TXit&TXen, rather than the Italian model, was made following the rationale that the language combination would leverage cultural nuances of semantic structure found in the Italian text corpora jointly with the more extensive coverage of the larger English Wikipedia. Although TXit&TXen and TXen tended to produce higher decoding accuracies, there were no significant differences between TXit or TXen tested in isolation and any model combination incorporating them. Mean results are displayed for all model combinations in Figure 3 and key results are tabulated in Table 2.",
"cite_spans": [],
"ref_spans": [
{
"start": 1036,
"end": 1044,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 1078,
"end": 1085,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Decoding Analysis",
"sec_num": "5.2"
},
{
"text": "With respect to hypothesis 1 (an advantage for the text-based models in decoding abstract neural activity patterns), the key difference to observe in Figure 3 is the drop in relative decoding accuracy between the image-based model and the text-based models when decoding the most abstract nouns. The nine participants' mean decoding accuracies for the most abstract nouns were compared between the Img, TXit, TXen and TXit&TXen models using Repeated Measures ANOVA. Combinations of image- and text-based models (e.g. Img&TXen) were not directly relevant to this analysis (because they integrate visual and textual data) and consequently these models were excluded. Bartlett's test was used to verify that there was no evidence against homogeneity of variances prior to analysis (\u03c7\u00b2=1.77, p = .62). The ANOVA indicated a statistically significant difference between models: F(3,24) = 5.06, p < .01. Post hoc comparisons conducted using the Tukey Honest Significant Difference (HSD) test revealed that decoding accuracies achieved using TXen and the [Figure 3 caption: p=.05 lines were empirically estimated as described in Section 4 and apply to decoding an individual's fMRI data (not multiple individuals).]",
"cite_spans": [],
"ref_spans": [
{
"start": 147,
"end": 155,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 1041,
"end": 1048,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "An advantage for the textual model on abstract nouns",
"sec_num": "5.3"
},
{
"text": "All words combined | Most concrete | Most abstract. Img: 67\u00b13%, 7/9 (<.001) | 70\u00b13%, 7/9 (<.001) | 58\u00b14%, 2/9 (.07). TXit&TXen: 76\u00b15%, 7/9 (<.001) | 76\u00b16%, 7/9 (<.001) | 68\u00b15%, 6/9 (<.001). Img&TXit&TXen: 77\u00b15%, 8/9 (<.001) | 77\u00b15%, 8/9 (<.001) | 68\u00b15%, 5/9 (<.001). Table 2: Key decoding accuracies from Section 5.2 (see also Figure 3). Each cell shows mean\u00b1SE decoding accuracy, the number (n) of participants decoded at a level significantly above chance (p<.05), and in round brackets, the cumulative binomial probability of achieving \u2265 n significant results at p=.05.",
"cite_spans": [],
"ref_spans": [
{
"start": 225,
"end": 232,
"text": "Table 2",
"ref_id": null
},
{
"start": 286,
"end": 294,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "All words combined",
"sec_num": null
},
{
"text": "TXit&TXen model were significantly different from (and larger than) Img (both p < .05). There were no other significant differences (including between Img and TXit). One possible reason for the weaker performance of TXit relative to TXen is that Italian Wikipedia is a less rich source of information, being smaller than English Wikipedia (despite presumably containing semantic information that is more relevant to Italian culture).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "All words combined",
"sec_num": null
},
{
"text": "That both image- and text-based models significantly decoded the most concrete nouns is consistent with hypothesis 2. To test for differences between image- and text-based models, mean decoding accuracies for the nine participants on the most concrete nouns were compared for the Img, TXit, TXen and TXit&TXen models using Repeated Measures ANOVA. Combinations of image- and text-based models (e.g. Img&TXen) were not directly relevant to this analysis (because they integrate visual and textual data) and so were excluded. Bartlett's test was used to verify homogeneity of variances prior to analysis (\u03c7\u00b2 = 2.86, p = .41). The ANOVA detected no statistically significant differences between the models: F(3,24) = 1.56, p = .22. Therefore, when decoding the most concrete nouns there was no significant difference in accuracy between the image-based model and any text-based model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Both image and text-based models decode the more concrete nouns",
"sec_num": "5.4"
},
{
"text": "The third, exploratory, test compared the accuracy of the multimodal combination of image- and text-based models to the unimodal models when decoding the more concrete neural activity patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "No overall advantage for multimodal models on the more concrete nouns",
"sec_num": "5.5"
},
{
"text": "For the most concrete words, the highest scoring combination across all models was Img&TXen (mean\u00b1SE=77\u00b14%). Whilst this proved to be significantly greater than Img (t = 3.13, p \u2264 .02, df = 8, 2-tail), it was not significantly greater than TXen (t = .81, p = .44, df = 8, 2-tail). Turning to the analogous case for the Italian models, Img&TXit (mean\u00b1SE=75\u00b14%) was not significantly greater than Img (t = 1.74, p = .12, df = 8, 2-tail), or TXit (t = 1.09, p = .31, df = 8, 2-tail). Therefore, although multimodal combinations returned higher accuracies than the image- and text-based models in isolation (for concrete words), decoding accuracy was not significantly higher than that of either the image- or text-based models alone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "No overall advantage for multimodal models on the more concrete nouns",
"sec_num": "5.5"
},
{
"text": "Previous work decoding neural activity associated with concrete nouns has found image-based models to supply complementary information to text-based models (Anderson et al., 2015). We suggest three reasons why image-based models may have been disadvantaged in the current study compared to these past analyses. Firstly, Anderson et al. focused on fMRI data elicited by unambiguously concrete nouns, whereas the experimental nouns analysed in the current article were mostly intended to be 'less than concrete' (of the seven taxonomic categories investigated, only 'objects/tools' was designed to be unambiguously concrete). Secondly, Anderson et al. used more images to build noun representations (on average 350 images per noun, compared to the 20 used here), and the objects named by the nouns were segmented from the ImageNet images according to bounding boxes. Consequently their visual input may have been less noisy than that from Google Images (which we used because of its wider coverage). Finally, the experimental task of the previous analyses required participants to actively think about the properties of objects, whereas the current data set was elicited as participants imagined situations associated with nouns (and hence may have invoked neural representations with more contextual elements).",
"cite_spans": [
{
"start": 155,
"end": 178,
"text": "(Anderson et al., 2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "No overall advantage for multimodal models on the more concrete nouns",
"sec_num": "5.5"
},
{
"text": "The lack of a significant increase in decoding accuracy achieved by pairing image- and text-based models suggests that the text-based model already contained many aspects of the visual semantic structure found in the image-based model. Of course we expect modal structure in text-based models commensurate with what people are inclined to report in writing; e.g., it is easy to convey in text that both bananas and lemons are yellow and curvy, and that lightbulbs and pears have similar shapes. We would therefore anticipate correspondences in semantic similarities between image- and text-based models, and for these correspondences to extend to match neural similarities, e.g., those induced by participants viewing pictures of objects (Carlson et al., 2014).",
"cite_spans": [
{
"start": 729,
"end": 751,
"text": "(Carlson et al., 2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "No overall advantage for multimodal models on the more concrete nouns",
"sec_num": "5.5"
},
{
"text": "The similarity-based decoding approach we have applied enables group-level neural representations to be built simply by taking the mean similarity matrix over participants. Values in the correlation matrices were r-to-z (arctanh) transformed prior to averaging, and the averaged values were then transformed back to the original range using tanh. This was because averaging z-transformed values (and back-transforming) tends to yield less biased estimates of the population value than averaging the raw coefficients (Silver and Dunlap, 1987). However, in the current analysis, results obtained with and without the z-transformation were virtually identical.",
"cite_spans": [
{
"start": 509,
"end": 534,
"text": "(Silver and Dunlap, 1987)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Group-level decoding analysis",
"sec_num": "5.6"
},
{
"text": "Building group-level representations by averaging correlation matrices side-steps potential problems with the obvious alternative method of averaging data in fMRI space, where anatomical/functional differences between different people's brains may result in relatively similar activity patterns being spatially mismatched in the standardised fMRI space. The motivation behind building group-level neural representations is that we might expect these to better match the computational semantic models than individual-level data, because the models are also built at group level, created from the photographs and text of many individuals. However, building group-level neural representations will only be beneficial if there exist group-level commonalities in representational similarity (in which case combining data reduces noise), as opposed to purely individual semantic representational schemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group-level decoding analysis",
"sec_num": "5.6"
},
{
"text": "Accuracies achieved using models to decode the group-level neural similarity matrices are displayed in the final column of the bar charts at the right of Figure 3. Specifically, decoding accuracies were:",
"cite_spans": [],
"ref_spans": [
{
"start": 154,
"end": 162,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Group-level decoding analysis",
"sec_num": "5.6"
},
{
"text": "For all words combined: Img=84.8%, TXit&TXen=96.9% and Img&TXit&TXen=97.3%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group-level decoding analysis",
"sec_num": "5.6"
},
{
"text": "For the most concrete words: Img=87.5%, TXit&TXen=95.8% and Img&TXit&TXen=95.8%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group-level decoding analysis",
"sec_num": "5.6"
},
{
"text": "For the most abstract words: Img=70.2%, TXit&TXen=85.2% and Img&TXit&TXen=84.8%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group-level decoding analysis",
"sec_num": "5.6"
},
{
"text": "To statistically test whether group-level decoding accuracies surpassed those of the individual-level results, we compared the set of individual-level mean accuracies to the corresponding group-level mean accuracy using one-sample t-tests. In all tests (see Table 3) the individual-level accuracies were significantly lower than the group-level accuracy (corrected for multiple comparisons using the false discovery rate (Benjamini and Hochberg, 1995)). This is indicative of group-level regularities in semantic similarity for both concrete and abstract nouns, and for their combination.",
"cite_spans": [
{
"start": 426,
"end": 456,
"text": "(Benjamini and Hochberg, 1995)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 255,
"end": 262,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Group-level decoding analysis",
"sec_num": "5.6"
},
{
"text": "A qualitative observation is that the differences between group and individual-level accuracy appear to be greater for concrete nouns. This could be consistent with participants having a more subjective semantic representation of abstract nouns; however we did not attempt to statistically test this claim. This is because a meaningful comparison would require concrete and abstract words to be controlled by being at least equally discriminable at individual level and this does not appear to be the case with this dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group-level decoding analysis",
"sec_num": "5.6"
},
{
"text": "This article has demonstrated that neural activity patterns elicited as participants imagined situations associated with abstract nouns can be decoded using text-based computational semantic models, showing that computational semantic models can contribute to interpreting the semantic structure of neural activity patterns associated with abstract nouns. Furthermore, by comparing how well visually grounded and textual semantic models de- All words combined | Most concrete | Most abstract. Img: -5.6 (.004) | -5.2 (.004) | -3.0 (.02). TXit&TXen: -4.2 (.007) | -3.6 (.010) | -3.4 (.01). Img&TXit&TXen: -4.4 (.007) | -3.9 (.008) | -3.4 (.01). Table 3: Results of one-sample t-tests comparing the set of individual-level mean decoding accuracies to the group-level accuracy (see Section 5.6). All tests were 2-tailed with df=8. The first number in each cell is the t-statistic; the second number, in round brackets, is the p-value (corrected according to false discovery rate).",
"cite_spans": [],
"ref_spans": [
{
"start": 612,
"end": 619,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "code brain activity associated with concrete or abstract nouns, we have observed a selective advantage for textual over visual models in decoding the more abstract nouns. This has therefore provided initial model-based brain decoding evidence that is broadly in line with the predictions of dual coding theory (Paivio, 1971). However, results should be interpreted in light of the following two factors. First, the dataset analysed was for a small sample of 67 words, and it is reasonable to conjecture that some of these words are also encoded in modalities other than vision and language. For example, musical words may be encoded in acoustic and motor features (see also Fernandino et al. (2015)). Future work will be necessary to verify that the findings generalise more broadly to words from domains beyond law and music. In work in progress, the authors are undertaking more focused analyses of the current dataset, using textual, visual and newly developed audio semantic models (Kiela and Clark, 2015) to tease apart the linguistic, visual and acoustic contributions to semantic representation, and how these vary across different regions of the brain.",
"cite_spans": [
{
"start": 310,
"end": 324,
"text": "(Paivio, 1971)",
"ref_id": "BIBREF29"
},
{
"start": 675,
"end": 699,
"text": "Fernandino et al. (2015)",
"ref_id": "BIBREF18"
},
{
"start": 986,
"end": 1009,
"text": "(Kiela and Clark, 2015)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "A second limitation of the current approach, as pointed out by a reviewer, is that the Google image search algorithm (the workings of which are unknown to the authors) may not perform as well for abstract words as it does for concrete words. Consequently, the visual model may have been handicapped compared to the textual model when decoding neural representations associated with more abstract words. We have no current measure of the degree of this effect, but it may be possible to alleviate it in future work, by having participants manually select images that they associate with abstract stimulus words, and using computational representations derived from these images in the analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Secondary results are that we have exploited representational similarity space to build group-level neural representations which better match our inherently group-level computational semantic models. In so doing, this exposes group-level commonalities in neural representation for both concrete and abstract words. Such group-level representations may prove both a useful test-bed for evaluating computational semantic models and a potentially useful information source to incorporate into computational models (see Fyshe et al. (2014) for related work).",
"cite_spans": [
{
"start": 525,
"end": 544,
"text": "Fyshe et al. (2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Finally we have demonstrated that English and Italian text-based models are roughly interchangeable in our neural decoding task. That the English text-based model tended to return marginally higher results on our Italian brain data than the Italian model provides a cautionary note for future studies wishing to use semantic models from different languages to identify culturally specific aspects of neural semantic representation e.g., as a follow up to . However we also note that the English Wikipedia data was larger than the corresponding Italian corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Transactions of the Association for Computational Linguistics, vol. 5, pp. 17-30, 2017. Action Editor: Daichi Mochihashi.Submission batch: 2/2016; Revision batch: 7/2016; Published 1/2017. c 2017 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "www.google.com/imghp",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank three anonymous reviewers for their insightful comments and suggestions, Brian Murphy for his involvement in the configuration, collection and preprocessing of the original dataset, and Marco Baroni and Elia Bruni for early conversations on some of the ideas presented. Stephen Clark is supported by ERC Starting Grant DisCoTex (306920).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Of words, eyes and brains: Correlating image-based distributional semantic models with neural representations of concepts",
"authors": [
{
"first": "A",
"middle": [
"J"
],
"last": "Anderson",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Bordignon",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1960--1970",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. J. Anderson, E. Bruni, U. Bordignon, M. Poesio, and M. Baroni. 2013. Of words, eyes and brains: Correlating image-based distributional semantic models with neural representations of concepts. In Proceedings of EMNLP, pages 1960-1970, Seattle, WA.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Discriminating taxonomic categories and domains in mental simulations of concepts of varying concreteness",
"authors": [
{
"first": "A",
"middle": [
"J"
],
"last": "Anderson",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2014,
"venue": "J. Cognitive Neuroscience",
"volume": "26",
"issue": "3",
"pages": "658--681",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. J. Anderson, B. Murphy, and M. Poesio. 2014. Discriminating taxonomic categories and domains in mental simulations of concepts of varying concreteness. J. Cognitive Neuroscience, 26(3):658-681.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text",
"authors": [
{
"first": "A",
"middle": [
"J"
],
"last": "Anderson",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lopopolo",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2015,
"venue": "NeuroImage",
"volume": "120",
"issue": "",
"pages": "309--322",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. J. Anderson, E. Bruni, A. Lopopolo, M. Poesio, and M. Baroni. 2015. Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text. NeuroImage, 120:309-322.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Representational similarity encoding for fMRI: Pattern-based synthesis to predict brain activity using stimulus-model-similarities",
"authors": [
{
"first": "A",
"middle": [
"J"
],
"last": "Anderson",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Zinszer",
"suffix": ""
},
{
"first": "R",
"middle": [
"D S"
],
"last": "Raizada",
"suffix": ""
}
],
"year": 2016,
"venue": "NeuroImage",
"volume": "128",
"issue": "",
"pages": "44--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. J. Anderson, B. D. Zinszer, and R. D. S. Raizada. 2016. Representational similarity encoding for fMRI: Pattern-based synthesis to predict brain activity using stimulus-model-similarities. NeuroImage, 128:44-53.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Integrating experiential and distributional data to learn semantic representations",
"authors": [
{
"first": "M",
"middle": [],
"last": "Andrews",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Vigliocco",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Vinson",
"suffix": ""
}
],
"year": 2009,
"venue": "Psychological Review",
"volume": "116",
"issue": "3",
"pages": "463--498",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Andrews, G. Vigliocco, and D. Vinson. 2009. Integrating experiential and distributional data to learn semantic representations. Psychological Review, 116(3):463-498.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Word naming times and psycholinguistic norms for Italian nouns",
"authors": [
{
"first": "L",
"middle": [],
"last": "Barca",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Burani",
"suffix": ""
},
{
"first": "L",
"middle": [
"S"
],
"last": "Arduino",
"suffix": ""
}
],
"year": 2002,
"venue": "Behavior Research Methods, Instruments, & Computers",
"volume": "34",
"issue": "",
"pages": "424--434",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Barca, C. Burani, and L. S. Arduino. 2002. Word naming times and psycholinguistic norms for Italian nouns. Behavior Research Methods, Instruments, & Computers, 34:424-434.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Controlling the false discovery rate: A practical and powerful approach to multiple testing",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Benjamini",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Hochberg",
"suffix": ""
}
],
"year": 1995,
"venue": "Journal of the Royal Statistical Society, Series B (Methodological)",
"volume": "57",
"issue": "1",
"pages": "289--300",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Benjamini and Y. Hochberg. 1995. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B (Methodological), 57(1):289-300.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Revising the WordNet Domains Hierarchy: Semantics, coverage, and balancing",
"authors": [
{
"first": "L",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Forner",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Magnini",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Pianta",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Workshop on Multilingual Linguistic Resources",
"volume": "",
"issue": "",
"pages": "101--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Bentivogli, P. Forner, B. Magnini, and E. Pianta. 2004. Revising the WordNet Domains Hierarchy: Semantics, coverage, and balancing. In Proceedings of the Workshop on Multilingual Linguistic Resources, pages 101-108, Geneva, Switzerland.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning bilingual lexicons using the visual similarity of labeled web images",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2011,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "1764--1769",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Bergsma and B. Van Durme. 2011. Learning bilingual lexicons using the visual similarity of labeled web images. In IJCAI, pages 1764-1769.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Similarity of fMRI activity patterns in left perirhinal cortex reflects similarity between words",
"authors": [
{
"first": "R",
"middle": [],
"last": "Bruffaerts",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Dupont",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Peeters",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "De Deyne",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Storms",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Vandenberghe",
"suffix": ""
}
],
"year": 2013,
"venue": "J. Neuroscience",
"volume": "33",
"issue": "47",
"pages": "18597--18607",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Bruffaerts, P. Dupont, R. Peeters, S. De Deyne, G. Storms, and R. Vandenberghe. 2013. Similar- ity of fMRI activity patterns in left perirhinal cortex reflects similarity between words. J. Neuroscience, 33(47):18597-18607.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Multimodal distributional semantics",
"authors": [
{
"first": "E",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "N",
"middle": [
"K"
],
"last": "Tran",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Artifical Intelligence Research",
"volume": "49",
"issue": "",
"pages": "1--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Bruni, N. K. Tran, and M. Baroni. 2014. Multimodal distributional semantics. Journal of Artifical Intelli- gence Research, 49:1-47.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Concreteness ratings for 40 thousand generally known English word lemmas. Behavior research methods",
"authors": [
{
"first": "M",
"middle": [],
"last": "Brysbaert",
"suffix": ""
},
{
"first": "A",
"middle": [
"B"
],
"last": "Warriner",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Kuperman",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "46",
"issue": "",
"pages": "904--911",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Brysbaert, A. B. Warriner, and V. Kuperman. 2014. Concreteness ratings for 40 thousand generally known English word lemmas. Behavior research methods, 46(3):904-911.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The emergence of semantic meaning in the ventral temporal pathway",
"authors": [
{
"first": "T",
"middle": [
"A"
],
"last": "Carlson",
"suffix": ""
},
{
"first": "R",
"middle": [
"A"
],
"last": "Simmons",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Kriegeskorte",
"suffix": ""
}
],
"year": 2014,
"venue": "J. Cognitive Neuroscience",
"volume": "26",
"issue": "1",
"pages": "120--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. A. Carlson, R.A. Simmons, and N. Kriegeskorte. 2014. The emergence of semantic meaning in the ventral temporal pathway. J. Cognitive Neuroscience, 26(1):120-131.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Quantitative modeling of the neural representations of objects: How semantic feature norms can account for fMRI activation",
"authors": [
{
"first": "K",
"middle": [
"M"
],
"last": "Chang",
"suffix": ""
},
{
"first": "T",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Just",
"suffix": ""
}
],
"year": 2010,
"venue": "NeuroImage: Special Issue on Multivariate Decoding and Brain Reading",
"volume": "56",
"issue": "",
"pages": "716--727",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. M. Chang, T. M. Mitchell, and M. A. Just. 2010. Quantitative modeling of the neural representations of objects: How semantic feature norms can account for fMRI activation. NeuroImage: Special Issue on Mul- tivariate Decoding and Brain Reading, 56:716-727.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Named entity recognition using cross-lingual resources: Arabic as an example",
"authors": [
{
"first": "K",
"middle": [],
"last": "Darwish",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "1558--1567",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Darwish. 2013. Named entity recognition using cross-lingual resources: Arabic as an example. In Proc. ACL, pages 1558-1567.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Using fMRI activation to conceptual stimuli to evaluate methods for extracting conceptual representations from corpora",
"authors": [
{
"first": "B",
"middle": [],
"last": "Devereux",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Kelly",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT First Workshop on Computational Neurolinguistics",
"volume": "",
"issue": "",
"pages": "70--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Devereux, C. Kelly, and A. Korhonen. 2010. Us- ing fMRI activation to conceptual stimuli to evalu- ate methods for extracting conceptual representations from corpora. In Proceedings of the NAACL HLT First Workshop on Computational Neurolinguistics, pages 70-78, Los Angeles, USA.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "WordNet: An Electronic Database",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Fellbaum, editor. 1998. WordNet: An Electronic Database. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning object categories from Google's image search",
"authors": [
{
"first": "R",
"middle": [],
"last": "Fergus",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Perona",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2005,
"venue": "ICCV",
"volume": "",
"issue": "",
"pages": "1816--1823",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Fergus, L. Fei-Fei, P. Perona, and A. Zisserman. 2005. Learning object categories from Google's im- age search. In ICCV, pages 1816-1823.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Prediction of brain activation patterns associated with individual lexical concepts based on five sensory-motor attributes",
"authors": [
{
"first": "L",
"middle": [],
"last": "Fernandino",
"suffix": ""
},
{
"first": "C",
"middle": [
"J"
],
"last": "Humphries",
"suffix": ""
},
{
"first": "M",
"middle": [
"S"
],
"last": "Seidenberg",
"suffix": ""
},
{
"first": "W",
"middle": [
"L"
],
"last": "Gross",
"suffix": ""
},
{
"first": "L",
"middle": [
"L"
],
"last": "Conant",
"suffix": ""
},
{
"first": "J",
"middle": [
"R"
],
"last": "Binder",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/j.neuropsychologia.2015.04.009"
]
},
"num": null,
"urls": [],
"raw_text": "L. Fernandino, C. J. Humphries, M. S. Seidenberg, W. L. Gross, L. L. Conant, and J. R. Binder. 2015. Prediction of brain activation patterns as- sociated with individual lexical concepts based on five sensory-motor attributes. Neuropsycholigia. doi:10.1016/j.neuropsychologia.2015.04.009.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Interpretable semantic vectors from a joint model of brain-and text-based meaning",
"authors": [
{
"first": "A",
"middle": [],
"last": "Fyshe",
"suffix": ""
},
{
"first": "P",
"middle": [
"P"
],
"last": "Talukdar",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "T",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "489--499",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Fyshe, P. P. Talukdar, B. Murphy, and T. M. Mitchell. 2014. Interpretable semantic vectors from a joint model of brain-and text-based meaning. In Proceed- ings of ACL, pages 489-499, Baltimore, MD.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Caffe: Convolutional architecture for fast feature embedding",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shelhamer",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Donahue",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Karayev",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Long",
"suffix": ""
},
{
"first": "R",
"middle": [
"B"
],
"last": "Girshick",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Guadarrama",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Darrell",
"suffix": ""
}
],
"year": 2014,
"venue": "ACM Multimedia",
"volume": "",
"issue": "",
"pages": "675--678",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. B. Girshick, S. Guadarrama, and T. Darrell. 2014. Caffe: Convolutional architecture for fast feature em- bedding. In ACM Multimedia, pages 675-678.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning image embeddings using convolutional neural networks for improved multi-modal semantics",
"authors": [
{
"first": "D",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Bottou",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "36--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Kiela and L. Bottou. 2014. Learning image em- beddings using convolutional neural networks for im- proved multi-modal semantics. In Proceedings of EMNLP, pages 36-45, Doha, Qatar.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Multi-and cross-modal semantics beyond vision: Grounding in auditory perception",
"authors": [
{
"first": "D",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Empirical Methods in Natural Language Processing Conference (EMNLP 2015)",
"volume": "",
"issue": "",
"pages": "2461--2470",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Kiela and S. Clark. 2015. Multi-and cross-modal semantics beyond vision: Grounding in auditory per- ception. In Proceedings of the Empirical Methods in Natural Language Processing Conference (EMNLP 2015), pages 2461-2470, Lisbon, Portugal.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Improving multi-modal representations using image dispersion: Why less is sometimes more",
"authors": [
{
"first": "D",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Kiela, F. Hill, A. Korhonen, and S. Clark. 2014. Im- proving multi-modal representations using image dis- persion: Why less is sometimes more. In Proceedings of ACL 2014.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Circular analysis in systems neuroscience: The dangers of double dipping",
"authors": [
{
"first": "N",
"middle": [],
"last": "Kriegeskorte",
"suffix": ""
},
{
"first": "W",
"middle": [
"K"
],
"last": "Simmons",
"suffix": ""
},
{
"first": "P",
"middle": [
"S F"
],
"last": "Bellgowan",
"suffix": ""
},
{
"first": "C",
"middle": [
"I"
],
"last": "Baker",
"suffix": ""
}
],
"year": 2009,
"venue": "Nature Neuroscience",
"volume": "12",
"issue": "",
"pages": "535--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Kriegeskorte, W. K. Simmons, P. S. F. Bellgowan, and C. I. Baker. 2009. Circular analysis in systems neuro- science: The dangers of double dipping. Nature Neu- roscience, 12:535-540.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Deep neural networks: A new framework for modeling biological vision and brain information processing",
"authors": [
{
"first": "N",
"middle": [],
"last": "Kriegeskorte",
"suffix": ""
}
],
"year": 2015,
"venue": "Annual Review of Vision Science",
"volume": "1",
"issue": "",
"pages": "417--446",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Kriegeskorte. 2015. Deep neural networks: A new framework for modeling biological vision and brain information processing. Annual Review of Vision Sci- ence, 1:417-446.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Mikolov, K. Chen, G. Corrado, and J. Dean. 2013. Efficient estimation of word representations in vector space. In Proceedings of ICLR, Scottsdale, Arizona, USA.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Predicting human brain activity associated with the meaning of nouns",
"authors": [
{
"first": "M",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "S",
"middle": [
"V"
],
"last": "Shinkareva",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "K.-M",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "V",
"middle": [
"L"
],
"last": "Malave",
"suffix": ""
},
{
"first": "R",
"middle": [
"A"
],
"last": "Mason",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Just",
"suffix": ""
}
],
"year": 2008,
"venue": "Science",
"volume": "320",
"issue": "",
"pages": "1191--1195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Mitchell, S. V. Shinkareva, A. Carlson, K.-M. Chang, V. L. Malave, R. A. Mason, and M. A. Just. 2008. Predicting human brain activity associated with the meaning of nouns. Science, 320:1191-1195.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Selecting corpus-semantic models for neurolinguistic decoding",
"authors": [
{
"first": "B",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Talukdar",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM)",
"volume": "",
"issue": "",
"pages": "114--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Murphy, P. Talukdar, and T. Mitchell. 2012. Selecting corpus-semantic models for neurolinguistic decoding. In Proceedings of the First Joint Conference on Lexi- cal and Computational Semantics (*SEM), pages 114- 123, Montreal, Canada.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Imagery and verbal processes",
"authors": [
{
"first": "A",
"middle": [],
"last": "Paivio",
"suffix": ""
}
],
"year": 1971,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A Paivio, editor. 1971. Imagery and verbal processes. Holt, Rinehart, and Winston, New York.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Zero-shot learning with semantic output codes",
"authors": [
{
"first": "M",
"middle": [],
"last": "Palatucci",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Pomerleau",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2009,
"venue": "Neural Information Processing Systems",
"volume": "22",
"issue": "",
"pages": "1410--1418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Palatucci, D. Pomerleau, G. Hinton, and T. Mitchell. 2009. Zero-shot learning with semantic output codes. Neural Information Processing Systems, 22:1410- 1418.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Using Wikipedia to learn semantic feature representations of concrete concepts in neuroimaging experiments",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Botvinick",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Detre",
"suffix": ""
}
],
"year": 2013,
"venue": "Artif. Intell",
"volume": "194",
"issue": "",
"pages": "240--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Pereira, M. Botvinick, and G. Detre. 2013. Using Wikipedia to learn semantic feature representations of concrete concepts in neuroimaging experiments. Artif. Intell., 194:240-252.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "CNN features off-the-shelf: An astounding baseline for recognition",
"authors": [
{
"first": "A",
"middle": [
"S"
],
"last": "Razavian",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Azizpour",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Sullivan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Carlsson",
"suffix": ""
}
],
"year": 2014,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition Workshops",
"volume": "",
"issue": "",
"pages": "512--519",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carls- son. 2014. CNN features off-the-shelf: An astound- ing baseline for recognition. In IEEE Conference on Computer Vision and Pattern Recognition Workshops 2014, pages 512-519.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Mining wiki resources for multilingual named entity recognition",
"authors": [
{
"first": "A",
"middle": [
"E"
],
"last": "Richman",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Schone",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. E. Richman and P. Schone. 2008. Mining wiki re- sources for multilingual named entity recognition. In Proc. ACL.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Crosslanguage text classification by model translation and semi-supervised learning",
"authors": [
{
"first": "L",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Tian",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shi, R. Mihalcea, and M. Tian. 2010. Cross- language text classification by model translation and semi-supervised learning. In Proc. EMNLP.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Averaging correlation coefficients: Should Fisher's z transformation be used?",
"authors": [
{
"first": "N",
"middle": [
"C"
],
"last": "Silver",
"suffix": ""
},
{
"first": "W",
"middle": [
"P"
],
"last": "Dunlap",
"suffix": ""
}
],
"year": 1987,
"venue": "J. Applied Psychology",
"volume": "72",
"issue": "1",
"pages": "146--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. C. Silver and W. P. Dunlap. 1987. Averaging correla- tion coefficients: Should Fisher's z transformation be used? J. Applied Psychology, 72(1):146-148.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Video google: A text retrieval approach to object matching in videos",
"authors": [
{
"first": "J",
"middle": [],
"last": "Sivic",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2003,
"venue": "ICCV",
"volume": "",
"issue": "",
"pages": "1470--1477",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Sivic and A. Zisserman. 2003. Video google: A text retrieval approach to object matching in videos. In ICCV, pages 1470-1477.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Tracking neural coding of perceptual and semantic features of concrete nouns",
"authors": [
{
"first": "G",
"middle": [],
"last": "Sudre",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Pomerleau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Palatucci",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Wehbe",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Fyshe",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Salmelin",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2012,
"venue": "NeuroImage",
"volume": "62",
"issue": "",
"pages": "451--463",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Sudre, D. Pomerleau, M. Palatucci, L. Wehbe, A. Fyshe, R. Salmelin, and T. Mitchell. 2012. Track- ing neural coding of perceptual and semantic features of concrete nouns. NeuroImage, 62:451-463.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Content differences for abstract and concrete concepts",
"authors": [
{
"first": "K",
"middle": [],
"last": "Wiemer-Hastings",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2005,
"venue": "Cognitive Science",
"volume": "29",
"issue": "",
"pages": "719--736",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Wiemer-Hastings and X. Xu. 2005. Content differ- ences for abstract and concrete concepts. Cognitive Science, 29:719-736.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Semantic structural alignment of neural representational spaces enables translation between English and Chinese words",
"authors": [
{
"first": "B",
"middle": [
"D"
],
"last": "Zinszer",
"suffix": ""
},
{
"first": "A",
"middle": [
"J"
],
"last": "Anderson",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Wheatley",
"suffix": ""
},
{
"first": "R",
"middle": [
"D S"
],
"last": "Raizada",
"suffix": ""
}
],
"year": 2016,
"venue": "J. Cognitive Neuroscience",
"volume": "28",
"issue": "11",
"pages": "1749--1759",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. D. Zinszer, A. J. Anderson, O. Kang, T. Wheatley, and R. D. S. Raizada. 2016. Semantic structural align- ment of neural representational spaces enables transla- tion between English and Chinese words. J. Cognitive Neuroscience, 28(11):1749-1759.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Representing brain and semantic model vectors in similarity space.",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Similarity-decoding algorithm (adapted from.",
"uris": null,
"num": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Results of the decoding analysis from Section 5.2. See also",
"uris": null,
"num": null
},
"TABREF1": {
"html": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>: Italian stimulus words and English translations, divided into law and music domains (columns), and taxo-nomic categories (groups of 5 rows). The most concrete half of the words are indicated in bold font. Strike-throughs indicate words for which we did not have semantic model coverage.</td></tr></table>",
"num": null
}
}
}
}