{
"paper_id": "D14-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:56:15.487960Z"
},
"title": "Learning Image Embeddings using Convolutional Neural Networks for Improved Multi-Modal Semantics",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": "",
"affiliation": {},
"email": "douwe.kiela@cl.cam.ac.uk"
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We construct multi-modal concept representations by concatenating a skip-gram linguistic representation vector with a visual concept representation vector computed using the feature extraction layers of a deep convolutional neural network (CNN) trained on a large labeled object recognition dataset. This transfer learning approach brings a clear performance gain over features based on the traditional bag-of-visual-word approach. Experimental results are reported on the WordSim353 and MEN semantic relatedness evaluation tasks. We use visual features computed using either ImageNet or ESP Game images.",
"pdf_parse": {
"paper_id": "D14-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "We construct multi-modal concept representations by concatenating a skip-gram linguistic representation vector with a visual concept representation vector computed using the feature extraction layers of a deep convolutional neural network (CNN) trained on a large labeled object recognition dataset. This transfer learning approach brings a clear performance gain over features based on the traditional bag-of-visual-word approach. Experimental results are reported on the WordSim353 and MEN semantic relatedness evaluation tasks. We use visual features computed using either ImageNet or ESP Game images.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent works have shown that multi-modal semantic representation models outperform unimodal linguistic models on a variety of tasks, including modeling semantic relatedness and predicting compositionality (Feng and Lapata, 2010; Leong and Mihalcea, 2011; Bruni et al., 2012; Roller and Schulte im Walde, 2013; . These results were obtained by combining linguistic feature representations with robust visual features extracted from a set of images associated with the concept in question. This extraction of visual features usually follows the popular computer vision approach consisting of computing local features, such as SIFT features (Lowe, 1999) , and aggregating them as bags of visual words (Sivic and Zisserman, 2003) .",
"cite_spans": [
{
"start": 205,
"end": 228,
"text": "(Feng and Lapata, 2010;",
"ref_id": "BIBREF13"
},
{
"start": 229,
"end": 254,
"text": "Leong and Mihalcea, 2011;",
"ref_id": "BIBREF23"
},
{
"start": 255,
"end": 274,
"text": "Bruni et al., 2012;",
"ref_id": "BIBREF6"
},
{
"start": 275,
"end": 309,
"text": "Roller and Schulte im Walde, 2013;",
"ref_id": "BIBREF30"
},
{
"start": 638,
"end": 650,
"text": "(Lowe, 1999)",
"ref_id": "BIBREF26"
},
{
"start": 698,
"end": 725,
"text": "(Sivic and Zisserman, 2003)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Meanwhile, deep transfer learning techniques have gained considerable attention in the computer vision community. First, a deep convolutional neural network (CNN) is trained on a large labeled dataset (Krizhevsky et al., 2012) . The convolutional layers are then used as mid-level feature extractors on a variety of computer vision tasks (Oquab et al., 2014; Girshick et al., 2013; Zeiler and Fergus, 2013; Donahue et al., 2014) . Although transferring convolutional network features is not a new idea (Driancourt and Bottou, 1990) , the simultaneous availability of large datasets and cheap GPU co-processors has contributed to the achievement of considerable performance gains on a variety computer vision benchmarks: \"SIFT and HOG descriptors produced big performance gains a decade ago, and now deep convolutional features are providing a similar breakthrough\" (Razavian et al., 2014) .",
"cite_spans": [
{
"start": 201,
"end": 226,
"text": "(Krizhevsky et al., 2012)",
"ref_id": "BIBREF20"
},
{
"start": 338,
"end": 358,
"text": "(Oquab et al., 2014;",
"ref_id": "BIBREF28"
},
{
"start": 359,
"end": 381,
"text": "Girshick et al., 2013;",
"ref_id": "BIBREF17"
},
{
"start": 382,
"end": 406,
"text": "Zeiler and Fergus, 2013;",
"ref_id": "BIBREF40"
},
{
"start": 407,
"end": 428,
"text": "Donahue et al., 2014)",
"ref_id": "BIBREF10"
},
{
"start": 502,
"end": 531,
"text": "(Driancourt and Bottou, 1990)",
"ref_id": "BIBREF11"
},
{
"start": 865,
"end": 888,
"text": "(Razavian et al., 2014)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This work reports on results obtained by using CNN-extracted features in multi-modal semantic representation models. These results are interesting in several respects. First, these superior features provide the opportunity to increase the performance gap achieved by augmenting linguistic features with multi-modal features. Second, this increased performance confirms that the multimodal performance improvement results from the information contained in the images and not the information used to select which images to use to represent a concept. Third, our evaluation reveals an intriguing property of the CNN-extracted features. Finally, since we use the skip-gram approach of to generate our linguistic features, we believe that this work represents the first approach to multimodal distributional semantics that exclusively relies on deep learning for both its linguistic and visual components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Multi-modal models are motivated by parallels with human concept acquisition. Standard se-mantic space models extract meanings solely from linguistic data, even though we know that human semantic knowledge relies heavily on perceptual information (Louwerse, 2011) . That is, there exists substantial evidence that many concepts are grounded in the perceptual system (Barsalou, 2008) . One way to do this grounding in the context of distributional semantics is to obtain representations that combine information from linguistic corpora with information from another modality, obtained from e.g. property norming experiments (Silberer and Lapata, 2012; Roller and Schulte im Walde, 2013) or from processing and extracting features from images (Feng and Lapata, 2010; Leong and Mihalcea, 2011; Bruni et al., 2012) . This approach has met with quite some success .",
"cite_spans": [
{
"start": 247,
"end": 263,
"text": "(Louwerse, 2011)",
"ref_id": "BIBREF25"
},
{
"start": 366,
"end": 382,
"text": "(Barsalou, 2008)",
"ref_id": "BIBREF2"
},
{
"start": 623,
"end": 650,
"text": "(Silberer and Lapata, 2012;",
"ref_id": "BIBREF32"
},
{
"start": 651,
"end": 685,
"text": "Roller and Schulte im Walde, 2013)",
"ref_id": "BIBREF30"
},
{
"start": 741,
"end": 764,
"text": "(Feng and Lapata, 2010;",
"ref_id": "BIBREF13"
},
{
"start": 765,
"end": 790,
"text": "Leong and Mihalcea, 2011;",
"ref_id": "BIBREF23"
},
{
"start": 791,
"end": 810,
"text": "Bruni et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work 2.1 Multi-Modal Distributional Semantics",
"sec_num": "2"
},
{
"text": "Other examples that apply multi-modal deep learning use restricted Boltzmann machines (Srivastava and Salakhutdinov, 2012; Feng et al., 2013) , auto-encoders (Wu et al., 2013) or recursive neural networks (Socher et al., 2014) . Multimodal models with deep learning components have also successfully been employed in crossmodal tasks (Lazaridou et al., 2014) . Work that is closely related in spirit to ours is by Silberer and Lapata (2014) . They use a stacked auto-encoder to learn combined embeddings of textual and visual input. Their visual inputs consist of vectors of visual attributes obtained from learning SVM classifiers on attribute prediction tasks. In contrast, our work keeps the modalities separate and follows the standard multi-modal approach of concatenating linguistic and visual representations in a single semantic space model. This has the advantage that it allows for separate data sources for the individual modalities. We also learn visual representations directly from the images (i.e., we apply deep learning directly to the images), as opposed to taking a higher-level representation as a starting point. Frome et al. (2013) jointly learn multimodal representations as well, but apply them to a visual object recognition task instead of concept meaning.",
"cite_spans": [
{
"start": 86,
"end": 122,
"text": "(Srivastava and Salakhutdinov, 2012;",
"ref_id": "BIBREF36"
},
{
"start": 123,
"end": 141,
"text": "Feng et al., 2013)",
"ref_id": "BIBREF14"
},
{
"start": 158,
"end": 175,
"text": "(Wu et al., 2013)",
"ref_id": "BIBREF39"
},
{
"start": 205,
"end": 226,
"text": "(Socher et al., 2014)",
"ref_id": "BIBREF35"
},
{
"start": 334,
"end": 358,
"text": "(Lazaridou et al., 2014)",
"ref_id": "BIBREF21"
},
{
"start": 414,
"end": 440,
"text": "Silberer and Lapata (2014)",
"ref_id": "BIBREF33"
},
{
"start": 1134,
"end": 1153,
"text": "Frome et al. (2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-modal Deep Learning",
"sec_num": "2.2"
},
{
"text": "A flurry of recent results indicates that image descriptors extracted from deep convolutional neural networks (CNNs) are very powerful and consistently outperform highly tuned state-of-the-art systems on a variety of visual recognition tasks (Razavian et al., 2014) . Embeddings from stateof-the-art CNNs (such as Krizhevsky et al. (2012) ) have been applied successfully to a number of problems in computer vision (Girshick et al., 2013; Zeiler and Fergus, 2013; Donahue et al., 2014) . This contribution follows the approach described by Oquab et al. (2014) : they train a CNN on 1512 ImageNet synsets (Deng et al., 2009) , use the first seven layers of the trained network as feature extractors on the Pascal VOC dataset, and achieve state-of-the-art performance on the Pascal VOC classification task.",
"cite_spans": [
{
"start": 242,
"end": 265,
"text": "(Razavian et al., 2014)",
"ref_id": "BIBREF29"
},
{
"start": 314,
"end": 338,
"text": "Krizhevsky et al. (2012)",
"ref_id": "BIBREF20"
},
{
"start": 415,
"end": 438,
"text": "(Girshick et al., 2013;",
"ref_id": "BIBREF17"
},
{
"start": 439,
"end": 463,
"text": "Zeiler and Fergus, 2013;",
"ref_id": "BIBREF40"
},
{
"start": 464,
"end": 485,
"text": "Donahue et al., 2014)",
"ref_id": "BIBREF10"
},
{
"start": 540,
"end": 559,
"text": "Oquab et al. (2014)",
"ref_id": "BIBREF28"
},
{
"start": 604,
"end": 623,
"text": "(Deng et al., 2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Convolutional Neural Networks",
"sec_num": "2.3"
},
{
"text": "3 Improving Multi-Modal Representations Figure 1 illustrates how our system computes multi-modal semantic representations.",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 48,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Deep Convolutional Neural Networks",
"sec_num": "2.3"
},
{
"text": "The perceptual component of standard multimodal models that rely on visual data is often an instance of the bag-of-visual-words (BOVW) representation (Sivic and Zisserman, 2003) . This approach takes a collection of images associated with words or tags representing the concept in question. For each image, keypoints are laid out as a dense grid. Each keypoint is represented by a vector of robust local visual features such as SIFT (Lowe, 1999), SURF (Bay et al., 2008) and HOG (Dalal and Triggs, 2005) , as well as pyramidal variants of these descriptors such as PHOW (Bosch et al., 2007) . These descriptors are subsequently clustered into a discrete set of \"visual words\" using a standard clustering algorithm like k-means and quantized into vector representations by comparing the local descriptors with the cluster centroids. Visual representations are obtained by taking the average of the BOVW vectors for the images that correspond to a given word. We use BOVW as a baseline. Our approach similarly makes use of a collection of images associated with words or tags representing a particular concept. Each image is processed by the first seven layers of the convolutional network defined by Krizhevsky et al. (2012) and adapted by Oquab et al. (2014) 1 . This network takes 224 \u00d7 224 pixel RGB images and applies five successive convolutional layers followed by three fully connected layers. Its eighth and last",
"cite_spans": [
{
"start": 150,
"end": 177,
"text": "(Sivic and Zisserman, 2003)",
"ref_id": "BIBREF34"
},
{
"start": 452,
"end": 470,
"text": "(Bay et al., 2008)",
"ref_id": "BIBREF3"
},
{
"start": 479,
"end": 503,
"text": "(Dalal and Triggs, 2005)",
"ref_id": "BIBREF8"
},
{
"start": 570,
"end": 590,
"text": "(Bosch et al., 2007)",
"ref_id": "BIBREF5"
},
{
"start": 1199,
"end": 1223,
"text": "Krizhevsky et al. (2012)",
"ref_id": "BIBREF20"
},
{
"start": 1239,
"end": 1258,
"text": "Oquab et al. (2014)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Perceptual Representations",
"sec_num": "3.1"
},
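The BOVW pipeline described in this section (dense local descriptors, a k-means visual vocabulary, nearest-centroid quantization, then averaging over a concept's images) can be sketched as follows. This is a minimal numpy illustration, not the authors' code: the descriptors and centroids are random stand-ins, and the function names are hypothetical.

```python
import numpy as np

def bovw_histogram(descriptors, centroids):
    """Quantize local descriptors (n x d) against k centroids (k x d)
    into an L1-normalized bag-of-visual-words histogram of length k."""
    # Squared Euclidean distance from every descriptor to every centroid.
    dists = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    words = dists.argmin(axis=1)  # nearest "visual word" per descriptor
    hist = np.bincount(words, minlength=len(centroids)).astype(float)
    return hist / hist.sum()

def concept_representation(images_descriptors, centroids):
    """Average the BOVW vectors of all images associated with a concept."""
    hists = [bovw_histogram(d, centroids) for d in images_descriptors]
    return np.mean(hists, axis=0)

rng = np.random.default_rng(0)
centroids = rng.normal(size=(100, 128))           # 100 visual words, SIFT-like 128-dim
images = [rng.normal(size=(50, 128)) for _ in range(3)]  # 3 images, 50 keypoints each
vec = concept_representation(images, centroids)
print(vec.shape)  # (100,)
```

The averaged histogram is the 100-dimensional visual vector the paper uses as its BOVW baseline.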
{
"text": "Training visual features (after Oquab et al., 2014)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C1-C2-C3-C4-C5",
"sec_num": null
},
{
"text": "100-dim word projections layer produces a vector of 1512 scores associated with 1000 categories of the ILSVRC-2012 challenge and the 512 additional categories selected by Oquab et al. (2014) . This network was trained using about 1.6 million ImageNet images associated with these 1512 categories. We then freeze the trained parameters, chop the last network layer, and use the remaining seventh layer as a filter to compute a 6144-dimensional feature vector on arbitrary 224 \u00d7 224 input images.",
"cite_spans": [
{
"start": 171,
"end": 190,
"text": "Oquab et al. (2014)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FC6 FC7 FC8",
"sec_num": null
},
{
"text": "We consider two ways to aggregate the feature vectors representing each image.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FC6 FC7 FC8",
"sec_num": null
},
{
"text": "putes the average of all feature vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The first method (CNN-Mean) simply com-",
"sec_num": "1."
},
{
"text": "2. The second method (CNN-Max) computes the component-wise maximum of all feature vectors. This approach makes sense because the feature vectors extracted from this particular network are quite sparse (about 22% non-zero coefficients) and can be interpreted as bags of visual properties.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The first method (CNN-Mean) simply com-",
"sec_num": "1."
},
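The two aggregation methods can be sketched in a few lines of numpy; the sparse 6144-dimensional feature vectors here are random stand-ins for the real FC7 activations, and the function names are ours.

```python
import numpy as np

def cnn_mean(features):
    """CNN-Mean: average the per-image feature vectors."""
    return np.mean(features, axis=0)

def cnn_max(features):
    """CNN-Max: component-wise maximum, treating the sparse activations
    as a bag of visual properties."""
    return np.max(features, axis=0)

# Toy stand-in for the 6144-dim FC7 features of three images
# (ReLU-style: sparse and non-negative).
rng = np.random.default_rng(0)
feats = np.maximum(rng.normal(size=(3, 6144)), 0.0)
print(cnn_mean(feats).shape, cnn_max(feats).shape)  # (6144,) (6144,)
```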
{
"text": "For our linguistic representations we extract 100dimensional continuous vector representations using the log-linear skip-gram model of trained on a corpus consisting of the 400M word Text8 corpus of Wikipedia text 2 together with the 100M word British National Corpus (Leech et al., 1994) . We also experimented with dependency-based skip-grams (Levy and Goldberg, 2014) but this did not improve results. The skip-gram model learns high quality semantic representations based on the distributional properties of words in text, and outperforms standard distributional models on a variety of semantic similarity and relatedness tasks. However we note that have recently reported an even better performance for their linguistic component using a standard distributional model, although this may have been tuned to the task.",
"cite_spans": [
{
"start": 268,
"end": 288,
"text": "(Leech et al., 1994)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic representations",
"sec_num": "3.2"
},
{
"text": "Following Bruni et al. 2014, we construct multimodal semantic representations by concatenating the centered and L 2 -normalized linguistic and perceptual feature vectors v ling and v vis ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-modal Representations",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v concept = \u03b1 \u00d7 v ling || (1 \u2212 \u03b1) \u00d7 v vis ,",
"eq_num": "(1)"
}
],
"section": "Multi-modal Representations",
"sec_num": "3.3"
},
{
"text": "where || denotes the concatenation operator and \u03b1 is an optional tuning parameter. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-modal Representations",
"sec_num": "3.3"
},
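Equation (1) can be sketched as follows; a minimal numpy illustration under the assumption that centering and L2-normalization are applied per vector, with hypothetical function names and toy inputs of the stated dimensionalities (100-dim linguistic, 6144-dim visual).

```python
import numpy as np

def l2_center_norm(v):
    """Center a vector and scale it to unit L2 norm."""
    v = v - v.mean()
    return v / np.linalg.norm(v)

def multimodal(v_ling, v_vis, alpha=0.5):
    """Equation (1): v_concept = alpha * v_ling || (1 - alpha) * v_vis,
    where || is concatenation and both inputs are centered and normalized."""
    return np.concatenate([alpha * l2_center_norm(v_ling),
                           (1 - alpha) * l2_center_norm(v_vis)])

v = multimodal(np.arange(100, dtype=float), np.arange(6144, dtype=float))
print(v.shape)  # (6244,)
```

With alpha = 0.5 each modality contributes a segment of L2 norm 0.5, so neither dominates the cosine similarities computed later.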
{
"text": "We carried out experiments using visual representations computed using two canonical image datasets. The resulting multi-modal concept representations were evaluated using two well-known semantic relatedness datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "We carried out experiments using two distinct sources of images to compute the visual representations. The ImageNet dataset (Deng et al., 2009 ) is a large-scale ontology of images organized according to the hierarchy of WordNet (Fellbaum, 1999) . The dataset was constructed by manually re-labelling candidate images collected using web searches for each WordNet synset. The images tend to be of high quality with the designated object roughly centered in the image. Our copy of ImageNet contains about 12.5 million images organized in 22K synsets. This implies that Ima-geNet covers only a small fraction of the existing 117K WordNet synsets.",
"cite_spans": [
{
"start": 124,
"end": 142,
"text": "(Deng et al., 2009",
"ref_id": "BIBREF9"
},
{
"start": 229,
"end": 245,
"text": "(Fellbaum, 1999)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Data",
"sec_num": "4.1"
},
{
"text": "The ESP Game dataset (Von Ahn and Dabbish, 2004) was famously collected as a \"game with a purpose\", in which two players must independently and rapidly agree on a correct word label for randomly selected images. Once a word label has been used sufficiently frequently for a given image, that word is added to the image's tags. This dataset contains 100K images, but with every image having on average 14 tags, that amounts to a coverage of 20,515 words. Since players are encouraged to produce as many terms per image, the dataset's increased coverage is at the expense of accuracy in the word-to-image mapping: a dog in a field with a house in the background might be a golden retriever in ImageNet and could have tags dog, golden retriever, grass, field, house, door in the ESP Dataset. In other words, images in the ESP dataset do not make a distinction between objects in the foreground and in the background, or between the relative size of the objects (tags for images are provided in a random order, so the top tag is not necessarily the best one).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Data",
"sec_num": "4.1"
},
{
"text": "Figures 2 and 3 show typical examples of images belonging to these datasets. Both datasets have attractive properties. On the one hand, Ima-geNet has higher quality images with better labels. On the other hand, the ESP dataset has an interesting coverage because the MEN task (see section 4.4) was specifically designed to be covered by the ESP dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Data",
"sec_num": "4.1"
},
{
"text": "Since ImageNet follows the WordNet hierarchy, we would have to include almost all images in the dataset to obtain representations for high-level concepts such as entity, object and animal. Doing so is both computationally expensive and unlikely to improve the results. For this reason, we randomly sample up to N distinct images from the subtree associated with each concept. When this returns less than N images, we attempt to increase coverage by sampling images from the subtree of the concept's hypernym instead. In order to allow for a fair comparison, we apply the same method of sampling up to N on the ESP Game dataset. In all following experiments, N = 1.000. We used the WordNet lemmatizer from NLTK (Bird et al., 2009) to lemmatize tags and concept words so as to further improve the dataset's coverage.",
"cite_spans": [
{
"start": 710,
"end": 729,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Image Selection",
"sec_num": "4.2"
},
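The sampling procedure above (up to N images from the concept's subtree, topping up from the hypernym's subtree when the subtree is too small) might look like this; a stdlib-only sketch with hypothetical names and toy image lists.

```python
import random

def sample_concept_images(concept, subtree_images, hypernym_images, n=1000, seed=0):
    """Sample up to n distinct images from the concept's subtree; if the
    subtree holds fewer than n, top up from the hypernym's subtree.
    `concept` only names the synset here; the image lists are toy stand-ins."""
    rng = random.Random(seed)
    if len(subtree_images) >= n:
        return rng.sample(list(subtree_images), n)
    pool = list(subtree_images)
    seen = set(pool)
    extra = [im for im in hypernym_images if im not in seen]
    pool += rng.sample(extra, min(n - len(pool), len(extra)))
    return pool

demo = sample_concept_images("dog", ["a", "b", "c"], ["c", "d", "e"], n=4)
print(len(demo))  # 4
```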
{
"text": "The ImageNet images were preprocessed as described by (Krizhevsky et al., 2012) . The largest centered square contained in each image is resam-pled to form a 256 \u00d7 256 image. The CNN input is then formed by cropping 16 pixels off each border and subtracting 128 to the image components. The ESP Game images were preprocessed slightly differently because we do not expect the objects to be centered. Each image was rescaled to fit inside a 224 \u00d7 224 rectangle. The CNN input is then formed by centering this image into the 224 \u00d7 224 input field, subtracting 128 to the image components, and zero padding.",
"cite_spans": [
{
"start": 54,
"end": 79,
"text": "(Krizhevsky et al., 2012)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Image Processing",
"sec_num": "4.3"
},
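The ImageNet preprocessing steps (largest centered square, resample to 256 x 256, crop 16 pixels off each border, subtract 128) can be sketched with numpy alone; the nearest-neighbour resampling stands in for whatever interpolation the original pipeline used, and the function name is hypothetical.

```python
import numpy as np

def preprocess_imagenet(img):
    """Largest centered square -> 256x256 (nearest-neighbour) -> crop
    16 px off each border (224x224) -> subtract 128 from the components."""
    h, w = img.shape[:2]
    s = min(h, w)
    y0, x0 = (h - s) // 2, (w - s) // 2
    sq = img[y0:y0 + s, x0:x0 + s]
    idx = np.linspace(0, s - 1, 256).astype(int)  # nearest-neighbour resample
    resized = sq[idx][:, idx]
    return resized[16:240, 16:240].astype(float) - 128.0

img = np.random.default_rng(0).integers(0, 256, size=(300, 400, 3))
out = preprocess_imagenet(img)
print(out.shape)  # (224, 224, 3)
```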
{
"text": "The BOVW features were obtained by computing DSIFT descriptors using VLFeat (Vedaldi and Fulkerson, 2008) . These descriptors were subsequently clustered using mini-batch k-means (Sculley, 2010) with 100 clusters. Each image is then represented by a bag of clusters (visual words) quantized as a 100-dimensional feature vector. These vectors were then combined into visual concept representations by taking their mean.",
"cite_spans": [
{
"start": 76,
"end": 105,
"text": "(Vedaldi and Fulkerson, 2008)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Image Processing",
"sec_num": "4.3"
},
{
"text": "We evaluate our multi-modal word representations using two semantic relatedness datasets widely used in distributional semantics (Agirre et al., 2009; Feng and Lapata, 2010; Bruni et al., 2012; .",
"cite_spans": [
{
"start": 129,
"end": 150,
"text": "(Agirre et al., 2009;",
"ref_id": "BIBREF0"
},
{
"start": 151,
"end": 173,
"text": "Feng and Lapata, 2010;",
"ref_id": "BIBREF13"
},
{
"start": 174,
"end": 193,
"text": "Bruni et al., 2012;",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.4"
},
{
"text": "WordSim353 (Finkelstein et al., 2001 ) is a selection of 353 concept pairs with a similarity rating provided by human annotators. Since this is probably the most widely used evaluation dataset for distributional semantics, we include it for comparison with other approaches. WordSim353 has some known idiosyncracies: it includes named entities, such as OPEC, Arafat, and Maradona, as well as abstract words, such as antecedent and credibility, for which it may be hard to find corresponding images. Multi-modal representations are often evaluated on an unspecified subset of WordSim353 (Feng and Lapata, 2010; Bruni et al., 2012; , making it impossible to compare the reported scores. In this work, we report scores on the full WordSim353 dataset (W353) by setting the visual vector v vis to zero for concepts without images. We also report scores on the subset (W353-Relevant) of pairs for which both concepts have both ImageNet and ESP Game images using the aforementioned selection procedure.",
"cite_spans": [
{
"start": 11,
"end": 36,
"text": "(Finkelstein et al., 2001",
"ref_id": "BIBREF15"
},
{
"start": 586,
"end": 609,
"text": "(Feng and Lapata, 2010;",
"ref_id": "BIBREF13"
},
{
"start": 610,
"end": 629,
"text": "Bruni et al., 2012;",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.4"
},
{
"text": "MEN (Bruni et al., 2012) was in part designed to alleviate the WordSim353 problems. It was constructed in such a way that only frequent words with at least 50 images in the ESP Game dataset were included in the evaluation pairs. The MEN dataset has been found to mirror the aggregate score over a variety of tasks and similarity datasets . It is also much larger, with 3000 words pairs consisting of 751 individual words. Although MEN was constructed so as to have at least a minimum amount of images available in the ESP Game dataset for each concept, this is not the case for ImageNet. Hence, similarly to WordSim353, we also evaluate on a subset (MEN-Relevant) for which images are available in both datasets.",
"cite_spans": [
{
"start": 4,
"end": 24,
"text": "(Bruni et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.4"
},
{
"text": "We evaluate the models in terms of their Spearman \u03c1 correlation with the human relatedness ratings. The similarity between the representations associated with a pair of words is calculated using the cosine similarity:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.4"
},
{
"text": "cos(v 1 , v 2 ) = v 1 \u2022 v 2 v 1 v 2 (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.4"
},
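Equation (2) is a one-liner in numpy; shown here with toy vectors.

```python
import numpy as np

def cosine(v1, v2):
    """Equation (2): cosine similarity between two representations."""
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

print(cosine(np.array([1.0, 0.0]), np.array([1.0, 0.0])))  # 1.0
print(cosine(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # 0.0
```

The model's similarity score for each word pair is this cosine; Spearman rho is then computed between these scores and the human ratings.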
{
"text": "We evaluate on the two semantic relatedness datasets using solely linguistic, solely visual and multi-modal representations. In the case of MEN-Relevant and W353-Relevant, we report scores for BOVW, CNN-Mean and CNN-Max visual representations. For all datasets we report the scores obtained by BOVW, CNN-Mean and CNN-Max multi-modal representations. Since we have full coverage with the ESP Game dataset on MEN, we are able to report visual representation scores for the entire dataset as well. The results can be seen in Table 1 . There are a number of questions to ask. First of all, do CNNs yield better visual representations? Second, do CNNs yield better multi-modal representations? And third, is there a difference between the high-quality low-coverage ImageNet and the low-quality higher-coverage ESP Game dataset representations?",
"cite_spans": [],
"ref_spans": [
{
"start": 522,
"end": 529,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In all cases, CNN-generated visual representations perform better or as good as BOVW representations (we report results for BOVW-Mean, which performs slightly better than taking the elementwise maximum). This confirms the motivation outlined in the introduction: by applying state-ofthe-art approaches from computer vision to multimodal semantics, we obtain a signficant perfor- mance increase over standard multi-modal models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Representations",
"sec_num": "5.1"
},
{
"text": "Higher-quality perceptual input leads to betterperforming multi-modal representations. In all cases multi-modal models with CNNs outperform multi-modal models with BOVW, occasionally by quite a margin. In all cases, multi-modal representations outperform purely linguistic vectors that were obtained using a state-of-the-art system. This re-affirms the importance of multi-modal representations for distributional semantics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-modal Representations",
"sec_num": "5.2"
},
{
"text": "Since the ESP Game images come with a multitude of word labels, one could question whether a performance increase of multi-modal models based on that dataset comes from the images themselves, or from overlapping word labels. It might also be possible that similar concepts are more likely to occur in the same image, which encodes relatedness information without necessarily taking the image data itself into account. In short, it is a natural question to ask whether the performance gain is due to image data or due to word label associations? We conclusively show that the image data matters in two ways: (a) using a different dataset (ImageNet) also results in a performance boost, and (b) using higher-quality image features on the ESP game images increases the performance boost without changing the association between word labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Contribution of Images",
"sec_num": "5.3"
},
{
"text": "It is important to ask whether the source image dataset has a large impact on performance. Although the scores for the visual representation in some cases differ, performance of multimodal representations remains close for both image datasets. This implies that our method is robust over different datasets. It also suggests that it is beneficial to train on high-quality datasets like ImageNet and to subsequently generate embeddings for other sets of images like the ESP Game dataset that are more noisy but have better coverage. The results show the benefit of transfering convolutional network features, corroborating recent results in computer vision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image Datasets",
"sec_num": "5.4"
},
{
"text": "There is an interesting discrepancy between the two types of network with respect to dataset performance: CNN-Mean multi-modal models tend to perform best on MEN and MEN-Relevant, while CNN-Max multi-modal models perform better on W353 and W353-Relevant. There also appears to be some interplay between the source corpus, the evaluation dataset and the best performing CNN: the performance leap on W353- Relevant for CNN-Max is much larger using ESP Game images than with ImageNet images. We speculate that this is because CNN-Max performs better than CNN-Mean on a somewhat different type of similarity. It has been noted (Agirre et al., 2009) that WordSim353 captures both similarity (as in tiger-cat, with a score of 7.35) as well as relatedness (as in Maradona-football, with a score of 8.62). MEN, however, is explicitly designed to capture semantic relatedness only (Bruni et al., 2012) . CNN-Max using sparse feature vectors means that we treat the dominant components as definitive of the concept class, which is more suited to similarity. CNN-Mean averages over all the feature components, and as such might be more suited to relatedness. We conjecture that the performance increase on WordSim353 is due to increased performance on the similarity subset of that dataset.",
"cite_spans": [
{
"start": 623,
"end": 644,
"text": "(Agirre et al., 2009)",
"ref_id": "BIBREF0"
},
{
"start": 872,
"end": 892,
"text": "(Bruni et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Similarity/Relatedness Datasets",
"sec_num": "5.5"
},
{
"text": "The concatenation scheme in Equation 1 allows for a tuning parameter \u03b1 to weight the relative contribution of the respective modalities. Previous work on MEN has found that the optimal parameter for that dataset is close to 0.5 . We have found that this is indeed the case. On WordSim353, however, we have found the parameter for optimal performance to be shifted to the right, meaning that optimal performance is achieved when we include less of the visual input compared to the linguistic input. Figure 4 shows what happens when we vary alpha over the four datasets. There are a number of observations to be made here.",
"cite_spans": [],
"ref_spans": [
{
"start": 498,
"end": 506,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Tuning",
"sec_num": "5.6"
},
{
"text": "First of all, we can see that peak performance on the MEN datasets is much higher than on the WordSim353 ones. This indicates that MEN is in a sense a more balanced dataset. There are two possible explanations: as indicated earlier, WordSim353 contains slightly idiosyncratic word pairs which may have a detrimental effect on performance; or, WordSim353 was not constructed with multi-modal semantics in mind and contains a substantial number of abstract words that would not benefit at all from the inclusion of visual information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning",
"sec_num": "5.6"
},
{
"text": "Due to the nature of the datasets and the tasks at hand, it is arguably much more important that CNNs beat standard bag-of-visual-words representations on MEN than on W353, and indeed there exists no \u03b1 for which BOVW beats any of the CNN networks. The most accurate pairs are consistently the same across the two image datasets. There are some clear differences between the least accurate pairs, however. The MEN words potatoes and tomato probably have low-quality ImageNet-derived representations, because they occur often in the bottom pairs for that dataset. The MEN words dessert, bread and fruit occur in the bottom 5 for both image datasets, which implies that their linguistic representations are probably not very good. For WordSim353, the bottom pairs on ImageNet could be characterized as similarity mistakes, while the ESP Game dataset contains more relatedness mistakes (king and queen would evaluate similarity, while stock and market would evaluate relatedness). It is difficult to say anything conclusive about this discrepancy, but it is clearly a direction for future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning",
"sec_num": "5.6"
},
{
"text": "To facilitate further research on image embeddings and multi-modal semantics, we publicly release embeddings for all the image labels occurring in the ESP Game dataset. Please see the fol-lowing web page: http://www.cl.cam.ac. uk/\u02dcdk427/imgembed.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image embeddings",
"sec_num": "7"
},
{
"text": "We presented a novel approach to improving multi-modal representations using features extracted with deep convolutional neural networks. We reported strong results on two well-known and widely used semantic relatedness benchmarks, with increased performance both in the separate visual representations and in the combined multi-modal representations. Our results indicate that such multi-modal representations outperform both purely linguistic representations and standard bag-of-visual-words multi-modal representations. We have shown that our approach is robust and that CNN-extracted features from separate image datasets can successfully be applied to semantic relatedness. In addition, we have shown that the source of this improvement is the image data itself and not simply a result of word-label associations: we obtain performance improvements on two different image datasets, and higher performance with higher-quality image features on the ESP Game images, without changing the association between word labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "In future work, we will investigate whether our system can be further improved by including concreteness information or a substitute metric such as image dispersion, as has been suggested by other work on multi-modal semantics. Furthermore, a logical next step to increase performance would be to jointly learn multi-modal representations or to learn the weighting parameters. Another interesting possibility would be to examine multi-modal distributional compositional semantics, where multi-modal representations are composed to obtain phrasal representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "http://www.di.ens.fr/willow/research/cnn/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://mattmahoney.net/dc/textdata.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Maxime Oquab for providing the feature extraction code.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A study on similarity and relatedness using distributional and wordnet-based approaches",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Alfonseca",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Kravalova",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pa\u015fca, and Aitor Soroa. 2009. A study on similarity and relatedness using distribu- tional and wordnet-based approaches. In Proceed- ings of Human Language Technologies: The 2009",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL '09",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "19--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Conference of the North American Chap- ter of the Association for Computational Linguistics, NAACL '09, pages 19-27, Boulder, Colorado.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Grounded cognition",
"authors": [
{
"first": "Lawrence",
"middle": [
"W"
],
"last": "Barsalou",
"suffix": ""
}
],
"year": 2008,
"venue": "Annual Review of Psychology",
"volume": "59",
"issue": "",
"pages": "617--845",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence W. Barsalou. 2008. Grounded cognition. Annual Review of Psychology, 59:617-845.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "SURF: Speeded Up Robust Features",
"authors": [
{
"first": "Herbert",
"middle": [],
"last": "Bay",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Ess",
"suffix": ""
},
{
"first": "Tinne",
"middle": [],
"last": "Tuytelaars",
"suffix": ""
},
{
"first": "Luc",
"middle": [],
"last": "Van Gool",
"suffix": ""
}
],
"year": 2008,
"venue": "Computer Vision and Image Understanding (CVIU)",
"volume": "110",
"issue": "",
"pages": "346--359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herbert Bay, Andreas Ess, Tinne Tuytelaars, and Luc Van Gool. 2008. SURF: Speeded Up Robust Features. In Computer Vision and Image Under- standing (CVIU), volume 110, pages 346-359.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Natural Language Processing with Python",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Edward Loper, and Ewan Klein. 2009. Natural Language Processing with Python. O'Reilly Media Inc.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Image classification using random forests and ferns",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Bosch",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Munoz",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ICCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Bosch, Andrew Zisserman, and Xavier Munoz. 2007. Image classification using random forests and ferns. In Proceedings of ICCV.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Distributional semantics in technicolor",
"authors": [
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Gemma",
"middle": [],
"last": "Boleda",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Nam-Khanh",
"middle": [],
"last": "Tran",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "136--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elia Bruni, Gemma Boleda, Marco Baroni, and Nam- Khanh Tran. 2012. Distributional semantics in tech- nicolor. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguis- tics: Long Papers-Volume 1, pages 136-145. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multimodal distributional semantics",
"authors": [
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Nam",
"middle": [
"Khanh"
],
"last": "Tran",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Artificial Intelligence Research",
"volume": "49",
"issue": "",
"pages": "1--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elia Bruni, Nam Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Ar- tificial Intelligence Research, 49:1-47.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Histograms of oriented gradients for human detection",
"authors": [
{
"first": "Navneet",
"middle": [],
"last": "Dalal",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Triggs",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05",
"volume": "1",
"issue": "",
"pages": "886--893",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Navneet Dalal and Bill Triggs. 2005. Histograms of oriented gradients for human detection. In Pro- ceedings of the 2005 IEEE Computer Society Con- ference on Computer Vision and Pattern Recogni- tion (CVPR'05) -Volume 1 -Volume 01, CVPR '05, pages 886-893.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Imagenet: A large-scale hierarchical image database",
"authors": [
{
"first": "Jia",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Li-Jia",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2009,
"venue": "Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "248--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hi- erarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Con- ference on, pages 248-255. IEEE.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Donahue",
"suffix": ""
},
{
"first": "Yangqing",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Judy",
"middle": [],
"last": "Hoffman",
"suffix": ""
},
{
"first": "Ning",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Tzeng",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoff- man, Ning Zhang, Eric Tzeng, and Trevor Darrell. 2014. DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition. In Inter- national Conference on Machine Learning (ICML 2014).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "TDNNextracted features",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Driancourt",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of Neuro Nimes 90",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Driancourt and L\u00e9on Bottou. 1990. TDNN- extracted features. In Proceedings of Neuro Nimes 90, Nimes, France. EC2.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Visual information in semantic representation",
"authors": [
{
"first": "Yansong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "91--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yansong Feng and Mirella Lapata. 2010. Visual infor- mation in semantic representation. In Human Lan- guage Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 91-99. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Constructing hierarchical image-tags bimodal representations for word tags alternative choice",
"authors": [
{
"first": "Fangxiang",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Ruifan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaojie",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fangxiang Feng, Ruifan Li, and Xiaojie Wang. 2013. Constructing hierarchical image-tags bimodal repre- sentations for word tags alternative choice. CoRR.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Placing search in context: The concept revisited",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Finkelstein",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Yossi",
"middle": [],
"last": "Matias",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Rivlin",
"suffix": ""
},
{
"first": "Zach",
"middle": [],
"last": "Solan",
"suffix": ""
},
{
"first": "Gadi",
"middle": [],
"last": "Wolfman",
"suffix": ""
},
{
"first": "Eytan",
"middle": [],
"last": "Ruppin",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 10th international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "406--414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th inter- national conference on World Wide Web, pages 406- 414. ACM.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "DeViSE: A Deep Visual-Semantic Embedding Model",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Frome",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
},
{
"first": "Marc\u00e1urelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2013,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Frome, Greg Corrado, Jonathon Shlens, Samy Bengio, Jeffrey Dean, Marc\u00c1urelio Ranzato, and Tomas Mikolov. 2013. DeViSE: A Deep Visual- Semantic Embedding Model. In NIPS.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Rich feature hierarchies for accurate object detection and semantic segmentation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Girshick",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Donahue",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Darrell",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Malik",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Girshick, J. Donahue, T. Darrell, and J. Malik. 2013. Rich feature hierarchies for accurate ob- ject detection and semantic segmentation. arXiv preprint:1311.2524, November.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A Systematic Study of Semantic Vector Space Model Parameters",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EACL 2014, Workshop on Continuous Vector Space Models and their Compositionality (CVSC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela and Stephen Clark. 2014. A Systematic Study of Semantic Vector Space Model Parameters. In Proceedings of EACL 2014, Workshop on Contin- uous Vector Space Models and their Compositional- ity (CVSC).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving Multi-Modal Representations Using Image Dispersion: Why Less is Sometimes More",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela, Felix Hill, Anna Korhonen, and Stephen Clark. 2014. Improving Multi-Modal Representa- tions Using Image Dispersion: Why Less is Some- times More. In Proceedings of ACL 2014.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Imagenet classification with deep convolutional neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "",
"suffix": ""
}
],
"year": 2012,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "1106--1114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hin- ton. 2012. Imagenet classification with deep con- volutional neural networks. In NIPS, pages 1106- 1114.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Is this a wampimuk? cross-modal mapping between distributional semantics and the visual world",
"authors": [
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angeliki Lazaridou, Elia Bruni, and Marco Baroni. 2014. Is this a wampimuk? cross-modal map- ping between distributional semantics and the visual world. In Proceedings of ACL 2014.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Claws4: the tagging of the British National Corpus",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Leech",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Garside",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Bryant",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 15th conference on Computational linguistics",
"volume": "1",
"issue": "",
"pages": "622--628",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Leech, Roger Garside, and Michael Bryant. 1994. Claws4: the tagging of the British National Corpus. In Proceedings of the 15th conference on Computational linguistics-Volume 1, pages 622- 628. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Going Beyond Text: A Hybrid Image-Text Approach for Measuring Word Relatedness",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Leong",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of Joint International Conference on Natural Language Processing (IJCNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ben Leong and Rada Mihalcea. 2011. Going Beyond Text: A Hybrid Image-Text Approach for Measuring Word Relatedness. In Proceedings of Joint Interna- tional Conference on Natural Language Processing (IJCNLP), Chiang Mai, Thailand.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Dependencybased word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Dependency- based word embeddings. In Proceedings of ACL 2014.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Symbol interdependency in symbolic and embodied cognition",
"authors": [
{
"first": "M",
"middle": [
"M"
],
"last": "Louwerse",
"suffix": ""
}
],
"year": 2011,
"venue": "TopiCS in Cognitive Science",
"volume": "3",
"issue": "",
"pages": "273--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. M. Louwerse. 2011. Symbol interdependency in symbolic and embodied cognition. TopiCS in Cog- nitive Science, 3:273-302.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Object recognition from local scale-invariant features",
"authors": [
{
"first": "G",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lowe",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the International Conference on Computer Vision",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David G. Lowe. 1999. Object recognition from local scale-invariant features. In Proceedings of the Inter- national Conference on Computer Vision-Volume 2 - Volume 2, ICCV '99.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of International Conference of Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word repre- sentations in vector space. In Proceedings of Inter- national Conference of Learning Representations, Scottsdale, Arizona, USA.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Learning and transferring mid-level image representations using convolutional neural networks",
"authors": [
{
"first": "M",
"middle": [],
"last": "Oquab",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Laptev",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Sivic",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Oquab, L. Bottou, I. Laptev, and J. Sivic. 2014. Learning and transferring mid-level image represen- tations using convolutional neural networks. In Pro- ceedings of the IEEE Conference on Computer Vi- sion and Pattern Recognition.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "CNN features off-the-shelf: an astounding baseline for recognition",
"authors": [
{
"first": "A",
"middle": [
"S"
],
"last": "Razavian",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Azizpour",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Sullivan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Carlsson",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A.S. Razavian, H. Azizpour, J. Sullivan, and S. Carls- son. 2014. CNN features off-the-shelf: an astounding baseline for recognition. arXiv preprint:1403.6382.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A multimodal LDA model integrating textual, cognitive and visual modalities",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1146--1157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Roller and Sabine Schulte im Walde. 2013. A multimodal LDA model integrating textual, cog- nitive and visual modalities. In Proceedings of the 2013 Conference on Empirical Methods in Natu- ral Language Processing, pages 1146-1157, Seattle, Washington, USA, October. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Web-scale k-means clustering",
"authors": [
{
"first": "",
"middle": [],
"last": "Sculley",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 19th international conference on World wide web",
"volume": "",
"issue": "",
"pages": "1177--1178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D Sculley. 2010. Web-scale k-means clustering. In Proceedings of the 19th international conference on World wide web, pages 1177-1178. ACM.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Grounded models of semantic representation",
"authors": [
{
"first": "Carina",
"middle": [],
"last": "Silberer",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "1423--1433",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carina Silberer and Mirella Lapata. 2012. Grounded models of semantic representation. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1423-1433. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Learning Grounded Meaning Representations with Autoencoders",
"authors": [
{
"first": "Carina",
"middle": [],
"last": "Silberer",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL 2014",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carina Silberer and Mirella Lapata. 2014. Learning Grounded Meaning Representations with Autoen- coders. In Proceedings of ACL 2014, Baltimore, MD.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Video Google: a text retrieval approach to object matching in videos",
"authors": [
{
"first": "J",
"middle": [],
"last": "Sivic",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Ninth IEEE International Conference on Computer Vision",
"volume": "2",
"issue": "",
"pages": "1470--1477",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Sivic and A. Zisserman. 2003. Video Google: a text retrieval approach to object matching in videos. In Proceedings of the Ninth IEEE International Con- ference on Computer Vision, volume 2, pages 1470- 1477, Oct.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Grounded Compositional Semantics for Finding and Describing Images with Sentences",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Andrej",
"middle": [],
"last": "Karpathy",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Andrej Karpathy, Quoc V. Le, Christo- pher D. Manning, and Andrew Y. Ng. 2014. Grounded Compositional Semantics for Finding and Describing Images with Sentences. Transactions of the Association for Computational Linguistics (TACL 2014).",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Multimodal learning with deep boltzmann machines",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2012,
"venue": "Advances in Neural Information Processing Systems",
"volume": "25",
"issue": "",
"pages": "2222--2230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava and Ruslan Salakhutdinov. 2012. Multimodal learning with deep boltzmann ma- chines. In F. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Infor- mation Processing Systems 25, pages 2222-2230.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "VLFeat: An open and portable library of computer vision algorithms",
"authors": [
{
"first": "A",
"middle": [],
"last": "Vedaldi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Fulkerson",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Vedaldi and B. Fulkerson. 2008. VLFeat: An open and portable library of computer vision algorithms. http://www.vlfeat.org/.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Labeling images with a computer game",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Von Ahn",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Dabbish",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the SIGCHI conference on Human factors in computing systems",
"volume": "",
"issue": "",
"pages": "319--326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Von Ahn and Laura Dabbish. 2004. Labeling images with a computer game. In Proceedings of the SIGCHI conference on Human factors in computing systems, pages 319-326. ACM.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Online multimodal deep similarity learning with application to image retrieval",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"C",
"H"
],
"last": "Hoi",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Peilin",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Dayong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chunyan",
"middle": [],
"last": "Miao",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 21st ACM International Conference on Multimedia, MM '13",
"volume": "",
"issue": "",
"pages": "153--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengcheng Wu, Steven C.H. Hoi, Hao Xia, Peilin Zhao, Dayong Wang, and Chunyan Miao. 2013. Online multimodal deep similarity learning with application to image retrieval. In Proceedings of the 21st ACM International Conference on Multimedia, MM '13, pages 153-162.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Visualizing and understanding convolutional networks",
"authors": [
{
"first": "Matthew",
"middle": [
"D"
],
"last": "Zeiler",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew D. Zeiler and Rob Fergus. 2013. Visualizing and understanding convolutional networks. CoRR.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Computing word feature vectors."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Examples of dog in the ESP Game dataset."
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Examples of golden retriever in ImageNet."
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Varying the \u03b1 parameter for MEN, MEN-Relevant, WordSim353 and WordSim353-Relevant, respectively."
},
"TABREF1": {
"num": null,
"text": "",
"content": "<table><tr><td>shows the top 5 best and top 5 worst scor-</td></tr><tr><td>ing word pairs for the two datasets using CNN-</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF2": {
"num": null,
"text": "The top 5 best and top 5 worst scoring pairs with respect to the gold standard.",
"content": "<table/>",
"type_str": "table",
"html": null
}
}
}
}