{
"paper_id": "D15-1015",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:25:54.831296Z"
},
"title": "Visual Bilingual Lexicon Induction with Transferred ConvNet Features",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Computer Laboratory University of Cambridge",
"location": {}
},
"email": "douwe.kiela@cl.cam.ac.uk"
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": "",
"affiliation": {},
"email": "ivan.vulic@cs.kuleuven.be"
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {}
},
"email": "stephen.clark@cl.cam.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper is concerned with the task of bilingual lexicon induction using imagebased features. By applying features from a convolutional neural network (CNN), we obtain state-of-the-art performance on a standard dataset, obtaining a 79% relative improvement over previous work which uses bags of visual words based on SIFT features. The CNN image-based approach is also compared with state-of-the-art linguistic approaches to bilingual lexicon induction, even outperforming these for one of three language pairs on another standard dataset. Furthermore, we shed new light on the type of visual similarity metric to use for genuine similarity versus relatedness tasks, and experiment with using multiple layers from the same network in an attempt to improve performance.",
"pdf_parse": {
"paper_id": "D15-1015",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper is concerned with the task of bilingual lexicon induction using imagebased features. By applying features from a convolutional neural network (CNN), we obtain state-of-the-art performance on a standard dataset, obtaining a 79% relative improvement over previous work which uses bags of visual words based on SIFT features. The CNN image-based approach is also compared with state-of-the-art linguistic approaches to bilingual lexicon induction, even outperforming these for one of three language pairs on another standard dataset. Furthermore, we shed new light on the type of visual similarity metric to use for genuine similarity versus relatedness tasks, and experiment with using multiple layers from the same network in an attempt to improve performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Bilingual lexicon induction is the task of finding words that share a common meaning across different languages. It plays an important role in a variety of tasks in information retrieval and natural language processing, including cross-lingual information retrieval (Lavrenko et al., 2002; Levow et al., 2005) and statistical machine translation (Och and Ney, 2003) . Although parallel corpora have been used successfully for inducing bilingual lexicons for some languages (Och and Ney, 2003) , these corpora are either too small or unavailable for many language pairs. Consequently, mono-lingual approaches that rely on comparable instead of parallel corpora have been developed (Fung and Yee, 1998; Koehn and Knight, 2002) . These approaches work by mapping language pairs to a shared bilingual space and ex-tracting lexical items from that space. Bergsma and Van Durme (2011) showed that this bilingual space need not be linguistic in nature: they used labeled images from the Web to obtain bilingual lexical translation pairs based on the visual features of corresponding images. Local features are computed using SIFT (Lowe, 2004) and color histograms (Deselaers et al., 2008) and aggregated as bags of visual words (BOVW) (Sivic and Zisserman, 2003) to get bilingual representations in a shared visual space. Their highest performance is obtained by combining these visual features with normalized edit distance, an orthographic similarity metric (Navarro, 2001 ).",
"cite_spans": [
{
"start": 266,
"end": 289,
"text": "(Lavrenko et al., 2002;",
"ref_id": "BIBREF29"
},
{
"start": 290,
"end": 309,
"text": "Levow et al., 2005)",
"ref_id": "BIBREF32"
},
{
"start": 346,
"end": 365,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF40"
},
{
"start": 473,
"end": 492,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF40"
},
{
"start": 680,
"end": 700,
"text": "(Fung and Yee, 1998;",
"ref_id": "BIBREF16"
},
{
"start": 701,
"end": 724,
"text": "Koehn and Knight, 2002)",
"ref_id": "BIBREF26"
},
{
"start": 850,
"end": 878,
"text": "Bergsma and Van Durme (2011)",
"ref_id": "BIBREF5"
},
{
"start": 1123,
"end": 1135,
"text": "(Lowe, 2004)",
"ref_id": "BIBREF36"
},
{
"start": 1157,
"end": 1181,
"text": "(Deselaers et al., 2008)",
"ref_id": "BIBREF10"
},
{
"start": 1228,
"end": 1255,
"text": "(Sivic and Zisserman, 2003)",
"ref_id": "BIBREF51"
},
{
"start": 1453,
"end": 1467,
"text": "(Navarro, 2001",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are several advantages to having a visual rather than a linguistic intermediate bilingual space: First, while labeled images are readily available for many languages through resources such as Google Images, language pairs that have sizeable comparable, let alone parallel, corpora are relatively scarce. Second, it has been found that meaning is often grounded in the perceptual system, and that the quality of semantic representations improves significantly when they are grounded in the visual modality (Silberer and Lapata, 2012; . Having an intermediate visual space means that words in different languages can be grounded in the same space. Third, it is natural to use vision as an intermediate: when we communicate with someone who does not speak our language, we often communicate by directly referring to our surroundings. Languages that are linguistically far apart will, by cognitive necessity, still refer to objects in the same visual space. While some approaches to bilingual lexicon induction rely on orthographic properties (Haghighi et al., 2008; Koehn and Knight, 2002) or properties of frequency distributions (Schafer and Yarowsky, 2002) that will work only for closely related languages, a visual space can work for any language, whether it's English or Chinese, Arabic or Icelandic, or all Greek to you.",
"cite_spans": [
{
"start": 511,
"end": 538,
"text": "(Silberer and Lapata, 2012;",
"ref_id": "BIBREF48"
},
{
"start": 1045,
"end": 1068,
"text": "(Haghighi et al., 2008;",
"ref_id": "BIBREF19"
},
{
"start": 1069,
"end": 1092,
"text": "Koehn and Knight, 2002)",
"ref_id": "BIBREF26"
},
{
"start": 1134,
"end": 1162,
"text": "(Schafer and Yarowsky, 2002)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It has recently been shown, however, that much better performance can be achieved on semantic similarity and relatedness tasks by using visual representations from deep convolutional neural networks (CNNs) instead of BOVW features (Kiela and Bottou, 2014) . In this paper we apply such CNN-derived visual features to the task of bilingual lexicon induction. To obtain a translation of a word in a source language, we find the nearest neighbours from words in the target language, where words in both languages reside in a shared visual space made up of CNN-based features. Nearest neighbours are found by applying similarity metrics from both Kiela and Bottou (2014) and Bergsma and Van Durme (2011) . In summary, the contributions of this paper are:",
"cite_spans": [
{
"start": 231,
"end": 255,
"text": "(Kiela and Bottou, 2014)",
"ref_id": "BIBREF23"
},
{
"start": 643,
"end": 666,
"text": "Kiela and Bottou (2014)",
"ref_id": "BIBREF23"
},
{
"start": 671,
"end": 699,
"text": "Bergsma and Van Durme (2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We obtain a relative improvement of 79% over Bergsma and Van Durme (2011) on a standard dataset based on fifteen language pairs. \u2022 We shed new light on the question of whether genuine similarity versus semantic relatedness tasks require different similarity metrics for optimal performance (Kiela and Bottou, 2014) . \u2022 We experiment with using different layers of the CNN and find that performance is not affected significantly in either case, obtaining a slight improvement for the relatedness task but no improvement for genuine similarity. \u2022 Finally, we show that the visual approach outperforms the linguistic approaches on one of the three language pairs on a standard dataset. To our knowledge this is the first work to provide a comparison of visual and state-of-theart linguistic approaches to bilingual lexicon induction.",
"cite_spans": [
{
"start": 47,
"end": 75,
"text": "Bergsma and Van Durme (2011)",
"ref_id": "BIBREF5"
},
{
"start": 292,
"end": 316,
"text": "(Kiela and Bottou, 2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Related Work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Bilingual lexicon learning is the task of automatically inducing word translations from raw data, and is an attractive alternative to the timeconsuming and expensive process of manually building high-quality resources for a wide variety of language pairs and domains. Early approaches relied on limited and domain-restricted parallel data, and the induced lexicons were typically a by-product of word alignment models (Och and Ney, 2003) . To alleviate the issue of low coverage, a large body of work has been dedicated to lexicon learning from more abundant and less restricted comparable data, e.g., (Fung and Yee, 1998; Rapp, 1999; Gaussier et al., 2004; Shezaf and Rappoport, 2010; Tamura et al., 2012) . However, these models typically rely on the availability of bilingual seed lexicons to produce shared bilingual spaces, as well as large repositories of comparable data. Therefore, several approaches attempt to learn lexicons from large monolingual data sets in two languages (Koehn and Knight, 2002; Haghighi et al., 2008) , but their performance again relies on language pair-dependent clues such as orthographic similarity. A further approach removed the requirement of seed lexicons, and induced lexicons using bilingual spaces spanned by multilingual probabilistic topic models (Vuli\u0107 et al., 2011; Liu et al., 2013; Vuli\u0107 and Moens, 2013b) . However, these models require document alignments as initial bilingual signals.",
"cite_spans": [
{
"start": 418,
"end": 437,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF40"
},
{
"start": 602,
"end": 622,
"text": "(Fung and Yee, 1998;",
"ref_id": "BIBREF16"
},
{
"start": 623,
"end": 634,
"text": "Rapp, 1999;",
"ref_id": "BIBREF42"
},
{
"start": 635,
"end": 657,
"text": "Gaussier et al., 2004;",
"ref_id": "BIBREF17"
},
{
"start": 658,
"end": 685,
"text": "Shezaf and Rappoport, 2010;",
"ref_id": "BIBREF47"
},
{
"start": 686,
"end": 706,
"text": "Tamura et al., 2012)",
"ref_id": "BIBREF54"
},
{
"start": 985,
"end": 1009,
"text": "(Koehn and Knight, 2002;",
"ref_id": "BIBREF26"
},
{
"start": 1010,
"end": 1032,
"text": "Haghighi et al., 2008)",
"ref_id": "BIBREF19"
},
{
"start": 1292,
"end": 1312,
"text": "(Vuli\u0107 et al., 2011;",
"ref_id": "BIBREF58"
},
{
"start": 1313,
"end": 1330,
"text": "Liu et al., 2013;",
"ref_id": "BIBREF34"
},
{
"start": 1331,
"end": 1354,
"text": "Vuli\u0107 and Moens, 2013b)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Lexicon Learning",
"sec_num": "2.1"
},
{
"text": "In this work, following recent research in multi-modal semantics and image representation learning-in particular deep learning and convolutional neural networks-we test the ability of purely visual data to induce shared bilingual spaces and to consequently learn bilingual word correspondences in these spaces. By compiling images related to linguistic concepts given in different languages, the potentially prohibitive data requirements and language pair-dependence from prior work is removed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Lexicon Learning",
"sec_num": "2.1"
},
{
"text": "Deep convolutional neural networks (CNNs) have become extremely popular in the computer vision community. These networks currently provide state-of-the-art performance for a variety of key computer vision tasks such as object recognition (Razavian et al., 2014) . They tend to be relatively deep, consisting of a number of rectified linear unit layers (Nair and Hinton, 2010 ) and a series of convolutional layers (Krizhevsky et al., 2012) . Recently, such layers have been used in transfer learning techniques, where they are used as mid-level features in other computer vision tasks (Oquab et al., 2014) . Although the idea of transferring CNN features is not new (Driancourt and Bottou, 1990) , the simultaneous availability of Figure 1 : Illustration of calculating similarity between images from different languages. massive amounts of data and cheap GPUs has led to considerable advances in computer vision, similar in scale to those witnessed with SIFT and HOG descriptors a decade ago (Razavian et al., 2014).",
"cite_spans": [
{
"start": 238,
"end": 261,
"text": "(Razavian et al., 2014)",
"ref_id": "BIBREF43"
},
{
"start": 352,
"end": 374,
"text": "(Nair and Hinton, 2010",
"ref_id": "BIBREF38"
},
{
"start": 414,
"end": 439,
"text": "(Krizhevsky et al., 2012)",
"ref_id": "BIBREF27"
},
{
"start": 585,
"end": 605,
"text": "(Oquab et al., 2014)",
"ref_id": "BIBREF41"
},
{
"start": 666,
"end": 695,
"text": "(Driancourt and Bottou, 1990)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 731,
"end": 739,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Deep Convolutional Neural Networks",
"sec_num": "2.2"
},
{
"text": "Multi-modal semantics is motivated by parallels with human concept acquisition. It has been found that semantic knowledge, from a very early age, relies heavily on perceptual information (Louwerse, 2008), and there exists substantial evidence that many concepts are grounded in the perceptual system (Barsalou, 2008) . One way to accomplish such grounding is by combining linguistic representations with information from a perceptual modality, obtained from, e.g., property norming experiments (Silberer and Lapata, 2012; Silberer et al., 2013; Roller and Schulte im Walde, 2013; or extracting features from raw image data (Feng and Lapata, 2010; Leong and Mihalcea, 2011; . Such multi-modal visual approaches often rely on local descriptors, such as SIFT (Lowe, 2004) , SURF (Bay et al., 2008) , or HOG (Dalal and Triggs, 2005) , as well as pyramidal variants of these descriptors such as PHOW (Bosch et al., 2007) . However, deep CNN features have recently been successfully transferred to multi-modal semantics (Kiela and Bottou, 2014; Shen et al., 2014) . Deep learning techniques have also been successfully employed in cross-modal tasks (Frome et al., 2013; Socher et al., 2014; Lazaridou et al., 2014; Kiros et al., 2014) . Other examples of multi-modal deep learning use restricted Boltzmann machines (Srivastava and Salakhutdinov, 2014) or auto-encoders (Wu et al., 2013; Silberer and Lapata, 2014) .",
"cite_spans": [
{
"start": 300,
"end": 316,
"text": "(Barsalou, 2008)",
"ref_id": "BIBREF2"
},
{
"start": 494,
"end": 521,
"text": "(Silberer and Lapata, 2012;",
"ref_id": "BIBREF48"
},
{
"start": 522,
"end": 544,
"text": "Silberer et al., 2013;",
"ref_id": "BIBREF50"
},
{
"start": 545,
"end": 579,
"text": "Roller and Schulte im Walde, 2013;",
"ref_id": "BIBREF44"
},
{
"start": 623,
"end": 646,
"text": "(Feng and Lapata, 2010;",
"ref_id": "BIBREF12"
},
{
"start": 647,
"end": 672,
"text": "Leong and Mihalcea, 2011;",
"ref_id": "BIBREF31"
},
{
"start": 756,
"end": 768,
"text": "(Lowe, 2004)",
"ref_id": "BIBREF36"
},
{
"start": 776,
"end": 794,
"text": "(Bay et al., 2008)",
"ref_id": "BIBREF3"
},
{
"start": 804,
"end": 828,
"text": "(Dalal and Triggs, 2005)",
"ref_id": "BIBREF8"
},
{
"start": 895,
"end": 915,
"text": "(Bosch et al., 2007)",
"ref_id": "BIBREF6"
},
{
"start": 1014,
"end": 1038,
"text": "(Kiela and Bottou, 2014;",
"ref_id": "BIBREF23"
},
{
"start": 1039,
"end": 1057,
"text": "Shen et al., 2014)",
"ref_id": "BIBREF46"
},
{
"start": 1143,
"end": 1163,
"text": "(Frome et al., 2013;",
"ref_id": "BIBREF15"
},
{
"start": 1164,
"end": 1184,
"text": "Socher et al., 2014;",
"ref_id": "BIBREF52"
},
{
"start": 1185,
"end": 1208,
"text": "Lazaridou et al., 2014;",
"ref_id": "BIBREF30"
},
{
"start": 1209,
"end": 1228,
"text": "Kiros et al., 2014)",
"ref_id": "BIBREF25"
},
{
"start": 1309,
"end": 1345,
"text": "(Srivastava and Salakhutdinov, 2014)",
"ref_id": "BIBREF53"
},
{
"start": 1363,
"end": 1380,
"text": "(Wu et al., 2013;",
"ref_id": "BIBREF59"
},
{
"start": 1381,
"end": 1407,
"text": "Silberer and Lapata, 2014)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Modal Semantics",
"sec_num": "2.3"
},
{
"text": "We assume that the best translation, or matching lexical item, of a word w s (in the source language) is the word w t (in the target language) that is the nearest cross-lingual neighbour to w s in the bilingual visual space. Hence a similarity (or distance) score between lexical items from different languages is required. In this section, we describe: one, how to build image representations from sets of images associated with each lexical item, i.e. how to induce a shared bilingual visual space in which all lexical items are represented; and two, how to compute the similarity between lexical items using their visual representations in the shared bilingual space. We also describe the evaluation datasets and metrics we use.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Purely Visual Approach to Bilingual Lexicon Learning",
"sec_num": "3"
},
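The nearest-neighbour translation step described in this section can be sketched in a few lines. This is a minimal illustration under assumptions, not the paper's code: `translate` is a hypothetical helper, and the tiny 2-d vectors stand in for real CNN image representations.

```python
import numpy as np

def translate(source_vec, target_words, target_vecs):
    # Rank target-language words by cosine similarity between their visual
    # representations and the source word's representation; the nearest
    # cross-lingual neighbour is the proposed translation.
    sims = (target_vecs @ source_vec) / (
        np.linalg.norm(target_vecs, axis=1) * np.linalg.norm(source_vec))
    order = np.argsort(-sims)
    return [target_words[i] for i in order]

# Toy example: "bicycle" should map to Dutch "fiets", not "tafel" (table).
ranked = translate(np.array([1.0, 0.0]),
                   ["fiets", "tafel"],
                   np.array([[0.9, 0.1], [0.0, 1.0]]))
```

In the paper's setting the vectors would be the CNN-derived representations described below, and the candidate list would cover the whole target-language lexicon.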
{
"text": "To facilitate further research, we will make our code and data publicly available. Please see the following webpage: http://www.cl.cam. ac.uk/\u02dcdk427/bli.html.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Purely Visual Approach to Bilingual Lexicon Learning",
"sec_num": "3"
},
{
"text": "We use Google Images to extract the top n ranked images for each lexical item in the evaluation datasets. It has been shown that images from Google yield higher quality representations than comparable sources such as Flickr (Bergsma and Goebel, 2011) and that Google-derived datasets are competitive with \"hand prepared datasets\" (Fergus et al., 2005) . Google Images also has the advantage that it has full coverage and is multi-lingual, as opposed to other potential image sources such as ImageNet (Deng et al., 2009) or the ESP Game Dataset (von Ahn and Dabbish, 2004) . For each Google search we specify the target language corresponding to the lexical item's language. Figure 2 gives some example images retrieved using the same query terms in different languages. For each image, we extract the presoftmax layer of an AlexNet (Krizhevsky et al., 2012) . The network contains a number of layers, starting with five convolutional layers, two fully connected layers and finally a softmax, and has been pre-trained on the ImageNet classification task using Caffe (Jia et al., 2014) . See Figure 1 for a simple diagram illustrating the approach.",
"cite_spans": [
{
"start": 224,
"end": 250,
"text": "(Bergsma and Goebel, 2011)",
"ref_id": "BIBREF4"
},
{
"start": 330,
"end": 351,
"text": "(Fergus et al., 2005)",
"ref_id": "BIBREF13"
},
{
"start": 500,
"end": 519,
"text": "(Deng et al., 2009)",
"ref_id": "BIBREF9"
},
{
"start": 557,
"end": 571,
"text": "Dabbish, 2004)",
"ref_id": "BIBREF55"
},
{
"start": 832,
"end": 857,
"text": "(Krizhevsky et al., 2012)",
"ref_id": "BIBREF27"
},
{
"start": 1065,
"end": 1083,
"text": "(Jia et al., 2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 674,
"end": 682,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 1090,
"end": 1099,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Image Representations",
"sec_num": "3.1"
},
{
"text": "Suppose that, as part of the evaluation, the similarity between bicycle and fiets is required. Each of the two words has n images associated with it -the top n as returned by Google image search, using bicycle and fiets as separate query terms. Hence to calculate the similarity, a measure is required which takes two sets of images as input. The standard approach in multi-modal semantics is to derive a single image representation for each word, e.g., by averaging the n images. An alternative is to take the pointwise maximum across the n image vector representations, also producing a single vector (Kiela and Bottou, 2014) . Kiela and Bottou call these combined representations CNN-MEAN and CNN-MAX, respectively. Cosine is then used to calculate the similarity between the resulting pair of image vectors.",
"cite_spans": [
{
"start": 603,
"end": 627,
"text": "(Kiela and Bottou, 2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Similarity",
"sec_num": "3.2"
},
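The two aggregation strategies just described (CNN-MEAN and CNN-MAX in Kiela and Bottou's terminology) can be sketched as follows; the toy 3-d arrays are made-up stand-ins for the n CNN feature vectors per word:

```python
import numpy as np

def cnn_mean(vectors):
    # CNN-MEAN: average the n image vectors into a single representation.
    return np.mean(vectors, axis=0)

def cnn_max(vectors):
    # CNN-MAX: pointwise (element-wise) maximum across the n image vectors.
    return np.max(vectors, axis=0)

def cosine(u, v):
    # Cosine similarity between the two aggregated representations.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical CNN features for the images of "bicycle" and "fiets".
bicycle = np.array([[1.0, 0.0, 2.0], [0.5, 1.0, 1.5]])
fiets = np.array([[0.9, 0.1, 2.1], [0.6, 0.8, 1.4]])

sim_mean = cosine(cnn_mean(bicycle), cnn_mean(fiets))
sim_max = cosine(cnn_max(bicycle), cnn_max(fiets))
```

Both strategies collapse a word's image set to one vector before any cross-lingual comparison, in contrast to the per-image metrics below.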
{
"text": "An alternative strategy, however, is to consider the similarities between individual images instead of their aggregated representations. Bergsma and Van Durme (2011) propose two similarity metrics based on this principle: taking the average of the maximum similarity scores (AVGMAX), or the maximum of the maximum similarity scores (MAXMAX) between associated images. Continuing with our example, for each of the n images for bicycle, the maximum similarity is found by searching over the n images for fiets. AVGMAX then takes the average of those n maximum similarites; MAXMAX takes the maximum. To avoid confusion, we will refer to the CNN-based models that use these metrics as CNN-AVGMAX and CNN-MAXMAX. Formally, these metrics are defined as in Table 1 . We experiment with both kinds of MAX and find that they optimize for different kinds of similarity.",
"cite_spans": [
{
"start": 137,
"end": 165,
"text": "Bergsma and Van Durme (2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 750,
"end": 757,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Visual Similarity",
"sec_num": "3.2"
},
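The per-image AVGMAX and MAXMAX metrics can be sketched as a minimal NumPy rendering of the definitions above, assuming cosine as the image-level `sim`; the 2-d demo vectors are invented:

```python
import numpy as np

def pairwise_cosine(A, B):
    # Cosine similarity between every image vector in A and every one in B.
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

def avgmax(A, B):
    # AVGMAX: for each source image, find the best-matching target image;
    # then average those n maximum similarities.
    return float(pairwise_cosine(A, B).max(axis=1).mean())

def maxmax(A, B):
    # MAXMAX: the similarity of the single best-matching image pair.
    return float(pairwise_cosine(A, B).max())

# Invented image vectors for a source word (A) and a target word (B).
A = np.array([[1.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [0.6, 0.8]])
```

Note that AVGMAX is not symmetric (it averages over the source word's images), whereas MAXMAX is.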
{
"text": "Test Sets. Bergsma and Van Durme's primary evaluation dataset consists of a set of five hundred matching lexical items for fifteen language pairs, based on six languages. (The fifteen pairs results from all ways of pairing six languages). The data is publicly available online. 1 In order to get the five hundred lexical items, they first rank nouns by the conditional probability of them occurring in the pattern \"{image,photo,photograph,picture} of {a,an} \" in the web-scale Google N-gram corpus (Lin et al., 2010) , and take the top five hundred words as their English lexicon. For each item",
"cite_spans": [
{
"start": 498,
"end": 516,
"text": "(Lin et al., 2010)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluations",
"sec_num": "3.3"
},
{
"text": "1 http://www.clsp.jhu.edu/\u02dcsbergsma/LexImg/ AVGMAX 1 n is\u2208I(ws) max it\u2208I(wt) sim(i s , i t ) MAXMAX max is\u2208I(ws) max it\u2208I(wt) sim(i s , i t ) CNN-MEAN sim( 1 n is\u2208I(ws) i s , 1 n it\u2208I(wt) i t ) CNN-MAX sim(max I(w s ), max I(w t ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluations",
"sec_num": "3.3"
},
{
"text": "Table 1: Visual similarity metrics between two sets of n images. I(w s ) represents the set of images for a given source word w s , I(w t ) the set of images for a given target word w t ; max takes a set of vectors and returns the single element-wise maximum vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluations",
"sec_num": "3.3"
},
{
"text": "in the English lexicon, they obtain corresponding items in the other languages-Spanish, Italian, French, German and Dutch-through Google Translate. We call this dataset BERGSMA500. In addition to that dataset, we evaluate on a dataset constructed to measure the general performance of bilingual lexicon learning models from comparable Wikipedia data (Vuli\u0107 and Moens, 2013a) . The dataset comprises 1, 000 nouns in three languages: Spanish (ES), Italian (IT), and Dutch (NL), along with their one-to-one goldstandard word translations in English (EN) compiled semi-automatically using Google Translate and manual annotators for each language. We call this dataset VULIC1000 2 . The test set is accompanied with comparable data for training, for the three language pairs ES/IT/NL-EN on which textbased models for bilingual lexicon induction were trained (Vuli\u0107 and Moens, 2013a) .",
"cite_spans": [
{
"start": 350,
"end": 374,
"text": "(Vuli\u0107 and Moens, 2013a)",
"ref_id": "BIBREF56"
},
{
"start": 853,
"end": 877,
"text": "(Vuli\u0107 and Moens, 2013a)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluations",
"sec_num": "3.3"
},
{
"text": "Given the way that the BERGSMA500 dataset was created, in particular the use of the pattern described above, it contains largely concrete linguistic concepts (since, eg, image of a democracy is unlikely to have a high corpus frequency). In contrast, VULIC1000 was designed to capture general bilingual word correspondences, and contains several highly abstract test examples, such as entendimiento (understanding) and desigualdad (inequality) in Spanish, or scoperta (discovery) and cambiamento (change) in Italian. Using the two evaluation datasets can potentially provide some insight into how purely visual models for bilingual lexicon induction behave with respect to both abstract and concrete concepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluations",
"sec_num": "3.3"
},
{
"text": "Evaluation Metrics. We measure performance in a standard way using mean-reciprocal rank:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluations",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "MRR = 1 M M i=1 1 rank(w s , w t )",
"eq_num": "(1)"
}
],
"section": "Evaluations",
"sec_num": "3.3"
},
{
"text": "where rank(w s , w t ) denotes the rank of the correct translation w t (as provided in the gold standard) in the ranked list of translation candidates for w s , and M is the number of test cases. We also use precision at N (P@N) (Gaussier et al., 2004; Tamura et al., 2012; Vuli\u0107 and Moens, 2013a) , which measures the proportion of test instances where the correct translation is within the top N highest ranked translations.",
"cite_spans": [
{
"start": 229,
"end": 252,
"text": "(Gaussier et al., 2004;",
"ref_id": "BIBREF17"
},
{
"start": 253,
"end": 273,
"text": "Tamura et al., 2012;",
"ref_id": "BIBREF54"
},
{
"start": 274,
"end": 297,
"text": "Vuli\u0107 and Moens, 2013a)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluations",
"sec_num": "3.3"
},
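Both evaluation metrics reduce to simple arithmetic over the gold translation's rank per test case; a small sketch (the example ranks are invented):

```python
def mrr(ranks):
    # Mean reciprocal rank over M test cases; each rank is the 1-based
    # position of the correct translation in the ranked candidate list.
    return sum(1.0 / r for r in ranks) / len(ranks)

def precision_at_n(ranks, n):
    # P@N: proportion of test cases whose correct translation is in the top N.
    return sum(1 for r in ranks if r <= n) / len(ranks)

ranks = [1, 3, 2, 10]  # hypothetical gold-translation ranks for four test words
```

MRR rewards placing the correct translation near the top even when it is not first, while P@N only checks top-N membership.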
{
"text": "We evaluate the four similarity metrics on the BERGSMA500 dataset and compare the results to the systems of Bergsma and Van Durme, who report results for the AVGMAX function, having concluded that it performs better than MAX-MAX on English-Spanish translations. We report their best-performing visual-only system, which combines SIFT-based descriptors with color histograms, as well as their best-performing overall system, which combines the visual approach with normalized edit distance (NED). Results are averaged over fifteen language pairs. The results can be seen in Table 2 . Each of the CNN-based methods outperforms the B&VD systems. The best performing method overall, CNN-AVGMAX, provides a 79% relative improvement over the B&VD visual-only system on the MRR measure, and a 23% relative improvement over their best-performing approach, which includes non-visual information in the form of orthographic similarity. Moreover, their methods include a tuning parameter \u03bb that governs the contributions of SIFT-based, color histogram and normalized edit distance similarity scores, whilst our approach does not require any parameter tuning.",
"cite_spans": [],
"ref_spans": [
{
"start": 573,
"end": 580,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "The results in Table 2 indicate that the perimage CNN-AVGMAX metric outperforms the Language Pair Method P@1 P@5 P@10 P@20 MRR Kiela and Bottou (2014) achieved optimal performance using the latter metrics on a well-known conceptual relatedness dataset. It has been noted before that there is a clear distinction between similarity and relatedness. This is one of the reasons that, for example, WordSim353 (Finkelstein et al., 2002) has been criticized: it gives high similarity scores to cases of genuine similarity as well as relatedness (Agirre et al., 2009; . The MEN dataset that Kiela and Bottou (2014) evaluate on explicitly measures word relatedness. In contrast, the current lexicon learning task seems to require something else than relatedness: whilst a chair and table are semantically related, a translation for chair is not a good translation for table. For example, we want to make sure we translate chair to stuhl in German, and not to tisch. In other words, what we are inter-ested in for this particular task is genuine similarity, rather than relatedness.",
"cite_spans": [
{
"start": 127,
"end": 150,
"text": "Kiela and Bottou (2014)",
"ref_id": "BIBREF23"
},
{
"start": 405,
"end": 431,
"text": "(Finkelstein et al., 2002)",
"ref_id": "BIBREF14"
},
{
"start": 539,
"end": 560,
"text": "(Agirre et al., 2009;",
"ref_id": "BIBREF0"
},
{
"start": 584,
"end": 607,
"text": "Kiela and Bottou (2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Similarity and Relatedness",
"sec_num": "4.1"
},
{
"text": "Thus, we can evaluate the quality of our similarity metrics by comparing their performance on similarity and relatedness tasks: if a metric performs well at measuring genuine similarity, this is indicative of its performance in the bilingual lexicon induction task. In order to examine this question further, we evaluate performance on the MEN dataset, which measures relatedness , and the nouns-subset of the SimLex-999 dataset, which measures genuine similarity . For each pair in the dataset, we calculate the similarity score and report the Spearman \u03c1 s correlation, which measures how well the ranking of pairs given by the automatic system matches that according to the gold-standard human similarity scores. The results are reported in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 743,
"end": 750,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Similarity and Relatedness",
"sec_num": "4.1"
},
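The Spearman ρ_s used for this evaluation can be sketched in pure Python via the 6Σd² shortcut, which assumes no tied scores (real evaluations typically use a tie-corrected implementation such as SciPy's `spearmanr`):

```python
def spearman_rho(xs, ys):
    # Spearman rank correlation without tie correction: rank both lists,
    # then apply rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)).
    n = len(xs)

    def ranks(values):
        order = sorted(range(n), key=lambda i: values[i])
        r = [0] * n
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

Because only ranks matter, the metric is insensitive to the absolute scale of the similarity scores, which makes it suitable for comparing systems with differently calibrated outputs.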
{
"text": "It is clear that the per-image similarity metrics perform better on genuine similarity, as measured by SimLex-999, than on relatedness, as measured by MEN. In fact, the \"aggressive\" CNN-MAXMAX method, which picks out a single pair of images to represent a linguistic pair, works best for SimLex-999, indicating how stringently it focuses on genuine similarity. For the aggregated visual representation-based metrics, we see the opposite effect: they perform better on the relatedness task. This sheds light on a question raised by Kiela and Bottou (2014) , where they speculate that certain errors are a result of whether their visual similarity metric measures genuine similarity on the one hand or relatedness on the other: we are better off using per-image visual metrics for genuine similarity, while aggregated visual representation-based metrics yield better performance on relatedness tasks.",
"cite_spans": [
{
"start": 531,
"end": 554,
"text": "Kiela and Bottou (2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity and Relatedness",
"sec_num": "4.1"
},
{
"text": "This section compares our visual-only approach to linguistic approaches for bilingual lexicon induction. Since BERGSMA500 has not been evaluated with such methods, we evaluate on the VULIC1000 dataset (Vuli\u0107 and Moens, 2013a) . This dataset has been used to test the ability of bilingual lexicon induction models to learn translations from comparable data (see sect. 3.3). We do not necessarily expect visual methods to outperform linguistic ones, but it is instructive to see the comparison.",
"cite_spans": [
{
"start": 201,
"end": 225,
"text": "(Vuli\u0107 and Moens, 2013a)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results on VULIC1000",
"sec_num": "4.2"
},
{
"text": "We compare our visual models against the current state-of-the-art lexicon induction model using comparable data (Vuli\u0107 and Moens, 2013b) . This model induces translations from comparable Wikipedia data in two steps: (1) It learns a set of highly reliable one-to-one translation pairs using a shared bilingual space obtained by applying the multilingual probabilistic topic modeling (MuPTM) framework (Mimno et al., 2009) .",
"cite_spans": [
{
"start": 112,
"end": 136,
"text": "(Vuli\u0107 and Moens, 2013b)",
"ref_id": "BIBREF57"
},
{
"start": 400,
"end": 420,
"text": "(Mimno et al., 2009)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results on VULIC1000",
"sec_num": "4.2"
},
{
"text": "(2) These highly reliable one-to-one translation pairs serve as dimensions of a word-based bilingual semantic space (Gaussier et al., 2004; Tamura et al., 2012) . The model then bootstraps from the high-precision seed lexicon of translations and learns new dimensions of the bilingual space until convergence. This model, which we call BOOT-STRAP, obtains the current best results on the evaluation dataset. For more details about the bootstrapping model and its comparison against other approaches, we refer to Vuli\u0107 and Moens (2013b) . Table 4 shows the results for the language pairs in the VULIC1000 dataset. Of the four similarity metrics, CNN-AVGMAX again performs best, as it did for BERGSMA500. The linguistic BOOT-STRAP method outperforms our visual approach for two of the three language pairs, but, for the NL-EN language pair, the visual methods in fact perform better. This can be explained by the observation that Vuli\u0107 and Moens's NL-EN training data for the BOOTSTRAP model is less abundant (2-3 times fewer Wikipedia articles) and of lower Table 5 : Spearman \u03c1 s correlation for the visual similarity metrics on a relatedness (MEN) and a genuine similarity (SimLex-999) dataset using more than one layer from the CNN.",
"cite_spans": [
{
"start": 116,
"end": 139,
"text": "(Gaussier et al., 2004;",
"ref_id": "BIBREF17"
},
{
"start": 140,
"end": 160,
"text": "Tamura et al., 2012)",
"ref_id": "BIBREF54"
},
{
"start": 512,
"end": 535,
"text": "Vuli\u0107 and Moens (2013b)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [
{
"start": 538,
"end": 545,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 1057,
"end": 1064,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results on VULIC1000",
"sec_num": "4.2"
},
{
"text": "quality than the data for their ES-EN and IT-EN models. We view these results as highly encouraging: while purely visual methods cannot yet reach the peak performance of linguistic approaches that are trained on sufficient amounts of high-quality text data, they outperform linguistic state-of-theart methods when there is less or lower quality text data available -which one might reasonably expect to be the default scenario.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on VULIC1000",
"sec_num": "4.2"
},
{
"text": "The AlexNet (Krizhevsky et al., 2012) from which our image representations are extracted contains a number of layers. Kiela and Bottou (2014) only use the fully connected pre-softmax layer (which we call FC7) for their image representations. It has been found, however, that other layers in the network, especially the preceding fully connected (FC6) and fifth convolutional max pooling (POOL5) layers, also have good properties for usage in transfer learning Yosinski et al., 2014) . Hence we performed a (very) preliminary investigation of whether performance increases with the use of additional layers. In light of our findings concerning the difference between genuine similarity and relatedness, this also gives rise to the question of whether the additional layers might be useful for similarity or relatedness, or both. We hypothesize that the nature of the task matters here: if we are only concerned with genuine similarity, layer FC7 is likely to contain all the necessary information to judge whether two images are similar or not, since the network has been trained for object recognition. If, however, we are interested in relatedness, related properties may just as well be encoded deeper in the network, so in the layers preceding FC7 rather than in FC7 itself. We combined CNN layers with each other by concatenating the normalized layers. For the bilingual lexicon induction tasks, we found that performance did not signficantly increase, which is consistent with our hypothesis (since bilingual lexicon induction requires genuine similarity rather than relatedness, and so only requires FC7). We then tested on the MEN dataset for relatedness and the nouns subset of the SimLex-999 dataset for genuine similarity. The results can be found in Table 5 .",
"cite_spans": [
{
"start": 12,
"end": 37,
"text": "(Krizhevsky et al., 2012)",
"ref_id": "BIBREF27"
},
{
"start": 118,
"end": 141,
"text": "Kiela and Bottou (2014)",
"ref_id": "BIBREF23"
},
{
"start": 460,
"end": 482,
"text": "Yosinski et al., 2014)",
"ref_id": "BIBREF60"
}
],
"ref_spans": [
{
"start": 1761,
"end": 1768,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Adding CNN Layers",
"sec_num": "4.3"
},
{
"text": "The results appear to indicate that adding such additional information does not have a clear effect for genuine similarity, but may lead to a small performance increase for relatedness. This could explain why we did not see increased performance on the bilingual lexicon induction task with additional layers. However, the increase in performance on the relatedness task is relatively minor, and further investigation is required into the utility of the additional layers for relatedness tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding CNN Layers",
"sec_num": "4.3"
},
{
"text": "A possible explanation for the difference in performance between languages and datasets is that some words are more concrete than others: a visual representation for elephant is likely to be of higher quality than one for happiness. Visual representations in multi-modal models have been found to perform much better for concrete than abstract concepts .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Although concreteness ratings are available for (some) English words, this is not the case for other languages, so in order to examine the concreteness of the datasets we use a substitute method that has been shown to closely mirror how abstract a concept is: image dispersion . The image dispersion d of a concept word w is defined as the average pairwise cosine distance between all the image representations {i 1 . . . i n } in the set of images for a given word:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d(w) = 2 n(n \u2212 1) i<j\u2264n 1 \u2212 i j \u2022 i k |i j ||i k |",
"eq_num": "(2)"
}
],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "The average image dispersions for the two datasets, broken down by language, are shown in Table 6 . BERGSMA500 has a lower average image dispersion score in general, and thus is more concrete than VULIC1000. It also has less variance. This may explain why we score higher, in absolute terms, on that dataset than on the more abstract one.",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 97,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "When examining individual languages in the datasets, we note that the worst performing language on VULIC1000 is Italian, which is also the most abstract dataset, with the highest average image dispersion score and the lowest variance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "There is some evidence that abstract concepts are also perceptually grounded (Lakoff and Johnson, 1999) , but in a more complex way, since abstract concepts express more varied situations (Barsalou and Wiemer-Hastings, 2005) . Using an image resource like Google Images that has full coverage for almost any word, means that we can retrieve what we might call \"associated\" images (such as images of voters for words like democracy) as opposed to \"extensional\" images (such as images of cats for cat). This explains why we still obtain good performance on the more abstract VULIC1000 dataset, in some cases outperforming linguistic methods: even abstract concepts can have a clear visual representation, albeit of the associated rather than extensional kind.",
"cite_spans": [
{
"start": 77,
"end": 103,
"text": "(Lakoff and Johnson, 1999)",
"ref_id": "BIBREF28"
},
{
"start": 188,
"end": 224,
"text": "(Barsalou and Wiemer-Hastings, 2005)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "However, abstract concepts are overall more likely to yield noisier image sets. Thus, one way to improve results would be to take a multi-modal approach, where we also include linguistic information, if available, especially for abstract concepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We have presented a novel approach to bilingual lexicon induction that uses convolutional neural network-derived visual features. Using only such visual features, we outperform existing visual and orthographic systems, and even a state-of-the-art linguistic approach for one language, on standard bilingual lexicon induction tasks. In doing so, we have shed new light on which visual similarity metric to use for similarity or relatedness tasks, and have experimented with using multiple layers from a CNN. The beauty of the current approach is that it is completely language agnostic and closely mirrors how humans would perform bilingual lexicon induction: by referring to the external world.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "http://people.cs.kuleuven.be/\u02dcivan.vulic/software/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "DK is supported by EPSRC grant EP/I037512/1. IV is supported by the PARIS project (IWT-SBO 110067) and the PDM Kort postdoctoral fellowship from KU Leuven. SC is supported by ERC Starting Grant DisCoTex (306920) and EPSRC grant EP/I037512/1. We thank Marco Baroni for useful feedback and the anonymous reviewers for their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A study on similarity and relatedness using distributional and WordNet-based approaches",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Alfonseca",
"suffix": ""
},
{
"first": "Keith",
"middle": [
"B"
],
"last": "Hall",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Kravalova",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Pasca",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Soroa",
"suffix": ""
}
],
"year": 2009,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "19--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Enrique Alfonseca, Keith B. Hall, Jana Kravalova, Marius Pasca, and Aitor Soroa. 2009. A study on similarity and relatedness using distribu- tional and WordNet-based approaches. In NAACL, pages 19-27.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Situating abstract concepts",
"authors": [
{
"first": "Lawrence",
"middle": [
"W"
],
"last": "Barsalou",
"suffix": ""
},
{
"first": "Katja",
"middle": [],
"last": "Wiemer-Hastings",
"suffix": ""
}
],
"year": 2005,
"venue": "Grounding cognition: The role of perception and action in memory, language, and thought",
"volume": "",
"issue": "",
"pages": "129--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence W. Barsalou and Katja Wiemer-Hastings. 2005. Situating abstract concepts. In Grounding cognition: The role of perception and action in mem- ory, language, and thought, pages 129-163.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Grounded cognition",
"authors": [
{
"first": "Lawrence",
"middle": [
"W"
],
"last": "Barsalou",
"suffix": ""
}
],
"year": 2008,
"venue": "Annual Review of Psychology",
"volume": "59",
"issue": "1",
"pages": "617--645",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence W. Barsalou. 2008. Grounded cognition. Annual Review of Psychology, 59(1):617-645.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Computer Vision and Image Understanding",
"authors": [
{
"first": "Herbert",
"middle": [],
"last": "Bay",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Ess",
"suffix": ""
},
{
"first": "Tinne",
"middle": [],
"last": "Tuytelaars",
"suffix": ""
},
{
"first": "Luc",
"middle": [
"J"
],
"last": "Van Gool",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "110",
"issue": "",
"pages": "346--359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herbert Bay, Andreas Ess, Tinne Tuytelaars, and Luc J. Van Gool. 2008. Speeded-up robust features (SURF). Computer Vision and Image Understand- ing, 110(3):346-359.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Using visual information to predict lexical preference",
"authors": [
{
"first": "Shane",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "Randy",
"middle": [],
"last": "Goebel",
"suffix": ""
}
],
"year": 2011,
"venue": "RANLP",
"volume": "",
"issue": "",
"pages": "399--405",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shane Bergsma and Randy Goebel. 2011. Using vi- sual information to predict lexical preference. In RANLP, pages 399-405.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning bilingual lexicons using the visual similarity of labeled web images",
"authors": [
{
"first": "Shane",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2011,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "1764--1769",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shane Bergsma and Benjamin Van Durme. 2011. Learning bilingual lexicons using the visual similar- ity of labeled web images. In IJCAI, pages 1764- 1769.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Image classification using random forests and ferns",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Bosch",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Mu\u00f1oz",
"suffix": ""
}
],
"year": 2007,
"venue": "ICCV",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Bosch, Andrew Zisserman, and Xavier Mu\u00f1oz. 2007. Image classification using random forests and ferns. In ICCV, pages 1-8.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multimodal distributional semantics",
"authors": [
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Nam-Khanh",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Artifical Intelligence Research",
"volume": "49",
"issue": "",
"pages": "1--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Artifical Intelligence Research, 49:1-47.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Histograms of oriented gradients for human detection",
"authors": [
{
"first": "Navneet",
"middle": [],
"last": "Dalal",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Triggs",
"suffix": ""
}
],
"year": 2005,
"venue": "CVPR",
"volume": "",
"issue": "",
"pages": "886--893",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Navneet Dalal and Bill Triggs. 2005. Histograms of oriented gradients for human detection. In CVPR, pages 886-893.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "ImageNet: A large-scale hierarchical image database",
"authors": [
{
"first": "Jia",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Li-Jia",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Fei-Fei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2009,
"venue": "CVPR",
"volume": "",
"issue": "",
"pages": "248--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. 2009. ImageNet: A large-scale hierarchical image database. In CVPR, pages 248- 255.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Features for image retrieval: An experimental comparison",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Deselaers",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Keysers",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2008,
"venue": "Information Retrieval",
"volume": "11",
"issue": "2",
"pages": "77--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Deselaers, Daniel Keysers, and Hermann Ney. 2008. Features for image retrieval: An experimental comparison. Information Retrieval, 11(2):77-107.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "TDNNextracted features",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Driancourt",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
}
],
"year": 1990,
"venue": "Neuro Nimes 90",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Driancourt and L\u00e9on Bottou. 1990. TDNN- extracted features. In Neuro Nimes 90.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Visual information in semantic representation",
"authors": [
{
"first": "Yansong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2010,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "91--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yansong Feng and Mirella Lapata. 2010. Visual infor- mation in semantic representation. In NAACL, pages 91-99.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Learning object categories from Google's image search",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Fergus",
"suffix": ""
},
{
"first": "Fei-Fei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Perona",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2005,
"venue": "ICCV",
"volume": "",
"issue": "",
"pages": "1816--1823",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Fergus, Fei-Fei Li, Pietro Perona, and Andrew Zisserman. 2005. Learning object categories from Google's image search. In ICCV, pages 1816-1823.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Placing Search in Context: The Concept Revisited",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Finkelstein",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Yossi",
"middle": [],
"last": "Matias",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Rivlin",
"suffix": ""
},
{
"first": "Zach",
"middle": [],
"last": "Solan",
"suffix": ""
},
{
"first": "Gadi",
"middle": [],
"last": "Wolfman",
"suffix": ""
},
{
"first": "Eytan",
"middle": [],
"last": "Ruppin",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM Transactions on Information Systems",
"volume": "20",
"issue": "1",
"pages": "116--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2002. Placing Search in Context: The Concept Revisited. ACM Transactions on Informa- tion Systems, 20(1):116-131.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Devise: A deep visualsemantic embedding model",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Frome",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2013,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "2121--2129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Frome, Gregory S. Corrado, Jonathon Shlens, Samy Bengio, Jeffrey Dean, Marc'Aurelio Ranzato, and Tomas Mikolov. 2013. Devise: A deep visual- semantic embedding model. In NIPS, pages 2121- 2129.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "An IR approach for translating new words from nonparallel, comparable texts",
"authors": [
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "Yee",
"middle": [],
"last": "Lo Yuen",
"suffix": ""
}
],
"year": 1998,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "414--420",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascale Fung and Lo Yuen Yee. 1998. An IR approach for translating new words from nonparallel, compa- rable texts. In ACL, pages 414-420.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A geometric view on bilingual lexicon extraction from comparable corpora",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Gaussier",
"suffix": ""
},
{
"first": "Jean-Michel",
"middle": [],
"last": "Renders",
"suffix": ""
},
{
"first": "Irina",
"middle": [],
"last": "Matveeva",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Goutte",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "D\u00e9jean",
"suffix": ""
}
],
"year": 2004,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "526--533",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Gaussier, Jean-Michel Renders, Irina Matveeva, Cyril Goutte, and Herv\u00e9 D\u00e9jean. 2004. A geometric view on bilingual lexicon extraction from compara- ble corpora. In ACL, pages 526-533.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Rich feature hierarchies for accurate object detection and semantic segmentation",
"authors": [
{
"first": "Ross",
"middle": [],
"last": "Girshick",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Donahue",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
},
{
"first": "Jitendra",
"middle": [],
"last": "Malik",
"suffix": ""
}
],
"year": 2014,
"venue": "CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ross Girshick, Jeff Donahue, Trevor Darrell, and Jiten- dra Malik. 2014. Rich feature hierarchies for accu- rate object detection and semantic segmentation. In CVPR.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning bilingual lexicons from monolingual corpora",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2008,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "771--779",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In ACL, pages 771-779.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Learning abstract concept embeddings from multi-modal data: Since you probably can't see what I mean",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "255--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill and Anna Korhonen. 2014. Learning ab- stract concept embeddings from multi-modal data: Since you probably can't see what I mean. In EMNLP, pages 255-265.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "SimLex-999: Evaluating semantic models with (genuine) similarity estimation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2014. SimLex-999: Evaluating semantic mod- els with (genuine) similarity estimation. CoRR, abs/1408.3456.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Caffe: Convolutional architecture for fast feature embedding",
"authors": [
{
"first": "Yangqing",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Shelhamer",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Donahue",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Karayev",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Long",
"suffix": ""
},
{
"first": "Ross",
"middle": [
"B"
],
"last": "Girshick",
"suffix": ""
},
{
"first": "Sergio",
"middle": [],
"last": "Guadarrama",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
}
],
"year": 2014,
"venue": "ACM Multimedia",
"volume": "",
"issue": "",
"pages": "675--678",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross B. Girshick, Sergio Guadarrama, and Trevor Darrell. 2014. Caffe: Con- volutional architecture for fast feature embedding. In ACM Multimedia, pages 675-678.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning image embeddings using convolutional neural networks for improved multi-modal semantics",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "36--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela and L\u00e9on Bottou. 2014. Learning image embeddings using convolutional neural networks for improved multi-modal semantics. In EMNLP, pages 36-45.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Improving multi-modal representations using image dispersion: Why less is sometimes more",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "835--841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela, Felix Hill, Anna Korhonen, and Stephen Clark. 2014. Improving multi-modal representa- tions using image dispersion: Why less is sometimes more. In ACL, pages 835-841.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Multimodal neural language models",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"S"
],
"last": "Zemel",
"suffix": ""
}
],
"year": 2014,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "595--603",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Kiros, Ruslan Salakhutdinov, and Richard S. Zemel. 2014. Multimodal neural language models. In ICML, pages 595-603.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Learning a translation lexicon from monolingual corpora",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2002,
"venue": "ULA'02 Workshop",
"volume": "",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Kevin Knight. 2002. Learning a translation lexicon from monolingual corpora. In ULA'02 Workshop, pages 9-16.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "ImageNet classification with deep convolutional neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "",
"suffix": ""
}
],
"year": 2012,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "1106--1114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hin- ton. 2012. ImageNet classification with deep con- volutional neural networks. In NIPS, pages 1106- 1114.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Philosophy in the flesh: The embodied mind and its challenge to Western thought",
"authors": [
{
"first": "George",
"middle": [],
"last": "Lakoff",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Lakoff and Mark Johnson. 1999. Philosophy in the flesh: The embodied mind and its challenge to Western thought.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Cross-lingual relevance models",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Lavrenko",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Choquette",
"suffix": ""
},
{
"first": "W",
"middle": [
"Bruce"
],
"last": "Croft",
"suffix": ""
}
],
"year": 2002,
"venue": "SIGIR",
"volume": "",
"issue": "",
"pages": "175--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Lavrenko, Martin Choquette, and W. Bruce Croft. 2002. Cross-lingual relevance models. In SIGIR, pages 175-182.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Is this a wampimuk? Cross-modal mapping between distributional semantics and the visual world",
"authors": [
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "1403--1414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angeliki Lazaridou, Elia Bruni, and Marco Baroni. 2014. Is this a wampimuk? Cross-modal mapping between distributional semantics and the visual world. In ACL, pages 1403-1414.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Going beyond text: A hybrid image-text approach for measuring word relatedness",
"authors": [
{
"first": "Chee Wee",
"middle": [],
"last": "Leong",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2011,
"venue": "IJCNLP",
"volume": "",
"issue": "",
"pages": "1403--1407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chee Wee Leong and Rada Mihalcea. 2011. Going beyond text: A hybrid image-text approach for measuring word relatedness. In IJCNLP, pages 1403-1407.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Dictionary-based techniques for cross-language information retrieval",
"authors": [
{
"first": "Gina-Anne",
"middle": [],
"last": "Levow",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Oard",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2005,
"venue": "Information Processing & Management",
"volume": "41",
"issue": "",
"pages": "523--547",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gina-Anne Levow, Douglas Oard, and Philip Resnik. 2005. Dictionary-based techniques for cross-language information retrieval. Information Processing & Management, 41:523-547.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "New tools for Web-scale N-grams",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [
"Ward"
],
"last": "Church",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Shane",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "Kailash",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Lathbury",
"suffix": ""
},
{
"first": "Vikram",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Kapil",
"middle": [],
"last": "Dalwani",
"suffix": ""
},
{
"first": "Sushant",
"middle": [],
"last": "Narsale",
"suffix": ""
}
],
"year": 2010,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "2221--2227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin, Kenneth Ward Church, Heng Ji, Satoshi Sekine, David Yarowsky, Shane Bergsma, Kailash Patil, Emily Pitler, Rachel Lathbury, Vikram Rao, Kapil Dalwani, and Sushant Narsale. 2010. New tools for Web-scale N-grams. In LREC, pages 2221-2227.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Topic models + word alignment = A flexible framework for extracting bilingual dictionary from comparable corpus",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2013,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "212--221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodong Liu, Kevin Duh, and Yuji Matsumoto. 2013. Topic models + word alignment = A flexible framework for extracting bilingual dictionary from comparable corpus. In CoNLL, pages 212-221.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Symbol interdependency in symbolic and embodied cognition",
"authors": [
{
"first": "Max",
"middle": [
"M"
],
"last": "Louwerse",
"suffix": ""
}
],
"year": 2008,
"venue": "Topics in Cognitive Science",
"volume": "59",
"issue": "1",
"pages": "617--645",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Max M. Louwerse. 2008. Symbol interdependency in symbolic and embodied cognition. Topics in Cognitive Science, 59(1):617-645.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Distinctive image features from scale-invariant keypoints",
"authors": [
{
"first": "David",
"middle": [
"G"
],
"last": "Lowe",
"suffix": ""
}
],
"year": 2004,
"venue": "International Journal of Computer Vision",
"volume": "60",
"issue": "2",
"pages": "91--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David G. Lowe. 2004. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Polylingual topic models",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Mimno",
"suffix": ""
},
{
"first": "Hanna",
"middle": [
"M"
],
"last": "Wallach",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2009,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "880--889",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Mimno, Hanna M. Wallach, Jason Naradowsky, David A. Smith, and Andrew McCallum. 2009. Polylingual topic models. In EMNLP, pages 880-889.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Rectified linear units improve restricted Boltzmann machines",
"authors": [
{
"first": "Vinod",
"middle": [],
"last": "Nair",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2010,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "807--814",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted Boltzmann machines. In ICML, pages 807-814.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "A guided tour to approximate string matching",
"authors": [
{
"first": "Gonzalo",
"middle": [],
"last": "Navarro",
"suffix": ""
}
],
"year": 2001,
"venue": "ACM Computing Surveys",
"volume": "33",
"issue": "1",
"pages": "31--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gonzalo Navarro. 2001. A guided tour to approximate string matching. ACM Computing Surveys, 33(1):31-88.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Learning and transferring mid-level image representations using convolutional neural networks",
"authors": [
{
"first": "Maxime",
"middle": [],
"last": "Oquab",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Laptev",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Sivic",
"suffix": ""
}
],
"year": 2014,
"venue": "CVPR",
"volume": "",
"issue": "",
"pages": "1717--1724",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maxime Oquab, L\u00e9on Bottou, Ivan Laptev, and Josef Sivic. 2014. Learning and transferring mid-level image representations using convolutional neural networks. In CVPR, pages 1717-1724.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Automatic identification of word translations from unrelated English and German corpora",
"authors": [
{
"first": "Reinhard",
"middle": [],
"last": "Rapp",
"suffix": ""
}
],
"year": 1999,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reinhard Rapp. 1999. Automatic identification of word translations from unrelated English and German corpora. In ACL.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "CNN features off-the-shelf: an astounding baseline for recognition",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "Sharif Razavian",
"suffix": ""
},
{
"first": "Hossein",
"middle": [],
"last": "Azizpour",
"suffix": ""
},
{
"first": "Josephine",
"middle": [],
"last": "Sullivan",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Carlsson",
"suffix": ""
}
],
"year": 2014,
"venue": "CoRR",
"volume": "abs/1403.6382",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. 2014. CNN features off-the-shelf: an astounding baseline for recognition. CoRR, abs/1403.6382.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "A multimodal LDA model integrating textual, cognitive and visual modalities",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1146--1157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Roller and Sabine Schulte im Walde. 2013. A multimodal LDA model integrating textual, cognitive and visual modalities. In EMNLP, pages 1146-1157.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Inducing translation lexicons via diverse similarity measures and bridge languages",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Schafer",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2002,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Schafer and David Yarowsky. 2002. Inducing translation lexicons via diverse similarity measures and bridge languages. In CoNLL, pages 1-7.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Learning semantic representations using convolutional neural networks for Web search",
"authors": [
{
"first": "Yelong",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Mesnil",
"suffix": ""
}
],
"year": 2014,
"venue": "WWW",
"volume": "",
"issue": "",
"pages": "373--374",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Gr\u00e9goire Mesnil. 2014. Learning semantic representations using convolutional neural networks for Web search. In WWW, pages 373-374.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Bilingual lexicon generation using non-aligned signatures",
"authors": [
{
"first": "Daphna",
"middle": [],
"last": "Shezaf",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2010,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "98--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daphna Shezaf and Ari Rappoport. 2010. Bilingual lexicon generation using non-aligned signatures. In ACL, pages 98-107.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Grounded models of semantic representation",
"authors": [
{
"first": "Carina",
"middle": [],
"last": "Silberer",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2012,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1423--1433",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carina Silberer and Mirella Lapata. 2012. Grounded models of semantic representation. In EMNLP, pages 1423-1433.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Learning grounded meaning representations with autoencoders",
"authors": [
{
"first": "Carina",
"middle": [],
"last": "Silberer",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "721--732",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carina Silberer and Mirella Lapata. 2014. Learning grounded meaning representations with autoencoders. In ACL, pages 721-732.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Models of semantic representation with visual attributes",
"authors": [
{
"first": "Carina",
"middle": [],
"last": "Silberer",
"suffix": ""
},
{
"first": "Vittorio",
"middle": [],
"last": "Ferrari",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "572--582",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carina Silberer, Vittorio Ferrari, and Mirella Lapata. 2013. Models of semantic representation with visual attributes. In ACL, pages 572-582.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Video Google: A text retrieval approach to object matching in videos",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Sivic",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2003,
"venue": "ICCV",
"volume": "",
"issue": "",
"pages": "1470--1477",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josef Sivic and Andrew Zisserman. 2003. Video Google: A text retrieval approach to object matching in videos. In ICCV, pages 1470-1477.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Grounded compositional semantics for finding and describing images with sentences",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Andrej",
"middle": [],
"last": "Karpathy",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of ACL",
"volume": "2",
"issue": "",
"pages": "207--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Andrej Karpathy, Quoc V. Le, Christopher D. Manning, and Andrew Y. Ng. 2014. Grounded compositional semantics for finding and describing images with sentences. Transactions of ACL, 2:207-218.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Multimodal learning with deep Boltzmann machines",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "2949--2980",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava and Ruslan Salakhutdinov. 2014. Multimodal learning with deep Boltzmann machines. Journal of Machine Learning Research, 15(1):2949-2980.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Bilingual lexicon extraction from comparable corpora using label propagation",
"authors": [
{
"first": "Akihiro",
"middle": [],
"last": "Tamura",
"suffix": ""
},
{
"first": "Taro",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2012,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "24--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akihiro Tamura, Taro Watanabe, and Eiichiro Sumita. 2012. Bilingual lexicon extraction from comparable corpora using label propagation. In EMNLP, pages 24-36.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Labeling images with a computer game",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "von Ahn",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Dabbish",
"suffix": ""
}
],
"year": 2004,
"venue": "CHI",
"volume": "",
"issue": "",
"pages": "319--326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis von Ahn and Laura Dabbish. 2004. Labeling images with a computer game. In CHI, pages 319-326.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Cross-lingual semantic similarity of words as the similarity of their semantic word responses",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2013,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "106--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107 and Marie-Francine Moens. 2013a. Cross-lingual semantic similarity of words as the similarity of their semantic word responses. In NAACL, pages 106-116.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "A study on bootstrapping bilingual vector spaces from nonparallel data (and nothing else)",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1613--1624",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107 and Marie-Francine Moens. 2013b. A study on bootstrapping bilingual vector spaces from nonparallel data (and nothing else). In EMNLP, pages 1613-1624.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Identifying word translations from comparable corpora using latent topic models",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Wim",
"middle": [
"De"
],
"last": "Smet",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "479--484",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107, Wim De Smet, and Marie-Francine Moens. 2011. Identifying word translations from comparable corpora using latent topic models. In ACL, pages 479-484.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Online multimodal deep similarity learning with application to image retrieval",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"C",
"H"
],
"last": "Hoi",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Peilin",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Dayong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chunyan",
"middle": [],
"last": "Miao",
"suffix": ""
}
],
"year": 2013,
"venue": "ACM Multimedia",
"volume": "",
"issue": "",
"pages": "153--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengcheng Wu, Steven C. H. Hoi, Hao Xia, Peilin Zhao, Dayong Wang, and Chunyan Miao. 2013. Online multimodal deep similarity learning with application to image retrieval. In ACM Multimedia, pages 153-162.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "How transferable are features in deep neural networks?",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Yosinski",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Clune",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Hod",
"middle": [],
"last": "Lipson",
"suffix": ""
}
],
"year": 2014,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "3320--3328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. 2014. How transferable are features in deep neural networks? In NIPS, pages 3320-3328.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Example images for the languages in the Bergsma and Van Durme dataset."
},
"TABREF1": {
"num": null,
"type_str": "table",
"text": "",
"html": null,
"content": "<table/>"
},
"TABREF3": {
"num": null,
"type_str": "table",
"text": "Performance on VULIC1000 compared to the linguistic bootstrapping method of Vuli\u0107 and Moens (2013b).",
"html": null,
"content": "<table><tr><td>Method</td><td colspan=\"2\">MEN SimLex-999</td></tr><tr><td>CNN-AVGMAX</td><td>0.56</td><td>0.34</td></tr><tr><td>CNN-MAXMAX</td><td>0.55</td><td>0.36</td></tr><tr><td>CNN-MEAN</td><td>0.61</td><td>0.32</td></tr><tr><td>CNN-MAX</td><td>0.60</td><td>0.27</td></tr></table>"
},
"TABREF4": {
"num": null,
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td>: Spearman \u03c1 s correlation for the visual</td></tr><tr><td>similarity metrics on a relatedness (MEN) and a</td></tr><tr><td>genuine similarity (SimLex-999) dataset.</td></tr><tr><td>aggregated visual representation-based metrics of</td></tr><tr><td>CNN-MEAN and CNN-MAX, despite the fact</td></tr><tr><td>that</td></tr></table>"
},
"TABREF7": {
"num": null,
"type_str": "table",
"text": "Average image dispersion for the datasets, by language.",
"html": null,
"content": "<table/>"
}
}
}
}