{
"paper_id": "N10-1011",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:50:45.416567Z"
},
"title": "Visual Information in Semantic Representation",
"authors": [
{
"first": "Yansong",
"middle": [],
"last": "Feng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"addrLine": "10 Crichton Street",
"postCode": "EH8 9AB",
"settlement": "Edinburgh",
"country": "UK"
}
},
"email": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"addrLine": "10 Crichton Street",
"postCode": "EH8 9AB",
"settlement": "Edinburgh",
"country": "UK"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The question of how meaning might be acquired by young children and represented by adult speakers of a language is one of the most debated topics in cognitive science. Existing semantic representation models are primarily amodal based on information provided by the linguistic input despite ample evidence indicating that the cognitive system is also sensitive to perceptual information. In this work we exploit the vast resource of images and associated documents available on the web and develop a model of multimodal meaning representation which is based on the linguistic and visual context. Experimental results show that a closer correspondence to human data can be obtained by taking the visual modality into account.",
"pdf_parse": {
"paper_id": "N10-1011",
"_pdf_hash": "",
"abstract": [
{
"text": "The question of how meaning might be acquired by young children and represented by adult speakers of a language is one of the most debated topics in cognitive science. Existing semantic representation models are primarily amodal based on information provided by the linguistic input despite ample evidence indicating that the cognitive system is also sensitive to perceptual information. In this work we exploit the vast resource of images and associated documents available on the web and develop a model of multimodal meaning representation which is based on the linguistic and visual context. Experimental results show that a closer correspondence to human data can be obtained by taking the visual modality into account.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The representation and modeling of word meaning has been a central problem in cognitive science and natural language processing. Both disciplines are concerned with how semantic knowledge is acquired, organized, and ultimately used in language processing and understanding. A popular tradition of studying semantic representation has been driven by the assumption that word meaning can be learned from the linguistic environment. Words that are similar in meaning tend to behave similarly in terms of their distributions across different contexts. Semantic space models, among which Latent Semantic Analysis (LSA, Landauer and Dumais 1997) is perhaps known best, operationalize this idea by capturing word meaning quantitatively in terms of simple co-occurrence statistics. Each word w is represented by a k element vector reflecting the local distributional context of w relative to k context words. More recently, topic models have been gaining ground as a more structured representation of word meaning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In contrast to more standard semantic space models where word senses are conflated into a single representation, topic models assume that words observed in a corpus manifest some latent structureword meaning is a probability distribution over a set of topics (corresponding to coarse-grained senses). Each topic is a probability distribution over words, and the content of the topic is reflected in the words to which it assigns high probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Semantic space (and topic) models are extracted from real language corpora, and thus provide a direct means of investigating the influence of the statistics of language on semantic representation. They have been successful in explaining a wide range of behavioral data -examples include lexical priming, deep dyslexia, text comprehension, synonym selection, and human similarity judgments (see Landauer and Dumais 1997 and the references therein) . They also underlie a large number of natural language processing (NLP) tasks including lexicon acquisition, word sense discrimination, text segmentation and notably information retrieval. Despite their popularity, these models offer a somewhat impoverished representation of word meaning based solely on information provided by the linguistic input.",
"cite_spans": [
{
"start": 394,
"end": 446,
"text": "Landauer and Dumais 1997 and the references therein)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many experimental studies in language acquisition suggest that word meaning arises not only from exposure to the linguistic environment but also from our interaction with the physical world. For example, infants are from an early age able to form perceptually-based category representations (Quinn et al., 1993) . Perhaps unsurprisingly, words that refer to concrete entities and actions are among the first words being learned as these are directly observable in the environment (Bornstein et al., 2004) . Experimental evidence also shows that children respond to categories on the basis of visual features, e.g., they generalize object names to new objects often on the basis of similarity in shape (Landau et al., 1998) and texture (Jones et al., 1991) .",
"cite_spans": [
{
"start": 291,
"end": 311,
"text": "(Quinn et al., 1993)",
"ref_id": "BIBREF21"
},
{
"start": 480,
"end": 504,
"text": "(Bornstein et al., 2004)",
"ref_id": "BIBREF4"
},
{
"start": 701,
"end": 722,
"text": "(Landau et al., 1998)",
"ref_id": "BIBREF13"
},
{
"start": 735,
"end": 755,
"text": "(Jones et al., 1991)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we aim to develop a unified mod-eling framework of word meaning that captures the mutual dependence between the linguistic and visual context. This is a challenging task for at least two reasons. First, in order to emulate the environment within which word meanings are acquired, we must have recourse to a corpus of verbal descriptions and their associated images. Such corpora are in short supply compared to the large volumes of solely textual data. Secondly, our model should integrate linguistic and visual information in a single representation. It is unlikely that we have separate representations for different aspects of word meaning (Rogers et al., 2004) . We meet the first challenge by exploiting multimodal corpora, namely collections of documents that contain pictures. Although large scale corpora with a one-to-one correspondence between words and images are difficult to come by, datasets that contain images and text are ubiquitous. For example, online news documents are often accompanied by pictures. Using this data, we develop a model that combines textual and visual information to learn semantic representations. We assume that images and their surrounding text have been generated by a shared set of latent variables or topics. Our model follows the general rationale of topic models -it is based upon the idea that documents are mixtures of topics. Importantly, our topics are inferred from the joint distribution of textual and visual words. Our experimental results show that a closer correspondence to human word similarity and association can be obtained by taking the visual modality into account.",
"cite_spans": [
{
"start": 657,
"end": 678,
"text": "(Rogers et al., 2004)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The bulk of previous work has focused on models of semantic representation that are based solely on textual data. Many of these models represent words as vectors in a high-dimensional space (e.g., Landauer and Dumais 1997), whereas probabilistic alternatives view documents as mixtures of topics, where words are represented according to their likelihood in each topic (e.g., Steyvers and Griffiths 2007) . Both approaches allow for the estimation of similarity between words. Spatial models compare words using distance metrics (e.g., cosine), while probabilistic models measure similarity between terms according to the degree to which they share the same topic distributions.",
"cite_spans": [
{
"start": 376,
"end": 404,
"text": "Steyvers and Griffiths 2007)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Within cognitive science, the problem of how words are grounded in perceptual representations has attracted some attention. Previous modeling efforts have been relatively small-scale, using either artificial images, or data gathered from a few subjects in the lab. Furthermore, the proposed models work well for the tasks at hand (e.g., either word learning or object categorization) but are not designed as a general-purpose meaning representation. For example, Yu (2005) integrates visual information in a computational model of lexical acquisition and object categorization. The model learns a mapping between words and visual features from data provided by (four) subjects reading a children's story. In a similar vein, Roy (2002) considers the problem of learning which words or word sequences refer to objects in a synthetic image consisting of ten rectangles. Andrews et al. (2009) present a probabilistic model that incorporates perceptual information (indirectly) by combining distributional information gathered from corpus data with speaker generated feature norms 1 (which are also word-based). Much work in computer vision attempts to learn the underlying connections between visual features and words from examples of images annotated with description keywords. The aim here is to enhance image-based applications (e.g., search or retrieval) by developing models that can label images with keywords automatically. Most methods discover the correlations between visual features and words by introducing latent variables. Standard latent semantic analysis (LSA) and its probabilistic variant (PLSA) have been applied to this task (Pan et al., 2004; Hofmann, 2001; Monay and Gatica-Perez, 2007) . More sophisticated approaches estimate the joint distribution of words and regional image features, whilst treating annotation as a problem of statistical inference in a graphical model Barnard et al., 2002) .",
"cite_spans": [
{
"start": 463,
"end": 472,
"text": "Yu (2005)",
"ref_id": "BIBREF29"
},
{
"start": 724,
"end": 734,
"text": "Roy (2002)",
"ref_id": "BIBREF23"
},
{
"start": 867,
"end": 888,
"text": "Andrews et al. (2009)",
"ref_id": "BIBREF0"
},
{
"start": 1642,
"end": 1660,
"text": "(Pan et al., 2004;",
"ref_id": "BIBREF20"
},
{
"start": 1661,
"end": 1675,
"text": "Hofmann, 2001;",
"ref_id": "BIBREF11"
},
{
"start": 1676,
"end": 1705,
"text": "Monay and Gatica-Perez, 2007)",
"ref_id": "BIBREF18"
},
{
"start": 1894,
"end": 1915,
"text": "Barnard et al., 2002)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our own work aims to develop a model of semantic representation that takes visual context into account. We do not model explicitly the correspondence of words and visual features, or learn a mapping between words and visual features. Rather, we develop a multimodal representation of meaning which is based on visual information and distributional statistics. We hypothesize that visual features are crucial in acquiring and representing meaning Michelle Obama fever hits the UK In the UK on her first visit as first lady, Michelle Obama seems to be making just as big an impact. She has attracted as much interest and column inches as her husband on this London trip; creating a buzz with her dazzling outfits, her own schedule of events and her own fanbase. Outside Buckingham Palace, as crowds gathered in anticipation of the Obamas' arrival, Mrs Obama's star appeal was apparent. and conversely, that linguistic information can be useful in isolating salient visual features. Our model extracts a semantic representation from large document collections and their associated images without any human involvement. Contrary to Andrews et al. (2009) we use visual features directly without relying on speaker generated norms. Furthermore, unlike most work in image annotation, we do not employ any goldstandard data where images have been manually labeled with their description keywords.",
"cite_spans": [
{
"start": 1128,
"end": 1149,
"text": "Andrews et al. (2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Much like LSA and the related topic models our model creates semantic representations from large document collections. Importantly, we assume that the documents are paired with images which in turn describe some of the document's content. Our experiments make use of news articles which are often accompanied with images illustrating events, objects or people mentioned in the text. Other datasets with similar properties include Wikipedia entries and their accompanying pictures, illustrated stories, and consumer photo collections. An example news article and its associated image is shown in Table 1 (we provide more detail on the database we used in our experiments in Section 4).",
"cite_spans": [],
"ref_spans": [
{
"start": 595,
"end": 602,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Semantic Representation Model",
"sec_num": "3"
},
{
"text": "Our model exploits the redundancy inherent in this multimodal collection. Specifically, we assume that the images and their surrounding text have been generated by a shared set of topics. A potential stumbling block here is the fact that images and documents represent distinct modalities: images are commonly described by a continuous feature space (e.g., color, shape, texture; Barnard et al. 2002; , whereas words are discrete. Fortunately, we can convert the visual features from a continuous onto a discrete space, thereby rendering image features more like word units. In the following we describe how we do this and then move on to present an extension of Latent Dirichlet Allocation (LDA, ), a topic model that can be used to represent meaning as a probability distribution over a set of multimodal topics. Finally, we discuss how word similarity can be measured under this model.",
"cite_spans": [
{
"start": 380,
"end": 400,
"text": "Barnard et al. 2002;",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Representation Model",
"sec_num": "3"
},
{
"text": "A large number of image processing techniques have been developed in computer vision for extracting meaningful features which are subsequently used in a modeling task. For example, a common first step to all automatic image annotation methods is partitioning the image into regions, using either an image segmentation algorithm (such as normalized cuts; Shi and Malik 2000) or a fixed-grid layout (Feng et al., 2004) . In the first case the image is represented by irregular regions (see Figure 1 (a)), whereas in the second case the image is partitioned into smaller scale regions which are uniformly extracted from a fixed grid (see Figure 1(b) ). The obtained regions are further represented by a standard set of features including color, shape, and texture. These can be treated as continuous vectors or in quantized form (Barnard et al., 2002) .",
"cite_spans": [
{
"start": 397,
"end": 416,
"text": "(Feng et al., 2004)",
"ref_id": "BIBREF7"
},
{
"start": 826,
"end": 848,
"text": "(Barnard et al., 2002)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 488,
"end": 496,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 635,
"end": 646,
"text": "Figure 1(b)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Image Processing",
"sec_num": "3.1"
},
{
"text": "Despite much progress in image segmentation, there is currently no automatic algorithm that can reliably divide an image into meaningful parts. Extracting features from small local regions is thus preferable, especially for image collections that are diverse and have low resolution (this is often the case for news images). In our work we identify local regions using a difference-of-Gaussians point detector (see Figure 1 (11 \u00d7 13) regions, whereas an average of 240 points (depending on the image content) are detected. A non-sparse feature representation is critical in our case, since we usually do not have more than one image per document. We compute local image descriptors using the the Scale Invariant Feature Transform (SIFT) algorithm (Lowe, 1999) . Importantly, SIFT descriptors are designed to be invariant to small shifts in position, changes in illumination, noise, and viewpoint and can be used to perform reliable matching between different views of an object or scene (Mikolajczyk and Schmid, 2003; Lowe, 1999) . We further quantize the SIFT descriptors using the K-means clustering algorithm to obtain a discrete set of visual terms (visiterms) which form our visual vocabulary Voc V . Each entry in this vocabulary stands for a group of image regions which are similar in content or appearance and assumed to originate from similar objects. More formally, each image I is expressed in a bag-of-words format vector,",
"cite_spans": [
{
"start": 747,
"end": 759,
"text": "(Lowe, 1999)",
"ref_id": "BIBREF15"
},
{
"start": 987,
"end": 1017,
"text": "(Mikolajczyk and Schmid, 2003;",
"ref_id": "BIBREF17"
},
{
"start": 1018,
"end": 1029,
"text": "Lowe, 1999)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 415,
"end": 423,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Image Processing",
"sec_num": "3.1"
},
{
"text": "[v 1 , v 2 , ..., v L ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image Processing",
"sec_num": "3.1"
},
{
"text": ", where v i = n only if I has n regions labeled with v i . Since both images and documents in our corpus are now represented as bags-of-words, and since we assume that the visual and textual modalities express the same content, we can go a step further and represent the document and its associated image as a mixture of verbal and visual words d Mix . We will then learn a topic model on this concatenated representation of visual and textual information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image Processing",
"sec_num": "3.1"
},
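The quantization pipeline above (cluster local descriptors with K-means, label each region with its nearest centroid, count labels into a bag-of-visiterms, and concatenate it with the textual bag-of-words to form d_Mix) can be sketched as follows. This is a minimal illustration with random vectors standing in for SIFT descriptors and a toy five-entry visual vocabulary; the function names and sizes are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def quantize_descriptors(descriptors, centroids):
    """Label each local descriptor with its nearest centroid (its visiterm)."""
    dists = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

def bag_of_visiterms(descriptors, centroids):
    """Histogram over the visual vocabulary: entry i counts regions labeled i."""
    labels = quantize_descriptors(descriptors, centroids)
    return np.bincount(labels, minlength=len(centroids))

rng = np.random.default_rng(0)
centroids = rng.normal(size=(5, 128))      # toy visual vocabulary Voc_V (5 visiterms)
descriptors = rng.normal(size=(240, 128))  # ~240 SIFT-like descriptors for one image
v = bag_of_visiterms(descriptors, centroids)

# d_Mix: the document's textual word counts concatenated with its visiterm counts
text_counts = np.array([3, 0, 1])          # counts over a tiny textual vocabulary
d_mix = np.concatenate([text_counts, v])
assert v.sum() == 240 and d_mix.shape == (8,)
```

In practice the centroids would come from running K-means over descriptors pooled from the whole collection, with the vocabulary size tuned experimentally as described in Section 4.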
{
"text": "Latent Dirichlet Allocation Griffiths et al., 2007 ) is a probabilistic model of text gen-eration. LDA models each document using a mixture over K topics, which are in turn characterized as distributions over words. The words in the document are generated by repeatedly sampling a topic according to the topic distribution, and selecting a word given the chosen topic. Under this framework, the problem of meaning representation is expressed as one of statistical inference: given some datatextual and visual words -infer the latent structure from which it was generated. Word meaning is thus modeled as a probability distribution over a set of latent multimodal topics.",
"cite_spans": [
{
"start": 28,
"end": 50,
"text": "Griffiths et al., 2007",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Model",
"sec_num": "3.2"
},
{
"text": "LDA can be represented as a three level hierarchical Bayesian model. Given a corpus consisting of M documents, the generative process for a document d is as follows. We first draw the mixing proportion over topics \u03b8 d from a Dirichlet prior with parameters \u03b1. Next, for each of the N d words w dn in document d, a topic z dn is first drawn from a multinomial distribution with parameters \u03b8 dn . The probability of a word token w taking on value i given that topic z = j is parametrized using a matrix \u03b2 with b i j = p(w = i|z = j). Integrating out \u03b8 d 's and z dn 's, gives P(D|\u03b1, \u03b2), the probability of a corpus (or document collection):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Model",
"sec_num": "3.2"
},
{
"text": "M \u220f d=1 Z P(\u03b8 d |\u03b1) N d \u220f n=1 \u2211 z dn P(z dn |\u03b8 d )P(w dn |z dn , \u03b2) d\u03b8 d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Model",
"sec_num": "3.2"
},
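The generative story just described (draw \u03b8_d from a Dirichlet prior, then for each token draw a topic from \u03b8_d and a word from that topic's distribution) can be sketched as a small simulation; K, V, and all numeric values here are toy assumptions, not the paper's settings.

```python
import numpy as np

def generate_document(alpha, beta, n_words, rng):
    """LDA generative process for one document:
    theta_d ~ Dirichlet(alpha); for each token: z ~ Mult(theta_d), w ~ Mult(beta[z])."""
    K, V = beta.shape
    theta = rng.dirichlet(alpha)          # mixing proportion over the K topics
    words = []
    for _ in range(n_words):
        z = rng.choice(K, p=theta)        # sample a topic for this token
        w = rng.choice(V, p=beta[z])      # sample a (textual or visual) word id
        words.append(w)
    return words

rng = np.random.default_rng(1)
K, V = 3, 10
alpha = np.full(K, 0.1)                   # sparse Dirichlet prior
beta = rng.dirichlet(np.ones(V), size=K)  # each row: a topic's distribution over words
doc = generate_document(alpha, beta, 50, rng)
assert len(doc) == 50
```

Inference reverses this process: given observed documents, the posterior over \u03b8 and z is approximated (here, by variational inference) rather than computed exactly.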
{
"text": "The central computational problem in topic modeling is to compute the posterior distribution P(\u03b8, z|w, \u03b1, \u03b2) of the hidden variables given a document w = (w 1 , w 2 , . . . , w N ). Although this distribution is intractable in general, a variety of ap-proximate inference algorithms have been proposed in the literature including variational inference which our model adopts. introduce a set of variational parameters, \u03b3 and \u03c6, and show that a tight lower bound on the log likelihood of the probability can be found using the following optimization procedure:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Model",
"sec_num": "3.2"
},
{
"text": "(\u03b3 * , \u03c6 * ) = argmin \u03b3,\u03c6 D(q(\u03b8, z|\u03b3, \u03c6)||p(\u03b8, z|w, \u03b1, \u03b2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Model",
"sec_num": "3.2"
},
{
"text": "Here, D denotes the Kullback-Leibler (KL) divergence between the true posterior and the variational distribution q(\u03b8, z|\u03b3, \u03c6) defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Model",
"sec_num": "3.2"
},
{
"text": "q(\u03b8, z|\u03b3, \u03c6) = q(\u03b8|\u03b3) \u220f N n=1 q(z n |\u03c6 n ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Model",
"sec_num": "3.2"
},
{
"text": "where the Dirichlet parameter \u03b3 and the multinomial parameters (\u03c6 1 , . . . , \u03c6 N ) are the free variational parameters. Notice that the optimization of parameters (\u03b3 * (w), \u03c6 * (w)) is documentspecific (whereas \u03b1 is corpus specific).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Model",
"sec_num": "3.2"
},
{
"text": "Previous applications of LDA (e.g., to document classification or information retrieval) typically make use of the posterior Dirichlet parameters \u03b3 * (w) associated with a given document. We are not so much interested in \u03b3 as we wish to obtain a semantic representation for a given word across documents. We therefore train the LDA model sketched above on a corpus of multimodal documents {d Mix } consisting of both textual and visual words. We select the number of topics, K, and apply the LDA algorithm to obtain the \u03b2 parameters, where \u03b2 represents the probability of a word w i given a topic z j , p(w i |z j ) = \u03b2 i j . The meaning of w i is thus extracted from \u03b2 and is a K-element vector, whose components correspond to the probability of w i given each latent topic assumed to have generated the document collection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Model",
"sec_num": "3.2"
},
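Reading a word's meaning off the fitted \u03b2 matrix, as described above, amounts to taking column i of \u03b2 as the K-element vector [p(w_i|z_1), ..., p(w_i|z_K)]. A minimal sketch, with a random stand-in for a trained \u03b2 (shapes and indices are illustrative assumptions):

```python
import numpy as np

def word_meaning(beta, word_index):
    """Meaning of word w_i under the model: column i of beta, i.e. the
    K-element vector [p(w_i|z_1), ..., p(w_i|z_K)]."""
    return beta[:, word_index]

rng = np.random.default_rng(2)
beta = rng.dirichlet(np.ones(8), size=4)  # K=4 topics over a vocabulary of 8 words
m = word_meaning(beta, 3)                 # representation of (hypothetical) word 3
assert m.shape == (4,)
```

Because \u03b2 is estimated from the mixed documents d_Mix, each component of this vector already reflects both textual and visual evidence.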
{
"text": "The ability to accurately measure the similarity or association between two words is often used as a diagnostic for the psychological validity of semantic representation models. In the topic model described above, the similarity between two words w 1 and w 2 can be intuitively measured by the extent to which they share the same topics (Griffiths et al., 2007) . For example, we may use the KL divergence to measure the difference between the distributions p and q:",
"cite_spans": [
{
"start": 337,
"end": 361,
"text": "(Griffiths et al., 2007)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.3"
},
{
"text": "D(p, q) = K \u2211 j=1 p j log 2 p j q j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.3"
},
{
"text": "where p and q are shorthand for P(w 1 |z j ) and P(w 2 |z j ), respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.3"
},
{
"text": "The KL divergence is asymmetric and in many applications, it is preferable to apply a symmetric measure such as the Jensen Shannon (JS) divergence. The latter measures the \"distance\" between p and q through (p+q) 2 , the average of p and q:",
"cite_spans": [
{
"start": 207,
"end": 212,
"text": "(p+q)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.3"
},
{
"text": "JS(p, q) = 1 2 D(p, (p + q) 2 ) + D(q, (p + q) 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.3"
},
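Both divergences can be computed directly from two words' topic distributions. A small sketch using base-2 logarithms, with zero-probability entries skipped by the usual 0 log 0 = 0 convention; the example vectors are arbitrary:

```python
import numpy as np

def kl(p, q):
    """KL divergence D(p, q) = sum_j p_j * log2(p_j / q_j).
    Terms with p_j = 0 contribute nothing (0 * log 0 = 0 convention)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float((p[mask] * np.log2(p[mask] / q[mask])).sum())

def js(p, q):
    """Jensen-Shannon divergence: KL symmetrized via the average distribution."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = (p + q) / 2
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.7, 0.2, 0.1])   # topic distribution of word 1
q = np.array([0.1, 0.3, 0.6])   # topic distribution of word 2
assert kl(p, p) == 0.0          # a distribution is at zero distance from itself
assert abs(js(p, q) - js(q, p)) < 1e-12   # JS is symmetric, unlike KL
```

Note that KL is undefined (infinite) when q assigns zero probability to a topic that p uses, which is another practical reason to prefer JS for smoothed comparisons.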
{
"text": "An alternative approach to expressing the similarity between two words is proposed in Griffiths et al. (2007) . The underlying idea is that word association can be expressed as a conditional distribution. If we have seen word w 1 , then we can determine the probability that w 2 will be also generated by computing P(w 2 |w 1 ). Although the LDA generative model allows documents to contain multiple topics, here it is assumed that both w 1 and w 2 came from a single topic:",
"cite_spans": [
{
"start": 86,
"end": 109,
"text": "Griffiths et al. (2007)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.3"
},
{
"text": "P(w 2 |w 1 ) = K \u2211 z=1 P(w 2 |z)P(z|w 1 ) P(z|w 1 ) \u221d P(w 1 |z)P(z)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.3"
},
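Under the single-topic assumption above, the association score P(w_2|w_1) follows directly from \u03b2 and a uniform P(z). A sketch with a random stand-in for a trained \u03b2 (the function name and sizes are illustrative assumptions):

```python
import numpy as np

def association(beta, w1, w2):
    """P(w2|w1) = sum_z P(w2|z) P(z|w1), with P(z|w1) proportional to
    P(w1|z) P(z) and P(z) taken to be uniform."""
    K = beta.shape[0]
    pz = np.full(K, 1.0 / K)              # uniform prior over topics
    pz_given_w1 = beta[:, w1] * pz        # unnormalized P(z|w1)
    pz_given_w1 /= pz_given_w1.sum()      # normalize over topics
    return float((beta[:, w2] * pz_given_w1).sum())

rng = np.random.default_rng(3)
beta = rng.dirichlet(np.ones(6), size=4)  # K=4 topics over a vocabulary of 6 words
a = association(beta, 0, 1)               # P(word 1 | word 0)
assert 0.0 < a < 1.0
```

Unlike KL and JS, this measure is asymmetric by design, which is why it suits the cue-to-associate direction of the norming task.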
{
"text": "where p(z) is uniform, a single topic is sampled from the distribution P(z|w 1 ), and an overall estimate is obtained by averaging over all topics K. Griffiths et al. (2007) report results on modeling human association norms using exclusively P(w 2 |w 1 ). We are not aware of any previous work that empirically assesses which measure is best at capturing semantic similarity. We undertake such an empirical comparison as it is not a priory obvious how similarity is best modeled under a multimodal representation.",
"cite_spans": [
{
"start": 150,
"end": 173,
"text": "Griffiths et al. (2007)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": "3.3"
},
{
"text": "In this section we discuss our experimental design for assessing the performance of the model presented above. We give details on our training procedure and parameter estimation and present the baseline method used for comparison with our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "We trained the multimodal topic model on the corpus created in Feng and Lapata (2008) . It contains 3,361 documents that have been downloaded from the BBC News website. 2 Each document comes with an image that depicts some of its content. The images are usually 203 pixels wide and 152 pixels high. The average document length is 133.85 words. The corpus has 542,414 words in total. Our experiments used a vocabulary of 6,253 textual words. These were words that occurred at least five times in the whole corpus, excluding stopwords. The accompanying images were preprocessed as follows. We first extracted SIFT features from each image (150 on average) which we subsequently quantized into a discrete set of visual terms using K-means. As we explain below, we determined an optimal value for K experimentally.",
"cite_spans": [
{
"start": 63,
"end": 85,
"text": "Feng and Lapata (2008)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": null
},
{
"text": "Evaluation Our evaluation experiments compared the multimodal topic model against a standard textbased topic model trained on the same corpus whilst ignoring the images. Both models were assessed on two related tasks, that have been previously used to evaluate semantic representation models, namely word association and word similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": null
},
{
"text": "In order to simulate word association, we used the human norms collected by Nelson et al. (1999) . 3 These were established by presenting a large number of participants with a cue word (e.g., rice) and asking them to name an associate word in response (e.g., Chinese, wedding, food, white). For each word, the norms provide a set of associates and the frequencies with which they were named. We can thus compute the probability distribution over associates for each cue. Analogously, we can estimate the degree of similarity between a cue and its associates using our model (and any of the measures in Section 3.3). And consequently examine (using correlation analysis) the degree of linear relationship between the human cue-associate probabilities and the automatically derived similarity values. We also report how many times the word with the highest probability under the model was the first associate in the norms. The norms contain 10,127 unique words in total. Of these, we created semantic representations for the 3,895 words that appeared in our corpus.",
"cite_spans": [
{
"start": 76,
"end": 96,
"text": "Nelson et al. (1999)",
"ref_id": "BIBREF19"
},
{
"start": 99,
"end": 100,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": null
},
{
"text": "Our word similarity experiment used the Word-Sim353 test collection (Finkelstein et al., 2002) which consists of relatedness judgments for word pairs. For each pair, a similarity judgment (on a scale of 0 to 10) was elicited from human subjects (e.g., tiger-cat are very similar, whereas delay-racism are not). The average rating for each pair represents an estimate of the perceived similarity of the two words. The task varies slightly from word association. Here, participants are asked 3 http://www.usf.edu/Freeassociation. to rate perceived similarity rather than generate the first word that came into their head in response to a cue word. The collection contains similarity ratings for 353 word pairs. Of these, we constructed semantic representations for the 254 that appeared in our corpus. We also evaluated how well model produced similarities correlate with human ratings. Throughout this paper we report correlation coefficients using Pearson's r.",
"cite_spans": [
{
"start": 68,
"end": 94,
"text": "(Finkelstein et al., 2002)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": null
},
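The evaluation correlates model similarities with human ratings using Pearson's r; a self-contained sketch with hypothetical ratings and scores (not actual WordSim353 or norming data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between model similarities and human ratings."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    x = x - x.mean()
    y = y - y.mean()
    return float((x * y).sum() / np.sqrt((x ** 2).sum() * (y ** 2).sum()))

human = [7.35, 1.31, 8.10, 3.00]   # hypothetical human similarity ratings (0-10)
model = [0.62, 0.10, 0.70, 0.33]   # hypothetical model similarity scores
r = pearson_r(human, model)
assert -1.0 <= r <= 1.0
```

A perfectly linear relationship yields r = 1, so the intersubject correlation reported as the upper bound caps what any model can achieve on this measure.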
{
"text": "Model Selection The multimodal topic model has several parameters that must be instantiated. These include the quantization of the image features, the number of topics, the choice of similarity function, and the values for \u03b1 and \u03b2. We explored the parameter space on held-out data. Specifically, we fit the parameters for the word association and similarity models separately using a third of the association norms and WordSim353 similarity judgments, respectively. As mentioned in Section 3.1 we used K-means to quantize the image features into a discrete set of visual terms. We varied K from 250 to 2000. We also varied the number of topics from 25 to 750 for both the multimodal and text-based topic models. The parameter \u03b1 was set to 0.1 and \u03b2 was initialized randomly. The model was trained using variational Bayes until convergence of its bound on the likelihood objective. This took 1,000 iterations. Figure 2 shows how word association performance varies on the development set with different numbers of topics (t) and visual terms (r) according to three similarity measures: KL divergence, JS divergence, and P(w 2 |w 1 ), the probability of word w 2 given w 1 (see Section 3.3). Figure 3 shows results on the development set for the word similarity task. As far as word association is concerned, we obtain best results with P(w 2 |w 1 ), 750 visual terms and 750 topics (r = 0.188). On word similarity, JS performs best with 500 visual terms and 25 topics (r = 0.374). It is not surprising that P(w 2 |w 1 ) works best for word association. The measure expresses the associative relations between words as a conditional distribution over potential response words w 2 for cue word w 1 . A symmetric function is more appropriate for word similarity as the task involves measuring the degree to which to words share some meaning (expressed as topics in our model) rather than whether a word is likely to be generated as a response to another word. 
These differences also lead to different parametrizations of the semantic space. A rich visual term vocabulary (750 terms) is needed for modeling association as broader aspects of word meaning are taken into account, whereas a sparser, more focused representation (with 500 visual terms and 25 overall topics) is better at isolating the common semantic content between two words. We explored the parameter space for the text-based topic model in a similar fashion. On the word association task the best correlation coefficient was achieved with 750 topics and P(w 2 |w 1 ) (r = 0.139). On word similarity, the best results were obtained with 75 topics and the JS divergence (r = 0.309).",
"cite_spans": [],
"ref_spans": [
{
"start": 909,
"end": 917,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 1190,
"end": 1198,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
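The three similarity measures compared above can be sketched in code. The following is an illustrative implementation (not the authors' code), assuming each word is represented by its distribution over topics P(z|w) and that P(w 2 |w 1 ) is computed by marginalizing over topics; the toy distributions are invented.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    # KL divergence D(p || q); eps guards against log(0)
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def js(p, q):
    # Jensen-Shannon divergence: symmetrised, smoothed KL
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def assoc(p_topics_given_w1, p_w2_given_topics):
    # P(w2|w1) = sum_z P(w2|z) P(z|w1): asymmetric, cue -> response
    return float(np.dot(p_w2_given_topics, p_topics_given_w1))

# toy topic distributions for two words (invented for illustration)
p = [0.7, 0.2, 0.1]
q = [0.6, 0.3, 0.1]
assert js(p, q) == js(q, p)   # symmetric: suited to similarity
assert kl(p, q) != kl(q, p)   # asymmetric: suited to association
```

The symmetry check mirrors the argument in the text: JS treats the two words interchangeably, while KL and the conditional probability do not.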
{
"text": "Word Association Word Similarity UpperBnd 0.400 0.545 MixLDA 0.123 0.318 TxtLDA 0.077 0.247 Table 2 : Model performance on word association and similarity (test set). Table 2 summarizes our results on the test set using the optimal set of parameters as established on the development set. The first row shows how well humans agree with each other on the two tasks (UpperBnd). We estimated the intersubject correlation using leave-one-out resampling 4 (Weiss and Kulikowski, 1991) . As can be seen, in all cases the topic model based on textual and visual modalities (MixLDA) outperforms the model relying solely on textual information (TxtLDA). The differences in performance are statistically significant (p < 0.05) using a t-test (Cohen and Cohen, 1983) . Steyvers and Griffiths (2007) also predict word association using Nelson's norms and a state-of-theart LDA model. Although they do not report correlations, they compute how many times the word with the highest probability P(w 2 |w 1 ) under the model was the first associate in the human norms. Using a considerably larger corpus (37,651 documents), they reach an accuracy of 16.15%. Our corpus contains 3,361 documents, the MixLDA model performs at 14.15% and the LDA model at 13.16%. Using a vector-based model trained on the BNC corpus (100M words), Washtell and Markert (2009) report a correlation of 0.167 on the same association data set, whereas our model achieves a correlation of 0.123. With respect to word similarity, Marton et al. (2009) report correlations within the range of 0.31-0.54 using different instantiations of a vector-based model trained on the BNC with a vocabulary of 33,000 words. Our MixLDA model obtains a correlation of 0.318 with a vocabulary five times smaller (6,253 words). 
Although these results are not strictly comparable due to the different nature and size of the training data, they give some indication of the quality of our model in the context of other approaches that exploit only the textual modality. Besides, our intent is not to report the best performance possible, GAME, CONSOLE, XBOX, SECOND, SONY, WORLD, TIME, JAPAN, JAPANESE, SCHUMACHER, LAP, MI-CROSOFT, ALONSO, RACE, TITLE, WIN, GAMERS, LAUNCH, RENAULT, MARKET PARTY, MINISTER, BLAIR, LABOUR, PRIME, LEADER, GOVERNMENT, TELL, BROW, MP, TONY, SIR, SECRE-TARY, ELECTION but to show that a model of meaning representation is more accurate when taking visual information into account. Table 3 shows some examples of the topics found by our model, which largely form coherent blocks of semantically related words. In general, we observe that the model using image features tends to prefer words that visualize easily (e.g., CONSOLE, XBOX). Furthermore, the visual modality helps obtain crisper meaning distinctions. Here, SCHUMACHER is a very probable world for the \"game\" cluster. This is because the Formula One driver appears as a character in several video games discussed and depicted in our corpus. For comparison the \"game\" cluster for the text-based LDA model contains the words: GAME, USE, INTERNET, SITE, USE, SET, ONLINE, WEB, NETWORK, MUR-RAY, PLAY, MATCH, GOOD, WAY, BREAK, TECH-NOLOGY, WORK, NEW, TIME, SECOND. We believe the model presented here works better than a vanilla text-based topic model for at least three reasons: (1) the visual information helps create better clusters (i.e., conceptual representations) which in turn are used to measure similarity or association; these clusters themselves are amodal but express commonalities across the visual and textual modalities;",
"cite_spans": [
{
"start": 451,
"end": 479,
"text": "(Weiss and Kulikowski, 1991)",
"ref_id": "BIBREF28"
},
{
"start": 732,
"end": 755,
"text": "(Cohen and Cohen, 1983)",
"ref_id": "BIBREF6"
},
{
"start": 758,
"end": 787,
"text": "Steyvers and Griffiths (2007)",
"ref_id": "BIBREF25"
},
{
"start": 1311,
"end": 1338,
"text": "Washtell and Markert (2009)",
"ref_id": "BIBREF27"
},
{
"start": 1487,
"end": 1507,
"text": "Marton et al. (2009)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 92,
"end": 99,
"text": "Table 2",
"ref_id": null
},
{
"start": 167,
"end": 174,
"text": "Table 2",
"ref_id": null
},
{
"start": 2451,
"end": 2458,
"text": "Table 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
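The UpperBnd row in Table 2 rests on the leave-one-out resampling described in the footnote: each participant's ratings are correlated with the average of everyone else's, and the correlations are averaged. A minimal sketch, using a hypothetical ratings matrix in place of the actual norms:

```python
import numpy as np

def intersubject_correlation(ratings):
    """Leave-one-out upper bound: correlate each participant's ratings
    with the mean ratings of all other participants, then average.
    ratings: array-like of shape (n_participants, n_items)."""
    ratings = np.asarray(ratings, dtype=float)
    rs = []
    for i in range(ratings.shape[0]):
        held_out = ratings[i]
        others = np.delete(ratings, i, axis=0).mean(axis=0)
        rs.append(np.corrcoef(held_out, others)[0, 1])
    return float(np.mean(rs))

# invented judgments from three participants on four word pairs
ratings = [[1, 3, 5, 7],
           [2, 3, 6, 7],
           [1, 4, 5, 6]]
ubound = intersubject_correlation(ratings)
assert -1.0 <= ubound <= 1.0
```

If all participants rated identically, the estimate would be 1.0; disagreement among raters pulls it down, which is why it serves as a ceiling on model performance.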
{
"text": "(2) the model is also able to capture perceptual correlations between words. For example, RED is the most frequent associate for APPLE in Nelson's norms. This association is captured in our visual features (pictures with apples cluster with pictures showing red objects) even though RED does not co-occur with APPLE in our data; (3) finally, even in cases where two words are visually very different in terms of shape or color (e.g., BANANA and APPLE), they tend to appear in images with similar structure (e.g., on tables, in bowls, as being held or eaten by someone) and thus often share some common element of meaning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Comparison",
"sec_num": null
},
{
"text": "In this paper we developed a computational model that unifies visual and linguistic representations of word meaning. The model learns from natural language corpora paired with images under the assumption that visual terms and words are generated by mixtures of latent topics. We have shown that a closer correspondence to human data can be obtained by explicitly taking the visual modality into account in comparison to a model that estimates the topic structure solely from the textual modality. Beyond word similarity and association, the approach is promising for modeling word learning and categorization as well as a wide range of priming studies. Outwith cognitive science, we hope that some of the work described here might be of relevance to more applied tasks such as thesaurus acquisition, word sense disambiguation, multimodal search, image retrieval, and summarization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Future improvements include developing a nonparametric version that jointly learns how many visual terms and topics are optimal. Currently, the size of the visual vocabulary and the number of topics are parameters in the model, that must be tuned separately for different tasks and corpora. Another extension concerns the creation of visual terms. Our model assumes that an image is a bag of words. The assumption is convenient for modeling purposes, but clearly false in the context of visual processing. Image descriptors found closely to each other are likely to represent the same object and should form one term rather than several distinct ones (Wang and Grimson, 2007) . Taking the spatial structure among visual words into account would yield better topics and overall better semantic representations. Analogously, we could represent documents by their syntactic structure (Boyd-Graber and Blei, 2009) .",
"cite_spans": [
{
"start": 651,
"end": 675,
"text": "(Wang and Grimson, 2007)",
"ref_id": "BIBREF26"
},
{
"start": 898,
"end": 909,
"text": "Blei, 2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
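The bag-of-visual-words assumption discussed above can be illustrated concretely: local descriptors (e.g., 128-dimensional SIFT vectors) are quantized with K-means and each image becomes a histogram over the resulting visual terms. This is a minimal sketch with simulated descriptors, not the pipeline actually used in the paper:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # plain Lloyd's algorithm: assign to nearest center, recompute means
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels

# simulated SIFT-like descriptors extracted from one image
rng = np.random.default_rng(1)
descriptors = rng.normal(size=(200, 128))
centers, labels = kmeans(descriptors, k=8)
bag = np.bincount(labels, minlength=8)  # the image's bag of visual terms
assert bag.sum() == len(descriptors)
```

Note how quantization discards the spatial layout of the descriptors, which is exactly the limitation the spatial LDA extension (Wang and Grimson, 2007) is meant to address.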
{
"text": "Participants are given a series of object names and for each object they are asked to name all the properties they can think of that are characteristic of the object.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://news.bbc.co.uk/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We correlated the data obtained from each participant with the ratings obtained from all other participants and report the average.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Integrating experiential and distributional data to learn semantic representations",
"authors": [
{
"first": "M",
"middle": [],
"last": "Andrews",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Vigliocco",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Vinson",
"suffix": ""
}
],
"year": 2009,
"venue": "Psychological Review",
"volume": "116",
"issue": "3",
"pages": "463--498",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrews, M., G. Vigliocco, and D. Vinson. 2009. In- tegrating experiential and distributional data to learn semantic representations. Psychological Review 116(3):463-498.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Matching words and pictures",
"authors": [
{
"first": "K",
"middle": [],
"last": "Barnard",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Duygulu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Forsyth",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Freitas",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Blei",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Jordan",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "1107--1135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barnard, K., P. Duygulu, D. Forsyth, N. de Freitas, D. Blei, and M. Jordan. 2002. Matching words and pic- tures. Journal of Machine Learning Research 3:1107- 1135.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Modeling annotated data",
"authors": [
{
"first": "D",
"middle": [],
"last": "Blei",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 26th Annual International ACM SIGIR Conference",
"volume": "",
"issue": "",
"pages": "127--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blei, D. and M. Jordan. 2003. Modeling annotated data. In Proceedings of the 26th Annual International ACM SIGIR Conference. Toronto, ON, pages 127-134.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Latent Dirichlet allocation",
"authors": [
{
"first": "D",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "A",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "M",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blei, D. M., A. Y. Ng, and M. I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Re- search 3:993-1022.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Cross-linguistic analysis of vocabulary in young children: Spanish, Dutch, French, Hebrew, Italian, Korean, and American English",
"authors": [
{
"first": "M",
"middle": [
"H"
],
"last": "Bornstein",
"suffix": ""
},
{
"first": "L",
"middle": [
"R"
],
"last": "Cote",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Maital",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Painter",
"suffix": ""
},
{
"first": "S.-Y.",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Pascual",
"suffix": ""
}
],
"year": 2004,
"venue": "Child Development",
"volume": "75",
"issue": "4",
"pages": "1115--1139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bornstein, M. H., L. R. Cote, S. Maital, K. Painter, S.- Y. Park, and L. Pascual. 2004. Cross-linguistic analy- sis of vocabulary in young children: Spanish, Dutch, French, Hebrew, Italian, Korean, and American En- glish. Child Development 75(4):1115-1139.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Syntactic topic models",
"authors": [
{
"first": "J",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Blei",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 22nd Conference on Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "185--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boyd-Graber, J. and D. Blei. 2009. Syntactic topic models. In Proceedings of the 22nd Conference on Advances in Neural Information Processing Systems. MIT, Press, Cambridge, MA, pages 185-192.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences",
"authors": [
{
"first": "J",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cohen, J. and P. Cohen. 1983. Applied Multiple Regres- sion/Correlation Analysis for the Behavioral Sciences. Hillsdale, NJ: Erlbaum.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multiple Bernoulli relevance models for image and video annotation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Lavrenko",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Manmatha",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the International Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "1002--1009",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feng, S., V. Lavrenko, and R. Manmatha. 2004. Mul- tiple Bernoulli relevance models for image and video annotation. In Proceedings of the International Con- ference on Computer Vision and Pattern Recognition. Washington, DC, pages 1002-1009.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatic image annotation using auxiliary text information",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the ACL-08: HLT. Columbus",
"volume": "",
"issue": "",
"pages": "272--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feng, Y. and M. Lapata. 2008. Automatic image annota- tion using auxiliary text information. In Proceedings of the ACL-08: HLT. Columbus, pages 272-280.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Placing search in context: The concept revisited",
"authors": [
{
"first": "L",
"middle": [],
"last": "Finkelstein",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matias",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Rivlin",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Solan",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Wolfman",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Ruppin",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM Transactions on Information Systems",
"volume": "20",
"issue": "1",
"pages": "116--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Finkelstein, L., E. Gabrilovich, Y. Matias, E. Rivlin, Z. Solan, G. Wolfman, and E. Ruppin. 2002. Placing search in context: The concept revisited. ACM Trans- actions on Information Systems 20(1):116-131.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Topics in semantic representation",
"authors": [
{
"first": "T",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Steyvers",
"suffix": ""
},
{
"first": "J",
"middle": [
"B"
],
"last": "Tenenbaum",
"suffix": ""
}
],
"year": 2007,
"venue": "Psychological Review",
"volume": "114",
"issue": "2",
"pages": "211--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Griffiths, T. L., M. Steyvers, and J. B. Tenenbaum. 2007. Topics in semantic representation. Psychological Re- view 114(2):211-244.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Unsupervised learning by probabilistic latent semantic analysis",
"authors": [
{
"first": "T",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 2001,
"venue": "Machine Learning",
"volume": "41",
"issue": "2",
"pages": "177--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hofmann, T. 2001. Unsupervised learning by proba- bilistic latent semantic analysis. Machine Learning 41(2):177-196.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Object properties and knowledge in early lexical learning",
"authors": [
{
"first": "S",
"middle": [
"S"
],
"last": "Jones",
"suffix": ""
},
{
"first": "L",
"middle": [
"B"
],
"last": "Smith",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Landau",
"suffix": ""
}
],
"year": 1991,
"venue": "Child Development",
"volume": "",
"issue": "62",
"pages": "499--516",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jones, S. S., L. B. Smith, and B. Landau. 1991. Ob- ject properties and knowledge in early lexical learning. Child Development (62):499-516.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Object perception and object naming in early development",
"authors": [
{
"first": "B",
"middle": [],
"last": "Landau",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 1998,
"venue": "Trends in Cognitive Science",
"volume": "27",
"issue": "",
"pages": "19--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Landau, B., L. Smith, and S. Jones. 1998. Object percep- tion and object naming in early development. Trends in Cognitive Science 27:19-24.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A solution to Plato's problem: the latent semantic analysis theory of acquisition, induction, and representation of knowledge",
"authors": [
{
"first": "T",
"middle": [],
"last": "Landauer",
"suffix": ""
},
{
"first": "S",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
}
],
"year": 1997,
"venue": "Psychological Review",
"volume": "104",
"issue": "2",
"pages": "211--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Landauer, T. and S. T. Dumais. 1997. A solution to Plato's problem: the latent semantic analysis theory of acquisition, induction, and representation of knowl- edge. Psychological Review 104(2):211-240.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Object recognition from local scaleinvariant features",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lowe",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of International Conference on Computer Vision. IEEE Computer Society",
"volume": "",
"issue": "",
"pages": "1150--1157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lowe, D. 1999. Object recognition from local scale- invariant features. In Proceedings of International Conference on Computer Vision. IEEE Computer So- ciety, pages 1150-1157.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Estimating semantic distance using soft semantic constraints in knowledge-source -corpus hybrid models",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Marton",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "775--783",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marton, Y., S. Mohammad, and P. Resnik. 2009. Estimat- ing semantic distance using soft semantic constraints in knowledge-source -corpus hybrid models. In Pro- ceedings of the 2009 Conference on Empirical Meth- ods in Natural Language Processing. Singapore, pages 775-783.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A performance evaluation of local descriptors",
"authors": [
{
"first": "K",
"middle": [],
"last": "Mikolajczyk",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 9th International Conference on Computer Vision and Pattern Recognition",
"volume": "2",
"issue": "",
"pages": "257--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolajczyk, K. and C. Schmid. 2003. A performance evaluation of local descriptors. In Proceedings of the 9th International Conference on Computer Vision and Pattern Recognition. Nice, France, volume 2, pages 257-263.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Modeling semantic aspects for cross-media image indexing",
"authors": [
{
"first": "F",
"middle": [],
"last": "Monay",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gatica-Perez",
"suffix": ""
}
],
"year": 2007,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "29",
"issue": "10",
"pages": "1802--1817",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Monay, F. and D. Gatica-Perez. 2007. Modeling semantic aspects for cross-media image indexing. IEEE Trans- actions on Pattern Analysis and Machine Intelligence 29(10):1802-1817.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The university of South Florida word association norms",
"authors": [
{
"first": "D",
"middle": [
"L"
],
"last": "Nelson",
"suffix": ""
},
{
"first": "C",
"middle": [
"L"
],
"last": "Mcevoy",
"suffix": ""
},
{
"first": "T",
"middle": [
"A"
],
"last": "Schreiber",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nelson, D. L., C. L. McEvoy, and T.A. Schreiber. 1999. The university of South Florida word association norms.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Automatic image captioning",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Duygulu",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Faloutsos",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 International Conference on Multimedia and Expo. Taipei",
"volume": "",
"issue": "",
"pages": "1987--1990",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pan, J., H. Yang, P. Duygulu, and C. Faloutsos. 2004. Au- tomatic image captioning. In Proceedings of the 2004 International Conference on Multimedia and Expo. Taipei, pages 1987-1990.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Evidence for representations of perceptually similar natural categories by 3-month and 4-month old infants",
"authors": [
{
"first": "P",
"middle": [],
"last": "Quinn",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Eimas",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Rosenkrantz",
"suffix": ""
}
],
"year": 1993,
"venue": "Perception",
"volume": "22",
"issue": "",
"pages": "463--375",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quinn, P., P. Eimas, and S. Rosenkrantz. 1993. Evidence for representations of perceptually similar natural cate- gories by 3-month and 4-month old infants. Perception 22:463-375.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Structure and deterioration of semantic memory: A neuropsychological and computational investigation",
"authors": [
{
"first": "T",
"middle": [
"T"
],
"last": "Rogers",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Ralph",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Garrard",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bozeat",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Mcclelland",
"suffix": ""
},
{
"first": "J",
"middle": [
"R"
],
"last": "Hodges",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Patterson",
"suffix": ""
}
],
"year": 2004,
"venue": "Psychological Review",
"volume": "111",
"issue": "1",
"pages": "205--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rogers, T. T., M. A. Lambon Ralph, P. Garrard, S. Bozeat, J. L. McClelland, J. R. Hodges, and K. Pat- terson. 2004. Structure and deterioration of semantic memory: A neuropsychological and computational in- vestigation. Psychological Review 111(1):205-235.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning words and syntax for a visual description task",
"authors": [
{
"first": "D",
"middle": [],
"last": "Roy",
"suffix": ""
}
],
"year": 2002,
"venue": "Computer Speech and Language",
"volume": "16",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy, D. 2002. Learning words and syntax for a visual de- scription task. Computer Speech and Language 16(3).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Normalized cuts and image segmentation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Malik",
"suffix": ""
}
],
"year": 2000,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "22",
"issue": "8",
"pages": "888--905",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shi, J. and J. Malik. 2000. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(8):888-905.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Probabilistic topic models",
"authors": [
{
"first": "M",
"middle": [],
"last": "Steyvers",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Griffiths",
"suffix": ""
}
],
"year": 2007,
"venue": "A Handbook of Latent Semantic Analysis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steyvers, M. and T. Griffiths. 2007. Probabilistic topic models. In T. Landauer, D. McNamara, S Dennis, and W Kintsch, editors, A Handbook of Latent Semantic Analysis, Psychology Press.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Spatial latent Dirichlet allocation",
"authors": [
{
"first": "X",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Grimson",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 20th Conference on Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1577--1584",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, X. and E. Grimson. 2007. Spatial latent Dirichlet allocation. In Proceedings of the 20th Conference on Advances in Neural Information Processing Systems. MI Press, Cambridge, MA, pages 1577-1584.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A comparison of windowless and window-based computational association measures as predictors of syntagmatic human associations",
"authors": [
{
"first": "J",
"middle": [],
"last": "Washtell",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Markert",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "628--637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Washtell, J. and K. Markert. 2009. A comparison of win- dowless and window-based computational association measures as predictors of syntagmatic human associa- tions. In Proceedings of the 2009 Conference on Em- pirical Methods in Natural Language Processing. Sin- gapore, pages 628-637.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Computer Systems that Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems",
"authors": [
{
"first": "S",
"middle": [
"M"
],
"last": "Weiss",
"suffix": ""
},
{
"first": "C",
"middle": [
"A"
],
"last": "Kulikowski",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weiss, S. M. and C. A. Kulikowski. 1991. Computer Sys- tems that Learn: Classification and Prediction Meth- ods from Statistics, Neural Nets, Machine Learning, and Expert Systems. Morgan Kaufmann, San Mateo, CA.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "The emergence of links between lexical acquisition and object categorization: A computational study",
"authors": [
{
"first": "C",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2005,
"venue": "Connection Science",
"volume": "17",
"issue": "3",
"pages": "381--397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu, C. 2005. The emergence of links between lexical acquisition and object categorization: A computational study. Connection Science 17(3):381-397.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "(c)). This representation is based on descriptors computed over automatically detected image regions. It provides a much richer (and hopefully more informative) feature space compared to the alternative image representations discussed above. For example, an image segmentation algorithm, would extract at most 20 regions from the image inFigure 1; uniform grid segmentation yields 143 Image partitioned into regions of varying granularity using (a) the normalized cut image segmentation algorithm, (b) uniform grid segmentation, and (c) the SIFT point detector.",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Performance of multimodal topic model on predicting word association under varying topics and visual terms (development set).",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "Performance of multimodal topic model on predicting word similarity under varying topics and visual terms (development set).",
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"num": null,
"text": ", CONFERENCE, POLICY, NEW, WANT, PUBLIC, SPEECH SCHOOL, CHILD, EDUCATION, STUDENT, WORK, PUPIL, PARENT, TEACHER, GOVERNMENT, YOUNG, SKILL, AGE, NEED, UNIVERSITY, REPORT, LEVEL, GOOD, HELL, NEW, SURVEY",
"type_str": "figure"
},
"TABREF0": {
"html": null,
"num": null,
"content": "<table/>",
"text": "Each article in the document collection contains a document (the title is shown in boldface), and image with related content.",
"type_str": "table"
},
"TABREF1": {
"html": null,
"num": null,
"content": "<table/>",
"text": "Most frequent words in three topics learnt from a corpus of image-document pairs.",
"type_str": "table"
}
}
}
}