| { |
| "paper_id": "N18-1038", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:49:08.678639Z" |
| }, |
| "title": "Learning Visually Grounded Sentence Representations", |
| "authors": [ |
| { |
| "first": "Douwe", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "dkiela@fb.com" |
| }, |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Conneau", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "aconneau@fb.com" |
| }, |
| { |
| "first": "Allan", |
| "middle": [], |
| "last": "Jabri", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "ajabri@berkeley.edu" |
| }, |
| { |
| "first": "U", |
| "middle": [ |
| "C" |
| ], |
| "last": "Berkeley", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Maximilian", |
| "middle": [], |
| "last": "Nickel", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We investigate grounded sentence representations, where we train a sentence encoder to predict the image features of a given captioni.e., we try to \"imagine\" how a sentence would be depicted visually-and use the resultant features as sentence representations. We examine the quality of the learned representations on a variety of standard sentence representation quality benchmarks, showing improved performance for grounded models over non-grounded ones. In addition, we thoroughly analyze the extent to which grounding contributes to improved performance, and show that the system also learns improved word embeddings.", |
| "pdf_parse": { |
| "paper_id": "N18-1038", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We investigate grounded sentence representations, where we train a sentence encoder to predict the image features of a given captioni.e., we try to \"imagine\" how a sentence would be depicted visually-and use the resultant features as sentence representations. We examine the quality of the learned representations on a variety of standard sentence representation quality benchmarks, showing improved performance for grounded models over non-grounded ones. In addition, we thoroughly analyze the extent to which grounding contributes to improved performance, and show that the system also learns improved word embeddings.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Following the word embedding upheaval of the past few years, one of NLP's next big challenges has become the hunt for universal sentence representations: generic representations of sentence meaning that can be \"plugged into\" any kind of system or pipeline. Examples include Paragraph2Vec (Le and Mikolov, 2014), C-Phrase (Pham et al., 2015) , SkipThought and Fast-Sent (Hill et al., 2016a) . These representations tend to be learned from large corpora in an unsupervised setting, much like word embeddings, and effectively \"transferred\" to the task at hand.", |
| "cite_spans": [ |
| { |
| "start": 321, |
| "end": 340, |
| "text": "(Pham et al., 2015)", |
| "ref_id": "BIBREF56" |
| }, |
| { |
| "start": 369, |
| "end": 389, |
| "text": "(Hill et al., 2016a)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Purely text-based semantic models, which represent word meaning as a distribution over other words (Harris, 1954; Turney and Pantel, 2010; Clark, 2015) , suffer from the grounding problem (Harnad, 1990) . It has been shown that grounding leads to improved performance on a variety of word-level tasks (Baroni, 2016; Kiela, 2017) . Unsupervised sentence representation models are often doubly exposed to the grounding problem, especially if they represent sentence mean-1Work done while at Facebook AI Research.", |
| "cite_spans": [ |
| { |
| "start": 99, |
| "end": 113, |
| "text": "(Harris, 1954;", |
| "ref_id": null |
| }, |
| { |
| "start": 114, |
| "end": 138, |
| "text": "Turney and Pantel, 2010;", |
| "ref_id": "BIBREF64" |
| }, |
| { |
| "start": 139, |
| "end": 151, |
| "text": "Clark, 2015)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 188, |
| "end": 202, |
| "text": "(Harnad, 1990)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 301, |
| "end": 315, |
| "text": "(Baroni, 2016;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 316, |
| "end": 328, |
| "text": "Kiela, 2017)", |
| "ref_id": "BIBREF37" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "ings as a distribution over other sentences, as in SkipThought .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Here, we examine whether grounding also leads to improved sentence representations. In short, the grounding problem is characterized by the lack of an association between symbols and external information. We address this problem by aligning text with paired visual data and hypothesize that sentence representations can be enriched with external information-i.e., grounded-by forcing them to capture visual semantics. We investigate the performance of these representations and the effect of grounding on a variety of semantic benchmarks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "There has been much recent interest in generating actual images from text (Goodfellow et al., 2014; van den Oord et al., 2016; Mansimov et al., 2016) . Our method takes a slightly different approach: instead of predicting actual images, we train a deep recurrent neural network to predict the latent feature representation of images. That is, we are specifically interested in the semantic content of visual representations and how useful that information is for learning sentence representations. One can think of this as trying to imagine, or form a \"mental picture\", of a sentence's meaning (Chrupa\u0142a et al., 2015) . Much like a sentence's meaning in classical semantics is given by its model-theoretic ground truth (Tarski, 1944) , our ground truth is provided by images.", |
| "cite_spans": [ |
| { |
| "start": 74, |
| "end": 99, |
| "text": "(Goodfellow et al., 2014;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 100, |
| "end": 126, |
| "text": "van den Oord et al., 2016;", |
| "ref_id": "BIBREF65" |
| }, |
| { |
| "start": 127, |
| "end": 149, |
| "text": "Mansimov et al., 2016)", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 594, |
| "end": 617, |
| "text": "(Chrupa\u0142a et al., 2015)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 719, |
| "end": 733, |
| "text": "(Tarski, 1944)", |
| "ref_id": "BIBREF63" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Grounding is likely to be more useful for concrete words and sentences: a sentence such as \"democracy is a political system\" does not yield any coherent mental picture. In order to accommodate the fact that much of language is abstract, we take sentence representations obtained using textonly data (which are better for representing abstract meaning) and combine them with the grounded representations that our system learns (which are good for representing concrete meaning), leading to multi-modal sentence representations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In what follows, we introduce a system for grounding sentence representations by learning to predict visual content. Although it is not the primary aim of this work, it is important to first examine how well this system achieves what it is trained to do, by evaluating on the COCO5K image and caption retrieval task. We then analyze the performance of grounded representations on a variety of sentence-level semantic transfer tasks, showing that grounding increases performance over textonly representations. We then investigate an important open question in multi-modal semantics: to what extent are improvements in semantic performance due to grounding, rather than to having more data or data from a different distribution?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In the remainder, we analyze the role that concreteness plays in representation quality and show that our system learns grounded word embedding projections that outperform non-grounded ones. To the best of our knowledge, this is the first work to comprehensively study grounding for distributed sentence representations on such a wide set of semantic benchmark tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Sentence representations Although there appears to be a consensus with regard to the methodology for learning word representations, this is much more of an open problem for sentence representations. Recent work has ranged from trying to learn to compose word embeddings (Le and Mikolov, 2014; Pham et al., 2015; Wieting et al., 2016; Arora et al., 2017) , to neural architectures for predicting the previous and next sentences or learning representations via largescale supervised tasks (Conneau et al., 2017) . In particular, SkipThought led to an increased interest in learning sentence representations. Hill et al. (2016a) compare a wide selection of unsupervised and supervised methods, including a basic caption prediction system that is similar to ours. That study finds that \"different learning methods are preferable for different intended applications\", i.e., that the matter of optimal universal sentence representations is as of yet far from decided.", |
| "cite_spans": [ |
| { |
| "start": 270, |
| "end": 292, |
| "text": "(Le and Mikolov, 2014;", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 293, |
| "end": 311, |
| "text": "Pham et al., 2015;", |
| "ref_id": "BIBREF56" |
| }, |
| { |
| "start": 312, |
| "end": 333, |
| "text": "Wieting et al., 2016;", |
| "ref_id": "BIBREF69" |
| }, |
| { |
| "start": 334, |
| "end": 353, |
| "text": "Arora et al., 2017)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 487, |
| "end": 509, |
| "text": "(Conneau et al., 2017)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 606, |
| "end": 625, |
| "text": "Hill et al. (2016a)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "InferSent (Conneau et al., 2017) recently showed that supervised sentence representations can be of very high quality. Here, we learn grounded sentence representations in a supervised setting, combine them with standard unsupervised sentence representations, and show how grounding can help for a variety of sentence-level tasks.", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 32, |
| "text": "(Conneau et al., 2017)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Multi-modal semantics Language grounding in semantics has been motivated by evidence that human meaning representations are grounded in perceptual experience (Jones et al., 1991; Perfetti, 1998; Andrews et al., 2009; Riordan and Jones, 2011) . That is, despite ample evidence of humans representing meaning with respect to an external environment and sensorimotor experience (Barsalou, 2008; Louwerse, 2008) , standard semantic models rely solely on textual data. This gives rise to an infinite regress in text-only semantic representations, i.e., words are defined in terms of other words, ad infinitum.", |
| "cite_spans": [ |
| { |
| "start": 158, |
| "end": 178, |
| "text": "(Jones et al., 1991;", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 179, |
| "end": 194, |
| "text": "Perfetti, 1998;", |
| "ref_id": "BIBREF55" |
| }, |
| { |
| "start": 195, |
| "end": 216, |
| "text": "Andrews et al., 2009;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 217, |
| "end": 241, |
| "text": "Riordan and Jones, 2011)", |
| "ref_id": "BIBREF58" |
| }, |
| { |
| "start": 375, |
| "end": 391, |
| "text": "(Barsalou, 2008;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 392, |
| "end": 407, |
| "text": "Louwerse, 2008)", |
| "ref_id": "BIBREF46" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The field of multi-modal semantics, which aims to address this issue by enriching textual representations with information from other modalities, has mostly been concerned with word representations (Bruni et al., 2014; Baroni, 2016; Kiela, 2017 , and references therein). Learning multi-modal representations that ground text-only representations has been shown to improve performance on a variety of core NLP tasks. This work is most closely related to that of Chrupa\u0142a et al. (2015) , who also aim to ground language by relating images to captions: here, we additionally address abstract sentence meaning; have a different architecture, loss function and fusion strategy; and explicitly focus on grounded universal sentence representations.", |
| "cite_spans": [ |
| { |
| "start": 198, |
| "end": 218, |
| "text": "(Bruni et al., 2014;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 219, |
| "end": 232, |
| "text": "Baroni, 2016;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 233, |
| "end": 244, |
| "text": "Kiela, 2017", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 462, |
| "end": 484, |
| "text": "Chrupa\u0142a et al. (2015)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "There is a large body of work that involves jointly embedding images and text, at the word level (Frome et al., 2013; , the phrase level (Karpathy et al., 2014; Li et al., 2016) , and the sentence level (Karpathy and Fei-Fei, 2015; Klein et al., 2015; Chen and Zitnick, 2015; Reed et al., 2016) . Our model similarly learns to map sentence representations to be consistent with a visual semantic space, and we focus on studying how these grounded text representations transfer to NLP tasks.", |
| "cite_spans": [ |
| { |
| "start": 97, |
| "end": 117, |
| "text": "(Frome et al., 2013;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 137, |
| "end": 160, |
| "text": "(Karpathy et al., 2014;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 161, |
| "end": 177, |
| "text": "Li et al., 2016)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 203, |
| "end": 231, |
| "text": "(Karpathy and Fei-Fei, 2015;", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 232, |
| "end": 251, |
| "text": "Klein et al., 2015;", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 252, |
| "end": 275, |
| "text": "Chen and Zitnick, 2015;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 276, |
| "end": 294, |
| "text": "Reed et al., 2016)", |
| "ref_id": "BIBREF57" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bridging vision and language", |
| "sec_num": null |
| }, |
| { |
| "text": "Moreover, there has been a lot of work in recent years on the task of image caption generation (Bernardi et al., 2016; Vinyals et al., 2015; Mao et al., 2015; Fang et al., 2015) . Here, we do the opposite: we predict the correct image (features) from the caption, rather than the caption from the image (features). Similar ideas were recently successfully applied to multi-modal machine translation (Elliott and K\u00e1d\u00e1r, 2017; Gella et al., 2017; . Recently, Das et al. (2017) trained dialogue agents to communicate about images, trying to predict image features as well.", |
| "cite_spans": [ |
| { |
| "start": 95, |
| "end": 118, |
| "text": "(Bernardi et al., 2016;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 119, |
| "end": 140, |
| "text": "Vinyals et al., 2015;", |
| "ref_id": "BIBREF67" |
| }, |
| { |
| "start": 141, |
| "end": 158, |
| "text": "Mao et al., 2015;", |
| "ref_id": "BIBREF49" |
| }, |
| { |
| "start": 159, |
| "end": 177, |
| "text": "Fang et al., 2015)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 399, |
| "end": 424, |
| "text": "(Elliott and K\u00e1d\u00e1r, 2017;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 425, |
| "end": 444, |
| "text": "Gella et al., 2017;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 457, |
| "end": 474, |
| "text": "Das et al. (2017)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bridging vision and language", |
| "sec_num": null |
| }, |
| { |
| "text": "In the following, let", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "D = {(I k , C k )} N", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "k=1 be a dataset where each image I k is associated with one or more captions C k = {C 1 , . . . , C | C | k }. A prominent example of such a dataset is COCO (Lin et al., 2014) , which consists of images with up to 5 corresponding captions for each image. The objective of our approach is to encode a given sentence, i.e., a caption C, and learn to ground it in the corresponding image I. To encode the sentence, we train a bidirectional LSTM (BiLSTM) on the caption, where the input is a sequence of projected word embeddings. We combine the final left-to-right and right-to-left hidden states of the LSTM and take the elementwise maximum to obtain a sentence encoding. We then examine three distinct methods for grounding the sentence encoding.", |
| "cite_spans": [ |
| { |
| "start": 158, |
| "end": 176, |
| "text": "(Lin et al., 2014)", |
| "ref_id": "BIBREF45" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In the first method, we try to predict the image features (Cap2Img). That is, we learn to map the caption to the same space as the image features that represent the correct image. We call this strong perceptual grounding, where we take the visual input directly into account.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "An alternative method is to exploit the fact that one image in COCO has multiple captions (Cap2Cap), and to learn to predict which other captions are valid descriptions of the same image. This approach is strictly speaking not perceptually grounded, but exploits the fact that there is an implicit association between the captions and the shared underlying image, and so could be considered a weaker version of grounding.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Finally, we experiment with a model that optimizes both these objectives jointly: that is, we predict both images and alternative captions for the same image (Cap2Both). Thus, Cap2Both incorporates both strong perceptual and weak implicit grounding. Please see Figure 1 for an illustration of the various models. In what follows, we discuss them in more technical detail.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 261, |
| "end": 269, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "To learn sentence representations, we employ a bidirectional LSTM architecture. In particular, let x = (x 1 , . . . , x T ) be an input sequence where each word is represented via an embedding x t \u2208 R n .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bidirectional LSTM", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Using a standard LSTM (Hochreiter and Schmidhuber, 1997) , the hidden state at time t, denoted h t \u2208 R m , is computed via", |
| "cite_spans": [ |
| { |
| "start": 22, |
| "end": 56, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bidirectional LSTM", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "h t+1 , c t+1 = LSTM(x t , h t , c t | \u0398)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bidirectional LSTM", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "where c t denotes the cell state of the LSTM and where \u0398 denotes its parameters.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bidirectional LSTM", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "To exploit contextual information in both input directions, we process input sentences using a bidirectional LSTM, that reads an input sequence in both normal and reverse order. In particular, for an input sequence x of length T, we compute the hidden state at time t, h t \u2208 R 2m via", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bidirectional LSTM", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "h f t+1 = LSTM(x t , h f t , c f t | \u0398 f ) h b t+1 = LSTM(x T \u2212t , h b t , c b t | \u0398 b )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bidirectional LSTM", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Here, the two LSTMs process x in a forward and a backward order, respectively. We subsequently use max :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bidirectional LSTM", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "R d \u00d7 R d \u2192 R d", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bidirectional LSTM", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "to combine them into their element-wise maximum, yielding the representation of a caption after it has been processed with the BiLSTM:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bidirectional LSTM", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "h T = max(h f t , h b t )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bidirectional LSTM", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We use GloVe vectors (Pennington et al., 2014) for our word embeddings. The embeddings are kept fixed during training, which allows a trained sentence encoder to transfer to tasks (and a vocabulary) that it has not yet seen, provided GloVe embeddings are available. Since GloVe representations are not tuned to represent grounded information, we learn a global transformation of GloVe space to grounded word space. Specifically, let x \u2208 R n be the original GloVe embeddings. We then learn a linear map U \u2208 R n\u00d7n such that x = Ux and use x as input to the BiLSTM. The linear map U and the BiLSTM are trained jointly.", |
| "cite_spans": [ |
| { |
| "start": 21, |
| "end": 46, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF54" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bidirectional LSTM", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Let v \u2208 R I be the latent representation of an image (e.g.the final layer of a ResNet). To ground captions in the images that they describe, we map h T into the latent space of image representations such that their similarity is maximized. In other words, we aim to predict the latent features of an image from its caption. The mapping of caption to image space is performed via a series of projections where \u03c8 denotes a non-linearity such as ReLUs or tanh. By jointly training the BiLSTM with these latent projections, we can then ground the language model in its visual counterpart. In particular, let", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cap2Img", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "p 0 = h T p +1 = \u03c8(P p )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cap2Img", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\u0398 = \u0398 BiLSTM \u222a {P } L", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cap2Img", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "=1 be the parameters of the BiLSTM as well as the projection layers. We then minimize the following ranking loss:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cap2Img", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "L C2I (\u0398) = (I,C)\u2208 D f rank (I, C) + f rank (C, I) (1) where f rank (a, b) = b \u2208N a [\u03b3 \u2212 sim(a, b) + sim(a, b )] +", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cap2Img", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "where [x] + = max(0, x) denotes the threshold function at zero and \u03b3 defines the margin. Furthermore, N a denotes the set of negative samples for an image or caption and sim(\u2022, \u2022) denotes a similarity measure between vectors. In the following, we employ the cosine similarity, i.e., sim(a, b) = a, b a b .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cap2Img", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Although this loss is not smooth at zero, it can be trained end-to-end using subgradient methods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cap2Img", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Compared to e.g. an l 2 regression loss, Equation (1) is less susceptible to error incurred by subspaces of the visual representation that are irrelevant to the high level visual semantics. Empirically, we found it to be more robust to overfitting.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cap2Img", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Let x = (x 1 , . . . , x T ), y = (y 1 , . . . , y S ) be a caption pair that describes the same image. To learn weakly grounded representations, we employ a standard sequence-to-sequence model (Sutskever et al., 2014) , whose task is to predict y from x.", |
| "cite_spans": [ |
| { |
| "start": 194, |
| "end": 218, |
| "text": "(Sutskever et al., 2014)", |
| "ref_id": "BIBREF62" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cap2Cap", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "As in the Cap2Cap model, let h T be the representation of the input sentence after it has been processed with a BiLSTM. We then model the joint probability of y given x as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cap2Cap", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "p (y | x) = S s=1 p (y s | h T , y 1 , . . . , y s\u22121 , \u0398) .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cap2Cap", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "To model the conditional probability of y s we use the usual multiclass classification approach over the vocabulary of the corpus V such that", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cap2Cap", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "p(y s = k | h T , y 1 , . . . , y s\u22121 , \u0398) = e v k ,y s | V | j=1 e v j ,y s .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cap2Cap", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Here, y s = \u03c8(W V g s + b) and g s is hidden state of the decoder LSTM at time s. To learn the model parameters, we minimize the negative log-likelihood over all caption pairs, i.e.,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cap2Cap", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "L C2C (\u03b8) = \u2212 x,y \u2208 D |y | s=1 log p(y s | h T , y 1 , . . . , y s\u22121 , \u0398).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cap2Cap", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Finally, we also integrate both concepts of grounding into a joint model, where we optimize the following loss function:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cap2Both", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "L C2B (\u0398) = L C2I (\u0398) + L C2C (\u0398).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cap2Both", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "On their own, features from this system are likely to suffer from the fact that training on COCO introduces biases: aside from the inherent dataset bias in COCO itself, the system will only have coverage for concrete concepts. COCO is also a much smaller dataset than e.g. the Toronto Books Corpus often used in purely text-based methods . As such, grounded representations are potentially less \"universal\" than text-based alternatives, which also cover abstract concepts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grounded universal representations", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "There is evidence that meaning is dually coded in the human brain: while abstract concepts are processed in linguistic areas, concrete concepts are processed in both linguistic and visual areas (Paivio, 1990) . Anderson et al. (2017) recently corroborated this hypothesis using semantic representations and fMRI studies. In our case, we want to be able to accommodate concrete sentence meanings, for which our vision-centric system is likely to help; as well as abstract sentence meanings, where trying to \"imagine\" what \"democracy is a political system\" might look like will probably only introduce noise.", |
| "cite_spans": [ |
| { |
| "start": 194, |
| "end": 208, |
| "text": "(Paivio, 1990)", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 211, |
| "end": 233, |
| "text": "Anderson et al. (2017)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grounded universal representations", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "Hence, we optionally complement our systems' representations with more abstract universal sentence representations trained on language-only data (specifically, the Toronto Books Corpus). Although it would be interesting to examine multitask scenarios where these representations are jointly learned, we leave this for future work. Here, instead, we combine grounded and language-only representations using simple concatenation, i.e., r gs = r gr ounded || r ling\u2212only . Concatenation has been proven to be a strong and straightforward mid-level multi-modal fusion method, previously explored in multi-modal semantics for word representations (Bruni et al., 2014; Kiela and Bottou, 2014) . We call the combined system Ground-Sent (GS), and distinguish between sentences perceptually grounded in images (GroundSent-Img), weakly grounded in captions (GroundSent-Cap) or grounded in both (GroundSent-Both).", |
| "cite_spans": [ |
| { |
| "start": 642, |
| "end": 662, |
| "text": "(Bruni et al., 2014;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 663, |
| "end": 686, |
| "text": "Kiela and Bottou, 2014)", |
| "ref_id": "BIBREF38" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grounded universal representations", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "We use 300-dimensional GloVe (Pennington et al., 2014) embeddings, trained on WebCrawl, for the initial word representations and optimize using Adam (Kingma and Ba, 2015) . We use ELU (Clevert et al., 2016) for the non-linearity in projection layers, set dropout to 0.5 and use a dimensionality of 1024 for the LSTM. The network was initialized with orthogonal matrices for the recurrent layers (Saxe et al., 2014) and He initialization for all other layers. The learning rate and margin were tuned on the validation set using grid search.", |
| "cite_spans": [ |
| { |
| "start": 29, |
| "end": 54, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF54" |
| }, |
| { |
| "start": 149, |
| "end": 170, |
| "text": "(Kingma and Ba, 2015)", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 395, |
| "end": 414, |
| "text": "(Saxe et al., 2014)", |
| "ref_id": "BIBREF59" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation details", |
| "sec_num": "3.6" |
| }, |
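Two of the less common choices here, the ELU non-linearity and orthogonal recurrent initialization, are easy to sketch in NumPy (illustrative only; in practice one would use a deep-learning framework's built-ins):

```python
import numpy as np

def elu(x, alpha=1.0):
    # ELU (Clevert et al., 2016): identity for x > 0, alpha*(exp(x)-1) otherwise.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def orthogonal(n, rng):
    # Orthogonal initialization (Saxe et al., 2014): QR-decompose a random
    # Gaussian matrix; the sign correction makes the result uniformly
    # distributed over orthogonal matrices.
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))
```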
| { |
| "text": "We use the same COCO splits as Karpathy and Fei-Fei (2015) for training (113,287 images), validation (5000 images) and testing (5000 images). Image features for COCO were obtained by transferring the final layer from a ResNet-101 (He et al., 2016) trained on ImageNet (ILSVRC 2015).", |
| "cite_spans": [ |
| { |
| "start": 31, |
| "end": 58, |
| "text": "Karpathy and Fei-Fei (2015)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 230, |
| "end": 247, |
| "text": "(He et al., 2016)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data, evaluation and comparison", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We are specifically interested in how well (grounded) universal sentence representations transfer to different tasks. To evaluate this, we perform experiments for a variety of tasks. In all cases, we compare against layer-normalized Skip-Thought vectors, a well-known high-performing sentence encoding method (Ba et al., 2016) . To ensure that we use the exact same evaluations, with identical hyperparameters and settings, we evaluate all systems with the same evaluation pipeline, namely SentEval (Conneau and Kiela, 2018) 2. Following previous work in the field, the idea is to take universal sentence representations and to learn a simple classifier on top for each of the transfer tasks-the higher the quality of the sentence representation, the better the performance on these transfer tasks should be.", |
| "cite_spans": [ |
| { |
| "start": 309, |
| "end": 326, |
| "text": "(Ba et al., 2016)", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 499, |
| "end": 524, |
| "text": "(Conneau and Kiela, 2018)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transfer tasks", |
| "sec_num": "4.1" |
| }, |
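The transfer protocol can be sketched as a logistic-regression probe trained on frozen sentence vectors (a toy stand-in for SentEval's classifiers; `train_probe` and `predict` are hypothetical names):

```python
import numpy as np

def train_probe(X, y, epochs=500, lr=0.5):
    """Fit a logistic-regression probe on frozen sentence features X (n, d)
    with binary labels y; the sentence encoder itself is never updated."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid over logits
        g = p - y                                # gradient of log-loss w.r.t. logits
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict(X, w, b):
    return (X @ w + b > 0).astype(int)
```

Higher probe accuracy on a transfer task is then read as evidence of a higher-quality sentence representation.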
| { |
| "text": "We evaluate on the following well-known and widely used evaluations: movie review sentiment (MR) (Pang and Lee, 2005) , product reviews (CR) (Hu and Liu, 2004) , subjectivity classification (SUBJ) (Pang and Lee, 2004) , opinion polarity (MPQA) (Wiebe et al., 2005) , paraphrase identification (MSRP) (Dolan et al., 2004) and sentiment classification (SST, binary version) (Socher et al., 2013) . Accuracy is measured in all cases, except for MRPC, which measures accuracy and the F1score.", |
| "cite_spans": [ |
| { |
| "start": 97, |
| "end": 117, |
| "text": "(Pang and Lee, 2005)", |
| "ref_id": "BIBREF53" |
| }, |
| { |
| "start": 141, |
| "end": 159, |
| "text": "(Hu and Liu, 2004)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 197, |
| "end": 217, |
| "text": "(Pang and Lee, 2004)", |
| "ref_id": "BIBREF52" |
| }, |
| { |
| "start": 244, |
| "end": 264, |
| "text": "(Wiebe et al., 2005)", |
| "ref_id": "BIBREF68" |
| }, |
| { |
| "start": 300, |
| "end": 320, |
| "text": "(Dolan et al., 2004)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 372, |
| "end": 393, |
| "text": "(Socher et al., 2013)", |
| "ref_id": "BIBREF47" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic classification", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "Image Retrieval Model R@1 R@5 R@10 MEDR MR R@1 R@5 R@10 MEDR MR ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "COCO5K Caption Retrieval", |
| "sec_num": null |
| }, |
| { |
| "text": "Recent years have seen an increased interest in entailment classification as an appropriate evaluation of sentence representation quality. We evaluate representations on two well-known entailment, or natural language inference, datasets: the largescale SNLI dataset (Bowman et al., 2015) and the SICK dataset (Marelli et al., 2014) .", |
| "cite_spans": [ |
| { |
| "start": 266, |
| "end": 287, |
| "text": "(Bowman et al., 2015)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 309, |
| "end": 331, |
| "text": "(Marelli et al., 2014)", |
| "ref_id": "BIBREF50" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Entailment", |
| "sec_num": "4.1.2" |
| }, |
| { |
| "text": "We implement a simple logistic regression on top of the sentence representation. In the cases of SNLI and SICK, as is the standard for these datasets, the representations for the individual sentences u and v are combined by using u, v, u * v, |u \u2212 v| as the input features. We tune the seed and an l 2 penalty on the validation sets for each, and train using Adam (Kingma and Ba, 2015) , with a learning rate of 0.001 and a batch size of 32.", |
| "cite_spans": [ |
| { |
| "start": 364, |
| "end": 385, |
| "text": "(Kingma and Ba, 2015)", |
| "ref_id": "BIBREF39" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementational details", |
| "sec_num": "4.2" |
| }, |
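The pair-feature construction used for SNLI and SICK is a single concatenation; a sketch (NumPy stands in for the actual framework, and the helper name is ours):

```python
import numpy as np

def pair_features(u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Standard entailment pair representation: [u; v; u*v; |u-v|].
    The element-wise product and absolute difference expose symmetric
    similarity information to the linear classifier on top."""
    return np.concatenate([u, v, u * v, np.abs(u - v)], axis=-1)
```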
| { |
| "text": "Although it is not the primary aim of this work to learn a state-of-the-art image and caption retrieval system, it is important to first establish the capability of our system to do what it is trained to do. Table 1 shows the results on the COCO5K caption and image retrieval tasks for the two models that predict image features. We compare our system against several wellknown approaches, namely Deep Visual-Semantic Alignments (DVSA) (Karpathy and Fei-Fei, 2015) , Fisher Vectors (FV) (Klein et al., 2015) and Order Embeddings (OE) (Vendrov et al., 2015) . As the results show, Cap2Img performs very well on this task, outperforming the compared models on caption retrieval and being very close to order embeddings on image retrieval3. The fact that the system outperforms Order Embeddings on caption retrieval suggests that it has a better sentence encoder. Cap2Both does not work as well on this task as the image-only case, probably because interference from the language signal makes the problem harder to optimize. The results indicate that the system has learned to predict image features from captions, and captions from images, at a level exceeding or close to the state-of-the-art on this task.", |
| "cite_spans": [ |
| { |
| "start": 436, |
| "end": 464, |
| "text": "(Karpathy and Fei-Fei, 2015)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 487, |
| "end": 507, |
| "text": "(Klein et al., 2015)", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 534, |
| "end": 556, |
| "text": "(Vendrov et al., 2015)", |
| "ref_id": "BIBREF66" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 208, |
| "end": 215, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5" |
| }, |
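The retrieval metrics reported in Table 1 (Recall@K, median rank and mean rank) can be computed from a query-by-candidate score matrix; a sketch of our own, under the assumption that the correct candidate for query i sits in column i:

```python
import numpy as np

def retrieval_metrics(scores, ks=(1, 5, 10)):
    """Recall@K, median rank (MEDR) and mean rank (MR) from an (n, n)
    similarity matrix where row i scores query i against all candidates
    and the gold candidate for query i is column i."""
    order = np.argsort(-scores, axis=1)            # best-scoring candidate first
    # 1-based rank of the gold candidate for each query.
    ranks = np.argmax(order == np.arange(len(scores))[:, None], axis=1) + 1
    recall = {k: float(np.mean(ranks <= k)) for k in ks}
    return recall, float(np.median(ranks)), float(np.mean(ranks))
```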
| { |
| "text": "Having established that we can learn high-quality grounded sentence encodings, the core question we now wish to examine is how well grounded sentence representations transfer. In this section, we combine our grounded features with the 86.6 69.9/79.9 80.3 72.0 78.1 Table 3 : Thorough investigation of the contribution of grounding, ensuring equal number of components and identical architectures, on the variety of sentence-level semantic benchmark tasks. STb=SkipThought-like model with bidirectional LSTM+max. 2\u00d7STb-1024=ensemble of 2 different STb models with different initializations. GroundSent is STb-1024+Cap2Cap/Img/Both. We find that performance improvements are sometimes due to having more parameters, but in most cases due to grounding.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 265, |
| "end": 272, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Transfer task performance", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "high-quality layer-normalized SkipThought representations of Ba et al. (2016) , leading to multimodal sentence representations as described in Section 3.5. That is, we concatenate Cap2Cap, Cap2Img or Cap2Both and Skip-Thought with Layer Normalization (ST-LN) representations, yielding GroundSent-Cap, GroundSent-Img and GroundSent-Both representations, respectively. We report performance of ST-LN using SentEval, which led to slightly different numbers than what is reported in their paper4. Table 2 shows the results for the semantic classification and entailment tasks. Note that all systems use the exact same evaluation pipeline, which makes them directly comparable. We can see that in all cases, grounding increases the performance. The question of which type of grounding works best is more difficult: generally, grounding with Cap2Cap and Cap2Both appears to do slightly better on most tasks, but on e.g. SST, Cap2Img works better. The entailment task results (SNLI and SICK in Table 2 ) show a similar picture: in all cases grounding improves performance.", |
| "cite_spans": [ |
| { |
| "start": 61, |
| "end": 77, |
| "text": "Ba et al. (2016)", |
| "ref_id": "BIBREF48" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 493, |
| "end": 500, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 987, |
| "end": 994, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Transfer task performance", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "It is important to note that, in this work, we are not necessarily concerned with replacing the state-of-the-art on these tasks: there are systems that perform better. We are primarily interested in whether grounding helps relative to text-only baselines. We find that it does. 4This is probably due to different seeds, optimization methods and other minor implementational details that differ between the original work and SentEval.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transfer task performance", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "An important open question is whether the increase in performance in multi-modal semantic models is due to qualitatively different information from grounding, or simply due to the fact that we have more parameters or data from a different distribution. In order to examine this, we implement a SkipThought-like model that also uses a bidirectional LSTM with element-wise max on the final hidden layer (henceforth referred to as STb). This model is architecturally identical to the sentence encoder used before: it can be thought of as Cap2Cap, but where the objective is not to predict an alternative caption, but to predict the previous and next sentence in the Toronto Books Corpus, just like SkipThought .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The contribution of grounding", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We train a 1024-dimensional and 2048dimensional STb model (for one full iteration, with all other hyperparameters identical to Cap2Cap) to compare against: if grounding improves results because it introduces qualitatively different information, rather than just from having more parameters (i.e., a higher embedding dimensionality), we should expect the multi-modal GroundSent models to perform better not only than STb-1024, but also than STb-2048, which has the same number of parameters (recall that GroundSent models are combinations of grounded and linguistic-only representations). In addition, we compare against an \"ensemble\" of two different STb-1024 models (i.e., a concatenation of two separately trained STb-1024), to check that we are not (just) observing an ensemble effect. As Table 3 shows, a more nuanced picture emerges in this comparison: grounding helps more for some datasets than for others. Grounded models outperform the STb-1024 model (which uses much more data-the Toronto Books Corpus is much larger than COCO) in all cases, often already without concatenating the textual modality. The ensemble of two STb-1024 models performs better than the individual one, and so does the higher-dimensional one. In the cases of CR and MRPC (F1), it appears that improved performance is due to having more data or ensemble effects. For the other datasets, grounding clearly yields better results. These results indicate that grounding does indeed capture qualitatively different information, yielding better universal sentence representations.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 792, |
| "end": 799, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "The contribution of grounding", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "There are a few other important questions to investigate. The average abstractness or concreteness of the evaluation datasets may have a large impact on performance. In addition, word embeddings from the learned projection from GloVe input embeddings, which now provides a generic wordembedding grounding method even for words that are not present in the image-caption training data, can be examined.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "As we have seen, performance across datasets and models can vary substantially. A dataset's concreteness plays an important role in the relative merit of applying grounding: a dataset consisting mostly of abstract words is less likely to benefit from grounding than one that uses mostly concrete words. In order to examine this effect, we calculate the average concreteness of the evalua- tion datasets used in this study. Table 4 shows the average human-annotated concreteness ratings for all words (where available) in each dataset. The ratings were obtained by Brysbaert et al. (2014) in a large-scale study, yielding scores for 40,000 English words. We observe that the two entailment datasets are more concrete, which is due to the fact that the premises are derived from caption datasets (Flickr30K in the case of SNLI; Flickr8K and video captions in the case of SICK). This explains why grounding can clearly be seen to help in these cases. For the semantic classification tasks, the more concrete datasets are MRPC and SST. The picture is less clear for the first, but in SST we see that the grounded representations definitely do work better. Concreteness values make it easier to analyze performance, but are apparently not always direct indicators of improvements with grounding.", |
| "cite_spans": [ |
| { |
| "start": 564, |
| "end": 587, |
| "text": "Brysbaert et al. (2014)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 423, |
| "end": 430, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Concreteness", |
| "sec_num": "6.1" |
| }, |
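The dataset-level statistic in Table 4 is just the mean of the per-word norms; a sketch (the toy ratings dict stands in for the Brysbaert et al. (2014) norms, and the function name is ours):

```python
import numpy as np

def avg_concreteness(tokens, ratings):
    """Mean human concreteness rating over the tokens that appear in the
    norms; tokens without a rating are skipped, as in 'where available'."""
    scored = [ratings[t] for t in tokens if t in ratings]
    return float(np.mean(scored)) if scored else float("nan")
```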
| { |
| "text": "Our models contain a projection layer that maps the GloVe word embeddings that they receive as inputs to a different embedding space. There has been a lot of interest in grounded word representations in recent years, so it is interesting to examine what kind of word representations our models learn. We omit Cap2Cap for reasons of space (it performs similarly to Cap2Both). As shown in Table 5 , the grounded word projections that our network learns yield higher-quality word embeddings on four standard lexical semantic similarity benchmarks: MEN (Bruni et al., 2014) , SimLex-999 (Hill et al., 2016b) , Rare Words (Luong et al., 2013) and WordSim-353 (Finkelstein et al., 2001 ).", |
| "cite_spans": [ |
| { |
| "start": 549, |
| "end": 569, |
| "text": "(Bruni et al., 2014)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 583, |
| "end": 603, |
| "text": "(Hill et al., 2016b)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 617, |
| "end": 637, |
| "text": "(Luong et al., 2013)", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 642, |
| "end": 679, |
| "text": "WordSim-353 (Finkelstein et al., 2001", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 387, |
| "end": 394, |
| "text": "Table 5", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Grounded word embeddings", |
| "sec_num": "6.2" |
| }, |
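Benchmarks such as MEN and SimLex-999 score embeddings by the Spearman correlation between model cosine similarities and human similarity ratings; a self-contained sketch of that scoring (no tie handling, so it is only exact when all values are distinct):

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def spearman(x, y):
    # Spearman rank correlation: Pearson correlation of the rank vectors.
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))
```

In the benchmark setting, `x` would be the model's cosine similarities over word pairs and `y` the human ratings for the same pairs.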
| { |
| "text": "We have investigated grounding for universal sentence representations. We achieved good performance on caption and image retrieval tasks on the large-scale COCO dataset. We subsequently showed how the sentence encodings that the sys-tem learns can be transferred to various NLP tasks, and that grounded universal sentence representations lead to improved performance. We analyzed the source of improvements from grounding, and showed that the increased performance appears to be due to the introduction of qualitatively different information (i.e., grounding), rather than simply having more parameters or applying ensemble methods. Lastly, we showed that our systems learned high-quality grounded word embeddings that outperform non-grounded ones on standard semantic similarity benchmarks. It could well be that our methods are even more suited for more concrete tasks, such as visual question answering, visual storytelling, or image-grounded dialoguean avenue worth exploring in future work. In addition, it would be interesting to explore multi-task learning for sentence representations where one of the tasks involves grounding.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "2See https://github.com/facebookresearch/SentEval. The aim of SentEval is to encompass a comprehensive set of benchmarks that has been loosely established in the research community as the standard for evaluating sentence representations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "3In fact, we found that we can achieve better performance on this task by reducing the dimensionality of the encoder. A lower dimensionality in the encoder also reduces the transferability of the features, unfortunately, so we leave a more thorough investigation of this phenomenon for future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank the anonymous reviewers for their helpful comments and suggestions.Part of Fig. 1 is licensed from dougwoods/CC-BY-2.0/flickr.com/photos/deerwooduk/682390157.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 84, |
| "end": 90, |
| "text": "Fig. 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Visually grounded and textual semantic models differentially decode brain activity associated with concrete and abstract nouns", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [ |
| "J" |
| ], |
| "last": "Anderson", |
| "suffix": "" |
| }, |
| { |
| "first": "Douwe", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "5", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew J. Anderson, Douwe Kiela, Stephen Clark, and Massimo Poesio. 2017. Visually grounded and tex- tual semantic models differentially decode brain ac- tivity associated with concrete and abstract nouns. Transactions of the Association for Computational Linguistics 5.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Integrating experiential and distributional data to learn semantic representations", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Andrews", |
| "suffix": "" |
| }, |
| { |
| "first": "Gabriella", |
| "middle": [], |
| "last": "Vigliocco", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Vinson", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "116", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Andrews, Gabriella Vigliocco, and David Vinson. 2009. Integrating experiential and distributional data to learn semantic representations. Psychological re- view 116(3):463.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "A simple but tough-to-beat baseline for sentence embeddings", |
| "authors": [ |
| { |
| "first": "Sanjeev", |
| "middle": [], |
| "last": "Arora", |
| "suffix": "" |
| }, |
| { |
| "first": "Yingyu", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Tengyu", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "International Conference on Learning Representations (ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence em- beddings. In International Conference on Learning Representations (ICLR).", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Grounding distributional semantics in the visual world", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Language and Linguistics Compass", |
| "volume": "10", |
| "issue": "1", |
| "pages": "3--13", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco Baroni. 2016. Grounding distributional seman- tics in the visual world. Language and Linguistics Compass 10(1):3-13.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Grounded cognition. Annual Review of", |
| "authors": [ |
| { |
| "first": "Lawrence", |
| "middle": [ |
| "W" |
| ], |
| "last": "Barsalou", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Psychology", |
| "volume": "59", |
| "issue": "1", |
| "pages": "617--645", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lawrence W. Barsalou. 2008. Grounded cognition. An- nual Review of Psychology 59(1):617-645.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Automatic description generation from images: A survey of models, datasets, and evaluation measures", |
| "authors": [ |
| { |
| "first": "Raffaella", |
| "middle": [], |
| "last": "Bernardi", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruket", |
| "middle": [], |
| "last": "Cakici", |
| "suffix": "" |
| }, |
| { |
| "first": "Desmond", |
| "middle": [], |
| "last": "Elliott", |
| "suffix": "" |
| }, |
| { |
| "first": "Aykut", |
| "middle": [], |
| "last": "Erdem", |
| "suffix": "" |
| }, |
| { |
| "first": "Erkut", |
| "middle": [], |
| "last": "Erdem", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Journal of Artificial Intelligence Research", |
| "volume": "", |
| "issue": "", |
| "pages": "409--442", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Raffaella Bernardi, Ruket Cakici, Desmond Elliott, Aykut Erdem, Erkut Erdem, Nazli Ikizler-Cinbis, Frank Keller, Adrian Muscat, and Barbara Plank. 2016. Automatic description generation from im- ages: A survey of models, datasets, and evaluation measures. Journal of Artificial Intelligence Research (JAIR) pages 409-442.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "A large annotated corpus for learning natural language inference", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Samuel", |
| "suffix": "" |
| }, |
| { |
| "first": "Gabor", |
| "middle": [], |
| "last": "Bowman", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Angeli", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Potts", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Multimodal distributional semantics", |
| "authors": [ |
| { |
| "first": "Elia", |
| "middle": [], |
| "last": "Bruni", |
| "suffix": "" |
| }, |
| { |
| "first": "Nam-Khanh", |
| "middle": [], |
| "last": "Tran", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Journal of Artifical Intelligence Research", |
| "volume": "49", |
| "issue": "", |
| "pages": "1--47", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Ar- tifical Intelligence Research 49:1-47.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Concreteness ratings for 40 thousand generally known english word lemmas", |
| "authors": [ |
| { |
| "first": "Marc", |
| "middle": [], |
| "last": "Brysbaert", |
| "suffix": "" |
| }, |
| { |
| "first": "Amy", |
| "middle": [ |
| "Beth" |
| ], |
| "last": "Warriner", |
| "suffix": "" |
| }, |
| { |
| "first": "Victor", |
| "middle": [], |
| "last": "Kuperman", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Behavior research methods", |
| "volume": "46", |
| "issue": "3", |
| "pages": "904--911", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marc Brysbaert, Amy Beth Warriner, and Victor Ku- perman. 2014. Concreteness ratings for 40 thousand generally known english word lemmas. Behavior research methods 46(3):904-911.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Mind's eye: A recurrent visual representation for image caption generation", |
| "authors": [ |
| { |
| "first": "Xinlei", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Lawrence", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Zitnick", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "2422--2431", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xinlei Chen and Lawrence C Zitnick. 2015. Mind's eye: A recurrent visual representation for image caption generation. In Proceedings of the IEEE conference on computer vision and pattern recognition. pages 2422-2431.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Learning language through pictures", |
| "authors": [ |
| { |
| "first": "Grzegorz", |
| "middle": [], |
| "last": "Chrupa\u0142a", |
| "suffix": "" |
| }, |
| { |
| "first": "\u00c1kos", |
| "middle": [], |
| "last": "K\u00e1d\u00e1r", |
| "suffix": "" |
| }, |
| { |
| "first": "Afra", |
| "middle": [], |
| "last": "Alishahi", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Grzegorz Chrupa\u0142a, \u00c1kos K\u00e1d\u00e1r, and Afra Alishahi. 2015. Learning language through pictures. In Pro- ceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Vector Space Models of Lexical Meaning", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Shalom Lappin and Chris Fox, editors, Handbook of Contemporary Semantic Theory", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephen Clark. 2015. Vector Space Models of Lexical Meaning. In Shalom Lappin and Chris Fox, edi- tors, Handbook of Contemporary Semantic Theory, Wiley-Blackwell, Oxford, chapter 16.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Fast and accurate deep network learning by exponential linear units (ELUs)", |
| "authors": [ |
| { |
| "first": "Djork-Arn\u00e9", |
| "middle": [], |
| "last": "Clevert", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Unterthiner", |
| "suffix": "" |
| }, |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "In International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Djork-Arn\u00e9 Clevert, Thomas Unterthiner, and Sepp Hochreiter. 2016. Fast and accurate deep network learning by exponential linear units (ELUs). In In- ternational Conference on Learning Representations (ICLR).", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Senteval: An evaluation toolkit for universal sentence representations", |
| "authors": [ |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Conneau", |
| "suffix": "" |
| }, |
| { |
| "first": "Douwe", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representa- tions. In Proceedings of LREC.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Supervised learning of universal sentence representations from natural language inference data", |
| "authors": [ |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Conneau", |
| "suffix": "" |
| }, |
| { |
| "first": "Douwe", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "" |
| }, |
| { |
| "first": "Holger", |
| "middle": [], |
| "last": "Schwenk", |
| "suffix": "" |
| }, |
| { |
| "first": "Lo\u00efc", |
| "middle": [], |
| "last": "Barrault", |
| "suffix": "" |
| }, |
| { |
| "first": "Antoine", |
| "middle": [], |
| "last": "Bordes", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of EMNLP. Copenhagen, Denmark.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Learning cooperative visual dialog agents with deep reinforcement learning", |
| "authors": [ |
| { |
| "first": "Abhishek", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Satwik", |
| "middle": [], |
| "last": "Kottur", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "F" |
| ], |
| "last": "Jos\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Moura", |
| "suffix": "" |
| }, |
| { |
| "first": "Dhruv", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Batra", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abhishek Das, Satwik Kottur, Jos\u00e9 M. F. Moura, Stefan Lee, and Dhruv Batra. 2017. Learning cooperative visual dialog agents with deep reinforcement learn- ing. In Proceedings of CVPR.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources", |
| "authors": [ |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Dolan", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Quirk", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Brockett", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Un- supervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Pro- ceedings of ACL. page 350.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Imagination improves multimodal translation", |
| "authors": [ |
| { |
| "first": "Desmond", |
| "middle": [], |
| "last": "Elliott", |
| "suffix": "" |
| }, |
| { |
| "first": "\u00c1kos", |
| "middle": [], |
| "last": "K\u00e1d\u00e1r", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1705.04350" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Desmond Elliott and \u00c1kos K\u00e1d\u00e1r. 2017. Imagina- tion improves multimodal translation. arXiv preprint arXiv:1705.04350 .", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Vse++: Improved visual-semantic embeddings", |
| "authors": [ |
| { |
| "first": "Fartash", |
| "middle": [], |
| "last": "Faghri", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "David", |
| "suffix": "" |
| }, |
| { |
| "first": "Jamie", |
| "middle": [ |
| "Ryan" |
| ], |
| "last": "Fleet", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanja", |
| "middle": [], |
| "last": "Kiros", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Fidler", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1707.05612" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fartash Faghri, David J Fleet, Jamie Ryan Kiros, and Sanja Fidler. 2017. Vse++: Improved visual-semantic embeddings. arXiv preprint arXiv:1707.05612 .", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "From captions to visual concepts and back", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Fang", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [ |
| "N" |
| ], |
| "last": "Iandola", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Srivastava", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Dollar", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "X", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "C" |
| ], |
| "last": "Platt", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "L" |
| ], |
| "last": "Zitnick", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Zweig", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H. Fang, S. Gupta, F.N. Iandola, R. Srivastava, L. Deng, P. Dollar, J. Gao, X. He, M. Mitchell, J.C. Platt, C.L. Zitnick, and G. Zweig. 2015. From captions to visual concepts and back. In CVPR.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Placing search in context: The concept revisited", |
| "authors": [ |
| { |
| "first": "Lev", |
| "middle": [], |
| "last": "Finkelstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Evgeniy", |
| "middle": [], |
| "last": "Gabrilovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Yossi", |
| "middle": [], |
| "last": "Matias", |
| "suffix": "" |
| }, |
| { |
| "first": "Ehud", |
| "middle": [], |
| "last": "Rivlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Zach", |
| "middle": [], |
| "last": "Solan", |
| "suffix": "" |
| }, |
| { |
| "first": "Gadi", |
| "middle": [], |
| "last": "Wolfman", |
| "suffix": "" |
| }, |
| { |
| "first": "Eytan", |
| "middle": [], |
| "last": "Ruppin", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the 10th international conference on World Wide Web", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th in- ternational conference on World Wide Web (WWW).", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Devise: A deep visual-semantic embedding model", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Frome", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Shlens", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Frome, G. Corrado, J. Shlens, S. Bengio, J. Dean, and T. Mikolov. 2013. Devise: A deep visual-semantic embedding model. In NIPS.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Image pivoting for learning multilingual multimodal representations", |
| "authors": [ |
| { |
| "first": "Spandana", |
| "middle": [], |
| "last": "Gella", |
| "suffix": "" |
| }, |
| { |
| "first": "Rico", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Frank", |
| "middle": [], |
| "last": "Keller", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Spandana Gella, Rico Sennrich, Frank Keller, and Mirella Lapata. 2017. Image pivoting for learning multilingual multimodal representations .", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Generative adversarial nets", |
| "authors": [ |
| { |
| "first": "Ian", |
| "middle": [], |
| "last": "Goodfellow", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean", |
| "middle": [], |
| "last": "Pouget-Abadie", |
| "suffix": "" |
| }, |
| { |
| "first": "Mehdi", |
| "middle": [], |
| "last": "Mirza", |
| "suffix": "" |
| }, |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Warde-Farley", |
| "suffix": "" |
| }, |
| { |
| "first": "Sherjil", |
| "middle": [], |
| "last": "Ozair", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Courville", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Proceedings of NIPS.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "The symbol grounding problem", |
| "authors": [ |
| { |
| "first": "Stevan", |
| "middle": [], |
| "last": "Harnad", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Physica D", |
| "volume": "42", |
| "issue": "", |
| "pages": "335--346", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stevan Harnad. 1990. The symbol grounding problem. Physica D 42:335-346.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", |
| "authors": [ |
| { |
| "first": "Kaiming", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiangyu", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Shaoqing", |
| "middle": [], |
| "last": "Ren", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the IEEE International Conference on Computer Vision (ICCV)", |
| "volume": "", |
| "issue": "", |
| "pages": "1026--1034", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision (CVPR). pages 1026-1034.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Deep residual learning for image recognition", |
| "authors": [ |
| { |
| "first": "Kaiming", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiangyu", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Shaoqing", |
| "middle": [], |
| "last": "Ren", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", |
| "volume": "", |
| "issue": "", |
| "pages": "770--778", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pages 770-778.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Learning distributed representations of sentences from unlabelled data", |
| "authors": [ |
| { |
| "first": "Felix", |
| "middle": [], |
| "last": "Hill", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016a. Learning distributed representations of sen- tences from unlabelled data. In Proceedings of NAACL.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation", |
| "authors": [ |
| { |
| "first": "Felix", |
| "middle": [], |
| "last": "Hill", |
| "suffix": "" |
| }, |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2016b. Simlex-999: Evaluating semantic models with (gen- uine) similarity estimation. Computational Linguis- tics .", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735- 1780.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Mining and summarizing customer reviews", |
| "authors": [ |
| { |
| "first": "Minqing", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| }, |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of SIGKDD", |
| "volume": "", |
| "issue": "", |
| "pages": "168--177", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Minqing Hu and Bing Liu. 2004. Mining and sum- marizing customer reviews. In Proceedings of SIGKDD. pages 168-177.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Object properties and knowledge in early lexical learning", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Susan", |
| "suffix": "" |
| }, |
| { |
| "first": "Linda", |
| "middle": [ |
| "B" |
| ], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Barbara", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Landau", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Child development", |
| "volume": "62", |
| "issue": "3", |
| "pages": "499--516", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Susan S Jones, Linda B Smith, and Barbara Landau. 1991. Object properties and knowledge in early lex- ical learning. Child development 62(3):499-516.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Learning visual features from large weakly supervised data", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Joulin", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [ |
| "J P" |
| ], |
| "last": "Van Der Maaten", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Jabri", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Vasilache", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "ECCV", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Joulin, L.J.P. van der Maaten, A. Jabri, and N. Vasi- lache. 2016. Learning visual features from large weakly supervised data. In ECCV.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Deep fragment embeddings for bidirectional image sentence mapping", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Karpathy", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Joulin", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Fei-Fei", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Karpathy, A. Joulin, and L. Fei-Fei. 2014. Deep fragment embeddings for bidirectional image sen- tence mapping. In Proceedings of NIPS.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Deep visualsemantic alignments for generating image descriptions", |
| "authors": [ |
| { |
| "first": "Andrej", |
| "middle": [], |
| "last": "Karpathy", |
| "suffix": "" |
| }, |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Fei-Fei", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", |
| "volume": "", |
| "issue": "", |
| "pages": "3128--3137", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrej Karpathy and Li Fei-Fei. 2015. Deep visual- semantic alignments for generating image descrip- tions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pages 3128-3137.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Deep embodiment: grounding semantics in perceptual modalities", |
| "authors": [ |
| { |
| "first": "Douwe", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Douwe Kiela. 2017. Deep embodiment: grounding semantics in perceptual modalities (PhD thesis).", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Learning Image Embeddings using Convolutional Neural Networks for Improved Multi-Modal Semantics", |
| "authors": [ |
| { |
| "first": "Douwe", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "" |
| }, |
| { |
| "first": "L\u00e9on", |
| "middle": [], |
| "last": "Bottou", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Douwe Kiela and L\u00e9on Bottou. 2014. Learning Image Embeddings using Convolutional Neural Networks for Improved Multi-Modal Semantics. In Proceed- ings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "Diederik", |
| "middle": [], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "International Conference on Learning Representations (ICLR)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR).", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Skip-thought vectors", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Kiros", |
| "suffix": "" |
| }, |
| { |
| "first": "Yukun", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruslan", |
| "middle": [], |
| "last": "Salakhutdinov", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [ |
| "S" |
| ], |
| "last": "Zemel", |
| "suffix": "" |
| }, |
| { |
| "first": "Antonio", |
| "middle": [], |
| "last": "Torralba", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urta- sun, and Sanja Fidler. 2015. Skip-thought vectors. In Proceedings of NIPS.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Associating neural word embeddings with deep image representations using fisher vectors", |
| "authors": [ |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Guy", |
| "middle": [], |
| "last": "Lev", |
| "suffix": "" |
| }, |
| { |
| "first": "Gil", |
| "middle": [], |
| "last": "Sadeh", |
| "suffix": "" |
| }, |
| { |
| "first": "Lior", |
| "middle": [], |
| "last": "Wolf", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", |
| "volume": "", |
| "issue": "", |
| "pages": "4437--4446", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Benjamin Klein, Guy Lev, Gil Sadeh, and Lior Wolf. 2015. Associating neural word embeddings with deep image representations using fisher vectors. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition (CVPR). pages 4437-4446.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Distributed representations of sentences and documents", |
| "authors": [ |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Quoc", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Le", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Pro- ceedings of ICML.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "Emergent translation in multi-agent communication", |
| "authors": [ |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| }, |
| { |
| "first": "Douwe", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "CoRR", |
| "volume": "abs/1710.06922", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jason Lee, Kyunghyun Cho, Jason Weston, and Douwe Kiela. 2017. Emergent translation in multi-agent communication. CoRR abs/1710.06922.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Learning visual n-grams from web data", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Jabri", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Joulin", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [ |
| "J P" |
| ], |
| "last": "Van Der Maaten", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Li, A. Jabri, A. Joulin, and L.J.P. van der Maaten. 2016. Learning visual n-grams from web data. In arxiv.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Microsoft COCO: Common objects in context", |
| "authors": [ |
| { |
| "first": "Tsung-Yi", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Maire", |
| "suffix": "" |
| }, |
| { |
| "first": "Serge", |
| "middle": [], |
| "last": "Belongie", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Hays", |
| "suffix": "" |
| }, |
| { |
| "first": "Pietro", |
| "middle": [], |
| "last": "Perona", |
| "suffix": "" |
| }, |
| { |
| "first": "Deva", |
| "middle": [], |
| "last": "Ramanan", |
| "suffix": "" |
| }, |
| { |
| "first": "Piotr", |
| "middle": [], |
| "last": "Doll\u00e1r", |
| "suffix": "" |
| }, |
| { |
| "first": "C Lawrence", |
| "middle": [], |
| "last": "Zitnick", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "European Conference on Computer Vision (ECCV)", |
| "volume": "", |
| "issue": "", |
| "pages": "740--755", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In European Confer- ence on Computer Vision (ECCV). Springer, pages 740-755.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "Symbol interdependency in symbolic and embodied cognition", |
| "authors": [ |
| { |
| "first": "Max", |
| "middle": [ |
| "M" |
| ], |
| "last": "Louwerse", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Topics in Cognitive Science", |
| "volume": "59", |
| "issue": "1", |
| "pages": "617--645", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Max M. Louwerse. 2008. Symbol interdependency in symbolic and embodied cognition. Topics in Cogni- tive Science 59(1):617-645.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "Better word representations with recursive neural networks for morphology", |
| "authors": [ |
| { |
| "first": "Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher D", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "104--113", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thang Luong, Richard Socher, and Christopher D Man- ning. 2013. Better word representations with recur- sive neural networks for morphology. In Proceedings of CoNLL. pages 104-113.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "Generating images from captions with attention", |
| "authors": [ |
| { |
| "first": "Elman", |
| "middle": [], |
| "last": "Mansimov", |
| "suffix": "" |
| }, |
| { |
| "first": "Emilio", |
| "middle": [], |
| "last": "Parisotto", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [ |
| "Lei" |
| ], |
| "last": "Ba", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruslan", |
| "middle": [], |
| "last": "Salakhutdinov", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "International Conference on Learning Representations (ICLR)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Elman Mansimov, Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. 2016. Generating im- ages from captions with attention. In International Conference on Learning Representations (ICLR).", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "Deep captioning with multimodal recurrent neural networks", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Mao", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "L" |
| ], |
| "last": "Yuille", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Mao, W. Xu, Y. Yang, J. Wang, and A.L. Yuille. 2015. Deep captioning with multimodal recurrent neural networks. In Proceedings of ICLR.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "A SICK cure for the evaluation of compositional distributional semantic models", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Marelli", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefano", |
| "middle": [], |
| "last": "Menini", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| }, |
| { |
| "first": "Luisa", |
| "middle": [], |
| "last": "Bentivogli", |
| "suffix": "" |
| }, |
| { |
| "first": "Raffaella", |
| "middle": [], |
| "last": "Bernardi", |
| "suffix": "" |
| }, |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Zamparelli", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Reffaella Bernardi, and Roberto Zampar- elli. 2014. A SICK cure for the evaluation of compo- sitional distributional semantic models. In Proceed- ings of LREC.", |
| "links": null |
| }, |
| "BIBREF51": { |
| "ref_id": "b51", |
| "title": "Mental representations: A dual coding approach", |
| "authors": [ |
| { |
| "first": "Allan", |
| "middle": [], |
| "last": "Paivio", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Allan Paivio. 1990. Mental representations: A dual coding approach. Oxford University Press.", |
| "links": null |
| }, |
| "BIBREF52": { |
| "ref_id": "b52", |
| "title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts", |
| "authors": [ |
| { |
| "first": "Bo", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "Lillian", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bo Pang and Lillian Lee. 2004. A sentimental educa- tion: Sentiment analysis using subjectivity summa- rization based on minimum cuts. In Proceedings of ACL. page 271.", |
| "links": null |
| }, |
| "BIBREF53": { |
| "ref_id": "b53", |
| "title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", |
| "authors": [ |
| { |
| "first": "Bo", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "Lillian", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "115--124", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bo Pang and Lillian Lee. 2005. Seeing stars: Exploit- ing class relationships for sentiment categorization with respect to rating scales. In Proceedings of ACL. pages 115-124.", |
| "links": null |
| }, |
| "BIBREF54": { |
| "ref_id": "b54", |
| "title": "GloVe: Global vectors for word representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF55": { |
| "ref_id": "b55", |
| "title": "The limits of co-occurrence: Tools and theories in language research", |
| "authors": [ |
| { |
| "first": "Charles", |
| "middle": [ |
| "A" |
| ], |
| "last": "Perfetti", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Discourse Processes", |
| "volume": "25", |
| "issue": "2&3", |
| "pages": "363--377", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Charles A Perfetti. 1998. The limits of co-occurrence: Tools and theories in language research. Discourse Processes 25(2&3):363-377.", |
| "links": null |
| }, |
| "BIBREF56": { |
| "ref_id": "b56", |
| "title": "Jointly optimizing word representations for lexical and sentential tasks with the c-phrase model", |
| "authors": [ |
| { |
| "first": "Nghia", |
| "middle": [ |
| "The" |
| ], |
| "last": "Pham", |
| "suffix": "" |
| }, |
| { |
| "first": "German", |
| "middle": [], |
| "last": "Kruszewski", |
| "suffix": "" |
| }, |
| { |
| "first": "Angeliki", |
| "middle": [], |
| "last": "Lazaridou", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nghia The Pham, German Kruszewski, Angeliki Lazaridou, and Marco Baroni. 2015. Jointly optimizing word representations for lexical and sentential tasks with the c-phrase model. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF57": { |
| "ref_id": "b57", |
| "title": "Learning deep representations of fine-grained visual descriptions", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Reed", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Akata", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Schiele", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Reed, Z. Akata, H. Lee, and B. Schiele. 2016. Learning deep representations of fine-grained visual descriptions. In Proceedings of CVPR.", |
| "links": null |
| }, |
| "BIBREF58": { |
| "ref_id": "b58", |
| "title": "Redundancy in perceptual and linguistic experience: Comparing feature-based and distributional models of semantic representation", |
| "authors": [ |
| { |
| "first": "Brian", |
| "middle": [], |
| "last": "Riordan", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "N" |
| ], |
| "last": "Jones", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Topics in Cognitive Science", |
| "volume": "3", |
| "issue": "2", |
| "pages": "303--345", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brian Riordan and Michael N Jones. 2011. Redundancy in perceptual and linguistic experience: Comparing feature-based and distributional models of semantic representation. Topics in Cognitive Science 3(2):303-345.", |
| "links": null |
| }, |
| "BIBREF59": { |
| "ref_id": "b59", |
| "title": "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [ |
| "M" |
| ], |
| "last": "Saxe", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "L" |
| ], |
| "last": "McClelland", |
| "suffix": "" |
| }, |
| { |
| "first": "Surya", |
| "middle": [], |
| "last": "Ganguli", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew M. Saxe, James L. McClelland, and Surya Ganguli. 2014. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In International Conference on Learning Representations (ICLR).", |
| "links": null |
| }, |
| "BIBREF61": { |
| "ref_id": "b61", |
| "title": "Recursive deep models for semantic compositionality over a sentiment treebank", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Perelygin", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Chuang", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Potts", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "1631--1642", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP. pages 1631-1642.", |
| "links": null |
| }, |
| "BIBREF62": { |
| "ref_id": "b62", |
| "title": "Sequence to sequence learning with neural networks", |
| "authors": [ |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "I. Sutskever, O. Vinyals, and Q.V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS.", |
| "links": null |
| }, |
| "BIBREF63": { |
| "ref_id": "b63", |
| "title": "The semantic conception of truth: and the foundations of semantics", |
| "authors": [ |
| { |
| "first": "Alfred", |
| "middle": [], |
| "last": "Tarski", |
| "suffix": "" |
| } |
| ], |
| "year": 1944, |
| "venue": "Philosophy and phenomenological research", |
| "volume": "4", |
| "issue": "3", |
| "pages": "341--376", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alfred Tarski. 1944. The semantic conception of truth: and the foundations of semantics. Philosophy and phenomenological research 4(3):341-376.", |
| "links": null |
| }, |
| "BIBREF64": { |
| "ref_id": "b64", |
| "title": "From Frequency to Meaning: vector space models of semantics", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [ |
| "D" |
| ], |
| "last": "Turney", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Journal of Artificial Intelligence Research", |
| "volume": "37", |
| "issue": "1", |
| "pages": "141--188", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter D. Turney and Patrick Pantel. 2010. From Frequency to Meaning: vector space models of semantics. Journal of Artificial Intelligence Research 37(1):141-188.", |
| "links": null |
| }, |
| "BIBREF65": { |
| "ref_id": "b65", |
| "title": "Conditional image generation with PixelCNN decoders", |
| "authors": [ |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Van Den Oord", |
| "suffix": "" |
| }, |
| { |
| "first": "Nal", |
| "middle": [], |
| "last": "Kalchbrenner", |
| "suffix": "" |
| }, |
| { |
| "first": "Lasse", |
| "middle": [], |
| "last": "Espeholt", |
| "suffix": "" |
| }, |
| { |
| "first": "Koray", |
| "middle": [], |
| "last": "Kavukcuoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Graves", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Koray Kavukcuoglu, Oriol Vinyals, and Alex Graves. 2016. Conditional image generation with PixelCNN decoders. In Proceedings of NIPS.", |
| "links": null |
| }, |
| "BIBREF66": { |
| "ref_id": "b66", |
| "title": "Order-embeddings of images and language", |
| "authors": [ |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Vendrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Kiros", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanja", |
| "middle": [], |
| "last": "Fidler", |
| "suffix": "" |
| }, |
| { |
| "first": "Raquel", |
| "middle": [], |
| "last": "Urtasun", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "International Conference on Learning Representations (ICLR)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2015. Order-embeddings of images and language. In International Conference on Learning Representations (ICLR).", |
| "links": null |
| }, |
| "BIBREF67": { |
| "ref_id": "b67", |
| "title": "Show and tell: A neural image caption generator", |
| "authors": [ |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Toshev", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Erhan", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. 2015. Show and tell: A neural image caption generator. In Proceedings of CVPR.", |
| "links": null |
| }, |
| "BIBREF68": { |
| "ref_id": "b68", |
| "title": "Annotating expressions of opinions and emotions in language. Language resources and evaluation", |
| "authors": [ |
| { |
| "first": "Janyce", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| }, |
| { |
| "first": "Theresa", |
| "middle": [], |
| "last": "Wilson", |
| "suffix": "" |
| }, |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Cardie", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "39", |
| "issue": "", |
| "pages": "165--210", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language resources and evaluation 39(2):165-210.", |
| "links": null |
| }, |
| "BIBREF69": { |
| "ref_id": "b69", |
| "title": "Towards universal paraphrastic sentence embeddings", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Wieting", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Bansal", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Gimpel", |
| "suffix": "" |
| }, |
| { |
| "first": "Karen", |
| "middle": [], |
| "last": "Livescu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "International Conference on Learning Representations (ICLR)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sentence embeddings. In International Conference on Learning Representations (ICLR).", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "num": null, |
| "text": "Model architecture: predicting either an image (Cap2Img), an alternative caption (Cap2Cap), or both at the same time (Cap2Both).", |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "type_str": "table", |
| "text": "Accuracy results on sentence classification and entailment tasks.", |
| "html": null, |
| "content": "<table/>", |
| "num": null |
| }, |
| "TABREF4": { |
| "type_str": "table", |
| "text": "Mean and variance of dataset concreteness, over all words in the datasets.", |
| "html": null, |
| "content": "<table/>", |
| "num": null |
| }, |
| "TABREF6": { |
| "type_str": "table", |
| "text": "Spearman \u03c1 correlation on four standard semantic similarity evaluation benchmarks.", |
| "html": null, |
| "content": "<table/>", |
| "num": null |
| } |
| } |
| } |
| } |